This post assumes that the reader has some notion of entity extraction from text and wants to further understand what state-of-the-art techniques exist for custom entity recognition and how to use them. However, if you are new to the NER problem, then please do read about it here .
Having said that, the purpose of this post is to demonstrate the use of a pretrained natural language processing (NLP) core model from spaCy for learning to recognise new entities. The existing core NLP models from spaCy are trained to recognise the various entities given in Figure 2.
Nonetheless, a user might want to define their own entities to solve a particular problem. In such a case, the preexisting entities are insufficient, and one needs to train the NLP model to do the job. Thanks to spaCy's documentation and pretrained models, this is not very difficult.
If you do not want to read further and would rather learn how to use it, then please go to this jupyter notebook ; it is self-contained. Regardless, I would recommend reading this post as well.
Data Preprocessing
Like any supervised learning algorithm, this requires input and output to learn from: here the input is text and the output is encoded according to the BILUO scheme, as shown in Figure 3. While other schemes exist, Ratinov and Roth showed that the minimal Begin, In, Out ( IOB ) scheme is more difficult to learn than the BILUO scheme, which explicitly marks boundary tokens. spaCy provides an example of IOB-encoded entities that I found in consonance with this argument. Thus, from here on, any mention of an annotation scheme refers to BILUO.
A short example of BILUO encoded entities is shown in the following figure.
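To make the scheme concrete, here is a small sketch using spaCy v3's `offsets_to_biluo_tags` helper (the sentence and the LOC labels are my own example, not from the original figure): a single-token entity gets a U- tag, while a multi-token entity is marked with B-, I-, and L- tags.

```python
import spacy
from spacy.training import offsets_to_biluo_tags

nlp = spacy.blank("en")  # tokenizer-only pipeline; no model download needed
doc = nlp("I like London and New York City")

# character offsets (start, end, label) for the two entities
tags = offsets_to_biluo_tags(doc, [(7, 13, "LOC"), (18, 31, "LOC")])
print(tags)
# single-token "London" gets U-LOC; "New York City" spans B-LOC, I-LOC, L-LOC
```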
To encode your data with the BILUO scheme, there are three possible ways. The first is to create a spaCy doc and then label each token, saving the result in a text file.
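A minimal sketch of this first approach (spaCy v3 API; the file name tokens.txt is my own choice): tokenize with a blank pipeline and write one token per line with a default O tag, so that each line can be hand-corrected afterwards.

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("Apple is opening its first big office in San Francisco.")

# one "token<TAB>tag" pair per line; start everything as O and
# correct the BILUO tags by hand afterwards
with open("tokens.txt", "w", encoding="utf-8") as f:
    for token in doc:
        f.write(f"{token.text}\tO\n")
```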
The above snippet makes it easy to annotate the data and transform it into input-output pairs for the spaCy NLP model in an accepted format. Reading the data back from the file and converting it into an object form accepted by the model works as follows:
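Since the original snippet is not reproduced here, the following sketch (spaCy v3 naming; the tab-separated file format is the hypothetical one from the step above) shows the idea: rebuild a Doc from the saved tokens and convert the hand-written BILUO tags back into the (text, {"entities": ...}) offset format.

```python
import spacy
from spacy.tokens import Doc
from spacy.training import biluo_tags_to_offsets

nlp = spacy.blank("en")

def load_annotations(path):
    """Read one 'token<TAB>BILUO-tag' pair per line into training format."""
    words, tags = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            token, tag = line.rstrip("\n").split("\t")
            words.append(token)
            tags.append(tag)
    doc = Doc(nlp.vocab, words=words)
    # convert BILUO tags into (start, end, label) character offsets
    entities = biluo_tags_to_offsets(doc, tags)
    return (doc.text, {"entities": entities})
```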
Another method uses offset indices, where the start and end indices of each entity (i.e. the begin, inside, and last parts of the entity clubbed together) are given along with its label, for example, as shown here:
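The offset form looks like this (the sentences and labels below are stock illustrations of mine, with character start indices inclusive and end indices exclusive):

```python
# training data in offset form: (text, {"entities": [(start, end, label), ...]})
TRAIN_DATA = [
    ("Who is Shaka Khan?", {"entities": [(7, 17, "PERSON")]}),
    ("I like London and Berlin.", {"entities": [(7, 13, "LOC"), (18, 24, "LOC")]}),
]
```

Offsets that do not align with token boundaries produce a `-` tag when converted to BILUO, which is a handy sanity check on hand-written annotations.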
The third is similar to the first, except that here we fix our own tokens and label them, instead of generating tokens with the NLP model and then labelling them. While this can also work, in my experiments I found it to rather degrade performance. Nevertheless, here is how you can do it:
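A sketch of this third approach in spaCy v3 terms (the words and the MONEY tags are my own example): build a Doc directly from pre-fixed tokens and attach BILUO tags via `Example.from_dict`, which accepts either offsets or BILUO tags under the "entities" key.

```python
import spacy
from spacy.tokens import Doc
from spacy.training import Example

nlp = spacy.blank("en")

# our own tokenization, not produced by the NLP model
words = ["Revenue", "exceeded", "twelve", "billion", "dollars"]
tags  = ["O", "O", "B-MONEY", "I-MONEY", "L-MONEY"]

doc = Doc(nlp.vocab, words=words)
# the reference doc of the Example now carries the gold entities
example = Example.from_dict(doc, {"entities": tags})
```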
Training
After preprocessing the data and preparing it for training, we need to add the vocabulary of the new entities to the model's NER pipeline. The core spaCy models have three pipes: Tagger , Parser , and NER . Furthermore, we need to disable the tagger and parser pipes, since we will only be training the NER pipe, although one can train all the other pipes simultaneously. Find more here .
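In outline, that setup looks like this (sketched with spaCy v3 naming; the FRUIT label is a hypothetical new entity, and I use a blank pipeline so the snippet runs without downloading a core model; with a core model the other pipes would be collected the same way):

```python
import spacy

# with a core model you would instead do: nlp = spacy.load("en_core_web_sm")
nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")   # a core model already contains this pipe
ner.add_label("FRUIT")      # register the new entity label

# every pipe except NER should be disabled during training
other_pipes = [p for p in nlp.pipe_names if p != "ner"]
```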
Here, while training, the dropout is set to 0.0 to deliberately overfit the model and show that it can learn to recognise all the new entities. The result of the trained model on the text:
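The training step itself can be sketched as follows (spaCy v3 API; the blank pipeline and the toy FRUIT data are my own stand-ins for the post's setup, and drop=0.0 mirrors the deliberate overfitting described above):

```python
import random
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
ner.add_label("FRUIT")

TRAIN_DATA = [
    ("I ate an apple and a banana",
     {"entities": [(9, 14, "FRUIT"), (21, 27, "FRUIT")]}),
]

optimizer = nlp.initialize()
for _ in range(30):
    random.shuffle(TRAIN_DATA)
    losses = {}
    for text, annotations in TRAIN_DATA:
        example = Example.from_dict(nlp.make_doc(text), annotations)
        # drop=0.0 deliberately overfits the model on the new entities
        nlp.update([example], drop=0.0, sgd=optimizer, losses=losses)
```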
spaCy also provides a way to generate colour-encoded entity visualizations (as in Figure 1), viewable in a web browser or notebook, using the following snippet:
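A sketch of the displaCy call (spaCy v3; the toy doc and the manually set LOC span are my own example):

```python
import spacy
from spacy import displacy
from spacy.tokens import Span

nlp = spacy.blank("en")
doc = nlp("I like London")
doc.ents = [Span(doc, 2, 3, label="LOC")]  # token 2 is "London"

# jupyter=True renders inline in a notebook;
# displacy.serve(doc, style="ent") serves it in the browser instead
html = displacy.render(doc, style="ent", jupyter=False)
```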
Caveats
The process provided here for training new entities may seem a bit easy; however, it does come with a warning. While training, it is possible that the newly trained model forgets to recognise the old entities, therefore it is highly recommended to mix in some text containing the previously trained entities, unless the old entities are not essential to the solution of the problem. Secondly, it might be better to learn a more specific entity than a generalized one.
Conclusion
We saw that it is not very hard to get started learning new entities, but one does need to experiment with different annotation techniques and choose what works best for the given problem.
Additional Notes
- This post is an extension of the example provided by spaCy here .
- The entire code can be accessed in this jupyter notebook . The readme also describes how to install the spaCy library and debug errors during installation and loading of the pretrained model.
- Read this paper by Akbik et al. It should help in understanding the algorithm behind sequence labelling, i.e. multi-word entities.