Text Preprocessing With NLTK




Intro

Almost every Natural Language Processing (NLP) task requires text to be preprocessed before training a model. Deep learning models cannot use raw text directly, so it is up to us researchers to clean the text ourselves. Depending on the nature of the task, the preprocessing methods can differ. This tutorial covers the most common preprocessing steps, which fit a wide range of NLP tasks, using NLTK (the Natural Language Toolkit).

Why NLTK?

  • Popularity: NLTK is one of the leading platforms for working with language data.
  • Simplicity: It provides easy-to-use APIs for a wide variety of text preprocessing methods.
  • Community: It has a large and active community that supports and improves the library.
  • Open source: It is free and open source, available for Windows, macOS, and Linux.

Now that you know the benefits of NLTK, let’s get started!

Tutorial Overview

  1. Lowercase
  2. Removing Punctuation
  3. Tokenization
  4. Stopword Filtering
  5. Stemming
  6. Part-of-Speech Tagging

All code displayed in this tutorial can be accessed in my GitHub repo.

Import NLTK

Before preprocessing, we first need to install the NLTK library.

pip install nltk

Then, we can import the library in our Python notebook and download the data resources it needs.
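A minimal setup sketch, assuming we only need the resources used later in this tutorial (the Punkt tokenizer models, the stopword lists, and the perceptron POS tagger):

import nltk

# Punkt tokenizer models, English stopword list,
# and the averaged perceptron tagger used by nltk.pos_tag.
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('averaged_perceptron_tagger')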

Lowercase

As an example, we grab the first sentence of the book Pride and Prejudice as our text. We convert the sentence to lowercase via text.lower().
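Roughly like this (the variable names are my own, not necessarily those of the original notebook):

# First sentence of Pride and Prejudice.
text = ("It is a truth universally acknowledged, that a single man in "
        "possession of a good fortune, must be in want of a wife.")

# Lowercase every character.
lower_text = text.lower()
print(lower_text)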

Removing Punctuation

To remove punctuation, we keep only the characters that are not punctuation, which can be checked against string.punctuation.
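A short sketch, continuing from the lower_text variable above:

import string

# Keep only the characters that do not appear in string.punctuation.
no_punct_text = "".join(ch for ch in lower_text if ch not in string.punctuation)
print(no_punct_text)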

Tokenization

Strings can be split into tokens via nltk.word_tokenize.
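For example, applied to the punctuation-free string from the previous step:

import nltk

# Split the string into individual word tokens.
tokens = nltk.word_tokenize(no_punct_text)
print(tokens)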

Stopword Filtering

We can use nltk.corpus.stopwords.words('english') to fetch the list of English stopwords. Then, we remove the tokens that are stopwords.
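Something along these lines:

from nltk.corpus import stopwords

# Build the set of English stopwords and drop any token contained in it.
stop_words = set(stopwords.words('english'))
filtered_tokens = [token for token in tokens if token not in stop_words]
print(filtered_tokens)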

Stemming

We stem the tokens using nltk.stem.porter.PorterStemmer to obtain their stems.
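For instance:

from nltk.stem.porter import PorterStemmer

# Reduce each filtered token to its stem, e.g. "possession" -> "possess".
stemmer = PorterStemmer()
stemmed_tokens = [stemmer.stem(token) for token in filtered_tokens]
print(stemmed_tokens)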

POS Tagger

Lastly, we can use nltk.pos_tag to retrieve the part of speech of each token in a list.
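For example, tagging the stopword-filtered tokens from earlier:

import nltk

# Tag each token with its part of speech, e.g. ('truth', 'NN').
pos_tags = nltk.pos_tag(filtered_tokens)
print(pos_tags)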

The full notebook can be seen here.

Combining It All Together

We can combine all the preprocessing methods above into a single preprocess function that takes in a .txt file and handles all the preprocessing. We print out the tokens, filtered words (after stopword filtering), stemmed words, and POS tags, one of which is usually passed on to the model or used for further processing. We use the Pride and Prejudice book (accessible here) and preprocess it.
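A sketch of such a function; the file name pride_and_prejudice.txt is a placeholder for wherever the book's text file is saved:

import string
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

def preprocess(path):
    # Read the raw text of the .txt file.
    with open(path, encoding='utf-8') as f:
        text = f.read()

    # Lowercase and strip punctuation.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)

    # Tokenize, filter stopwords, stem, and POS-tag.
    tokens = nltk.word_tokenize(text)
    stop_words = set(stopwords.words('english'))
    filtered = [t for t in tokens if t not in stop_words]
    stemmer = PorterStemmer()
    stemmed = [stemmer.stem(t) for t in filtered]
    pos = nltk.pos_tag(filtered)

    return tokens, filtered, stemmed, pos

tokens, filtered, stemmed, pos = preprocess('pride_and_prejudice.txt')
for output in (tokens, filtered, stemmed, pos):
    print(output[:10])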

This notebook can be accessed here.

Conclusion

Text preprocessing is an important first step for any NLP application. In this tutorial, we discussed several popular preprocessing approaches using NLTK: lowercasing, punctuation removal, tokenization, stopword filtering, stemming, and part-of-speech tagging.

