Information

Description
Tokenization of raw text is a standard pre-processing step for many NLP tasks. For English, tokenization usually involves punctuation splitting and separation of some affixes like possessives. Other languages require more extensive token pre-processing, which is usually called segmentation.
Version
3.8.0.0
Project Url
http://sergey-tihon.github.io/Stanford.NLP.NET/
View on NuGet: http://www.nuget.org/packages/Stanford.NLP.Segmenter

Dependencies

Here are the packages that version 3.8.0.0 of Stanford.NLP.Segmenter depends on.

net.sf.mpxj-ikvm (7.7.1)

Authors

Sergey Tihon

Installing with NuGet

PM> Install-Package Stanford.NLP.Segmenter -Version 3.8.0.0
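
Sample Usage

The segmenter is an IKVM-compiled build of Stanford CoreNLP's CRF-based segmenter, so it is driven through the original Java classes. Below is a minimal C# sketch for Chinese word segmentation; it assumes the segmenter model data (ctb.gz, dict-chris6.ser.gz) has been downloaded separately from the Stanford NLP site, and the dataDir path is a placeholder.

using java.util;
using edu.stanford.nlp.ie.crf;

class SegmenterDemo
{
    static void Main()
    {
        // Placeholder path: the model data is not included in the NuGet package.
        const string dataDir = @"..\data";

        var props = new Properties();
        props.setProperty("sighanCorporaDict", dataDir);
        props.setProperty("serDictionary", dataDir + @"\dict-chris6.ser.gz");
        props.setProperty("inputEncoding", "UTF-8");
        props.setProperty("sighanPostProcessing", "true");

        // Load the CRF model trained on the Chinese Treebank (CTB).
        var segmenter = new CRFClassifier(props);
        segmenter.loadClassifierNoExceptions(dataDir + @"\ctb.gz", props);

        // segmentString returns a java.util.List of token strings.
        var tokens = segmenter.segmentString("这是斯坦福中文分词器测试");
        System.Console.WriteLine(tokens); // e.g. [这, 是, 斯坦福, 中文, 分词器, 测试]
    }
}

For batch processing, the same classifier also exposes classifyAndWriteAnswers, which segments an entire input file at once.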

Packages that Depend on Stanford.NLP.Segmenter

No packages were found that depend on version 3.8.0.0.