We are living in the era of the Information Revolution. Every day a vast amount of information is created and disseminated over the World Wide Web (WWW). Even though each piece of information published on the web is useful in some way, we often need to identify and extract the relevant parts. Such information extraction includes recognizing person names and organization names, finding the category of a text, identifying the sentiment of a tweet, and so on. Processing large amounts of text data from the web is a challenging task because of information overload, and as more information appears there is a growing demand for smart, intelligent text processing. The field of text analytics has attracted the attention of developers around the globe, and many practical as well as theoretical books have been published on the topic.
This book, "Taming Text", written by Grant S. Ingersoll, Thomas S. Morton and Andrew L. Farris, is an excellent resource for developers and researchers interested in learning text analytics. The book focuses on practical techniques such as classification, clustering, string matching, searching and entity identification, and provides easy-to-follow examples using well-known open source tools such as Apache Mahout, Apache Lucene, Apache Solr and OpenNLP. The entire book is grounded in the authors' experience contributing to these open source tools and in their hands-on industry exposure. It is a must-read for text analytics developers and researchers. Given the increasing importance of text analytics, it can serve as a handbook for budding text analytics developers and industry practitioners, and it can certainly be used in Natural Language Processing, Machine Learning and Computational Linguistics courses.
Chapter 1: Getting Started Taming Text
The first chapter introduces what "taming text" means. The authors give a list of challenges in text processing with brief explanations. The chapter is mostly introductory material.
Chapter 2: Foundations of Taming Text
This chapter starts with a quick warm-up of your high-school English grammar. Starting from words, the authors present the essential linguistic concepts required for text processing; "Taming Text" may well be the first technical book to offer such a grounding in the basics of language and grammar. The chapter gives a detailed introduction to words, parts of speech, phrases and morphology, sufficient to capture the linguistic aspects of text processing that a developer needs. The second part of the chapter deals with basic text processing tasks such as tokenization, sentence splitting, part-of-speech (POS) tagging and parsing, with code snippets for each task built on OpenNLP. The chapter also covers the basics of handling different file formats using Apache Tika. Overall it is a step-by-step introduction to the preliminaries of text processing.
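For readers who want a feel for what tokenization and sentence splitting involve before opening OpenNLP (a Java library), here is a minimal regex-based sketch in Python. Real tools use trained models and handle abbreviations, quotes and other edge cases far better; this is only the naive idea.

```python
import re

def split_sentences(text):
    # Naive sentence splitter: break after ., ! or ? when followed by
    # whitespace and an uppercase letter. Trained models (like OpenNLP's)
    # handle abbreviations such as "Dr." correctly; this does not.
    return re.split(r"(?<=[.!?])\s+(?=[A-Z])", text.strip())

def tokenize(sentence):
    # Split into word tokens, keeping punctuation as separate tokens.
    return re.findall(r"\w+|[^\w\s]", sentence)

text = "Taming text is hard. Tokenization is the first step!"
sentences = split_sentences(text)
print(sentences)  # ['Taming text is hard.', 'Tokenization is the first step!']
print(tokenize(sentences[0]))  # ['Taming', 'text', 'is', 'hard', '.']
```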
Chapter 3: Searching
This chapter introduces the art of search. It gives a brief but engaging description of the search mechanism and what happens behind the curtain, and discusses the basics of search with the help of Apache Solr. There is an interesting discussion on search evaluation, search performance enhancements and PageRank too. The chapter gives a detailed list of open source search engines, although I think the authors forgot to add Elasticsearch to the list; I hope it will be added in the final print version of the book.
Chapter 4: Fuzzy String Matching
Everybody has probably wondered how the "Did you mean:" feature in Google or any other search engine works. Long ago I saw a question on Stack Overflow asking for the source code of the "Did you mean:" feature (or something similar, I think). If you wonder how this feature works, this chapter gives you enough knowledge to implement something like it. There is an accessible discussion of different fuzzy string matching algorithms with code samples, along with practical examples of implementing "Did you mean" and type-ahead (auto-suggest) features on Apache Solr. Overall the chapter gives a solid introduction and hands-on experience with fuzzy string matching.
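To make the idea concrete, here is a minimal Python sketch of the core of a "Did you mean" feature: Levenshtein edit distance plus a nearest-word lookup over a small vocabulary. This illustrates the general technique only, not the Solr implementation the book uses.

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance: the minimum number of
    # insertions, deletions and substitutions needed to turn a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def did_you_mean(query, vocabulary, max_distance=2):
    # Suggest the closest known word, but only within a small distance.
    best = min(vocabulary, key=lambda w: levenshtein(query, w))
    return best if levenshtein(query, best) <= max_distance else None

vocab = ["clustering", "classification", "searching", "tagging"]
print(did_you_mean("clasification", vocab))  # classification
```

Production spell checkers add frequency weighting and phonetic matching on top of this, but edit distance is the backbone.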
Chapter 5: Identifying People, Places and Things
Diving deeper into the text processing ocean, the authors cover more advanced concepts starting from this chapter. Its main focus is Named Entity Recognition (NER), one of the fundamental tasks in information extraction and retrieval. The chapter gives a good introduction to the task along with code samples using OpenNLP, which will help you get your hands dirty. There is a section on training OpenNLP to adapt to a new domain; this will be one of the most useful tips for working professionals. The only thing I feel is missing is a mention of GATE and Apache UIMA, both of which are well known for their NER capabilities.
Chapter 6: Clustering Text
The sixth chapter deals with clustering. "Clustering is an unsupervised task (i.e. no human intervention required) that can automatically put related content into buckets" [taken from the book "Taming Text"]. The initial part of the chapter relates clustering to real-world applications, and there is a decent discussion of clustering techniques and clustering evaluation. Code examples are given using Apache Solr, Apache Mahout and Carrot2.
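As a taste of what clustering does under the hood, here is a minimal pure-Python sketch of Lloyd's k-means algorithm. Mahout's implementations are far more robust (and distributed), and the centroid seeding here is deliberately naive, so treat this as an illustration of the idea only.

```python
def kmeans(points, k, iterations=20):
    # Minimal k-means: seed centroids from the first k points (real
    # implementations seed more carefully), assign each point to its
    # nearest centroid, then move each centroid to the mean of its
    # assigned points. Repeat until it settles.
    centroids = list(points[:k])
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*cluster))
                     if cluster else centroids[i]
                     for i, cluster in enumerate(clusters)]
    return clusters

points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1),
          (8.0, 8.0), (8.2, 7.9), (7.8, 8.1)]
clusters = kmeans(points, 2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```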
Chapter 7: Classification, Categorization and Tagging
The seventh chapter deals with document classification. As in the other chapters, there is a solid discussion of document classification techniques, and the chapter teaches you how to perform classification with Apache Lucene, Apache Solr, Apache Mahout and OpenNLP. There is an interesting project called a 'tag recommender' in this chapter. The only hiccup I faced was the "TT_HOME" environment variable used throughout the book; I think the authors forgot to mention how to set it. I was familiar with Apache Mahout, so MAHOUT_HOME was no issue for me, but a complete newbie will find it difficult to figure out the TT_HOME and MAHOUT_HOME used in the code samples. A little light on setting these variables would help readers a lot; I expect this will be included in the final copy (I am reading a MEAP version).
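To illustrate the kind of technique these classification tools implement, here is a tiny multinomial Naive Bayes text classifier in pure Python with add-one (Laplace) smoothing. The class, toy training data and labels are all invented for illustration and have nothing to do with the Mahout or OpenNLP APIs.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    # Bag-of-words multinomial Naive Bayes with add-one smoothing.
    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, doc):
        def log_prob(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.label_counts[label]
                             / sum(self.label_counts.values()))
            for w in doc.lower().split():
                score += math.log((counts[w] + 1) / (total + len(self.vocab)))
            return score
        return max(self.label_counts, key=log_prob)

docs = ["the match was a great win", "stocks fell sharply today",
        "the team scored a late goal", "markets and stocks rallied"]
labels = ["sports", "finance", "sports", "finance"]
clf = NaiveBayes().fit(docs, labels)
print(clf.predict("a great goal in the match"))  # sports
```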
Chapter 8: An Example Application: Question Answering
This chapter offers hands-on experience in taming text. The entire chapter is dedicated to building a question answering project using the techniques discussed in the previous chapters: a simple get-your-hands-dirty exercise. Here too you will be caught by the TT_HOME ghost.
Chapter 9: Untamed Text: Exploring the Next Frontier
The last chapter, "Untamed Text: Exploring the Next Frontier", surveys other areas of text processing such as semantics, pragmatics and sentiment analysis. A brief narration of each field is included, along with plenty of pointers to useful tools for advanced tasks like text summarisation and relation extraction.
Grant S. Ingersoll, Thomas S. Morton and Andrew L. Farris have done a fine job of writing a book with lucid explanations and practical examples for a range of text processing challenges. With simple, narrative examples the authors demonstrate how to solve real-world problems using free and open source tools. The algorithm discussions are simple enough that even a newbie can follow the concepts without many hiccups. It is a good desktop reference for people who would like to start with text processing, providing comprehensive, hands-on coverage. So grab a copy soon and be ready for big data analysis.
Free and Open Source Tools Discussed in the Book
Disclaimer: I received a review copy of the book from Manning.
Python Testing Cookbook by Greg L. Turnquist is one of the latest books from Packt Publishing. It is the second book on Python testing; the first was Python Testing: Beginner's Guide by Daniel Arbuckle. The Python Testing Cookbook is a collection of useful, easy-to-learn tips and tricks. Even though it is labeled a "cookbook", it is quite useful for newbies in Python testing too: all the essential tips for getting started are given in the introductory chapters with illustrative examples. The book can serve as a good resource for training Python newcomers in the art of software testing.
The first chapter of the book deals with the basics of unittest in Python, giving step-by-step examples to understand unit testing.
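A minimal unittest example in the style the chapter teaches might look like this; the parse_version function is a made-up illustration, not code from the book.

```python
import unittest

def parse_version(text):
    # Turn a dotted version string like "2.6.1" into a tuple of ints
    # so versions compare numerically rather than alphabetically.
    return tuple(int(part) for part in text.split("."))

class ParseVersionTest(unittest.TestCase):
    def test_parses_dotted_string(self):
        self.assertEqual(parse_version("2.6.1"), (2, 6, 1))

    def test_orders_versions_numerically(self):
        self.assertGreater(parse_version("2.10.0"), parse_version("2.9.0"))

# Run the tests programmatically (a script would normally call
# unittest.main() instead).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseVersionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```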
The second chapter helps you poke your nose into testing with the Python "nose" framework, and gives insights on writing nose plugins to make a tester's life easy. The third chapter of the book deals with doctest; it gives a good introduction to writing docstrings for doctests, and the art of the doctest is well covered. The fourth chapter deals with behavior-driven development and introduces the Mock, mockito and Lettuce testing tools. The fifth chapter deals with acceptance testing using the Pyccuracy and Robot tools, and gives some insight into Selenium too. The sixth chapter speaks about test automation with Continuous Integration (CI) and introduces Jenkins and NoseXUnit; it is very useful for people who follow the waterfall model in software development. The seventh chapter discusses test coverage. It is a bit complicated for beginners: there is some play with databases, Spring Python and so on, and in some examples I feel the element of testing got lost ;-). The eighth chapter deals with smoke and load testing in Python, and introduces the Pyro tool. The ninth chapter is a collection of general advice for automated testers; after getting your hands dirty with testing, you can relax and clear your doubts with the advice in this chapter.
The book comes with extensive code samples. Even though the book is all about testing, I found one bad coding practice throughout the examples: the use of import *. This makes the learner scratch his head to understand what comes from where. Still, I think it is worth buying and reading the book for good insights into test automation with Python. It is a good book for beginners learning testing, and a good reference for experienced professionals too.
Disclaimer: I received a free eBook from Packt for review.
Apache Mahout is an open source, scalable machine learning library in Java, designed to handle large data sets. More than a dozen machine learning and data mining algorithms are available in Mahout, many of them implemented on top of Apache Hadoop. The framework is distributed under the commercially friendly Apache License. It helps researchers and companies build scalable, practical products based on machine learning and data mining principles, and a wide range of big companies as well as startups use Apache Mahout in their products.
The Apache Mahout project focuses on three interesting machine learning problems: 1) recommender systems, 2) clustering and 3) classification. The project addresses real-world practical problems and makes the life of machine learning developers much more enjoyable. The book "Mahout in Action" by Sean Owen, Robin Anil, Ted Dunning and Ellen Friedman introduces the wonderful world of creating scalable, real-world machine learning projects with Apache Mahout. It is written in a lucid language, so a beginner in machine learning can understand the concepts and kick-start classification, clustering or recommendation projects. Even though the detailed algorithmic background of the underlying algorithms is not described, the logic (common sense) behind each system is explained very well with the help of code examples and practical projects. I am giving a chapter-wise overview of the book below. A sample chapter is available for download at http://www.manning.com/free/green_owen.html
Chapter 1 of the book introduces you to Mahout: the history of the project, its algorithms, its capabilities and its configuration.
Chapter 2 of the book introduces recommender systems to the reader and teaches how to build a basic recommender with Apache Mahout. The examples used to narrate the technique are very clear and understandable.
Chapter 3 of the book discusses data representation for building a recommender engine. The discussion extends to some of Mahout's own data structures, and there is some material on using MySQL to store the data behind a recommender engine.
Chapter 4 of the book gives more insight into building scalable recommender systems, introducing user-based as well as item-based recommendation engines. The examples are very clear and help practitioners build better prototypes much faster; the chapter is written so lucidly that anybody can understand the common sense behind recommender engines.
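The common sense behind user-based recommendation can be sketched in a few lines of Python: score the items a user has not seen by the similarity-weighted ratings of other users. The toy ratings data is invented for illustration, and Mahout's own APIs work quite differently at scale.

```python
import math

# Hypothetical ratings: user -> {item: rating on a 1-5 scale}.
ratings = {
    "alice": {"dune": 5, "matrix": 4, "titanic": 1},
    "bob":   {"dune": 4, "matrix": 5, "alien": 4},
    "carol": {"titanic": 5, "notebook": 4, "dune": 1},
}

def cosine_similarity(u, v):
    # Dot product over co-rated items, normalised by each user's
    # full rating vector length.
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = math.sqrt(sum(r * r for r in u.values()))
    norm_v = math.sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user, k=2):
    # Score unseen items by similarity-weighted ratings of other users.
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], their)
        for item, r in their.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['alien', 'notebook']
```

Item-based recommendation flips the same idea around: it compares items by the users who rated them, which tends to be more stable when there are far more users than items.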
The fifth chapter of the book deals with producing a full-fledged recommender system with Apache Mahout. The discussion and examples extend up to deploying a web-based recommender engine; once you have covered this chapter, you should be able to build a good production-quality recommender engine for your client.
Chapter 6 of the book discusses how to build a scalable, distributed recommendation system with Mahout and the Hadoop framework, with an illustrative example based on a Wikipedia data set. The authors spend some pages explaining the MapReduce concept in a very lucid way, and there is a discussion on running the recommender on a cloud platform too. This chapter is definitely a starting point for professionals to kick-start their recommender projects with less pain.
Chapters 7 to 12 discuss clustering techniques using Apache Mahout. Chapter 7 gives a brief introduction to clustering with practical examples, and contains discussions of the different clustering algorithms available in Mahout.
Chapter 8 deals with preparing and representing data for the clustering task. Tips and tricks for converting raw data into vectors for clustering are discussed in a very lucid manner.
The ninth chapter goes into detail on the clustering algorithms in Mahout. The major algorithms covered are k-means clustering, centroid generation using canopy clustering, fuzzy k-means clustering, Dirichlet clustering, and topic modeling using LDA as a variant of clustering. There is a small case study on clustering news items using Apache Mahout; one of my project students undertook such a project for his MSc in CS.
The tenth chapter focuses on evaluating a clustering system: inspecting clustering output, evaluating the quality of clusters and improving it.
The eleventh chapter deals with producing a scalable clustering system with Mahout, giving good insight into the art of content clustering with two case studies. The twelfth chapter discusses some use cases of clustering with code examples, including clustering Twitter users and playing with last.fm data.
Chapters 13 to 16 discuss the technique of classification. Chapter 13 introduces classification, explaining it step by step with examples; the illustrations make the content more enjoyable and understandable for the reader. Chapter 14 deals with training a classifier system, explaining the task with a publicly available data set called the 20 Newsgroups data set; there is a discussion on selecting an algorithm for the classification task too. Ever since I came to know about Mahout, its classification techniques and algorithms are the ones I have used most. Chapter 16 has a wonderful discussion on deploying a classification system, giving practical insight into the pros and cons of developing and deploying a scalable classification system that can be benchmarked against the best performing existing systems.
The 17th chapter needs special mention. It is a case study named "Case study: Shop It To Me", and its discussion shows the real power of Apache Mahout through a practical project.
There are two appendices. Appendix A covers JVM tuning tips and tricks for deploying Hadoop/Mahout-based projects; it is useful even for core Java programmers. Appendix B gives insight into "Mahout Math" and some of the deeper math-related parts of Mahout.
The book is available from the Manning MEAP site, and three excerpts are available on the website along with sample code. This is a must-read for all machine learning and NLP developers and researchers. It is an excellent book, and I am very happy to have read, practiced and understood Apache Mahout in such detail. Kudos to Sean Owen, Robin Anil, Ted Dunning and Ellen Friedman.
For code samples and sample chapters visit http://www.manning.com/free/green_owen.html
Python 2.6 Text Processing Beginner's Guide by Jeff McNeil is one of the latest books from Packt Publishing. I received the review copy of this book about one and a half months ago, but due to a busy schedule I was not able to finish the review until now. The book gives good insight into different technical aspects of text processing and the use of Python standard and third-party libraries for the task. It is filled with lots of examples and practical projects. When I started my career in the Natural Language Processing domain, it would have taken me almost a year to gather the knowledge covered in this book. I am giving a somewhat detailed review of the book here.
The first chapter of this book offers some practical and interesting exercises, like implementing a cipher and some basic tricks with HTML. It also discusses how to set up a Python virtual environment for working with the examples in the book; this section is a nice, well-written one and gives a clear idea of how to set up virtual environments.
The second chapter deals with Python I/O. It narrates basic file operations with Python, and the use of a context manager (the with statement) for file processing is discussed. I have been using Python for text processing for the last three to four years, but only after reading this chapter did I discover the "fileinput" module for iterating over multiple files. The chapter also discusses how to access remote files and the StringIO module, and at the end there is a discussion of I/O in Python 3 too.
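The in-memory file objects the chapter covers are easy to demonstrate. In modern Python the StringIO class lives in the io module (in Python 2.6, the book's version, it lived in a StringIO module of its own):

```python
import io

# io.StringIO wraps a string in a file-like object -- handy for
# testing file-processing code without touching the disk.
buffer = io.StringIO("first line\nsecond line\n")
lines = [line.rstrip("\n") for line in buffer]
print(lines)  # ['first line', 'second line']

# It also works as a writable target:
out = io.StringIO()
out.write("hello, ")
out.write("world")
print(out.getvalue())  # hello, world
```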
The third chapter is about Python string services. It deals with string formatting, templating, modulo formatting and so on. Every concept is explained through the mini projects carried over from chapter two, and the chapter gives a comprehensive view of advanced string services in Python.
The fourth chapter is entitled Text Processing Using the Standard Library. It deals with topics like reading and writing CSV files, playing with application config files (.ini files), and working with JSON. The examples are a bit long, but worth practicing for better understanding.
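The CSV and JSON handling the chapter covers needs nothing beyond the standard library. Here is a small round-trip sketch with invented data (note that csv hands every value back as a string), using io.StringIO to stand in for real files:

```python
import csv
import io
import json

csv_text = "title,pages\nTaming Text,320\nMahout in Action,416\n"

# DictReader yields one dict per row, keyed by the header line.
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(rows[0]["title"])  # Taming Text

# The same rows serialise straight to JSON and back.
as_json = json.dumps(rows)
back = json.loads(as_json)
print(back[1]["pages"])  # 416  (a string -- csv does not guess types)
```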
The fifth chapter deals with one of the key aspects of text processing: regular expressions. It teaches the basic syntax of regular expressions in Python, and also discusses advanced features like grouping, lookahead and lookbehind assertions. The lookbehind operation is the trickiest part of working with regexes; I think only regex masters can use it effectively ;-). The chapter covers the basics of Unicode regular expressions too, and is filled with enough examples for every concept discussed.
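Lookahead and lookbehind are easier to grasp from a small example: both are zero-width assertions, matching a position without consuming the surrounding text.

```python
import re

text = "price: $42, discount: $7, code: X42"

# Lookbehind (?<=...) matches digits only when "$" precedes them,
# without including the "$" in the match itself.
dollar_amounts = re.findall(r"(?<=\$)\d+", text)
print(dollar_amounts)  # ['42', '7']

# Lookahead (?=...) matches a word only when ":" follows it,
# without consuming the ":".
labels = re.findall(r"\w+(?=:)", text)
print(labels)  # ['price', 'discount', 'code']
```

Note that Python requires lookbehind patterns to be fixed-width, which is part of what makes them tricky in practice.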
The sixth chapter deals with markup languages, discussing XML and HTML processing with Python libraries. The xml.dom.minidom, SAX, lxml and BeautifulSoup packages are covered with illustrative examples.
The seventh chapter is entitled Creating Templates. "Templating involves the creation of text files, or templates, that contain special markup. When a specialized parser encounters this markup, it replaces it with a computed value." The templating concept was quite new to me, but I got a good grounding in the topic from this chapter, which discusses libraries like Mako for the templating task.
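The quoted definition is easy to see in action with the standard library's own minimal templating tool, string.Template; engines like Mako layer logic and inheritance on top of the same substitution idea. The name and order number below are invented example values.

```python
from string import Template

# $name and $order_id are the "special markup" the parser replaces.
greeting = Template("Hello $name, your order #$order_id has shipped.")
message = greeting.substitute(name="Ada", order_id=1024)
print(message)  # Hello Ada, your order #1024 has shipped.
```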
The eighth chapter deals with localization (l10n) and encoding. If you are working with non-English data, this chapter is a must-read. It discusses character encoding, Unicode processing and Python 3; beyond the pure Python material, it gives good general insight into character encoding.
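The core distinction the chapter builds on is that encoding turns text into bytes and decoding turns bytes back into text; the sketch below uses Python 3 semantics, where text and bytes are separate types (Python 2.6, the book's version, blurred this line with its str/unicode pair).

```python
# A Python 3 str holds Unicode text; .encode() produces bytes.
text = "naïve café"
utf8_bytes = text.encode("utf-8")
print(len(text), len(utf8_bytes))  # 10 12 -- each accented char takes 2 bytes

# Decoding with the right codec is lossless:
print(utf8_bytes.decode("utf-8") == text)  # True

# Decoding with the wrong codec fails loudly:
try:
    utf8_bytes.decode("ascii")
except UnicodeDecodeError:
    print("ascii cannot decode non-ASCII bytes")
```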
The ninth chapter, Advanced Output Formats, is quite useful if you are trying to create output in PDF, CSV or Excel format. It discusses ReportLab, a PDF generation library for Python; the only disadvantage I found in ReportLab is its lack of complete Unicode support. The chapter also covers creating Excel files with the xlwt module, and finally handling the OpenDocument format with the ODFPy module. I used to only read Excel files from Python, but after going through this book I am able to write Excel output too.
The tenth chapter deals with advanced parsing and grammars, one of the key skills required of Python text processing people: creating custom grammars for parsing specific data. Throughout my career I have spent a lot of time training engineers to understand parsing and BNF grammars, and now I have a good pointer for my people to start with BNF and Python. The chapter also discusses the parsing modules in NLTK, my favorite Python library, and some advanced topics in PyParsing.
The eleventh and last chapter is the most interesting one in the book. It deals with searching and indexing. PyLucene is the best-known search indexing library in Python, but it is a wrapper around Apache Lucene; this chapter instead discusses another Python tool, Nucular. Practical examples of creating a search index and so on are given. This was my first time using Nucular, and I found it nice and easy compared to PyLucene, though I don't think it is superior to Lucene. I will play more with the tool and report back in another blog post.
There are two appendices. The first gives pointers to Python resources; the second contains the answers to the pop quizzes in the chapters.
I give this book 9 out of 10. If you are doing rigorous text processing, it is a must-have reference.
Packt Publishing has released a new book, "Python 2.6 Text Processing Beginner's Guide" by Jeff McNeil. I received a review copy of the book today and will post a review here soon. The book comes with lots of practical examples and tips.
Language : English
Paperback : 380 pages [ 235mm x 191mm ]
Release Date : December 2010
ISBN : 1849512124
ISBN 13 : 978-1-84951-212-1
Author(s) : Jeff McNeil
Python Text Processing with NLTK 2.0 Cookbook by Jacob Perkins is one of the latest books published by Packt in its Open Source series. The book is meant for people who have started learning and practicing the Natural Language Toolkit (NLTK). NLTK is an open source Python library for learning, practicing and implementing Natural Language Processing techniques, licensed under the Apache Software License. It is one of the most widely recommended toolkits for beginners in NLP to get their hands dirty, and it is part of the syllabus in many institutions around the globe where Natural Language Processing / Computational Linguistics courses are offered. Perkins's work is the second book published on NLTK; the first was written by core developers of NLTK, Steven Bird, Ewan Klein and Edward Loper, and published by O'Reilly. That book is a comprehensive introduction to the toolkit with basic Python lessons, and people who have gone through it will definitely like the new book by Perkins. It is a must-have desktop reference for students, professionals and faculty members interested in NLP, Computational Linguistics and NLTK. Perkins handles the topic in an elegant way; most people who have searched for NLTK tips will have come across the author's blog, and he maintains the same simplicity, explanation style and hands-on approach throughout the book, which helps the reader digest the topic with ease. The book is a collection of practical, working recipes related to NLTK.
The first chapter, "Tokenizing Text and WordNet Basics", deals with tokenizing text into words, sentences and paragraphs, along with tips and tricks for the WordNet module in NLTK; Perkins also discusses Word Sense Disambiguation (WSD) techniques here. One part readers often miss in WordNet, its 'ic' (information content) function, is covered too, and tips for extracting collocations from a corpus are also included. The second chapter, "Replacing and Correcting Words", discusses stemming, lemmatization and spelling correction, introducing another Python module, PyEnchant, for the spell checking technique; it also covers techniques like replacing negations with antonyms and collapsing repeating characters. The third chapter deals with corpora, mainly discussing how to load user-generated corpora into NLTK with the corpus readers implemented in NLTK. The most attractive part of this chapter is the discussion of a MongoDB backend for an NLTK corpus reader; MongoDB is a document-oriented database belonging to the NoSQL family, and this part will be very useful for NLP students and working professionals. The fourth chapter deals with POS tagging techniques, mainly training different POS taggers and using them; it is quite useful for people who would like to extend the functionality of NLTK for their projects, and for those interested in building POS taggers for languages other than English. Some of this chapter's content was published on the author's blog a year ago. Chapter 5 deals with chunking and chinking techniques in NLTK, and Named Entity Recognition and extraction techniques are discussed as well. It gives good insight into training the NLTK chunking module for custom chunking tasks; with the help of this chapter I was able to create a small named entity extraction script using some Indian names.
The sixth chapter, "Transforming Chunks and Trees", deals with verb form correction, plural-to-singular correction, word filtering and playing with tree structures. I have often seen people raise questions about handling tree data in NLTK; this chapter gives good insight into working with NLTK parse trees. The seventh chapter deals with the most wanted topic of the moment, text classification; some of this material also appeared as posts on Perkins's blog. There have been many requests on freelancing websites for text classification with NLTK, and I noticed that some of them received no bids at all. The chapter discusses the task in detail with all the classifier implementations available in NLTK: training an NLTK classifier is explained very clearly, and beyond training and classification, classifier evaluation and tuning are covered too. The eighth chapter is a revolutionary one, dealing with distributed data processing and handling large-scale data with NLTK. I was not able to fully work through all the code in this chapter (I did work out the code in the other chapters, which was quite exciting and contributed to my professional life). It will be really helpful for industry people looking to adopt NLTK in NLP projects; some of its basic ideas were also published on Perkins's blog. After Nitin Madnani's talk at the US Python Conference on corpus processing with Dumbo and NLTK, I think this is the only existing resource for practical large-scale data processing with NLTK. The ninth and last chapter is about parsing specific kinds of data with Python. It deals with some Python modules beyond NLTK, discussing URL extraction, timezone lookup, character conversion and so on; it is good for people who work with web data processing tasks like harvesting.
There is an appendix containing the Penn Treebank tag set, listing all tags with their frequencies in the treebank corpus.
For the last three or four years I have been using NLTK to teach and to develop prototypes of NLP applications, and I was very much impressed as I went through each of the recipes in this book. The author provides UML diagrams for the NLTK modules, which help the reader get good insight into the functionality of each one. This will be a good book not only for students and practitioners but also for people who would like to contribute to the NLTK project, and it will help students in NLP and Computational Linguistics do their projects with NLTK and Python. I give the book 9 out of 10. Natural Language Processing students, teachers and professionals: hurry and bag a copy.
Thanks to Packt Publishing for the review copy of the book.
Packt Publishing has released a new book, 'Python Text Processing with NLTK 2.0 Cookbook' by Jacob Perkins (http://streamhacker.com). I received a review copy of the book today and will post a review here soon. The book comes with lots of practical examples and tips.
An extracted chapter from the book is available for download at https://www.packtpub.com/sites/default/files/3609-chapter-3-creating-custom-corpora.pdf
A new book has come out in the 'Head First' series with Python language examples.
Head First Programming
A learner's guide to programming using the Python language
By David Griffiths, Paul Barry
Publisher: O'Reilly Media
Released: November 2009
Pages: 448 (est.)
Waiting for the book to be launched in the Indian market.