
Experiments with NoSQL databases: CouchDB

I have been reading about NoSQL databases for a long time. Occasionally I have used NoSQL databases like Apache CouchDB and Apache Cassandra for analytics purposes (some minor projects) with Python. This time I thought: why not try something with Java + NoSQL? So I created a small project to play with. The idea of the project is to store Twitter search results in CouchDB. I used the following operating system, programming language and libraries in this project.

        Operating System      :  Fedora 16 (Verne)
        Programming Language  :  Java (JDK 1.6.0_29)
        IDE                   :  Eclipse 3.7.1
        Apache CouchDB        :  1.0
        External Libraries    :  Couchdb4J
                                 Twitter4J
                                 Apache Commons (HttpClient, Logging, Codec, Collections, BeanUtils)
                                 Json-lib, EZMorph

Installing CouchDB
To install CouchDB, open a terminal and type the command:
    $su -c 'yum -y install couchdb'

After successful installation, start the CouchDB server by issuing this command in the terminal:
    $su -c '/etc/init.d/couchdb start'

Now your CouchDB instance will be up and running. You can check this by opening CouchDB Futon in the browser by navigating to http://localhost:5984/_utils/. If everything is fine you will see the Futon interface.
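
You can also do a quick sanity check from the terminal. Assuming the default port 5984, CouchDB answers with a small JSON greeting (the exact version string will vary with your install):

    $curl http://localhost:5984/
    {"couchdb":"Welcome","version":"1.0.x"}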

Let's start our project.
First, create a function that connects to the CouchDB instance and creates and returns a database with the given name. If the database already exists, the function should simply return it.

    /**
     * Connects to the local CouchDB instance and returns the database with
     * the given name, creating it first if it does not already exist.
     * (Requires com.fourspaces.couchdb.Session and Database from Couchdb4J,
     * plus java.util.List.)
     *
     * @param strDBName name of the database to open or create
     * @return dbCouchDB a handle to the requested database
     */

    public static Database connectCouchDB(String strDBName) {
        Database dbCouchDB = null;
        // Session against the default local CouchDB host and port
        Session dbCouchDBSession = new Session("localhost", 5984);
        List<String> databases = dbCouchDBSession.getDatabaseNames();
        if (databases.contains(strDBName)) {
            dbCouchDB = dbCouchDBSession.getDatabase(strDBName);
        } else {
            dbCouchDBSession.createDatabase(strDBName);
            dbCouchDB = dbCouchDBSession.getDatabase(strDBName);
        }

        return dbCouchDB;

    }
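
As a quick check, calling the function should create the database on first use and reuse it afterwards. A minimal sketch, assuming Couchdb4J's Database exposes getName() (the database name "testdb" is just an example):

    Database db = connectCouchDB("testdb");
    System.out.println("Using database: " + db.getName());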

   

Now we can create a function that runs a query against Twitter Search and returns the matching tweets.

    /**
     * Runs the given query against the Twitter Search API.
     * (Requires the twitter4j classes Twitter, TwitterFactory, Query,
     * QueryResult and TwitterException.)
     *
     * @param strQuery the search string, e.g. "java"
     * @throws TwitterException if the search request fails
     * @return queryResult the search result from Twitter
     */

    public static QueryResult getTweets(String strQuery)
            throws TwitterException {
        Twitter twitter = new TwitterFactory().getInstance();
        Query query = new Query(strQuery);
        QueryResult queryResult = twitter.search(query);
        return queryResult;

    }
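
A minimal way to try this function on its own is to print the results; the query string here is arbitrary:

    for (Tweet tweet : getTweets("couchdb").getTweets()) {
        System.out.println(tweet.getFromUser() + " : " + tweet.getText());
    }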


To insert the tweets into the CouchDB document collection (database), each tweet has to be converted to a document. Let's create a function to convert an individual tweet into a CouchDB document.

    /**
     * Converts a single tweet into a CouchDB document, using the tweet id
     * as the document id.
     *
     * @param tweet the tweet to convert
     * @return couchDocument the CouchDB document holding the tweet fields
     */

    @SuppressWarnings("deprecation")
    public static Document tweetToCouchDocument(Tweet tweet) {

        Document couchDocument = new Document();

        couchDocument.setId(String.valueOf(tweet.getId()));
        couchDocument.put("Tweet", tweet.getText());
        couchDocument.put("UserName", tweet.getFromUser());
        couchDocument.put("Time", tweet.getCreatedAt().toGMTString());
        // Note: getSource() returns the client the tweet was posted from;
        // it is stored here under the key "URL"
        couchDocument.put("URL", tweet.getSource());

        return couchDocument;

    }
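
In Futon, a stored tweet should then look roughly like this (the values below are made-up examples, and the _rev field is added by CouchDB itself):

    {
        "_id": "152263562828371968",
        "_rev": "1-...",
        "Tweet": "Playing with CouchDB from Java",
        "UserName": "someuser",
        "Time": "19 Jan 2012 10:15:00 GMT",
        "URL": "web"
    }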


Now we can write the Twitter Search results to the CouchDB document collection with the following function.

    /**
     * Fetches tweets for the given query and saves each one as a document
     * in the given CouchDB database.
     *
     * @param strTweetQury the Twitter search query
     * @param strdbName name of the target CouchDB database
     * @throws TwitterException if the search request fails
     */

    public static void writeTweetToCDB(String strTweetQury, String strdbName)
            throws TwitterException {
        QueryResult tweetResults = getTweets(strTweetQury);
        Database dbInstance = connectCouchDB(strdbName);
        for (Tweet tweet : tweetResults.getTweets()) {
            Document document = tweetToCouchDocument(tweet);
            dbInstance.saveDocument(document);
        }

    }

Now it is time to execute our project. Add the following lines to main() (note that main must declare or handle TwitterException) and run the project.

        String query = "java";
        String dbName = "javatweets";
        System.out.println("Started");
        writeTweetToCDB(query, dbName);
        System.out.println("Finished");


That is all!

The entire code is available in my Bitbucket repo.

Happy Hacking!


Lucene IndexWriter API changes from 2.x to 3.x

The 3.x version of Lucene introduces a lot of changes in its API. In 2.x we used the IndexWriter API like this:

        Directory dir = FSDirectory.open(new File(indexDir));
        IndexWriter writer = new IndexWriter(dir,
                new StandardAnalyzer(Version.LUCENE_30), true,
                IndexWriter.MaxFieldLength.UNLIMITED);


I used the same code with the 3.x version in one of my projects. The tool worked fine, but my IDE (Eclipse) warned that some of these calls are deprecated. So I decided to dig into the new API, and I found that the code above has to be changed to this:

        Directory indexDir = FSDirectory.open(new File(strDirName));
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_CURRENT);
        IndexWriterConfig confIndexWriter =
                new IndexWriterConfig(Version.LUCENE_CURRENT, analyzer);
        IndexWriter writer = new IndexWriter(indexDir, confIndexWriter);


If you would like the equivalent of IndexWriter.MaxFieldLength.UNLIMITED, the IndexWriterConfig should look like this:

        IndexWriterConfig idxconfa = new IndexWriterConfig(Version.LUCENE_30,
                new LimitTokenCountAnalyzer(
                        new StandardAnalyzer(Version.LUCENE_30), 1000000000));

The int 1000000000 is used as the maximum token limit here; Integer.MAX_VALUE is the largest limit you can set.
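
To put the pieces together, here is a minimal, self-contained 3.x-style indexing sketch; the index path and the field name are only examples:

        import java.io.File;

        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.store.Directory;
        import org.apache.lucene.store.FSDirectory;
        import org.apache.lucene.util.Version;

        public class IndexerDemo {
            public static void main(String[] args) throws Exception {
                // Open (or create) an index directory on disk
                Directory dir = FSDirectory.open(new File("/tmp/demo-index"));
                IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_30,
                        new StandardAnalyzer(Version.LUCENE_30));
                IndexWriter writer = new IndexWriter(dir, conf);

                // Index a single document with one analyzed, stored field
                Document doc = new Document();
                doc.add(new Field("contents", "Hello Lucene 3.x",
                        Field.Store.YES, Field.Index.ANALYZED));
                writer.addDocument(doc);

                writer.close();
            }
        }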


Taming Text : Review

We are living in the era of the Information Revolution. Every day a vast amount of information is created and disseminated over the World Wide Web (WWW). Even though each piece of information published on the web is useful in some way, we often need to identify and extract the relevant or useful parts. Such information extraction includes identifying Person Names, Organization Names and so on, finding the category of a text, identifying the sentiment of a tweet, etc. Processing large amounts of text data from the web is a challenging task because there is an information overflow. As more information appears, there is a demand for smart and intelligent processing of text data. The field of Text Analytics has attracted the attention of developers around the globe, and many practical as well as theoretical books have been published on the topic.

This book, "Taming Text", written by Grant S. Ingersoll, Thomas S. Morton and Andrew L. Farris, is an excellent resource for Text Analytics developers and researchers interested in learning Text Analytics. The book focuses on practical Text Analytics techniques like Classification, Clustering, String Matching, Searching and Entity Identification. It provides easy-to-follow examples using well-known Open Source Text Analytics tools like Apache Mahout, Apache Lucene, Apache Solr, OpenNLP etc. The entire book is based on the authors' experience contributing to the relevant Open Source tools, their hands-on practice and their industry exposure. It is a must-read for Text Analytics developers and researchers. Given the increasing importance of Text Analytics, this book can serve as a handbook for budding Text Analytics developers and industry practitioners. It can definitely be used in Natural Language Processing, Machine Learning and Computational Linguistics courses.

Chapter 1: Getting Started Taming Text
The first chapter of the book introduces what "taming text" means. The authors give a list of challenges in text processing with brief explanations. The chapter is mostly introductory stuff.

Chapter 2: Foundations of Taming Text
This chapter gives a quick warm-up on your high school English grammar. Starting from words, the authors present the essential linguistic concepts required for text processing. I think "Taming Text" may be the first technical book which gives a good warm-up on the basics of language and grammar. The chapter gives a detailed introduction to words, parts of speech, phrases and morphology. This introduction is sufficient to capture the essential linguistic aspects of Text Processing for a developer. The second part of this chapter deals with basic text processing tasks like tokenization, sentence splitting, Part of Speech Tagging (POS Tagging) and Parsing. Code snippets for each of the tasks are given in the chapter, and all the code examples are narrated with the tool OpenNLP. The chapter also covers the basics of handling different file formats using Apache Tika. Altogether it is a step-by-step intro to the preliminaries of Text Processing.

Chapter 3: Searching
This chapter introduces the art of Search. It gives a brief but narrative description of the Search mechanism and the scenes behind the curtain. The chapter discusses the basics of Search with the help of Apache Solr. There is an interesting discussion on search evaluation, search performance enhancements and page rank too. The chapter gives a detailed list of Open Source search engines, but I think the authors forgot to add "Elasticsearch" to the list. I hope it may be added in the final print version of the book.

Chapter 4: Fuzzy String Matching
Everybody might have wondered how the "Did you mean:" feature in Google or any other search engine works. Long ago I saw a question on Stack Overflow asking about the availability of source code for the "Did you mean:" feature (or something similar, I think). If you wonder how this feature works, this chapter will give you enough knowledge to implement something similar. There is a simple discussion of different fuzzy string matching algorithms with code samples, and there are practical examples of how to implement the "Did you mean" and type-ahead (auto-suggest) utilities on Apache Solr. Overall, this chapter gives a solid introduction to, and hands-on experience with, Fuzzy String Matching.

Chapter 5: Identifying People, Places and Things
Diving deeper into the text processing ocean, the authors narrate many deeper concepts in Text Processing starting from this chapter. The main focus of this chapter is Named Entity Identification (NER), one of the fundamental tasks in Information Extraction and Retrieval. The chapter gives a good introduction to the task of Named Entity Identification along with code samples using OpenNLP. The code samples will help you get your hands dirty. There is a section which deals with how to train OpenNLP to adapt to a new domain; this will be one of the most useful tips for working professionals. The only thing which I feel is missing is a mention of GATE and Apache UIMA. Both tools are famous for their capability to accomplish the NER task.

Chapter 6: Clustering Text
The sixth chapter mainly deals with Clustering. "Clustering is an unsupervised (i.e. no human intervention required) task that can automatically put related content into buckets" [taken from the book "Taming Text"]. The initial part of this chapter narrates clustering with reference to real-world applications. A decent discussion of clustering techniques and clustering evaluation is there as well. Code examples for clustering are given in this chapter; Apache Solr, Apache Mahout and Carrot2 are used to provide practical examples.

Chapter 7: Classification, Categorization and Tagging
The seventh chapter deals with document classification. As in the other chapters, there is a reasonable discussion of document classification techniques. This chapter will teach you how to perform document classification with Apache Lucene, Apache Solr, Apache Mahout and OpenNLP. There is an interesting project called a 'tag recommender' in this chapter. The only hiccup I faced with this chapter is the "TT_HOME" environment variable which is used throughout the book; I think the authors forgot to mention how to set TT_HOME. I was familiar with Apache Mahout, so there was no issue with the MAHOUT_HOME environment variable, but a total newbie will find it difficult to spot the TT_HOME and MAHOUT_HOME used in the code samples. A little light on setting these variables would help readers a lot. I think this will be included in the final copy (I am reading a MEAP version).

Chapter 8: An Example Application: Question Answering

This chapter gives hands-on experience in Taming Text. The entire chapter is dedicated to building a Question Answering project using the techniques discussed in all the previous chapters: a simple get-your-hands-dirty-by-taming-text chapter. Here too you will be caught by the TT_HOME ghost.

Chapter 9: Untamed Text: Exploring the Next Frontier

The last chapter, "Untamed Text: Exploring the Next Frontier", mentions other areas in Text Processing such as Semantics, Pragmatics, Sentiment Analysis, etc. A brief narration of each of these fields is included in this chapter, along with lots of pointers to useful tools for advanced Text Processing tasks like Text Summarisation and Relation Extraction.

Conclusion
Grant S. Ingersoll, Thomas S. Morton and Andrew L. Farris have done a nice job authoring this book with lucid explanations and practical examples for different Text Processing challenges. With the help of simple and narrative examples, the authors demonstrate how to solve real-world text processing challenges using Free and Open Source tools. The algorithm discussions in the book are simple enough that even a newbie can follow the concepts without many hiccups. It is a good desktop reference for people who would like to start with Text Processing, and it provides comprehensive, hands-on experience. So grab a copy soon and be ready for Big Data Analysis.

Free and Open Source Tools Discussed in the Book
Apache Solr
Apache Lucene
Apache Mahout
Apache OpenNLP
Carrot2

Disclaimer: I received a review copy of the book from Manning.
