Vaksent API for Sentiment Analysis

Aiaioo Labs has just released an API for fine-grained sentiment analysis.

A demonstration of the Vaksent Sentiment Analysis Engine is available here: http://www.aiaioo.com:8080/annotator-0.1/automation/demoView/1

The key features of the sentiment analysis system are: a) identification of the holder of the opinion (who expresses it), and b) identification of the object of the sentiment (what exactly the sentiment is about).

Technology

We use a cascade of algorithms that sequentially identifies 1) sentiment-conveying phrases, 2) entities (the objects being spoken about), 3) relations (which sentiment applies to which entity) and 4) negations (which relations are negated). This combination makes for a very sophisticated sentence-level and entity-level analysis of sentiment.
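
To make the cascade concrete, here is a minimal sketch in Python. Everything in it (the lexicons, the entity list, and all function and class names) is an illustrative assumption of ours, not the actual Vaksent implementation or API; the real system handles each stage with far more sophistication.

    import re
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Sentiment:
        phrase: str
        polarity: int            # +1 positive, -1 negative
        target: Optional[str] = None
        negated: bool = False

    # Toy stand-ins for the real lexicons and entity recognizer.
    POSITIVE = {"beautiful", "good"}
    NEGATIVE = {"bad", "terrible"}
    NEGATORS = {"not", "never", "deny"}
    ENTITIES = {"car", "phone"}

    def analyze(sentence: str) -> list:
        tokens = re.findall(r"\w+", sentence.lower())
        results = []
        for i, tok in enumerate(tokens):
            # Stage 1: find sentiment-conveying phrases.
            if tok not in POSITIVE and tok not in NEGATIVE:
                continue
            s = Sentiment(tok, +1 if tok in POSITIVE else -1)
            # Stages 2 and 3: attach the nearest following entity as target.
            s.target = next((t for t in tokens[i + 1:] if t in ENTITIES), None)
            # Stage 4: naive negation check; flip polarity if a negator
            # precedes the phrase (the real system resolves nested scopes).
            if any(t in NEGATORS for t in tokens[:i]):
                s.polarity *= -1
                s.negated = True
            results.append(s)
        return results

    print(analyze("This is not a bad car."))
    # [Sentiment(phrase='bad', polarity=1, target='car', negated=True)]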

The main goal for this system was roughly domain-independent behaviour: no imbalance in performance across financial, product and entertainment data. Such a balance is hard to achieve; some measurements suggest that human annotators agree with each other only 79% of the time when identifying the sentiment of sentences or entities in certain types of text.

Evaluation

The accuracies that we measured for different domains are as follows.

Domain of Entertainment:

Accuracy = 0.7103

Precision = {negative=0.7222, positive=0.6997}

Recall = {negative=0.6837, positive=0.7370}

F-Score = {negative=0.7027, positive=0.7181}

Tested on a total of 10662 sentences.

This was evaluated on the Bo Pang movie-review data set. As you can see, the errors are roughly balanced between the positive and negative classes, giving what we hope is a fairly unskewed error distribution. This allows averaging to work as a strategy for cancelling out noise.
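
For reference, the per-class figures above follow the standard definitions of precision, recall and F-score; a short Python sketch of those formulas (standard definitions, not Vaksent-specific code):

    def precision(tp: int, fp: int) -> float:
        return tp / (tp + fp)

    def recall(tp: int, fn: int) -> float:
        return tp / (tp + fn)

    def f_score(p: float, r: float) -> float:
        # Harmonic mean of precision and recall.
        return 2 * p * r / (p + r)

    # Sanity check against the negative-class entertainment numbers:
    print(round(f_score(0.7222, 0.6837), 4))
    # 0.7024 (the reported 0.7027 was presumably computed from
    # unrounded precision and recall values)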

Domain of Products:

Accuracy = 0.7266

Precision = {negative=0.5963, positive=0.8462}

Recall = {negative=0.7807, positive=0.6953}

F-Score = {negative=0.6823, positive=0.7671}

Tested on a total of 3731 sentences.

The data sets used were the first two Bing Liu corpora, which cover mostly electronic products. Performance on products is roughly the same, but the error distribution is now slightly skewed: precision is higher on the positive class and recall is higher on the negative class.

Domain of Finance (evaluation incomplete):

Accuracy = 0.6896

Precision = {negative=0.7037, positive=0.6666}

Recall = {negative=0.7755, positive=0.5789}

F-Score = {negative=0.7387, positive=0.6212}

Tested on a total of 87 sentences.

Performance on finance is again roughly the same, but with only 87 sentences the evaluation data set is very small and the estimate is noisy. We're working on a more reliable evaluation.
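
To put "very small" in perspective, here is a quick back-of-the-envelope calculation of our own (not part of the original evaluation): a normal-approximation 95% confidence interval for an accuracy of 0.6896 measured on 87 sentences.

    import math

    p, n = 0.6896, 87
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"{p:.4f} +/- {half_width:.4f}")   # 0.6896 +/- 0.0972

An interval of roughly 0.59 to 0.79 is too wide to compare this domain meaningfully with the others, hence the caveat above.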

Examples

Here is what Vaksent (http://www.aiaioo.com:8080/annotator-0.1/automation/demoView/1) says about two example sentences:

I {- deny -} that [- it can never [+ be said that this is not [- a {!+ beautiful +!} ( car ) -] +] -] . = [ negative ]

( John ) and not [- ( Bruce ) -] said that this is not [- a {!- bad -!} ( car ) -] . = [ positive ]
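
In the first sentence, three stacked negations (deny, never, not) apply to the positive phrase "beautiful", so the overall sentiment comes out negative; in the second, a single negation applies to the negative phrase "bad", so it comes out positive. A toy illustration of this parity behaviour in Python (our own sketch; the actual engine resolves negation scopes structurally rather than by counting):

    # An odd number of enclosing negations flips the base polarity.
    def resolve(base_polarity: int, negation_count: int) -> int:
        return base_polarity * (-1) ** negation_count

    print(resolve(+1, 3))   # beautiful under deny/never/not -> -1 (negative)
    print(resolve(-1, 1))   # bad under not                  -> +1 (positive)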
