Month: November 2011

Prior Work on Intentions

We have been exploring intention analysis for some time now, and we are pleased to announce the launch of the first-ever commercial API for broad-based intention analysis, called Vakintent.

Here is a demo of the Vakintent Intention Analysis API:  Demonstration of VakIntent, the Intention Analysis API from Aiaioo Labs


Intention analysis is the identification of intentions in text (for example, the intention to purchase, to sell, to complain, to accuse, or to inquire) in sources such as incoming customer messages or call-center transcripts.


Intention Analysis has already given us some evidence of its usefulness.

In July 2011, we used intention analysis to study the GooglePlus launch.  We looked in particular at quit intentions, to see how frequently people were threatening to quit Facebook over time, and saw how the number dropped sharply once people got to try GooglePlus (once the by-invite-only period ended).

This was a powerful observation, because in just four days, we could tell that GooglePlus couldn’t replace Facebook, at least not yet. Here is the study:


The work that intention analysis is based on goes as far back as 1962 when J. L. Austin noted that not all utterances are statements whose truth and falsity are at stake, and that there was a class of utterances like “I pronounce you husband and wife” that are actions [taken from Winograd, 1987].

(I recently found the Winograd paper on his website.)

In 1975, Searle identified the following broad categories of illocutionary (causing an action to happen) speech acts [from Winograd, 1987]:

  • Assertive – Committing the speaker to the truth of a proposition
  • Directive – Attempting to get the listener to do something
  • Commissive – Committing the speaker to a course of action
  • Declaration – Bringing about something (e.g., pronouncing someone married)
  • Expressive – Expressing a psychological state

Interestingly, the expressives include expressions of opinion, which correspond to the modern-day task of sentiment analysis.
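To make the taxonomy concrete, here is a toy keyword-based tagger for Searle's five categories. The cue phrases are illustrative assumptions on my part, not a serious classifier and not how any production system works:

```python
# Toy tagger for Searle's five illocutionary categories.
# The cue phrases below are hypothetical examples, chosen only to illustrate
# the taxonomy; a real system would need far richer features.
SPEECH_ACT_CUES = {
    "assertive":   ["i believe", "it is true", "in fact"],
    "directive":   ["please", "could you", "you should"],
    "commissive":  ["i will", "i promise", "we shall"],
    "declaration": ["i pronounce", "i hereby declare"],
    "expressive":  ["i love", "i hate", "thank you"],
}

def classify_speech_act(sentence):
    s = sentence.lower()
    for act, cues in SPEECH_ACT_CUES.items():
        if any(cue in s for cue in cues):
            return act
    return "unknown"

print(classify_speech_act("I promise to deliver the report tomorrow."))  # commissive
print(classify_speech_act("Please send me the invoice."))                # directive
```

Even this crude sketch shows why the categories matter for text analytics: a commissive in a customer email ("I will cancel my account") carries very different business value from an expressive ("I love this phone").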

Prior Work

Cognizant Technologies

There was a paper at ACL 2010 titled “Wishful Thinking – Finding Suggestions and ‘Buy’ Wishes from Product Reviews” by Krishna Bhavsar et al. of Cognizant Technologies.

Lampert and Dale

Another recent attempt to build computer systems capable of analysing intention was made by Robert Dale and Andrew Lampert at Macquarie University. A paper that I’d recommend is their work on detecting emails containing requests for action: “Andrew Lampert, Robert Dale and Cécile Paris [2010] Detecting Emails Containing Requests for Action. Pages 984–992 in Proceedings of NAACL 2010, 1st–6th June 2010, Los Angeles, USA”. Our own work leads us to believe that detecting directives is considerably harder than detecting other intentions, so what they’ve done in this project is quite impressive.


WisdomTap has a very interesting buy-intention offering. Their value proposition is: “Your Customers announce their intent to buy by asking for product and service recommendations on Twitter.  We find customers who need your products and services.  We connect you to your customers at the right time.”


Twitchell et al. have studied “Using Speech Act Theory to Model Conversations for Automated Classification and Retrieval”.

Carnegie Mellon

CMU has released speech act corpora through the Jangada and Ciranda projects.

Vakintent Demonstration Consoles

Here are some links to demos:

  • Vakintent Intention Demo – Demonstration of VakIntent, the Intention Analysis API from Aiaioo Labs
  • Vaksent Sentiment Demo – Demonstration of VakSent, the Sentiment Analysis API from Aiaioo Labs

Case studies:

  • Competitive Analysis

Vakintent API

The Vakintent API offered by Aiaioo Labs can identify 11 intentions, the objects of those intentions and their holders.
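The output of such an API can be pictured as a triple: the intention, its holder, and its object. The structure and the toy extraction rule below are purely hypothetical illustrations of that idea, not the actual Vakintent interface:

```python
from dataclasses import dataclass

# Hypothetical annotation structure: an intention, who holds it,
# and what it is about. Field names are my own invention.
@dataclass
class IntentionAnnotation:
    intention: str  # e.g. "purchase", "sell", "complain", "inquire"
    holder: str     # who holds the intention
    object: str     # what the intention is directed at

# A toy pattern-matched extractor for one construction
# (illustrative only; not the real engine).
def toy_extract(sentence):
    if " wants to buy " in sentence:
        holder, rest = sentence.split(" wants to buy ", 1)
        return IntentionAnnotation("purchase", holder.strip(), rest.strip(" ."))
    return None

print(toy_extract("Alice wants to buy a laptop."))
# IntentionAnnotation(intention='purchase', holder='Alice', object='a laptop')
```

Identifying the holder and object, and not just the intention label, is what lets a downstream application route "Alice wants to buy a laptop" to a laptop vendor rather than merely counting purchase intentions.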

Please feel free to write to me for more information.

Vaksent API for Sentiment Analysis

Aiaioo Labs has just released an API for fine-grained sentiment analysis.

A demonstration of the Vaksent Sentiment Analysis Engine is available here:

The key features of the sentiment analysis system are: a) identification of the holder of the opinion (who holds that opinion), and b) identification of the object of the sentiment (what exactly the sentiment is expressed about).


We use a cascade of algorithms to identify sequentially 1) sentiment-conveying phrases, 2) entities (to identify objects being spoken about), 3) relations (to identify which sentiment applies to which entity) and 4) negations (to identify which relations are negated). This combination makes for a very sophisticated sentence level and entity level analysis of sentiment.
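The four-stage cascade can be sketched in miniature. Everything below is a drastically simplified assumption of mine (tiny word lists, nearest-entity attachment, parity-counting of negators); the production system is of course far more sophisticated:

```python
import re

# Minimal sketch of the four-stage cascade:
# 1) sentiment phrases -> 2) entities -> 3) relations -> 4) negations.
# All lexicons here are toy assumptions.
POS_WORDS = {"beautiful", "great"}
NEG_WORDS = {"bad", "terrible"}
ENTITIES  = {"car", "phone"}
NEGATORS  = {"not", "never"}

def analyse(sentence):
    tokens = re.findall(r"\w+", sentence.lower())
    # Stage 1: find sentiment-conveying words and their base polarity
    sentiments = [(i, +1 if t in POS_WORDS else -1)
                  for i, t in enumerate(tokens) if t in POS_WORDS | NEG_WORDS]
    # Stage 2: find entities being spoken about
    entities = [(i, t) for i, t in enumerate(tokens) if t in ENTITIES]
    results = {}
    for si, polarity in sentiments:
        # Stage 4: flip polarity once per preceding negator
        flips = sum(1 for t in tokens[:si] if t in NEGATORS)
        polarity *= (-1) ** flips
        # Stage 3: attach the sentiment to the nearest entity
        if entities:
            _, ent = min(entities, key=lambda e: abs(e[0] - si))
            results[ent] = "positive" if polarity > 0 else "negative"
    return results

print(analyse("This is not a bad car."))  # {'car': 'positive'}
```

Even this sketch shows why the stages must run in order: you cannot decide which sentiment a negation flips until you know which sentiment attaches to which entity.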

The main goal of this system was roughly domain-independent behaviour (no imbalance in performance when used on financial data, product data or entertainment). Such a balance is quite hard to achieve (some measurements suggest that human annotators agree with each other only 79% of the time when attempting to identify the sentiment of sentences/entities in certain types of text).


The accuracies that we measured for different domains are as follows.

Domain of Entertainment:

Accuracy = 0.7103

Precision = {negative=0.7222, positive=0.6997}

Recall = {negative=0.6837, positive=0.737}

F-Score = {negative=0.7027, positive=0.7181}

Tested on a total of 10662 sentences.

This was evaluated using the Bo Pang data set. As you can see, the errors are roughly balanced on the positive and the negative side, giving what we hope is a fairly unskewed error curve. This allows averaging to work as a strategy to cancel out noise.
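The per-class F-scores above can be checked directly from the precision and recall figures, since F1 is their harmonic mean. The small discrepancies in the last decimal place come from the reported precision/recall values being rounded:

```python
# F1 is the harmonic mean of precision and recall.
def f1(p, r):
    return 2 * p * r / (p + r)

reported  = {"negative": 0.7027, "positive": 0.7181}
precision = {"negative": 0.7222, "positive": 0.6997}
recall    = {"negative": 0.6837, "positive": 0.7370}

for label in reported:
    computed = f1(precision[label], recall[label])
    # Small differences are expected: P and R are rounded to 4 decimals.
    assert abs(computed - reported[label]) < 0.002
    print(label, round(computed, 4))  # negative 0.7024, positive 0.7179
```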

Domain of Products:

Accuracy = 0.7266

Precision = {negative=0.5963, positive=0.8462}

Recall = {negative=0.7807, positive=0.6953}

F-Score = {negative=0.6823, positive=0.7671}

Tested on a total of 3731 sentences.

The data sets used were the first two Bing Liu corpora, covering mostly electronic products. We have roughly the same performance again on products, but the curve is now slightly skewed.

Domain of Finance (evaluation incomplete):

Accuracy = 0.6896

Precision = {negative=0.7037, positive=0.6666}

Recall = {negative=0.7755, positive=0.5789}

F-Score = {negative=0.7387, positive=0.6212}

Tested on a total of 87 sentences.

We have roughly the same performance again on finance, but the evaluation data set is very small. We’re working on performing a more reliable evaluation.


Here is what Vaksent says about two sentences provided as examples:

I {- deny -} that [- it can never [+ be said that this is not [- a {!+ beautiful +!} ( car ) -] +] -] . = [ negative ]

( John ) and not [- ( Bruce ) -] said that this is not [- a {!- bad -!} ( car ) -] . = [ positive ]
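One way to read the bracket notation above is that each enclosing negation scope flips the polarity of the sentiment word it contains, so an odd number of flips inverts the sentiment. The parity rule below is my own simplified reading of the examples, not the engine's actual scope-resolution logic:

```python
# Toy parity rule (an assumed reading of the bracket notation above):
# each enclosing negation scope flips the base polarity once.
def resolve(base_polarity, negation_depth):
    return base_polarity * (-1) ** negation_depth

# "I deny that it can never be said that this is not a beautiful car."
# "beautiful" is positive but sits inside three negation scopes
# (deny, never, not): flipped three times -> negative overall.
print("negative" if resolve(+1, 3) < 0 else "positive")  # negative

# "... this is not a bad car."
# "bad" is negative inside one negation scope: flipped once -> positive.
print("negative" if resolve(-1, 1) < 0 else "positive")  # positive
```

This is why stage 4 of the cascade matters: without counting negation scopes, "not a bad car" would be misread as negative.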