Category: Machine Learning

Mechanical Consciousness

Mankind has long attempted to explain consciousness: one’s awareness of one’s own existence, of the world we live in, and of the passage of time.  And mankind has believed, for just as long, that consciousness extends beyond death and the destruction of the body.

Most explanations of consciousness have tended to rely on religion, and on philosophical strains associated with religion.  Possibly as a result, there has been a tendency to explain consciousness as being caused by a “soul” which lives on after death and in most traditions gets judged for its actions and beliefs during its time of residence in the body.

In this article, it is proposed that consciousness can have a purely mechanical origin.

The proposal is merely conjecture, but I provide observations that support it (though they do not prove it) and that, I hope, render it plausible.  The explanatory power of the model is also briefly explored.

It is also proposed that the working of the human mind is similar to that of many machine learning models in that they share certain limitations.

 

Preliminaries

First, let me define consciousness.  Consciousness of something is knowledge of the presence or existence of that thing (of time, of ourselves, or of the world around us).

I argue that consciousness requires at the very least what we call “awareness” (that is, being able to sense directly or indirectly what one is conscious of).

Claim:  If I were not aware of something, I wouldn’t be conscious of it.

Argument: If all humanity lived underground for all time and never saw the sky, we would not be aware of the existence of the sky either by direct experience or by hearsay.  So, we couldn’t be conscious of it.  So, it is only when we are aware of the existence of something that we are conscious of it.

So, we have established a minimum requirement for consciousness – and that is “awareness” (being able to sense it).

But does consciousness require anything more than awareness?

The ability to reason and to predict behavior are things the human mind is capable of.

But are they required for consciousness?

Claim:  Reasoning is not required for consciousness.

Argument:  I argue that reasoning is not required because one cannot reason about something that one is not aware of the existence or presence of.  So, anything that one reasons about is something that one has registered the presence of in some manner, in other words, that one is conscious of.

Claim:  Prediction of the behavior of something is not required for consciousness.

Argument:  Prediction of the future behaviour of a thing is not possible without observation over time of how that thing behaves.  So observation (and consciousness) precedes prediction.

Yann LeCun argues that “common sense” is the ability to predict how something might behave in the future (if its future state is not completely random).  If we accept that definition, we might say that common sense builds on consciousness, not the other way around.

So, it appears that consciousness (knowledge of the existence of something) requires the bare minimum of awareness through the senses, and does not require reasoning or the ability to predict.

 

Development

The next question to consider is whether awareness constitutes consciousness or if there is more to it.

Claim:  There is more to consciousness than the signals that our senses send to the brain (awareness).

Argument:  The signals sent to the brain are analogous to signals that are present in completely inanimate things.  A camera has a sensor that records images of the outside world.  Even a pin-hole camera senses the outside world upon the wall on which the image of the sensed world is cast.  Even a shadow can be considered to be a “sensing” of the object that casts the shadow.  That does not imply consciousness.  There must be something else in animate “living” things that produces consciousness.

What is that something extra that is over and above what our senses record?

I believe that the extra thing that constitutes consciousness is the ability to create a model of what we sense and remember it (keep it in memory).

By “create a model”, I mean store a representation of what is sensed in some kind of memory so that what is sensed can be reproduced in some medium possibly at a later stage.

The model cannot be reproduced if it is not stored and remembered, so memory is also key to consciousness.

So, consciousness is the creation of a model in memory of what is sensed.

In other words, anything that can sense something in the world and actively create a model of what it senses (be able to reproduce it exactly or inexactly) is conscious.

I will attempt to justify this claim later.

 

Elaboration

So, the claim is that anything – even if it is a machine – that can actively create a model of something that it senses (is aware of) and store it in memory in such a way as to permit retrieval of the model, is conscious of it.

I am not saying that conscious beings are conscious of every aspect of what they sense as soon as they sense it.  It may be that they sense and temporarily store a lot of things (for humans, for example, that could be every pixel of what we see outside the blind spot) but only model in a more abstract form, and store in memory as an abstraction (and in a retrievable form), those parts that they pay attention to.

So a conscious being may be conscious of the pixels of a bird outside the window but not conscious of it as a bird (a more abstract model) or of its colour (a model of its properties) unless it pays attention to it.

For example, let us say we’re talking of a human.  Let’s say further that the human sees a mountain.

The human senses (sees) the mountain when rays of light scattered by the surface of the mountain, or by things upon the mountain, enter her or his eye and impinge upon the retina, triggering a chain of chemical reactions that build up electrical potentials which act upon the nerves of the retina.

Subsequently, the neurons in the optical pathway of the human’s brain fire in such a manner that eventually, various parameters of the mountain come to be represented in the pattern of neural activations in the human’s brain.

We know that the human has modeled the mountain because the human can be asked to draw the mountain on a sheet of paper and will be able to do so.

Now, the human can be conscious of various parameters of the mountain as well.  For example, if the predominant colour of the mountain is represented in those neural activations, then the human is conscious of the predominant colour of the mountain.  For instance, if the human can answer, accurately or inaccurately, a question about the colour of the mountain, the human can be said to have modeled the same.

If the height of the mountain is represented in the neural patterns, then the human is conscious of the height of the mountain.  This can be tested by asking the human to state the height of the mountain.

If the shape of the mountain is vaguely captured in the neural activations, so that the human identifies it with the shape of a typical mountain, then the human is conscious of the mountain’s shape and of the fact that it is a mountain.

This ability to model is not present in what we typically consider an inanimate object.  A pin-hole camera would not actively create a model of what it senses (projects onto the wall) and is therefore not conscious.  Its projection is purely a result of physical phenomena external to it and it has no agency in the creation of the image within it.  So it has no consciousness.

Let’s say we use a digital camera which records the pixels of, say, a mountain before it.  It can reproduce the mountain pixel by pixel, and so can be said to have a model of the mountain in its memory.  In other words, such a camera is conscious of the pixels of the mountain and of everything else in the field of view.  But it wouldn’t be conscious of the shapes or sizes or colours, or even of the presence of a mountain, in the sense that a human would.

Claim:  Consciousness requires the active acquisition and storage of information from what is sensed.

Argument:  If the “model” is just the result of physical phenomena, say a projected image in a pin-hole camera, then there is no information acquired and stored by the system from what is sensed, and hence no consciousness.

Now suppose we were to build a machine of sand that created a representation in sand of the mountain: of its height, of its colour, of its shape, of the association of this shape with typical mountain shapes, and of every other parameter that the human brain models.

Now, I would argue that this sand machine could be said to be conscious of the mountain in the same way as we are, even though it uses a completely different mechanism to create a model of the mountain.

Claim:  The hypothetical sand machine and a human brain are equivalent.

Argument:  Consciousness of something depends only on what is modeled, not on the method of modeling.  So, as long as the parameters of the mountain are modeled in exactly the same way in two systems, the two systems can be said to be conscious of it in the same way.

 

Corollary

We are machines.

 

All right, so that’s a claim as well.

Here are two arguments in support of the claim.

a) Our behaviour in some sensory tasks is similar to what we would expect from the machine learning tools called classifiers.

  1. The Himba colour experiment found that the Himba tribe of Namibia categorize colours differently from the rest of the world.  They could not easily distinguish between blue and green, but could distinguish between many shades of green that other people typically have a hard time telling apart.
  2. People who speak languages that do not have vowel tones have trouble hearing differences in tone. Similarly, people who speak languages where the consonants ‘l’ and ‘r’ are conflated cannot easily tell them apart.

This is typically how a machine learning tool called a classifier behaves.  A classifier needs to be trained on labelled sounds or colours and will learn to recognize only those, and will have a hard time telling other sounds or colours apart.

b) The limitations that our brains reveal when challenged to perform some generative tasks (tasks of imagination) are identical to the limitations that the machine learning tools called classifiers exhibit.

Let me try the experiment on you.   Here’s a test of your imagination.  Imagine a colour that you have never seen before.

Not a mixture of colours, mind you, but a colour that you have never ever seen before.

If you are like most people, you’ll draw a blank.

And that is what a classifier would do too.

So, I would say that the human brain models things like colours or phonemes using some kind of classification algorithm, because it displays the limitations that such algorithms do.
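
As a toy illustration of that limitation (my own sketch, using a scikit-learn nearest-neighbour classifier, not a model of the brain), a trained classifier can only ever answer with one of the labels it was trained on; shown a colour unlike anything in its training data, it still maps it onto a known category:

# A toy sketch (not a model of the brain): a classifier can only answer
# with labels it has been trained on.  The RGB samples and labels below
# are made up purely for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]])  # red, green, blue samples
y = ["red", "green", "blue"]

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# A colour the classifier has never seen is still forced into a known category.
print(clf.predict([[200, 0, 255]]))  # -> ['blue']
print(clf.classes_)                  # the only answers it can ever give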

So it is possible that, by similar experiments on different types of human cognitive function, we shall be able to discover that humans are merely machines capable of consciousness (of modeling a certain set of parameters related to what we perceive) and of the other cognitive functions that define us as human.

 

Further Discussion

People with whom I’ve discussed this sometimes ask me if considering consciousness as the process of building a model of something adequately explains feelings, emotions, likes and dislikes, love and longing.

My answer is that it does, at least as far as likes and dislikes go.

A liking for something is a parameter associated with that thing, and it can easily be modeled by a number (or a small set of numbers).

Neural networks can easily represent such numbers (regression models) and so can model likes and dislikes.

As for love and longing, these could result from biological processes and genetic inclinations, but as long as they are experienced, they would have had to be modeled in the human mind, possibly represented by a single number (a single point representation of intensity) or a distributed representation of intensity.  What is felt in these cases would also be modeled as an intensity (represented at a point or in a distributed manner).  One would be conscious of a feeling only when one could sense it and model it.  And the proof that one has modeled it lies in the fact that one can describe it.

So, when  the person becomes conscious of the longing, it is because it has been modeled in their brain.

 

Still Further Discussion

Again, someone asked if machines could ever possibly be capable of truth and kindness.

I suppose the assumption is that only humans are capable of noble qualities such as truth and kindness or that there is something innate in humans which gives rise to such qualities (perhaps gifted to humanity or instilled in them by the divine or the supernatural or earned by souls that attain humanity through the refinement of past lives).

However, there is no need to resort to such theories to explain altruistic qualities such as truthfulness, goodness and kindness.  It is possible to show, game-theoretically, that noble qualities such as trustworthiness would emerge in groups competing in a typical modern economic environment involving specialization of skills, interdependence and trading.

Essentially the groups that demonstrate less honesty and trustworthiness fail to be competitive against groups that demonstrate higher honesty and trustworthiness and therefore are either displaced by the latter or adopt the qualities that made the latter successful.  So, it is possible to show that the morals taught by religions and noble cultural norms can all be evolved by any group of competing agents.

So, truth and kindness are not necessarily qualities that machines would be incapable of (towards each other).  In fact, these would be qualities they would evolve if they were interdependent and had to trade with each other and organize and collaborate much as we do.

 

Related Work

This is a different definition from the one used by Max Tegmark in his book “Life 3.0”; his definition of “consciousness” as “subjective experience” conflates it with “sentience” (the ability to feel).

Tegmark also talks about the work of the philosopher David Chalmers and the computer scientist Scott Aaronson, who seem to be approaching the question from the direction of physics: we are just particles from food and the atmosphere rearranged, so what arrangement of particles causes consciousness?

I think that is irrelevant.

All we need to ask is “What is the physical system, whatever it is made of, capable of modeling?”

Interestingly, in the book, Tegmark talks about a number of experiences that any theory of consciousness should explain.

Let’s look at some of those.

 

Explanatory Power of this Model

Explaining Abstraction

He talks about how tasks move from the conscious to the unconscious level as we practise them and get good at them.

He points out that when a human reads text like this, they do not read character by character but word by word.  Why is it that, as you improve your reading skills, you are no longer conscious of the letters?

Actually, this can be explained by the theory we just put forth.

When we are learning to read (reading being the modeling of text), we model individual characters when we see a passage of text like this one, and we read character by character.

But with practice, we learn to model words or phrases at a higher level from passages of text, and direct our attention to the words or phrases because that facilitates reading.

We can choose to direct our attention to the letters and read letter by letter as well, if we wish.

So, this model can explain attention too.

Attention

The brain is limited in its capacity to process and store information, so the human brain focuses its attention on the parts of the model it has built that are required for the performance of any task.

It can choose not to keep in memory the more granular parts of the model once it has built a larger model.  For instance, it can choose not to keep the characters in memory if it has already modeled the word.

This also explains phenomena such as “hemineglect” (patients with certain lesions in their brain miss half their field of vision but are not aware of it – so they may not eat food in the left half of their plate since they do not notice it).

We can explain it by saying that the brain has modeled a whole plate from the faulty sensory information provided to it; the patient is therefore conscious of a whole plate, minus the missing information.

Blindsight

Tegmark also talks of the work of Christof Koch and Francis Crick on the “neural correlates of consciousness”.

Koch and Crick performed an experiment in which they distracted one eye with flashing images and caused the other eye to miss registering a static image presented to it.

They inferred from this that the retina is not capable of consciousness.

I would counter that by saying that the retina is conscious of the pixels of the images it sees if it constructs models of them (as it does) and stores them.

But if the brain models more abstract properties more useful to the tasks we perform, we focus our attention on those and therefore do not store in the memory the images that are not relevant to the more critical task (the distracting task).

So, I would argue that our consciousness can include models that come from the retina (if some neural pathway from the retina creates models in memory at the pixel level).

But if our attention chooses to focus on, and consign to memory, things more useful than what the retina models, then we will not necessarily model, or be conscious of, the pixels coming from the retina.

 

Still Other Work

Tegmark also talks extensively about the work of Giulio Tononi and his collaborators on something called “integrated information” and the objections to it by Murray Shanahan, but I’ll leave those interested in those theories to refer to the work of their authors.


The Vanishing Information Problem – Why we switched to deep learning with neural networks

It’s been a year since my last post, which was about deep (multi-layer) Bayesian classifiers capable of learning non-linear decision boundaries.

Since then, I’ve put on hold the work I was doing on deep (multi-layer) Bayesian classifiers and instead been working on deep learning using neural networks.

The reason for this was simple: our last paper revealed a limitation of deep directed graphical models that deep neural networks did not share, which allowed the latter to be of much greater depth (or to remember way more information) than the former.

The limitation turned out to be in the very equation that allowed us (read our last paper on deep (multi-layer) Bayesian classifiers for an explanation of the mathematics) to introduce non-linearity into deep Bayesian networks:

P(c \mid F) \;\propto\; P(c) \sum_{h} P(h \mid c) \left[ \prod_{f \in F} P(f \mid h, c) \right] \qquad \text{(the sum-of-products equation)}

The equation contains a product of feature probabilities P(f|h,c) [the part inside the big brackets in the above equation].

This product yields extreme (uncalibrated) probabilities and we had observed that those extreme probabilities were essential to the formation of non-linear decision boundaries in the deep Bayesian classifiers we’d explored in the paper.  The extremeness allowed the nearest cluster to a data point to have a greater say in the classification than all the other clusters.

We had found that when using this equation, there was no need to explicitly add non-linearities between the layers, because the above product itself gave rise to non-linear decision boundaries.

However, because of the extremeness of the product of P(f|h,c), the probability P(h|F) (the probability of a hidden node given the features) becomes a one-hot vector.

Thus a dense input vector (the features F) is transformed into a one-hot vector (h) in just one layer.
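
Here is a small numerical sketch of that collapse (my own illustration, with made-up numbers, not code from the paper):

import numpy as np

# Hypothetical per-feature probabilities P(f|h,c) for one class c:
# 5 hidden nodes, 50 features, with values drawn at random for illustration.
rng = np.random.default_rng(0)
log_p_f_given_h = np.log(rng.uniform(0.01, 0.99, size=(5, 50)))

# The product inside the big brackets, computed in log space (one value per hidden node).
log_products = log_p_f_given_h.sum(axis=1)

# Normalize to get P(h|F); P(h|c) is omitted here since it barely shifts the result.
posterior = np.exp(log_products - log_products.max())
posterior /= posterior.sum()

print(np.round(posterior, 6))  # typically, almost all the mass sits on one hidden node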

Once we have a one-hot vector, we don’t gain much from the addition of more layers of neurons (which is also why you shouldn’t use the softmax activation function in intermediate layers of deep neural networks).

This is because one-hot encodings encode very little information.

There’s an explanation of this weakness of one-hot encodings in a lecture by Hinton comparing RNNs and HMMs.

Hinton points out there that an RNN with its dense representation can encode exponentially more information than a finite state automaton (that is, an HMM) with its one-hot representation of information.
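
To put a number on that comparison (a standard counting argument, not taken from the lecture), a length-N one-hot code versus a length-N dense binary code gives:

N \text{ states } (\log_2 N \text{ bits}) \quad \text{versus} \quad 2^N \text{ states } (N \text{ bits})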

I call this tendency of deep Bayesian models to reduce dense representations of information to one-hot representations the vanishing information problem.

Since the one-hot representation is a result of overconfidence (a kind of poor calibration), it can be said that the vanishing information problem exists in any system that suffers from overconfidence.

Since these deep Bayesian classifiers suffer from the overconfidence problem, they don’t scale up to large numbers of layers.

(We are not sure whether the overconfidence problem is an artifact of the training method that we used, namely expectation maximization, or of the formalism of directed graphical models themselves).

What our equations told us though was that the vanishing information problem was inescapable for deep Bayesian classification models trained using EM.

As a result, they would never be able to grow as deep as deep neural networks.

And that is the main reason why we switched to using deep neural networks in both our research and our consulting work at Aiaioo Labs.

Deep Bayesian Learning for NLP

Deep learning is usually associated with neural networks.

In this article, we show that generative classifiers are also capable of deep learning.

What is deep learning?

Deep learning is a method of machine learning involving the use of multiple processing layers to learn non-linear functions or boundaries.

What are generative classifiers?

Generative classifiers use the Bayes rule to invert probabilities of the features F given a class c into a prediction of the class c given the features F.

The class predicted by the classifier is the one yielding the highest P(c|F).

A commonly used generative classifier is the Naive Bayes classifier.  It has two layers (one for the features F and one for the classes C).
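
For reference, this Bayes-rule inversion, under the Naive Bayes assumption that the features are conditionally independent given the class, can be written as:

P(c \mid F) \;=\; \frac{P(c)\, P(F \mid c)}{P(F)} \;\propto\; P(c) \prod_{f \in F} P(f \mid c)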

Deep learning using generative classifiers

The first thing you need for deep learning is a hidden layer.  So you add one more layer H between the C and F layers to get a Hierarchical Bayesian classifier (HBC).

Now, you can compute P(c|F) in a HBC in two ways:

Computing P(c|F) using a product of sums (POS):

P(c \mid F) \;\propto\; P(c) \prod_{f \in F} \left[ \sum_{h} P(f \mid h, c)\, P(h \mid c) \right]

Computing P(c|F) using a sum of products (SOP):

P(c \mid F) \;\propto\; P(c) \sum_{h} P(h \mid c) \left[ \prod_{f \in F} P(f \mid h, c) \right]

The first equation computes P(c|F) using a product of sums (POS).  The second equation computes P(c|F) using a sum of products (SOP).

POS Equation

We discovered something very interesting about these two equations.

It turns out that if you use the first equation, the HBC reduces to a Naive Bayes classifier. Such an HBC can only learn linear (or quadratic) decision boundaries.
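
One way to see why: in the POS form, each factor marginalizes the hidden node away on its own, which collapses the hidden layer entirely,

\prod_{f \in F} \left[ \sum_{h} P(f \mid h, c)\, P(h \mid c) \right] \;=\; \prod_{f \in F} P(f \mid c),

which is exactly the Naive Bayes likelihood.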

Consider the discrete XOR-like function shown in Figure 1.

Figure 1: A discrete XOR-like arrangement of black and white dots.

There is no way to separate the black dots from the white dots using one straight line.

Such a pattern can only be classified 100% correctly by a non-linear classifier.
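
As a quick sanity check with off-the-shelf tools (scikit-learn models, not the HBCs discussed in this post), a linear classifier cannot fit all four points, while a classifier with a hidden layer can; the coordinates and labels below are my reading of Figure 1:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# The four XOR-like points of Figure 1 (assumed coordinates and labels):
# (1,1) and (5,5) in class 0; (1,5) and (5,1) in class 1.
X = np.array([[1, 1], [5, 5], [1, 5], [5, 1]])
y = np.array([0, 0, 1, 1])

linear = LogisticRegression().fit(X, y)
nonlinear = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                          max_iter=10000, random_state=0).fit(X, y)

print("linear accuracy:", linear.score(X, y))           # at most 3 of the 4 points
print("hidden-layer accuracy:", nonlinear.score(X, y))  # typically all 4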

If you train a multinomial Naive Bayes classifier on the data in Figure 1, you get the decision boundary seen in Figure 2a.

Note that the dotted area represents the class 1 and the clear area represents the class 0.

Figure 2a: The decision boundary of a multinomial NB classifier (or a POS HBC).

It can be seen that no matter what the angle of the line is, at least one point of the four will be misclassified.

In this instance, it is the point at {5, 1} that is misclassified as 0 (since the clear area represents the class 0).

You get the same result if you use a POS HBC.

SOP Equation

Our research showed us that something amazing happens if you use the second equation.

With the “sum of products” equation, the HBC becomes capable of deep learning.

SOP + Multinomial Distribution

The decision boundary learnt by a multinomial non-linear HBC (one that computes the posterior using a sum of products of the hidden-node conditional feature probabilities) is shown in Figure 2b.

Figure 2b: Decision boundary learnt by a multinomial SOP HBC.

The boundary consists of two straight lines passing through the origin. They are angled in such a way that they separate the data points into the two required categories.

All four points are classified correctly since the points at {1, 1} and {5, 5} fall in the clear conical region which represents a classification of 0 whereas the other two points fall in the dotted region representing class 1.

Therefore, the multinomial non-linear hierarchical Bayes classifier can learn the non-linear function of Figure 1.

Gaussian Distribution

The decision boundary learnt by a Gaussian nonlinear HBC is shown in Figure 2c.

Figure 2c: Decision boundary learnt by a SOP HBC based on the Gaussian probability distribution.

The boundary consists of two quadratic curves separating the data points into the required categories.

Therefore, the Gaussian non-linear HBC can also learn the non-linear function depicted in Figure 1.

Conclusion

Since SOP HBCs are multilayered (with a layer of hidden nodes) and can learn non-linear decision boundaries, they can be said to be capable of deep learning.

Applications to NLP

It turns out that the multinomial SOP HBC can outperform a number of linear classifiers at certain tasks.  For more information, read our paper.


Fun with Text – Managing Text Analytics

The year is 2016.

I’m a year older than when I designed the text analytics lecture titled “Fun with Text – Hacking Text Analytics“.

Yesterday, I found myself giving a follow-on lecture titled “Fun with Text – Managing Text Analytics”.

Here are the slides:

“Hacking Text Analytics” was meant to help students understand a range of text analytics problems by reducing them to simpler problems.

But it was designed with the understanding that they would hack their own text analytics tools.

However, in project after project, I was seeing that engineers tended not to build their own text analytics tools, but instead to rely on handy and widely available open-source products, and that the main thing they needed to learn was how to use them.

So, when I was asked to lecture to an audience at the NASSCOM Big Data and Analytics Summit in Hyderabad, and was advised that a large part of the audience might be non-technical and that I should base the talk on use cases, I tried a different tack.

So I designed another lecture “Fun with Text – Managing Text Analytics” about:

  • 3 types of opportunities for text analytics that typically exist in every vertical
  • 3 use cases dealing with each of these types of opportunities
  • 3 mistakes to avoid and 3 things to embrace

And the takeaway from it is how to go about solving a typical business problem involving text, using text analytics.

Enjoy the slides!


A Naive Bayes classifier that outperforms NLTK’s

We found that by changing the smoothing parameters of a Naive Bayes classifier, we could get far better accuracy numbers for certain tasks.  By changing the Lidstone smoothing parameter from 0.05 to 0.5 or greater, we could go from an accuracy of about 50% to almost 70% on the task of question classification for question answering.

This is not at all surprising because, as described in an earlier post, the smoothing method used in the estimation of probabilities affects Naive Bayes classifiers greatly.

Below, we have provided an implementation of a Naive Bayes classifier which outperforms the Naive Bayes classifier supplied with NLTK 3.0 by almost 10% on the task of classifying questions from the questions-train.txt file supplied with the textbook “Taming Text”.

Our Naive Bayes classifier (with a Lidstone smoothing parameter of 0.5) exhibits about 65% accuracy on the task of question classification, whereas the NLTK classifier has an accuracy of about 40% as shown below.

[Figure: graph comparing the accuracy of our classifier and NLTK’s]

Finally, I’d like to say a few words about the import of this work.

Theoretically, by increasing the Lidstone smoothing parameter, we are merely compensating more strongly for absent features; we are negating the absence of a feature more vigorously; that is, we are reducing the penalty for the absence of a feature in a specific category.
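
To see concretely what that means, here is a quick numeric check that mirrors the smooth() method of the classifier given below; the class total count and the vocabulary size are invented for illustration:

# Mirrors the smooth() method of the classifier below.
# The class total count (10,000) and vocabulary size (1,000) are made-up numbers.
def smooth(numerator, denominator, alpha, vocab_size):
    return (numerator + alpha) / (denominator + alpha * vocab_size)

for alpha in (0.05, 0.5):
    print(alpha, smooth(0, 10000, alpha, 1000))

# alpha = 0.05 -> about 5.0e-06; alpha = 0.5 -> about 4.8e-05.
# The larger alpha gives an unseen feature roughly ten times the probability,
# so each such feature costs about log(10) less in the log-posterior.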

Because increased smoothing lowers the penalty for feature absence, it could help increase the accuracy when a data-set has many low-volume features that do not contribute to predicting a category, but whose chance presence and absence may be construed in the learning phase to be correlated with a category.

Further investigation is required before we can say whether the aforesaid hypothesis would explain the effect of smoothing on the accuracy of classification in regard to the question classification data-set that we used.

However, this exercise shows that algorithm implementations would do well to leave the choice of Lidstone smoothing parameters to the discretion of the end user of a Naive Bayes classifier.

The source code of our Naive Bayes classifier (using Lidstone smoothing) is provided below:

This implementation of the Naive Bayes classifier was created by Geetanjali Rakshit, an intern at Aiaioo Labs.

A note on run-time performance: dictionary membership tests in the code are written directly against the dictionary, as in “if label not in self._classes_dict:”, rather than against “self._classes_dict.keys()”; calling “keys()” in such tests degrades run-time performance when the number of features is large.

import numpy as np
import random
import sys, math

class Classifier:
	def __init__(self, featureGenerator):
		self.featureGenerator = featureGenerator
		self._C_SIZE = 0
		self._V_SIZE = 0
		self._classes_list = []
		self._classes_dict = {}
		self._vocab = {}

	def setClasses(self, trainingData):
		for(label, line) in trainingData:
			if label not in self._classes_dict:
				self._classes_dict[label] = len(self._classes_list)
				self._classes_list.append(label)
		self._C_SIZE = len(self._classes_list)
		return
		
	def getClasses(self):
		return self._classes_list

	def setVocab(self, trainingData):
		index = 0;
		for (label, line) in trainingData:
			line = self.featureGenerator.getFeatures(line)
			for item in line:
				if item not in self._vocab:
					self._vocab[item] = index
					index += 1
		self._V_SIZE = len(self._vocab)
		return

	def getVocab(self):
		return self._vocab

	def train(self, trainingData):
		pass

	def classify(self, testData, params):
		pass

	def getFeatures(self, data):
		return self.featureGenerator.getFeatures(data)
		

class FeatureGenerator:
	def getFeatures(self, text):
		text = text.lower()
		return text.split()


class NaiveBayesClassifier(Classifier):
	def __init__(self, fg, alpha = 0.05):
		Classifier.__init__(self, fg)
		self.__classParams = []
		self.__params = [[]]
		self.__alpha = alpha

	def getParameters(self):
		return (self.__classParams, self.__params)

	def train(self, trainingData):
		self.setClasses(trainingData)
		self.setVocab(trainingData)
		self.initParameters()

		for (cat, document) in trainingData:
			for feature in self.getFeatures(document):
				self.countFeature(feature, self._classes_dict[cat])

	def countFeature(self, feature, class_index):
		counts = 1
		self._counts_in_class[class_index][self._vocab[feature]] = self._counts_in_class[class_index][self._vocab[feature]] + counts
		self._total_counts[class_index] = self._total_counts[class_index] + counts
		self._norm = self._norm + counts

	def classify(self, testData):
		post_prob = self.getPosteriorProbabilities(testData)
		return self._classes_list[self.getMaxIndex(post_prob)]

	def getPosteriorProbabilities(self, testData):
		post_prob = np.zeros(self._C_SIZE)
		for i in range(0, self._C_SIZE):
			for feature in self.getFeatures(testData):
				post_prob[i] += self.getLogProbability(feature, i)
			post_prob[i] += self.getClassLogProbability(i)
		return post_prob

	def getFeatures(self, testData):
		return self.featureGenerator.getFeatures(testData)

	def initParameters(self):
		self._total_counts = np.zeros(self._C_SIZE)
		self._counts_in_class = np.zeros((self._C_SIZE, self._V_SIZE))
		self._norm = 0.0

	def getLogProbability(self, feature, class_index):
		return math.log(self.smooth(self.getCount(feature, class_index),self._total_counts[class_index]))

	def getCount(self, feature, class_index):
		if feature not in self._vocab:
			return 0
		else:
			return self._counts_in_class[class_index][self._vocab[feature]]

	def smooth(self, numerator, denominator):
		return (numerator + self.__alpha) / (denominator + (self.__alpha * len(self._vocab)))

	def getClassLogProbability(self, class_index):
		return math.log(self._total_counts[class_index]/self._norm)

	def getMaxIndex(self, posteriorProbabilities):
		maxi = 0
		maxProb = posteriorProbabilities[maxi]
		for i in range(0, self._C_SIZE):
			if(posteriorProbabilities[i] >= maxProb):
				maxProb = posteriorProbabilities[i]
				maxi = i
		return maxi


class Dataset:
	def __init__(self, filename):
		fp = open(filename, "r")
		i = 0
		self.__dataset = []
		for line in fp:
			if(line != "\n"):
				line = line.split()
				cat = line[0]
				sent = ""
				for word in range(1, len(line)):
					sent = sent+line[word]+" "
				sent = sent.strip()
				self.__dataset.append([cat, str(sent)])
				i = i+1
		random.shuffle(self.__dataset)	
		self.__D_SIZE = i
		self.__trainSIZE = int(0.6*self.__D_SIZE)
		self.__testSIZE = int(0.3*self.__D_SIZE)
		self.__devSIZE = self.__D_SIZE - (self.__trainSIZE + self.__testSIZE)

	def setTrainSize(self, value):
		self.__trainSIZE = int(value*0.01*self.__D_SIZE)
		return self.__trainSIZE

	def setTestSize(self, value):
		self.__testSIZE = int(value*0.01*self.__D_SIZE)
		return self.__testSIZE

	def setDevelopmentSize(self):
		self.__devSIZE = self.__D_SIZE - (self.__trainSIZE + self.__testSIZE)
		return self.__devSIZE

	def getDataSize(self):
		return self.__D_SIZE
	
	def getTrainingData(self):
		return self.__dataset[0:self.__trainSIZE]

	def getTestData(self):
		return self.__dataset[self.__trainSIZE:(self.__trainSIZE+self.__testSIZE)]

	def getDevData(self):
		return self.__dataset[(self.__trainSIZE + self.__testSIZE):]



#============================================================================================

if __name__ == "__main__":
	
	# This Naive Bayes classifier implementation gives about 10% better accuracy than the NLTK 3.0 Naive Bayes classifier implementation
	# at the task of classifying questions in the question corpus distributed with the book "Taming Text".

	# The "questions-train.txt" file can be found in the source code distributed with the book at https://www.manning.com/books/taming-text.
	
	# To the best of our knowledge, the improvement in accuracy is owed to the smoothing methods described in our blog:
	# https://aiaioo.wordpress.com/2016/01/29/in-a-naive-bayes-classifier-why-bother-with-smoothing-when-we-have-unknown-words-in-the-test-set/
	
	filename = "questions-train.txt"
	
	if len(sys.argv) > 1:
		filename = sys.argv[1]
	
	data = Dataset(filename)
	
	data.setTrainSize(50)
	data.setTestSize(50)
	
	train_set = data.getTrainingData()
	test_set = data.getTestData()
	
	test_data = [test_set[i][1] for i in range(len(test_set))]
	actual_labels = [test_set[i][0] for i in range(len(test_set))]
	
	fg = FeatureGenerator()
	alpha = 0.5 #smoothing parameter
	
	nbClassifier = NaiveBayesClassifier(fg, alpha)
	nbClassifier.train(train_set)
	
	correct = 0;
	total = 0;
	for line in test_data:
		best_label = nbClassifier.classify(line)
		if best_label == actual_labels[total]:
			correct += 1
		total += 1
	
	acc = 1.0*correct/total
	print("Accuracy of this Naive Bayes Classifier: "+str(acc))


Naive Bayes Classifier in OpenNLP

The OpenNLP project of the Apache Foundation is a machine learning toolkit for text analytics.

For many years, OpenNLP did not carry a Naive Bayes classifier implementation.

OpenNLP has finally included a Naive Bayes classifier implementation in the trunk (it is not yet available in a stable release).

Naive Bayes classifiers are very useful when there is little to no labelled data available.

Labelled data is usually needed in large quantities to train classifiers.

However, the Naive Bayes classifier can sometimes make do with a very small amount of labelled data and bootstrap itself over unlabelled data.  Unlabelled data is usually easier to get your hands on or cheaper to collect than labelled data – by far.  The process of bootstrapping Naive Bayes classifiers over unlabelled data is explained in the paper “Text Classification from Labeled and Unlabeled Documents using EM” by Kamal Nigam et al.
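
Here is a minimal sketch of that bootstrapping idea in Python using scikit-learn (not OpenNLP, and simplified to hard labels rather than the full soft EM of Nigam et al.); the tiny texts and labels are invented purely for illustration:

# A simplified, hard-label version of the EM bootstrapping loop from
# Nigam et al., sketched with scikit-learn.  All data below is made up.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

labelled_texts = ["great film", "awful film"]       # a tiny labelled seed
labels = np.array([1, 0])                           # 1 = positive, 0 = negative
unlabelled_texts = ["great acting", "awful plot", "great story", "awful pacing"]

vectorizer = CountVectorizer().fit(labelled_texts + unlabelled_texts)
X_labelled = vectorizer.transform(labelled_texts)
X_unlabelled = vectorizer.transform(unlabelled_texts)

model = MultinomialNB().fit(X_labelled, labels)     # train on the labelled seed

for _ in range(5):                                  # a few EM-style rounds
    guessed = model.predict(X_unlabelled)           # E-step: label the unlabelled pool
    X_all = vectorizer.transform(labelled_texts + unlabelled_texts)
    y_all = np.concatenate([labels, guessed])
    model = MultinomialNB().fit(X_all, y_all)       # M-step: retrain on everything

print(model.predict(vectorizer.transform(["great plot"])))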

So, whenever I get clients who are using OpenNLP, but have only very scanty labelled data available to train a classifier with, I end up having to teach them to build a Naive Bayes classifier and bootstrap it by using an EM procedure over unlabelled data.

Now that won’t be necessary any longer, because OpenNLP provides a Naive Bayes classifier that can be used for that purpose.

Tutorial

Training a Naive Bayes classifier is a lot like training a maximum entropy classifier.  In fact, you still have to use the DocumentCategorizerME class to do it.

But you pass in a special parameter to tell the DocumentCategorizerME class that you want a Naive Bayes classifier instead.

Here is some code (from the OpenNLP manual) for training a classifier, in this case the default maximum entropy classifier.

DoccatModel model = null;
InputStream dataIn = null;
try {
  dataIn = new FileInputStream("en-sentiment.train");
  ObjectStream<String> lineStream =
		new PlainTextByLineStream(dataIn, "UTF-8");
  ObjectStream<DocumentSample> sampleStream = new DocumentSampleStream(lineStream);

  // Training a maxent model by default!!!
  model = DocumentCategorizerME.train("en", sampleStream);
}
catch (IOException e) {
  // Failed to read or parse training data, training failed
  e.printStackTrace();
}

Now, if you want to invoke the new Naive Bayes classifier instead, you just have to pass in a few training parameters, as follows.

						
DoccatModel model = null;
InputStream dataIn = null;
try {
  dataIn = new FileInputStream("en-sentiment.train");
  ObjectStream<String> lineStream =
		new PlainTextByLineStream(dataIn, "UTF-8");
  ObjectStream<DocumentSample> sampleStream = new DocumentSampleStream(lineStream);

  TrainingParameters params = new TrainingParameters();
  params.put(TrainingParameters.CUTOFF_PARAM, Integer.toString(0));
  params.put(TrainingParameters.ALGORITHM_PARAM, NaiveBayesTrainer.NAIVE_BAYES_VALUE);

  // Now the parameter TrainingParameters.ALGORITHM_PARAM ensures
  // that we train a Naive Bayes model instead
  model = DocumentCategorizerME.train("en", sampleStream, params);
}
catch (IOException e) {
  // Failed to read or parse training data, training failed
  e.printStackTrace();
}

Evaluation

I ran some tests on the Naive Bayes document categorizer in OpenNLP built from the trunk (you can also get the latest build using Maven).

Here are the numbers.

1. Subjectivity Classification

I ran the experiment on the 5000 movie reviews dataset (used in the paper “A Sentimental Education” by Bo Pang and Lillian Lee) with a 50:50 split into training and test:

Accuracies
Perceptron: 57.54% (100 iterations)
Perceptron: 59.96% (1000 iterations)
Maxent: 91.48% (100 iterations)
Maxent: 90.68% (1000 iterations)
Naive Bayes: 90.72%

2. Sentiment Polarity Classification

Cornell movie review dataset v1.1 (700 positive and 700 negative reviews).

With 350 of each as training and the rest as test, I get:

Accuracies
Perceptron: 49.70% (100 iterations)
Perceptron: 49.85% (1000 iterations)
Maxent: 77.11% (100 iterations)
Maxent: 77.55% (1000 iterations)
Naive Bayes: 75.65%

The data used in this experiment was taken from http://www.cs.cornell.edu/people/pabo/movie-review-data/

The OpenNLP Jira details for this feature are available at: https://issues.apache.org/jira/browse/OPENNLP-777

From Naive to Perplexed

We recently came up with three proofs which, taken together, suggest that the naive independence assumptions made in the Naive Bayes classifier are quite unnecessary for achieving the accuracy it is capable of.

A new classifier based on this math (called the Perplexed Bayes Classifier) is described in our recent research paper.

This blog post is to announce that we’ve also provided a mathematics companion to the paper.

The mathematics companion may be downloaded from:  http://www.aiaioo.com/publications/icon2015_mathematics_companion.pdf

What is the Perplexed Bayes Classifier?

The Perplexed Bayes classifier resembles the Naive Bayes classifier but does not suffer from one of its shortcomings: a tendency to be unjustifiably overconfident about its predictions.

The following diagram shows the reliability curves of the Perplexed Bayes classifier alongside those of the Naive Bayes classifier.

[Figure: reliability curves of the Perplexed Bayes and Naive Bayes classifiers]