
Readability for the Web

I just came across a great feature in Safari (which I downloaded for Windows 7) called ‘Safari Reader’. It lets you read articles on any webpage in an extremely readable, uncluttered pane.

Having seen how useful this tool is, I immediately searched for an equivalent for Chrome. It turns out there’s an extension from readability.com that does just that. Reading one of my previous posts with Readability, I found it to be a great tool that enhances the readability of any page; it’s not only the uncluttered interface but also the beautiful typography that makes reading long passages and blog posts much more pleasant.

The default pane, shown below, looks pretty good. The font is modern, yet highly readable on-screen.

The default, large-font display provided by the Readability.com extension

You can also customize it for a much more book-like feel: it can turn hyperlinks into footnotes, and it lets you adjust the text size and the paragraph width. It also has a couple of pre-defined themes that work really well.

Changing the theme is very easy in the Readability extension: pick a different theme, check the option to convert hyperlinks to footnotes, and change the font size to suit your preferences.

There’s also a WordPress plugin that lets your readers view your posts in the Readability pane, but I didn’t really like it. An alternative is to insert custom hyperlinks using the URL shortener rdd.me. You can read this post in Readability using this link.


A Basic Naive Bayes classifier in Matlab

This is the second post in my series on implementing low-level machine learning algorithms in Matlab. We first did linear regression with gradient descent, and now we’re working with the more popular Naive Bayes classifier. As is evident from the name, NB is a classifier, i.e. it sorts data points into classes based on their features. We’ll be writing NB in low-level Matlab (meaning we won’t use Matlab’s built-in implementation of NB). Here’s the example we’ve taken (with a bit of modification) from here.

Consider the following vector:

(likes shortbread, likes lager, eats porridge, watched England play football, nationality)^T

A vector x = (1, 0, 1, 0, 1)^T would describe a person who likes shortbread, does not like lager, eats porridge, has not watched England play football and is a national of Scotland. The final entry is the class that we want to predict, and it takes two values: 1 for Scottish, 0 for English.
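
To make the encoding concrete, here’s how one such observation could be written in Matlab (a hypothetical stand-alone example, not one of the data points used below):

% [shortbread, lager, porridge, football, nationality]
% 1 = yes / Scottish, 0 = no / English
x_example = [1 0 1 0 1]';   % column vector, matching the (...)^T notation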

Here’s the data we’re given:


X = [ 0 0 1 1 0 ;
1 0 1 0 0 ;
1 1 0 1 0 ;
1 1 0 0 0 ;
0 1 0 1 0 ;
0 0 1 0 0 ;
1 0 1 1 1 ;
1 1 0 1 1 ;
1 1 1 0 1 ;
1 1 1 0 1 ;
1 1 1 1 1 ;
1 0 1 0 1 ;
1 0 0 0 1 ];

Notice that data is usually written, as above, with features in columns and instances in rows. For the code that follows, we want the opposite orientation: features in rows, instances in columns. We also need to separate the class label from the feature set:

Y = X(:,5);
X = X(:,1:4)'; % X in proper format now. 

Alright. Now that we have the data, let’s go over some theory. As always, this isn’t a tutorial on statistics; go read about the theory somewhere else. This is just a refresher:

In order to predict the class from a feature set, we need the probability of Y given X, where

X = (x_1, x_2, ..., x_n)

and n is the number of features. We denote the number of instances given to us as m. In our example, n = 4 and m = 13. The probability of Y given X is:

P(Y=1|X) = P(X|Y=1) * P(Y=1) / P(X)

which is called the Bayes rule. Now we make the NB assumption: all features in the feature set are independent of each other, given the class! It’s a strong assumption, but it usually works. Given this assumption, we need to find P(X|Y=1), P(Y=1) and P(X).

(The braces notation that follows is indicator notation: 1{ v } is 1 if condition v holds, and 0 otherwise.)

P(X) = P(X|Y=1) * P(Y=1) + P(X|Y=0) * P(Y=0)

P(X|Y=1) = prod_i{P(x_i|Y=1)}

To find P(X|Y=1), you just find P(x_i|Y=1) for each feature and multiply them together. This is where the independence assumption comes in: without it, the joint probability would not factor into this product.

P(x_i|Y=1) = sum_j{1{x_i^j = 1, y^j = 1}} / sum_j{1{y^j = 1}}

This equation basically says: count the number of instances for which both x_i and Y are 1, and divide by the number of instances for which Y is 1. That’s the probability of x_i being 1 when Y is 1. Fairly straightforward if you think about it.
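
If you want to see that counting written out literally, here’s a sketch of how P(x_i = 1 | Y = 1) could be computed for a single feature i in Matlab (my own illustration; it should match the vectorized version used further below):

i = 1;                                   % pick any feature index
countBoth = sum(X(i,:)' == 1 & Y == 1);  % instances where x_i = 1 and y = 1
countY1   = sum(Y == 1);                 % instances where y = 1
pxiGivenS = countBoth / countY1;         % P(x_i = 1 | Y = 1)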

P(Y=1) = sum_j{1{y^j = 1}} / m

Same as above: the fraction of instances with Y=1 out of all m instances. Notice that we need to calculate all of these for both Y=1 and Y=0, because the first equation needs both. Let’s build it from the bottom up. In all the code below, treat E as 0 and S as 1, since we consider being Scottish as class 1 (the positive example).

P(Y):

pS = sum(Y)/size(Y,1);      % P(Y=1): fraction of rows with Y = 1
pE = sum(1 - Y)/size(Y,1);  % P(Y=0): fraction of rows with Y = 0

P(x_i|Y):

phiS = X * Y / sum(Y);        % P(x_i=1|Y=1) for each feature: number of Scots
                              % with attribute i set, divided by the number of Scots
phiE = X * (1-Y) / sum(1-Y);  % P(x_i=1|Y=0) for each feature: number of English
                              % with attribute i set, divided by the number of English
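
The matrix product is just a compact way of doing that per-feature counting in one shot. Here’s a sketch of the equivalent explicit loop (my own illustration, not part of the original post), which should produce the same phiS:

phiS_loop = zeros(size(X,1), 1);
for i = 1:size(X,1)
    phiS_loop(i) = sum(X(i,:)' == 1 & Y == 1) / sum(Y == 1);
end
% phiS_loop should equal phiS; for this dataset both work out to [1; 4/7; 5/7; 3/7]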

phiS and phiE are vectors that store these probabilities for all attributes. Now that we have the probabilities, we’re ready to make a prediction. Let’s get a test data point:

x=[1 0 1 0]';  % test point 

And calculate the probabilities P(X|Y=1) and P(X|Y=0). For each feature, phiS.^x .* (1-phiS).^(1-x) evaluates to phiS(i) where x(i) = 1 and to 1-phiS(i) where x(i) = 0, so taking prod over the elements gives exactly the product of per-feature probabilities from the NB assumption:

pxS = prod(phiS.^x.*(1-phiS).^(1-x));
pxE = prod(phiE.^x.*(1-phiE).^(1-x));

And finally, the posteriors P(Y=1|X) and P(Y=0|X), using the Bayes rule from above:

pxSF = (pxS * pS) / (pxS * pS + pxE * pE)
pxEF = (pxE * pE) / (pxS * pS + pxE * pE)

They should add up to 1, since there are only two classes. Now you can define a threshold for deciding whether the class should be considered 1 or 0 based on these probabilities. In this case, we can consider this test point to belong to class 1, since pxSF > 0.5.
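
To wrap things up, here’s the whole procedure collected into one small function. This is just a sketch of the steps above with my own (hypothetical) naming; the original post doesn’t define such a function. It takes the data in the one-instance-per-row layout from the top of the post and a test point, and returns P(Y=1|x):

function pS_given_x = naiveBayesPredict(data, x)
% data: one instance per row, last column is the class label (1 or 0)
% x:    column vector of feature values for the test point
    Y = data(:, end);
    X = data(:, 1:end-1)';                    % features in rows, instances in columns

    pS = sum(Y) / numel(Y);                   % P(Y=1)
    pE = 1 - pS;                              % P(Y=0)

    phiS = X * Y / sum(Y);                    % P(x_i=1|Y=1) for each feature
    phiE = X * (1-Y) / sum(1-Y);              % P(x_i=1|Y=0) for each feature

    pxS = prod(phiS.^x .* (1-phiS).^(1-x));   % P(x|Y=1)
    pxE = prod(phiE.^x .* (1-phiE).^(1-x));   % P(x|Y=0)

    pS_given_x = (pxS * pS) / (pxS * pS + pxE * pE);   % Bayes rule
end

Calling naiveBayesPredict with the 13-by-5 matrix from the top of the post and the test point [1 0 1 0]' should give roughly 0.77, i.e. the test point is classified as Scottish.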

And there you have it!