Practical Deep Learning with Keras and Python (New Video Course)

I’ve just finished creating a new video course on Udemy about Practical Deep Learning with Keras and Python. It’s aimed at two types of people:

  1. Those who are just coming to machine learning and deep learning and want a soft, code-based introduction, as opposed to the mathematical treatment the subject typically gets.
  2. Those who have studied ML/DL before but have trouble applying the concepts in code.

For the dedicated readers of my blog, I’m making it available at the minimum price of just $9.99. Please use the following coupon link to access it at this price.

If you want to receive updates about content uploads, coupons and promotions, please subscribe to my mailing list here on Mailchimp.



Deep Learning Experiments on Google’s GPU Machines for Free

Update: If you are interested in getting a running start with machine learning and deep learning, I have created a course that I’m offering to my dedicated readers for just $9.99: Practical Deep Learning with Keras and Python.

So you’ve been working on machine learning and deep learning and have realized that it’s a slow process that requires a lot of compute power, and that power is not very affordable. Fear not! There is a way to run your experiments on Google’s GPU machines for free. In this little how-to, I will share a link to a playground notebook that you can copy to your Google Drive and use to run your own experiments.

BTW, if you would like to receive updates when I post similar content, please sign up below:

Signup for promotions, course coupons and new content notifications through a short form here.


First, sign in to an account that has access to Google Drive (typically any Google/Gmail account). Then, click on this link over here that has my playground document and follow the instructions below to get your own private copy.
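Once your private copy opens, it’s worth confirming that a GPU is actually attached before kicking off a long experiment. Here is a minimal check, assuming the runtime type is set to GPU and TensorFlow is available in the environment (both assumptions on my part). Run these from a notebook cell, prefixing each command with an exclamation mark:

# Show the attached NVIDIA GPU, if any
nvidia-smi

# Ask TensorFlow which GPU device it sees (empty output means no GPU)
python -c "import tensorflow as tf; print(tf.test.gpu_device_name())"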

Backup Using rsync

Here’s a mini how-to on backing up files to a remote machine using rsync. It shows progress while it does its thing, updates any files that have changed locally, and keeps files on the remote end even if they were deleted from your local folder.

rsync -v -r --update --progress -e ssh /media/nam/Documents/ nam@<remote-ip>:/media/nam/backup/documents/

Here, /media/nam/Documents/ is the local folder and /media/nam/backup/documents/ is the backup folder on the remote machine, with <remote-ip> standing in for that machine’s IP address.
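Note that the command above never deletes anything on the remote end. If you instead want a true mirror, rsync’s --delete flag removes remote files that no longer exist locally. Since that is destructive, a dry run with -n first is a good habit:

# Preview what a mirroring run would do (-n means dry run, nothing is changed)
rsync -v -r -n --update --delete --progress -e ssh /media/nam/Documents/ nam@<remote-ip>:/media/nam/backup/documents/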

How to Access Google Adsense Reports

So, AdMob was acquired a while ago by Google, and it was recently announced that AdMob’s publisher reports would no longer be available through the old APIs. Instead, they now have to be retrieved through the AdSense API, which is based on OAuth 2.0 and is thus a real pain for those just getting started.

Turns out, the process is quite straightforward but extremely poorly documented. You can go through the AdSense reporting docs, the Google API library and the OAuth 2.0 specs, but you will soon be lost. After spending a couple of days decoding the requirements, I worked out a bare-metal approach to accessing the stats. Here is how.
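To give a taste of that bare-metal approach, here is a sketch using plain curl. It assumes you have already created OAuth 2.0 client credentials in the Google API console and obtained a refresh token for the AdSense account; the placeholder values and the particular report (earnings by date) are my own choices for illustration:

# Exchange the long-lived refresh token for a short-lived access token
curl -d "client_id=YOUR_CLIENT_ID" \
     -d "client_secret=YOUR_CLIENT_SECRET" \
     -d "refresh_token=YOUR_REFRESH_TOKEN" \
     -d "grant_type=refresh_token" \
     https://accounts.google.com/o/oauth2/token

# Use the returned access_token to generate a simple earnings-by-date report
curl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
     "https://www.googleapis.com/adsense/v1.4/reports?startDate=2014-01-01&endDate=2014-01-31&metric=EARNINGS&dimension=DATE"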


Hadoop 2.2.0 – Single Node Cluster

We’re going to use the Hadoop tarball we compiled earlier to run a pseudo-cluster. That means we will run a one-node cluster on a single machine. If you haven’t already read the tutorial on building the tarball, please head over and do that first:

Getting Started with Hadoop 2.2.0 — Building

Start up your (virtual) machine and log in as the user ‘hadoop’. First, we’re going to set up the essentials required to run Hadoop. By the way, if you are running a VM, I suggest you kill the machine you used for building Hadoop and restart from a fresh instance of Ubuntu to avoid any compatibility issues later. For reference, the OS we are using is 64-bit Ubuntu 12.04.3 LTS.
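Before the Hadoop-specific configuration, the essentials amount to a JDK, an SSH server with passphraseless login to localhost, and a couple of environment variables. Here is a sketch of those steps; the package names and install paths are my assumptions for Ubuntu 12.04, so adjust them to your setup:

# Install a JDK and an SSH server (package names assumed for Ubuntu 12.04)
sudo apt-get install openjdk-7-jdk openssh-server

# Hadoop daemons talk to each other over SSH, so enable passphraseless login to localhost
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh localhost exit   # should succeed without prompting for a password

# Point Hadoop at the JDK and put its binaries on the PATH (paths assumed)
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_HOME=/home/hadoop/hadoop-2.2.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin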


Getting Started with Hadoop 2.2.0 — Building

I wrote a tutorial on getting started with Hadoop back in the day (around mid-2010). Turns out the distribution has moved on quite a bit in the latest versions, and that tutorial is unlikely to work now. I tried setting up Hadoop on a single-node “cluster” using Michael Noll’s excellent tutorial, but that too was out of date. And of course, the official documentation on Hadoop’s site is lame.

Having struggled with it for two days, I finally got the steps smoothed out, and this is an effort to document them for future use.
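To give away the punchline, the build itself boils down to Maven’s dist profile. Here is a sketch of the core commands; the prerequisite package names are my assumptions for Ubuntu, and the protobuf version is the usual sticking point:

# Build prerequisites (package names assumed for Ubuntu 12.04)
sudo apt-get install maven openjdk-7-jdk build-essential cmake zlib1g-dev libssl-dev

# Hadoop 2.2.0 needs protoc 2.5.x; if your distro packages an older
# protobuf (Ubuntu 12.04 does), build 2.5.0 from source first, then verify:
protoc --version

# From the top of the Hadoop 2.2.0 source tree, build the binary distribution tarball
mvn package -Pdist,native -DskipTests -Dtar

# On success, the tarball lands under hadoop-dist/target/
ls hadoop-dist/target/hadoop-2.2.0.tar.gz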
