Geek stuff, Tutorials

Backup Using rsync

Here’s a mini howto on backing up files to a remote machine using rsync. It shows progress while it does its thing, updates any files that have changed on the remote end, and keeps remote copies of files that were deleted from your local folder (nothing is removed on the backup side).

rsync -v -r --update --progress -e ssh /media/nam/Documents/ nam@192.168.0.105:/media/nam/backup/documents/

Here,  /media/nam/Documents/ is the local folder and /media/nam/backup/documents/ is the backup folder on the machine with IP 192.168.0.105.
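If you instead want the backup to be an exact mirror, so that deletions propagate too, rsync’s --delete flag does that, and archive mode (-a) additionally preserves permissions and timestamps. Here’s a sketch using the same paths as above; the -n (dry-run) flag is worth keeping on the first pass so you can preview what would be deleted:

rsync -a -v --progress --delete -n -e ssh /media/nam/Documents/ nam@192.168.0.105:/media/nam/backup/documents/

Drop the -n once the output looks right.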

Hadoop, Tutorials

Hadoop 2.2.0 – Single Node Cluster

We’re going to use the Hadoop tarball we compiled earlier to run a pseudo-cluster. That means we will run a one-node cluster on a single machine. If you haven’t already read the tutorial on building the tarball, please head over and do that first.

Getting started with Hadoop 2.2.0 — Building

Start up your (virtual) machine and log in as the user ‘hadoop’. First, we’re going to set up the essentials required to run Hadoop. By the way, if you are running a VM, I suggest you kill the machine used for building Hadoop and restart from a fresh instance of Ubuntu to avoid any compatibility issues later. For reference, the OS we are using is 64-bit Ubuntu 12.04.3 LTS.
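In case the ‘hadoop’ user doesn’t exist yet on the fresh instance, here’s a minimal sketch for creating it and giving it passwordless SSH to localhost, which Hadoop’s start-up scripts rely on (the group and user names are just the convention this tutorial assumes):

sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop
su - hadoop
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys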

Continue reading “Hadoop 2.2.0 – Single Node Cluster”

Geek stuff, Hadoop, Linux, Tutorials

Getting started with Hadoop 2.2.0 — Building

I wrote a tutorial on getting started with Hadoop back in the day (around mid-2010). Turns out the distro has moved on quite a bit since then, and that tutorial is unlikely to work with the latest versions. I tried setting up Hadoop on a single-node “cluster” using Michael Noll’s excellent tutorial, but that too was out of date. And of course, the official documentation on Hadoop’s site is lame.

Having struggled for two days, I finally got the steps smoothed out, and this post is an effort to document them for future use.
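For a taste of where the post ends up: the core build step, per Hadoop’s own BUILDING.txt, is Maven’s dist profile. This sketch assumes protoc and the native toolchain are already installed, which the full post walks through:

mvn package -Pdist,native -DskipTests -Dtar

The resulting tarball lands under hadoop-dist/target/.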

Continue reading “Getting started with Hadoop 2.2.0 — Building”