Hakuna MapData! » 2012 » August

Ganglia configuration for a small Hadoop cluster and some troubleshooting

| Posted in Monitoring, Software, Troubleshooting |


Ganglia is an open-source, scalable and distributed monitoring system for large clusters. It collects, aggregates and provides time-series views of tens of machine-related metrics such as CPU, memory, storage and network usage. You can see Ganglia in action at UC Berkeley Grid.

Ganglia is also a popular solution for monitoring Hadoop and HBase clusters, since Hadoop (and HBase) has built-in support for publishing its metrics to Ganglia. With Ganglia you may easily see the number of bytes written by a particular HDFS datanode over time, the block cache hit ratio for a given HBase region server, the total number of requests to the HBase cluster, time spent in garbage collection and many, many others.
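
For example, publishing HDFS and MapReduce metrics to Ganglia usually boils down to editing hadoop-metrics.properties on every Hadoop node. Here is a minimal sketch, assuming Ganglia 3.1+ and a 1.x-style Hadoop; the collector hostname and the 10-second period are placeholder values of mine, not taken from this post:

# hadoop-metrics.properties (on every Hadoop node)
# GangliaContext31 speaks the Ganglia >= 3.1 wire protocol
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
dfs.period=10
dfs.servers=ganglia-collector.example.com:8649

mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
mapred.period=10
mapred.servers=ganglia-collector.example.com:8649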

Basic Ganglia overview

Ganglia consists of three components:

  • Ganglia monitoring daemon (gmond) – a daemon which needs to run on every single node that is monitored. It collects local monitoring metrics and announces them, and (if configured) receives and aggregates metrics sent to it from other gmonds (and even from itself).
  • Ganglia meta daemon (gmetad) – a daemon that periodically polls one or more data sources (a data source can be either a gmond or another gmetad) to receive and aggregate the current metrics. The aggregated results are stored in a database and can be exported as XML to other clients – for example, the web frontend.
  • Ganglia PHP web frontend – it retrieves the combined metrics from the meta daemon and displays them in the form of nice, dynamic HTML pages containing various real-time graphs.

If you want to learn more about gmond, gmetad and the web frontend, a very good description is available on Ganglia’s Wikipedia page. I hope that the following picture (showing an example configuration) helps you understand the idea.
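
To give a rough idea of how these pieces fit together in a small cluster, here is a minimal unicast-style configuration sketch (the hostnames, port and cluster name below are placeholders of my own, not values from this post): every gmond sends its metrics over UDP to one designated "collector" gmond, and gmetad polls that gmond over TCP.

# /etc/ganglia/gmond.conf (on every monitored node)
cluster {
  name = "hadoop-cluster"          # must match gmetad's data_source name
}
udp_send_channel {
  host = collector.example.com     # the gmond that aggregates the metrics
  port = 8649
}
udp_recv_channel {                 # strictly needed only on the collector node
  port = 8649
}
tcp_accept_channel {               # lets gmetad poll this gmond over TCP
  port = 8649
}

# /etc/ganglia/gmetad.conf (on the host running gmetad)
data_source "hadoop-cluster" collector.example.com:8649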

How to build a Hadoop cluster on Amazon Elastic MapReduce using Karashpere Studio for EMR

| Posted in Presentations, Software |


Here is my video (15 minutes long, in Polish) that shows how to create a Hadoop cluster on Amazon Elastic MapReduce and use Karashpere Studio for EMR (a plugin for Eclipse). It demonstrates how to rent 10 EC2 small instances to run an example calculation that processes ~220 GB of data in less than one hour, at a cost of $1.25.

Pigitos in action – Reading HBase column family content in a real-world application

| Posted in Programming |


In this post I will demonstrate how you can use Pigitos (a library that contains tiny, but highly useful UDFs for Apache Pig) to implement a “friends you may have” feature (inspired by “people you may know”) for a simple real-world application.

Problem definition

Assume that we are launching a social website called “CloseFriendBook” and we have to design a basic HBase table that stores information about the users and their close friends.

Our access pattern is either:

  • read a user profile (information like first name, email, age), or
  • read the full list of friends of a given user (theoretically, a user may have an unlimited number of close friends, but in reality they have no more than tens or hundreds of them).
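
One possible schema that serves both patterns keeps the profile and the friend list in two column families of a single table keyed by user id. The HBase shell session below is only a sketch of that idea; the table, family and qualifier names are my own assumptions, not necessarily the ones used in the original post:

hbase> create 'user', 'info', 'friends'
hbase> # profile fields live in the 'info' family
hbase> put 'user', 'user_001', 'info:fname', 'Alice'
hbase> put 'user', 'user_001', 'info:email', 'alice@example.com'
hbase> # one column per friend in the 'friends' family (qualifier = friend's user id)
hbase> put 'user', 'user_001', 'friends:user_002', '1'
hbase> put 'user', 'user_001', 'friends:user_003', '1'
hbase> # read only the profile ...
hbase> get 'user', 'user_001', {COLUMN => 'info'}
hbase> # ... or only the full list of friends
hbase> get 'user', 'user_001', {COLUMN => 'friends'}

Storing each friend as a separate column qualifier keeps the row small for tens or hundreds of friends and lets each access pattern be served by a single get restricted to one column family.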

A couple of basic, but useful tricks when working with Apache HBase shell

| Posted in Programming |


I would like to share some basic tricks for working with the Apache HBase shell that I have learned by reading HBase: The Definitive Guide by L. George and HBase in Action by N. Dimiduk and A. Khurana, and by taking part in the Cloudera Training for Apache HBase.

In this post, I will create an HBase table, populate it with sample data and scan it. Each step will demonstrate a different technique to achieve the goal.

Create the User table

You can pipe commands to the HBase shell and easily create the table with a single command (without the need to explicitly launch the HBase shell first).

$ echo "create 'user', 'info'" | hbase shell
 
create 'user', 'info'
0 row(s) in 1.7610 seconds
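
The same piping trick also covers the remaining steps; for example, a few sample rows (made up here just for illustration) can be inserted and scanned in one go, because the quoted string may span several lines:

$ echo "put 'user', 'u1', 'info:fname', 'Alice'
put 'user', 'u1', 'info:email', 'alice@example.com'
put 'user', 'u2', 'info:fname', 'Bob'
scan 'user'" | hbase shell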