HDFS becoming the de facto standard for Big Data

With the growing use of applications built on in-memory data grids, the underlying store where data is persisted no longer needs to be fast. It does, however, need to be fault-tolerant and scalable, and HDFS fills this requirement nicely.

There is a good explanation of the details at http://tinyurl.com/ag82suv
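As a rough sketch of what this looks like in practice, the snippet below uses Hadoop's Java FileSystem API to persist a blob (say, a serialized grid partition) from the application tier to HDFS. The namenode address, file path, and payload here are hypothetical placeholders, not values from any particular cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GridSnapshotToHdfs {
    public static void main(String[] args) throws Exception {
        // Point the client at the cluster; fs.defaultFS is the standard key.
        // "hdfs://namenode:8020" is a placeholder for your namenode address.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/grid-snapshots/partition-001.dat"); // hypothetical path

        // HDFS replicates each block across datanodes (3 copies by default),
        // so the writer gets fault tolerance without doing anything extra.
        try (FSDataOutputStream out = fs.create(path)) {
            out.writeUTF("serialized grid partition goes here"); // placeholder payload
        }
        fs.close();
    }
}

The point of the sketch: the write path is plain and sequential, with no low-latency tricks, because speed comes from the in-memory grid in front of it; HDFS only has to keep the data safe and keep scaling as it grows.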
