May 5, 2014
The Ingestion box in the reference architecture is displayed as the smallest box. However, it is the component that integrates with all the available data sources. That integration tends to be among the most complex and time-consuming tasks, but it is often relegated to a lower priority, which is a big mistake.
One needs to prioritize the data sources that generate the most value and ensure that their data can be ingested into the Big Data platform for the subsequent “cool” analytics.
In my experience, it is also extremely important to have a robust user interface for the ingestion layer. Otherwise, a series of manual steps can lead to errors and the ingestion of “bad” data, which minimizes the impact of the subsequent analytics.
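One cheap safeguard against that “bad” data is a validation gate in front of the platform. Here is a minimal sketch; the field names (`event_time`, `source_id`, `payload`) and the CSV input are my own illustrative assumptions, not part of any particular pipeline:

```python
# Hypothetical ingestion validator: quarantine "bad" rows before they
# reach the Big Data platform. Field names below are assumptions.
import csv
import io

REQUIRED_FIELDS = {"event_time", "source_id", "payload"}

def validate_record(row):
    """Return a list of problems; an empty list means the row is clean."""
    problems = []
    missing = REQUIRED_FIELDS - {k for k, v in row.items() if v}
    if missing:
        problems.append("missing fields: " + ", ".join(sorted(missing)))
    return problems

def ingest(csv_text):
    """Split incoming rows into (good, bad) so bad rows never land."""
    good, bad = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        (bad if validate_record(row) else good).append(row)
    return good, bad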
November 3, 2013
The NoSQL solutions available today provide distributed architectures with fault tolerance and scalability. However, to provide these benefits many NoSQL solutions have given up the strong data consistency and isolation guarantees provided by relational databases, coining a new term – “eventually consistent” – to describe their weak data consistency guarantees.
Is this acceptable? Shouldn’t we be demanding at least close to real time consistency?
A must-read article by Dave Rosenthal: http://gigaom.com/2013/11/02/next-gen-nosql-the-demise-of-eventual-consistency/
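To make “eventually consistent” concrete, here is a toy two-replica store of my own invention (it mimics the behavior, not any real NoSQL engine's API): a read served by a lagging replica can miss an acknowledged write until replication catches up.

```python
class Replica:
    """A single node holding its own copy of the data."""
    def __init__(self):
        self.data = {}

class EventuallyConsistentStore:
    """Toy store: writes land on one replica and propagate later."""
    def __init__(self, n=2):
        self.replicas = [Replica() for _ in range(n)]
        self.pending = []  # replicated log not yet applied everywhere

    def write(self, key, value):
        # Acknowledge as soon as the primary replica accepts the write.
        self.replicas[0].data[key] = value
        self.pending.append((key, value))

    def read(self, key, replica=1):
        # A read may hit a replica that has not applied the write yet.
        return self.replicas[replica].data.get(key)

    def replicate(self):
        # Anti-entropy pass: apply the pending log to all replicas.
        for key, value in self.pending:
            for r in self.replicas:
                r.data[key] = value
        self.pending.clear()

store = EventuallyConsistentStore()
store.write("balance", 100)
stale = store.read("balance")   # None: replica 1 is lagging
store.replicate()
fresh = store.read("balance")   # 100: replicas have converged
```

The window between `stale` and `fresh` is exactly the gap the article argues applications should not have to reason about.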
October 2, 2013
Are you evaluating Object storage? Have you converted existing applications to utilize Object storage? What is your experience doing this?
If you are not familiar with Object storage check this out.
Here is a basic overview of the difference between block and object storage – http://www.openstack.org/software/openstack-storage/
It seems like the time has come to consider Object storage.
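The core difference is the access model: block storage addresses fixed-size blocks by offset, while object storage addresses whole objects by key, with metadata attached. A small sketch of the two models, simulated locally (the file layout, key names, and metadata are illustrative assumptions, not any vendor's API):

```python
# Contrast block-style and object-style access, simulated on one machine.
import os
import tempfile

# --- Block-style: fixed-size blocks addressed by numeric offset ---
BLOCK_SIZE = 4096
volume = os.path.join(tempfile.mkdtemp(), "volume.img")
with open(volume, "wb") as f:
    f.write(b"\x00" * BLOCK_SIZE * 4)   # a tiny 4-block "volume"

def write_block(n, payload):
    with open(volume, "r+b") as f:
        f.seek(n * BLOCK_SIZE)          # the caller tracks where data lives
        f.write(payload.ljust(BLOCK_SIZE, b"\x00"))

def read_block(n):
    with open(volume, "rb") as f:
        f.seek(n * BLOCK_SIZE)
        return f.read(BLOCK_SIZE)

# --- Object-style: whole objects addressed by key, with metadata ---
object_store = {}

def put_object(key, data, metadata=None):
    object_store[key] = {"data": data, "meta": metadata or {}}

def get_object(key):
    return object_store[key]["data"]

write_block(2, b"hello")
put_object("logs/2013-10-02.txt", b"hello", {"content-type": "text/plain"})
```

Converting an application mostly means replacing seek/read bookkeeping with key-based get/put, which is why the conversion effort varies so much by application.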
October 2, 2013
While HBase provides a lot of built-in functionality to manage region splits, it is not sufficient to ensure optimal performance. The number of regions in a table, and how those regions are split, are crucial factors in understanding and tuning your HBase cluster load. You should monitor the load distribution across the regions at all times; if it changes over time, use manual splitting or set more aggressive region split sizes.
A good article from Hortonworks can help you get started on region splitting.
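When you do split manually, a common starting point is to pre-compute evenly spaced split points across the row-key space. A sketch, assuming hex-encoded row keys (`presplit_keys` is a hypothetical helper; real split points should follow your actual key distribution, not a uniform one):

```python
def presplit_keys(num_regions, key_width=8):
    """Evenly spaced hex split points for a table with num_regions regions.

    Assumes row keys are fixed-width lowercase hex strings; a table with
    N regions needs N-1 split points.
    """
    max_key = 16 ** key_width
    step = max_key // num_regions
    return [format(i * step, "0{}x".format(key_width))
            for i in range(1, num_regions)]

splits = presplit_keys(4)
# splits -> ['40000000', '80000000', 'c0000000'] for a 4-region table
```

You would pass points like these to the table-creation command so regions are balanced from day one instead of all load hitting a single initial region.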
May 5, 2013
This article does an excellent job of detailing the business use cases where Hadoop is useful and where traditional Database Management systems might be more appropriate.
April 29, 2013
While users have access to many tools that assist in performing large-scale data analysis tasks, understanding the performance characteristics of their parallel computations, such as MapReduce jobs, remains difficult. Step #1 is to create a test suite that you can reliably run after every change.
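The test suite does not need the cluster: express the map and reduce logic as plain functions and assert on small inputs. A minimal word-count sketch (the function names are my own, not any framework's API):

```python
# MapReduce logic factored into testable functions, no cluster required.
from collections import Counter
from itertools import chain

def map_phase(line):
    # Mapper: emit a (word, 1) pair for every whitespace-separated token.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Reducer: sum the counts emitted for each word.
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

def word_count(lines):
    # The full job: map every line, then reduce the combined output.
    return reduce_phase(chain.from_iterable(map_phase(l) for l in lines))

def test_word_count():
    # Run this after every change to the map or reduce logic.
    assert word_count([]) == {}
    assert word_count(["a b", "B a"]) == {"a": 2, "b": 2}

test_word_count()
```

With the logic isolated like this, performance experiments on the cluster never have to double as correctness checks.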
April 12, 2013
Excellent blog by Guident – http://tinyurl.com/bne6t8k – comparing Hadoop with traditional High Performance Computing. A specific use case of reading large log files is compared, and Hadoop is the winner in terms of performance.
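The log-reading use case comes down to the same divide-and-conquer idea on either platform: split the file, scan the chunks in parallel, combine the partial results. A single-machine sketch using Python's multiprocessing (the file layout and the `ERROR` marker are illustrative assumptions):

```python
# Single-machine sketch of the "scan a large log in parallel" use case.
import multiprocessing as mp

def count_errors(chunk_lines):
    # Each worker scans only its own slice of the log.
    return sum(1 for line in chunk_lines if "ERROR" in line)

def parallel_scan(path, workers=4):
    # Split the file into roughly equal chunks, one per worker --
    # the same partitioning idea Hadoop applies across machines.
    with open(path) as f:
        lines = f.readlines()
    step = max(1, len(lines) // workers)
    chunks = [lines[i:i + step] for i in range(0, len(lines), step)]
    with mp.Pool(workers) as pool:
        return sum(pool.map(count_errors, chunks))

if __name__ == "__main__":
    import os, tempfile
    path = os.path.join(tempfile.mkdtemp(), "app.log")
    with open(path, "w") as f:
        f.write("ERROR disk full\nINFO ok\n" * 1000)
    print(parallel_scan(path))  # 1000 matching lines
```

Hadoop's advantage in the benchmark is that its chunks live on many machines' local disks, so the scan scales with the cluster rather than with one node's I/O.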
What happens if we have access to traditional HPC hardware? Should we use Hadoop on HPC? Check out the excellent article by S. Krishnan on this – http://tinyurl.com/cwbzvof . The results are not conclusive, but it is an interesting read.
Bottom line: it appears to depend on the use case. Has anyone done a more detailed comparison?