I am a total believer in the “Design-First” approach, where we think about the UI from the user's perspective before we engage technical architects. I have led many such successful projects over the past several years and believe it is a recipe for success. It looks like Microsoft has taken the same approach. Check out http://gigaom.com/2013/10/13/a-peek-inside-microsofts-new-design-first-development-strategy/
Are you evaluating object storage? Have you converted existing applications to use object storage? What was your experience doing so?
If you are not familiar with object storage, check this out.
Here is a basic overview of the difference between block and object storage – http://www.openstack.org/software/openstack-storage/
It seems the time has come to consider object storage.
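To make the block-vs-object difference concrete, here is a toy Python sketch. The class names and methods are purely illustrative (not any real object-store SDK): an object store exposes a flat key namespace of whole objects with attached metadata, while a block device exposes fixed-size blocks addressed by number, with names layered on top by a filesystem.

```python
class ToyObjectStore:
    """Toy in-memory model of object storage: a flat key namespace,
    whole-object reads/writes, and per-object user metadata."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data, metadata=None):
        # Objects are written whole; an overwrite replaces the prior version.
        # There is no in-place partial update of an object.
        self._objects[key] = (bytes(data), dict(metadata or {}))

    def get(self, key):
        data, metadata = self._objects[key]
        return data, metadata


class ToyBlockDevice:
    """Toy in-memory model of block storage: a fixed array of
    equal-size blocks, read and written in place by block number."""

    def __init__(self, block_size=512, num_blocks=8):
        self.block_size = block_size
        self._blocks = [bytearray(block_size) for _ in range(num_blocks)]

    def write_block(self, n, data):
        # In-place update of part of one block; a filesystem is
        # responsible for mapping file names onto block numbers.
        self._blocks[n][: len(data)] = data

    def read_block(self, n):
        return bytes(self._blocks[n])
```

The sketch shows why converting an existing application is real work: file/offset I/O has to be replaced with whole-object get/put calls, and metadata that used to live in the filesystem moves onto the objects themselves.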
While HBase provides a lot of built-in functionality to manage region splits, that alone is not sufficient to ensure optimal performance. The number of regions in a table, and how those regions are split, are crucial factors in understanding and tuning your HBase cluster's load. You should monitor the load distribution across regions at all times, and if the load distribution changes over time, use manual splitting or set more aggressive region split sizes.
Here is a good article from Hortonworks to help you get started with region splitting.
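As a concrete starting point for manual splitting, here is a small Python sketch (illustrative only, not part of the HBase API) that computes evenly spaced split keys. It assumes your rowkeys begin with a uniformly distributed 8-character hex hash prefix, e.g. the first 8 characters of an MD5 of the natural key; under that assumption, equal-width prefix ranges receive roughly equal load, and the resulting keys could be used to pre-split a table at creation time.

```python
def hex_split_keys(num_regions):
    """Return num_regions - 1 evenly spaced 8-char hex split keys
    covering the prefix space 0x00000000..0xffffffff.

    Assumes rowkeys start with a uniformly distributed hex hash
    prefix, so equal-width prefix ranges get roughly equal load.
    """
    step = 2 ** 32 // num_regions
    # Split keys are the boundaries *between* regions, hence 1..n-1.
    return ["%08x" % (i * step) for i in range(1, num_regions)]
```

For example, `hex_split_keys(4)` yields `['40000000', '80000000', 'c0000000']`, i.e. three boundaries producing four regions of equal prefix width.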
The proposed steps might appear logical, but involving all stakeholders in committee meetings as a first step is a sure way to bring the project to a screeching halt.
I would reorder the steps as follows for a more agile approach to moving e-discovery to the cloud:
1 – Evaluate the e-discovery platform first and the cloud options second – This ensures you have the right e-discovery platform.
2 – Assess potential – and realistic – risks associated with security, data privacy and data loss prevention – This is homework to ensure you are prepared to answer the questions that will surely come up regarding data security. In most cases you will find that the cloud solution is more secure than your existing behind-the-firewall setup.
3 – Learn the differences between public and private clouds – This is part of the security assessment, and this step should end in a decision on which cloud to use for your needs.
4 – Run a pilot on a small project before moving to larger, mission-critical matters – This puts the project in action mode.
5 – Benchmark your existing e-discovery processes, including data upload, processing, review and export – This arms you with information as you sell the move to the cloud. Once again, you will find that these processes generally run faster in the cloud than behind the firewall if implemented correctly.
6 – Document and define areas of potential cost savings – This is a necessary step, required to make the case for the transition to the cloud.
7 – Leverage the success or adoption of other SaaS solutions in the organization to lessen resistance – This is the start of the sales pitch: we have done it before, and we are now going to take a similar approach for e-discovery.
8 – Actively involve all stakeholders across multiple departments – Now get into a meeting with all parties. At this stage you are armed with success on a small project, plus information on data security, benchmarks and cost, to face the “committee”.
9 – Develop an implementation plan, including an internal communication strategy – You have the OK, now go execute!
10 – Understand that you are still the ultimate custodian of all electronically stored information.
Recent conversations with major clients indicate a trend: large enterprises want the scaling benefits of the cloud within their own firewalls. Nebula has come out with a product, Nebula One, to satisfy this need – very interesting. Check out nebula.com
Excellent article describing embedded objects.
We should ignore the cons listed in the article as there is no real excuse for not extracting embedded objects.
Why should email archiving solutions be tightly integrated with eDiscovery?
It enables enterprises to have a smooth workflow across all aspects of e-discovery, from collection/preservation to analysis/review to production/presentation. This saves a lot of time that would otherwise be wasted on importing/exporting data from different systems, and reduces the risk that something gets lost in the shuffle.
It is not ready, according to Greg Buckles – http://ediscoveryjournal.com/2013/08/exchange%E2%80%99s-litigation-hold/
The following critical information is not preserved:
- Folder Location
- Read/Unread Status
- Reply/Forward Info
Also check out this comment on readiness of Exchange 2013 – http://sourceoneinsider.emc.com/2013/02/15/understanding-whats-new-in-exchange-2013-for-archiving-and-ediscovery/
Exchange 2013 is unlikely to be sufficient for organizations that face a higher frequency of litigation and have invested in in-house legal staff to perform early case assessment and sophisticated review of content before exporting it to external legal counsel.
The Information Governance Reference Model is circular, so it works best when there is a continuous focus on IG. One of the biggest failures in Information Governance initiatives is that IG is treated as a project with a start and end date. These initiatives are only successful if the end goal is to incorporate IG into required operational tasks.
This article does an excellent job of detailing the business use cases where Hadoop is useful and where traditional database management systems might be more appropriate.