January 20, 2013
Our applications need to change rapidly, so the traditional stance that change should be made hard because change is risky is flawed. Massive changes pushed through rigorous change-management approvals may show no visible risk, yet they can eventually lead to massive blowups that catch everyone off guard. Each massive release also introduces new operational vulnerabilities because of its dependence on manual process steps.
The solution is to keep changes small in scope and to exercise the deployment process until a release becomes a boring, low-risk task that relies heavily on automation. This requires organizational acceptance of rolling back a change when an issue arises, but because the scope of each change is small, a rollback is manageable. Over time this approach makes the product more robust and less susceptible to massive blowups.
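The small-change-plus-automated-rollback idea can be sketched as a few lines of code. This is a hypothetical illustration, not a real deployment tool: `health_check` and `deploy` are stand-ins for whatever smoke tests and release automation a team actually uses.

```python
# Hypothetical sketch: deploy a small change, and roll back automatically
# when the post-deploy health check fails. Names are illustrative only.

def health_check(version):
    # Stand-in for a real post-deploy smoke test.
    return version != "v2-bad"

def deploy(current, candidate):
    """Deploy `candidate` and return whichever version is left running."""
    if health_check(candidate):
        return candidate   # small change passed its checks; keep it
    return current         # automated rollback to the last good version
```

Because each change is small, the rollback path is cheap and routine rather than an emergency.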
January 17, 2013
Just released a 30-page site built with Bootstrap. Simple and extremely fast to build: less than one week with one developer who was new to the framework. The site works very well on the iPad, Android devices, and the iPhone.
January 9, 2013
Twitter’s use of actual humans to make sense of its search results points to a mundane reality: even with machine learning and lots of data, humans are sometimes the best source of insight.
Check out Twitter’s post: http://tinyurl.com/ar6dqyw
As soon as Twitter discovers a popular new search query, it sends the query to human evaluators on Amazon’s Mechanical Turk service, who answer a variety of questions about it. The human feedback is then fed into the machine learning models, which make use of the additional information.
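The human-in-the-loop step described above can be sketched simply. This is an illustrative toy, not Twitter’s actual pipeline: several human judgments for a query are aggregated by majority vote, and the resulting label is added to the data a downstream model trains on.

```python
# Illustrative sketch: aggregate human judgments (e.g. from Mechanical
# Turk workers) by majority vote and record them as training labels.
from collections import Counter

def majority_label(judgments):
    """Return the most common label among several human judgments."""
    return Counter(judgments).most_common(1)[0][0]

training_data = {}  # query -> label, consumed by some downstream model

def label_new_query(query, judgments):
    """Attach the human consensus label to a newly popular query."""
    training_data[query] = majority_label(judgments)
```

The point is the direction of the data flow: humans resolve the ambiguous cases, and the model absorbs their answers rather than guessing on its own.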
January 6, 2013
The organizational approach in enterprise content and records management systems has traditionally relied on document attribute data as the primary information source for content storage and retrieval. Attributes such as business unit, document type, expiration date, and functional area are common.
In the past few years, content and records management applications have often used a “faceted” taxonomy design to define how content and records are classified.
Is this sufficient? It might be acceptable in some cases for content management, but falls short of expectations for knowledge management.
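To make the faceted approach concrete, here is a minimal sketch: each document carries attribute facets like those named above, and retrieval filters on any combination of them. The field names and records are illustrative, not drawn from any particular system.

```python
# Minimal sketch of faceted classification and retrieval.
# Facet names (business_unit, doc_type, expires) are illustrative.
docs = [
    {"id": 1, "business_unit": "legal", "doc_type": "contract", "expires": "2014-06"},
    {"id": 2, "business_unit": "legal", "doc_type": "memo",     "expires": "2013-03"},
    {"id": 3, "business_unit": "hr",    "doc_type": "contract", "expires": "2015-01"},
]

def facet_search(docs, **facets):
    """Return documents matching every requested facet value."""
    return [d for d in docs if all(d.get(k) == v for k, v in facets.items())]
```

This works well when the attributes capture what users actually search for, which is exactly where it falls short for knowledge management: the facets say nothing about what the document is *about*.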
Large volumes of knowledge content are often well suited to auto-categorization. The tools and methods most commonly used for auto-categorization are text analytics and image analysis, which can be expensive to implement. That expense makes it hard for small law offices, which also have large volumes of data to contend with, to benefit from the technology.
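At the inexpensive end of the text-analytics spectrum sits simple keyword-based categorization. The toy sketch below shows the idea; the categories and keyword sets are invented for illustration, and a real system would need far richer vocabularies.

```python
# Toy sketch of keyword-based auto-categorization.
# Categories and keyword sets are illustrative only.
CATEGORY_KEYWORDS = {
    "litigation":  {"plaintiff", "defendant", "motion"},
    "real_estate": {"lease", "tenant", "easement"},
}

def categorize(text):
    """Assign the category whose keyword set overlaps the document most."""
    words = set(text.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"
```

Even a crude scheme like this can triage large document collections at near-zero cost, which is the trade-off a small office cares about.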
In upcoming posts, I will talk about how to deliver inexpensive solutions to this problem.