Saffron Technology has been on a quest since 1999 to replicate the way the human brain learns using associative memory. Saffron is now commercially available as a cognitive computing platform following beta testing for real-time operational risk intelligence and decision support in defense, energy, healthcare and manufacturing applications.
For up-and-coming data scientists who need to get up to speed on Hadoop architectures, here is another in a long line of compelling Big Data & Brews episodes. In the video below we hear from three Hadoop luminaries about the Hadoop projects they’ve worked on – Erich Nachbar on Spark, Michael Stack on HBase and Ari Zilka (from Hortonworks) on Stinger. A great insider’s perspective!
One of the attractions of Hadoop Summit 2014 was the Big Data & Brews interview series – “Live from Hadoop Summit.” These short, well-focused discussions always shed good light on important industry trends. In the episode below, the conversation turns to the subject of SQL on Hadoop. Stefan Groschupf, the CEO of Datameer, recorded a special interview with Ovum analyst Tony Baer, who gave his thoughts on the topic.
The recent Big Ideas for Sustainable Prosperity research conference brought together some of the world’s preeminent environment & economy thinkers for two days to share knowledge and think big about Policy Innovation for Greening Growth. In the video presentation below, Dr. Matthew E. Kahn argues that the combination of Big Data and field experiments can sharply improve urban quality of life.
Here is a great learning resource for anyone wishing to dive into the field of machine learning – the complete “Machine Learning” course from Spring 2011 at Carnegie Mellon University. The course is taught by Tom Mitchell, Chair of the Machine Learning Department.
Big Workflow is a new industry term coined by Adaptive Computing that refers to technology that accelerates insights by more efficiently processing intense simulations and big data analysis. Big Workflow derives its name from its ability to solve big data challenges by streamlining the workflow to deliver valuable insights from massive quantities of data across multiple platforms, environments, and locations.
As the latest installment of the Big Data Use Case series here on insideBIGDATA, we offer a compelling presentation by our friend Jeremy Carroll, operations engineer at Pinterest. Jeremy talks about how Pinterest uses HBase at massive scale.
“A common thread for many of this year’s new IBM Fellows is their commitment to developing solutions and practical applications in the field of Big Data and Analytics. IBM is a leader in the space – with 1,500 Big Data and Analytics-related patents in 2013 alone, and $24 billion in investments since 2005 through both acquisitions and R&D – and these fellows maintain the drumbeat of momentum that has made IBM number one in Big Data market share for the second year running.”