LexisNexis® Risk Solutions has announced its inaugural HPCC Systems Developer Contest. Developers and other technical professionals have the opportunity to demonstrate how they have leveraged HPCC Systems to solve either a Big Data or a Complex Query problem.
In this installment we’ll set the stage for in-memory computing (IMC) technology in terms of its current state as well as its next stage of evolution. We’ll begin with a discussion of the capabilities of in-memory databases (IMDBs) and in-memory data grids (IMDGs), and show how they differ. We’ll finish the section by demonstrating that neither one alone is sufficient for a company’s strategic move to IMC; instead, we will explain why a comprehensive in-memory data platform is needed.
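One way to picture the IMDB/IMDG difference is by access model: an IMDB exposes SQL over in-memory relational data, while an IMDG exposes a distributed key-value API and leaves aggregation to the application. The sketch below is purely illustrative, using SQLite’s in-memory mode as a stand-in for an IMDB and a plain dictionary as a stand-in for a grid; real products add partitioning, replication, and much more.

```python
# Illustrative contrast (simplified): IMDB-style SQL access vs. IMDG-style
# key-value access. Both stores here are local toys, not real products.
import sqlite3

# IMDB-style access: full SQL over an in-memory relational store.
imdb = sqlite3.connect(":memory:")
imdb.execute("CREATE TABLE trades (symbol TEXT, qty INTEGER)")
imdb.executemany("INSERT INTO trades VALUES (?, ?)", [("ACME", 100), ("ACME", 50)])
total = imdb.execute(
    "SELECT SUM(qty) FROM trades WHERE symbol = 'ACME'"
).fetchone()[0]

# IMDG-style access: put/get by key; any aggregation is the application's job.
imdg = {}
imdg[("ACME", 1)] = 100
imdg[("ACME", 2)] = 50
grid_total = sum(qty for (symbol, _), qty in imdg.items() if symbol == "ACME")
```

Both paths compute the same sum, but the IMDB pushes the query into the engine while the IMDG client iterates over entries itself, which is exactly the capability gap a unified platform aims to close.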
Using predictive analytics involves understanding and preparing the data, defining the predictive model, and following the predictive process. Predictive models can assume many shapes and sizes, depending on their complexity and the application for which they are designed. The first step is to understand what questions you are trying to answer for your organization.
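The three steps above can be sketched in miniature. The following plain-Python example is hypothetical (the data and the simple linear model are invented for illustration): prepare historical data, define and fit a model, then apply it to new inputs.

```python
# Minimal sketch of a predictive workflow: prepare data, define/fit a model,
# then score new inputs. Data and model are purely illustrative.

def fit_linear(xs, ys):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# Step 1: understand and prepare the data (here: ad spend vs. sales, invented).
ad_spend = [1.0, 2.0, 3.0, 4.0, 5.0]
sales    = [2.1, 3.9, 6.1, 8.0, 9.9]

# Step 2: define the predictive model and fit it to historical data.
model = fit_linear(ad_spend, sales)

# Step 3: follow the predictive process — score a new, unseen input.
forecast = predict(model, 6.0)
```

Real models are of course far richer, but the shape of the process — prepare, fit, score — stays the same regardless of complexity.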
The panel discussion video below comes from the Los Angeles Spark Users Group. The talk fosters a lively discussion of Spark’s initial goals, where it came from, and what the future holds for Spark. Many leading Big Data vendors are responding by introducing Spark’s capabilities into their architectures. The panel features representatives from top Hadoop distribution vendors: Cloudera, MapR, and Pivotal.
This article is the third in an editorial series intended to give enterprise thought leaders direction on leveraging big data technologies in support of analytics, so that organizations can work more independently and effectively as they strive to increase the value of their corporate data assets.
“When organizations operate both Lustre and Apache Hadoop within a shared HPC infrastructure, there is a compelling use case for using Lustre as the file system for Hadoop analytics, as well as HPC storage. Intel Enterprise Edition for Lustre includes an Intel-developed adapter which allows users to run MapReduce applications directly on Lustre. This optimizes the performance of MapReduce operations while delivering faster, more scalable, and easier to manage storage.”
Datameer, an end-to-end big data analytics application for the Hadoop ecosystem, today introduced Datameer 5.0 with Smart Execution, a new patent-pending technology that intelligently and dynamically selects the best-of-breed compute framework at each step in the big data analytics process.
Enterprise data assets are what feed the predictive analytic process, and any tool must facilitate easy integration with all the different types of data sources required to answer critical business questions. Robust predictive analytics needs to access analytical and relational databases, OLAP cubes, flat files, and enterprise applications.
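In practice, that integration usually means pulling heterogeneous sources into one common shape before modeling. The sketch below is a simplified, hypothetical illustration using only the Python standard library: an in-memory SQLite table stands in for a relational warehouse, and a CSV string stands in for an exported flat file; both land in the same list-of-dicts form.

```python
# Sketch of a unified access layer over heterogeneous sources.
# Source names and data are hypothetical stand-ins for enterprise systems.
import csv
import io
import sqlite3

def read_relational(conn, query):
    """Pull rows from a relational database into a list of dicts."""
    cur = conn.execute(query)
    cols = [c[0] for c in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

def read_flat_file(text):
    """Pull rows from CSV flat-file content into the same list-of-dicts shape."""
    return list(csv.DictReader(io.StringIO(text)))

# Relational source: an in-memory SQLite table standing in for a warehouse.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, region TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "east"), (2, "west")])

# Flat-file source: CSV text standing in for an exported enterprise file.
csv_text = "id,churned\n1,0\n2,1\n"

# Both sources arrive in one common shape, ready for the modeling step.
customers = read_relational(db, "SELECT id, region FROM customers")
churn = read_flat_file(csv_text)
```

Normalizing every source to one record shape early is what lets the downstream predictive steps stay agnostic about where the data originated.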