“Fortissimo Foundation is a clustered, pervasive, global direct-remote I/O access system that linearly scales I/O bandwidth, memory, Flash and hard disk storage capacity, and server performance. It provides an “in-memory” scale-out solution that intelligently aggregates all resources of a data center cluster into a massive global name space, bridging all remote compute and storage resources to look and act as if they were local.”
In this special guest feature, Andrew Herman, President of CorSource, addresses data quality, a challenge facing all companies in the age of mass data collection. “Successfully tackling data quality is imperative, and achievable with a progressive, methodical approach. Your competitors are struggling with this very issue, and the question is whether this is going to remain your problem, or just theirs.”
In this special guest feature, ISC Big Data conference chair Sverre Jarp discusses the Internet of Things with Dirk Slama, Director of Business Development at Bosch Software. In his keynote presentation on October 1, Slama will focus on how the IoT is enabling new business models and services, stressing the key success factors and presenting a framework that he believes will help enable that success.
In this podcast, the Radio Free HPC team discusses the new TPCx-HS benchmark for Big Data. Designed to assess a broad range of system topologies and implementation methodologies, TPCx-HS is the industry’s first objective specification enabling measurement of both hardware and software, including the Hadoop runtime, Hadoop Filesystem API compatible systems, and MapReduce layers.
Manifest Insights is an exciting new startup from Portland. “We are a data consulting and visualization company. We help companies gather data from all the different sources where it may reside and bring it together in a dashboard that is both easy to use and powerful, where they can slice and dice and view the data.”
“Dolphin helps companies manage data volume and optimize processes so they can balance the performance and processing capabilities of SAP systems against the cost of running those systems. We develop a data volume management strategy so our customers can keep business-critical data in SAP HANA, to get the fast, efficient processing they need, and move static or business-complete data onto other storage where it is still accessible. With a data volume management strategy in place, our customers are better prepared to go live on HANA and improve their return on investment.”
“The importance of Big Data processing is steadily growing. Numerous large-scale problems that represent social, physical, or financial interactions are most naturally modeled with graph representations. Such graphs require the processing and storage of billions of vertices and edges, and demand high-performance graph computations to process them efficiently. The Graph500 list encourages the development of such large-scale high-performance graph processing systems and offers a forum to showcase and discuss the performance of various solutions.”
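The Graph500 benchmark centers on a breadth-first search kernel run over a large synthetic graph, with results reported in traversed edges per second (TEPS). The Python sketch below is purely illustrative of that kind of kernel on a toy edge list of our own invention; it is not the Graph500 reference code, which generates Kronecker graphs with billions of edges and runs on distributed systems.

```python
from collections import defaultdict, deque
import time

def bfs(adjacency, source):
    """Return a BFS parent tree rooted at `source` (illustrative kernel)."""
    parent = {source: source}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for w in adjacency[v]:
            if w not in parent:
                parent[w] = v
                frontier.append(w)
    return parent

# Toy undirected edge list standing in for the benchmark's synthetic graph.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
adjacency = defaultdict(list)
for u, v in edges:
    adjacency[u].append(v)
    adjacency[v].append(u)

start = time.perf_counter()
tree = bfs(adjacency, source=0)
elapsed = time.perf_counter() - start

# TEPS counts edges within the traversed component divided by BFS time.
traversed_edges = sum(1 for u, v in edges if u in tree and v in tree)
print(f"reached {len(tree)} vertices, ~{traversed_edges / elapsed:.0f} TEPS")
```

At Graph500 scale the same idea is implemented with distributed, level-synchronous or direction-optimizing BFS rather than a single-machine queue, which is what separates the showcased systems on the list.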