Enterprise Grade Lustre in the Clouds

With the release of Intel® Cloud Edition for Lustre software in collaboration with key cloud infrastructure providers like Amazon Web Services (AWS), commercial customers have an ideal opportunity to employ a production-ready version of Lustre—optimized for business HPDA—in a pay-as-you-go cloud environment.

Performance Optimization of Hadoop Using InfiniBand RDMA

DK Panda

“The Hadoop framework has become the most popular open-source solution for Big Data processing. Traditionally, Hadoop communication calls are implemented over sockets and do not deliver best performance on modern clusters with high-performance interconnects. This talk will examine opportunities and challenges in optimizing performance of Hadoop with Remote DMA (RDMA) support, as available with InfiniBand, RoCE (RDMA over Converged Enhanced Ethernet) and other modern interconnects.”
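
To make the transport question concrete, here is a minimal sketch, in plain Python rather than Hadoop code, of the kind of ordinary TCP bulk transfer the default socket path boils down to: every byte crosses the kernel network stack before it reaches the NIC, and that copy and context-switch cost is what RDMA-capable fabrics such as InfiniBand are designed to remove. The peer hostname and port below are placeholders.

```python
# Minimal sketch (plain Python, not Hadoop source): a TCP bulk transfer of the
# kind the default socket transport reduces to. Every byte is copied through
# the kernel network stack on its way to the NIC; that copy and context-switch
# overhead is what RDMA-capable interconnects avoid. Host and port are
# placeholders.
import socket
import time

CHUNK = 1 << 20           # 1 MiB per send
TOTAL = 256 * CHUNK       # 256 MiB payload for the illustration

def stream_to_peer(host: str, port: int) -> float:
    """Send TOTAL bytes over an ordinary TCP socket and return MB/s."""
    payload = bytearray(CHUNK)
    sent = 0
    start = time.time()
    with socket.create_connection((host, port)) as sock:
        while sent < TOTAL:
            sock.sendall(payload)   # user buffer -> kernel copy -> wire
            sent += CHUNK
    elapsed = time.time() - start
    return (TOTAL / 1e6) / elapsed

if __name__ == "__main__":
    # "shuffle-peer.example" and 9000 are placeholders, not Hadoop endpoints.
    print(f"{stream_to_peer('shuffle-peer.example', 9000):.1f} MB/s")
```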

Lustre 101

This week’s Lustre 101 article looks at the history of Lustre and the typical configuration of this high-performance, scalable storage solution for big data applications.
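
To give a flavor of what a typical client-side configuration looks like, here is a minimal sketch of mounting a Lustre file system on a compute node; the management-server NID and file-system name are hypothetical, while the mount -t lustre form is the standard Lustre client mount.

```python
# Minimal sketch, assuming a management server at the hypothetical NID
# "mgs01@tcp0" and a file system named "lfs01": mounting Lustre on a client
# node. The "mount -t lustre <MGS NID>:/<fsname> <mount point>" form is the
# standard client mount; everything else here is illustrative.
import os
import subprocess

MGS_NID = "mgs01@tcp0"       # management server, reachable over LNet/TCP
FSNAME = "lfs01"             # file-system name chosen when the targets were formatted
MOUNT_POINT = "/mnt/lustre"  # local directory where the global namespace appears

def mount_lustre_client() -> None:
    """Mount the Lustre file system; requires root and the Lustre client modules."""
    os.makedirs(MOUNT_POINT, exist_ok=True)
    subprocess.run(
        ["mount", "-t", "lustre", f"{MGS_NID}:/{FSNAME}", MOUNT_POINT],
        check=True,
    )

if __name__ == "__main__":
    mount_lustre_client()
```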

Performance Comparison of Intel Enterprise Edition Lustre and HDFS for MapReduce

In this video from the LAD’14 Lustre Administrators and Developers Conference in Reims, Rekha Singhal from Tata Consultancy Services presents: Performance Comparison of Intel Enterprise Edition Lustre and HDFS for MapReduce Applications.
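
One practical difference that underlies such comparisons: Lustre is mounted as a POSIX file system, so a MapReduce-style task can read its input split with ordinary file I/O, whereas HDFS data is reached through hdfs:// URIs and the HDFS client. The toy sketch below shows the POSIX side; every path in it is hypothetical.

```python
# Illustrative sketch only: with Lustre mounted at a POSIX path, a MapReduce-
# style map task can read its input split with ordinary file I/O, whereas HDFS
# data is reached through hdfs:// URIs and the HDFS client library. All paths
# below are hypothetical.
from collections import Counter

LUSTRE_SPLIT = "/mnt/lustre/benchmarks/wordcount/part-00000"   # plain POSIX path
# An HDFS equivalent would look like
# "hdfs://namenode:8020/benchmarks/wordcount/part-00000" and would need the
# HDFS client rather than open().

def map_word_count(path: str) -> Counter:
    """Toy map phase: count words in one input split read via POSIX I/O."""
    counts = Counter()
    with open(path, "r", encoding="utf-8", errors="replace") as split:
        for line in split:
            counts.update(line.split())
    return counts

if __name__ == "__main__":
    print(map_word_count(LUSTRE_SPLIT).most_common(10))
```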

10 Ways IBM Platform Computing Saves You Money

IBM Platform Computing products can save an organization money by reducing a variety of direct costs associated with grid and cluster computing. Your organization can slow the rate of infrastructure growth and reduce the costs of management, support, personnel, and training, while also avoiding hidden or unexpected costs.

InsideBIGDATA Guide to Big Data Solutions in the Cloud

For a long time, the industry’s biggest technical challenge was squeezing as many compute cycles as possible out of silicon chips, so that the really important, and often gigantic, problems in science and engineering could be solved faster than was ever thought possible. Now, by clustering computers to work together on problems, scientists are free to take on even larger and more complex real-world problems, and even bigger data sets to analyze.

Slidecast: Fortissimo Foundation – A Clustered, Pervasive, Global Direct-remote I/O Access System

“Fortissimo Foundation is a clustered, pervasive, global direct-remote I/O access system that linearly scales I/O bandwidth, memory, Flash and hard disk storage capacity and server performance to provide an “in-memory” scale-out solution that intelligently aggregates all resources of a data center cluster into a massive global name space, bridging all remote compute and storage resources to look and act as if they were local.”

Attaining High-Performance Scalable Storage

As compute speed advanced toward its theoretical maximum, the HPC community quickly discovered that the speed of storage devices and the underlying Network File System (NFS), developed decades ago, had not kept pace. As CPUs got faster, storage became the main bottleneck in high data-volume environments.
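
That bottleneck is easy to observe first-hand. The rough sketch below times a sequential write to a network-mounted scratch directory and reports the sustained bandwidth; the path and sizes are placeholders, and on a conventional NFS mount the resulting figure typically falls well short of what the attached CPUs can consume.

```python
# Rough illustration of spotting a storage bottleneck: time a 1 GiB sequential
# write and report the sustained bandwidth. The target path and sizes are
# placeholders; fsync forces the data out to the file server so the timing is
# not just measuring the page cache.
import os
import time

TARGET = "/mnt/nfs/scratch/bandwidth_test.bin"   # hypothetical network mount
BLOCK = 4 * 1024 * 1024                          # 4 MiB per write
BLOCKS = 256                                     # 1 GiB total

def sequential_write_mbps(path: str) -> float:
    """Write BLOCKS * BLOCK bytes sequentially and return MB/s."""
    buf = os.urandom(BLOCK)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(BLOCKS):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start
    os.remove(path)
    return (BLOCK * BLOCKS / 1e6) / elapsed

if __name__ == "__main__":
    print(f"sequential write: {sequential_write_mbps(TARGET):.0f} MB/s")
```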

IBM Goes Deep for Big Data Market with POWER8 Open Server Innovation

“This is the first truly disruptive advancement in high-end server technology in decades, with radical technology changes and the full support of an open server ecosystem that will seamlessly lead our clients into this world of massive data volumes and complexity,” said Tom Rosamilia, Senior Vice President, IBM Systems and Technology Group. “There no longer is a one-size-fits-all approach to scale out a data center. With our membership in the OpenPOWER Foundation, IBM’s POWER8 processor will become a catalyst for emerging applications and an open innovation platform.”

New Market Dynamics Report: HPC Life Sciences

Scientific research in the life sciences is often akin to searching for needles in haystacks. Finding the one protein, chemical, or genome that behaves or responds in the way the scientist is looking for is the key to the discovery process. For decades, high performance computing (HPC) systems have accelerated this process, often by helping to identify and eliminate infeasible targets sooner.