“DDN has developed a Hadoop solution that is all about time to value: It simplifies rollout so that enterprises can get up and running more quickly, provides typical DDN performance to accelerate data processing, and reduces the amount of time needed to maintain a Hadoop solution,” said Dave Vellante, Chief Research Officer, Wikibon.org. “For enterprises with a deluge of data but a limited IT budget, the DDN hScaler appliance should be on the short list of potential solutions.”
In this video from the HPC Advisory Council Switzerland Conference, Derek Burke from Panasas presents: Introducing Panasas ActiveStor 14.
Part of the ClusterStor family, the ClusterStor 6000 is designed to deliver linear performance scalability in less space, scaling from 6 gigabytes per second up to 1 terabyte per second of file system throughput, with data storage capacity that scales linearly from terabytes to tens of petabytes.
Big Data requires big computing, and the University of Illinois at Urbana-Champaign is doing its part with the launch of Blue Waters, one of the world’s fastest supercomputers.
U of I held an open house a couple of weeks ago, inviting one and all to visit its National Petascale Computing Facility and kick the tires on the $200 million machine built by Cray and funded by the National Science Foundation.
This is a petaflop machine designed to handle the challenging Big Data requirements associated with a wide range of problems – everything from unraveling complex biological systems to simulating the evolution of the cosmos.
“‘This is where you go to get answers to questions about how the world works,’ says Bill Gropp, a computer science professor and one of four U of I researchers who oversaw the five-year development of the machine,” according to a story in Crain’s Chicago Business. The article goes on to say, “Blue Waters will keep the university in the lead on large-scale computing as researchers from around the country apply to the National Science Foundation to use the machine to crunch data for medical research, astrophysics, aerodynamics, weather forecasting, national security and other uses.”
This is not your everyday supercomputer. The Blue Waters system is a Cray XE/XK hybrid machine made up of AMD Opteron 6276 “Interlagos” processors (with a nominal clock speed of at least 2.3 GHz) and NVIDIA GK110 Kepler accelerators, all connected by the Cray Gemini torus interconnect.
Blue Waters is capable of a sustained speed of over one petaflop, allowing it to perform more than one quadrillion calculations per second. The water-cooled system is housed in 276 black cabinets topped by silvery coolant pipes.
In addition to being really fast, Blue Waters has more than enough memory to handle Big Data requirements – 1.5 petabytes of total system memory and 300 petabytes of long-term storage.
In the Crain’s article, Gropp is quoted as saying, “We want people to ask, ‘What could you do if you could put massive amounts of data on a system and access it in microseconds?’”
The short answer is, “More than you can ever imagine.”
Read the Full Story.
In this video from the GPU Technology Conference, Nvidia CEO Jen-Hsun Huang shows how diverse companies are using GPU computing to tackle Big Data.
You can watch the entire keynote at Livestream.
Read the Full Story.
In this video, Sumit Gupta from Nvidia presents: Accelerated Computing Goes Beyond HPC. A wide array of companies are now using GPUs to accelerate Big Data analytics, and Gupta describes how these efforts are delivering competitive advantage.
In this video from the HPC Advisory Council Switzerland Conference, D.K. Panda from Ohio State University presents: Accelerating Big Data with Hadoop (HDFS, MapReduce and HBase) and Memcached. Download the slides (PDF).
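For readers new to the programming model Panda covers, here is a minimal word-count sketch in Java using Hadoop’s standard org.apache.hadoop.mapreduce API. It is an illustrative example only; the class names and paths are placeholders rather than code from the talk.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map phase: emit (word, 1) for every token in the input split read from HDFS.
      public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer tokens = new StringTokenizer(value.toString());
          while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce phase: sum the counts for each word gathered during the shuffle.
      public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable value : values) {
            sum += value.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

A job like this is typically packaged into a jar and submitted with the hadoop jar command, with its input and output directories living on HDFS; HBase and Memcached, also covered in the talk, come into play when results need low-latency random access rather than batch scans.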
In this video from the HPC Advisory Council Switzerland Conference, Patrick Demichel from HP presents: DCDC for Exascale and Big Data.
Information will be the most valuable resource in the 21st century. Operating on large volumes of diverse data sources to get the right actionable insights at the right time presents new challenges and opportunities for system design. Addressing these opportunities requires a rethinking of future server and data center design, with a data-centric focus across both hardware and software. Here, we’ve presented a brief introduction to some recent research activities in this exciting emerging area, with a specific focus on system architecture and systems software.
In this video, Ph.D. candidate Jerome Mitchell from Indiana University details the benefits of Hadoop and offers a hands-on session illustrating its uses, with a small file-access sketch shown below. You can also see Part 2, Part 3, Part 4, and Part 5 of this lecture.
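To give a flavor of that hands-on material, the following small, self-contained Java program writes and reads a file through Hadoop’s FileSystem API. The NameNode address and file path are placeholders assumed for illustration, not details from the lecture.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsHello {
      public static void main(String[] args) throws Exception {
        // Point the client at the cluster; fs.defaultFS normally comes from core-site.xml.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder host

        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/user/demo/hello.txt");     // placeholder path

        // Write a small file; HDFS handles block placement and replication behind the scenes.
        try (FSDataOutputStream out = fs.create(path, true)) {
          out.write("Hello, Hadoop Distributed File System!\n".getBytes(StandardCharsets.UTF_8));
        }

        // Read it back and print each line.
        try (BufferedReader in = new BufferedReader(
            new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
          String line;
          while ((line = in.readLine()) != null) {
            System.out.println(line);
          }
        }
      }
    }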
The demand is already there, so how do we begin to train tomorrow’s data scientists? In this video, Dr. Alexandra Fedorova from Simon Fraser University in Canada describes the university’s new Professional Master’s program in Big Data.