Sensors, agents and an Internet of Things are all producing data, all of the time. It would be a vast understatement to say that the CIA has experience in acquiring, handling and analyzing big quantities of data. In this talk, the CTO of the CIA will talk about the scale of the problems his team deals with now, the coming inflection point in the increase in data, the grand challenges we face and why an emphasis on analytics is critical for the future. This is a talk not to be missed.
In this video, author Alistair Croll explains the concepts of his book, Lean Analytics. Croll strongly advises startups to pick the one metric that matters the most and to focus on it.
This talk was hosted by MaRS, a Canadian organization that provides resources — people, programs, physical facilities, funding and networks — to ensure that critical innovation happens.
Addison Snell will present some of the top insights from recent market intelligence studies from Intersect360 Research, including forward-looking views of the vertical markets, new applications, and technologies with the best prospects for growth in 2012 and beyond. The view from Intersect360 Research will include applications in both High Performance Technical Computing (HPTC) and High Performance Business Computing (HPBC), with an emphasis on the opportunities for HPC technologies in emerging Big Data applications. The evolving industry dynamics around accelerators, file systems, and InfiniBand will also be discussed.
On Tuesday, April 2, President Obama announced a research initiative that has the ambitious goal of “revolutionizing our understanding of the human brain,” according to a White House press release.
Known as BRAIN (Brain Research through Advancing Innovative Neurotechnologies), the initiative is being launched in FY 2014 with an initial budget of about $100 million, a modest amount given the project’s goals.
In short, BRAIN is designed to help researchers find “…new ways to treat, cure, and even prevent brain disorders, such as Alzheimer’s disease, epilepsy, and traumatic brain injury.” Included is support for new technologies that will allow researchers to produce dynamic pictures of the brain that show how individual brain cells and complex neural circuits interact in real time.
BRAIN is also a foray into Big Data. The initiative will let researchers amass and analyze the data needed to “…explore how the brain records, processes, uses, stores, and retrieves vast quantities of information, and shed light on the complex links between brain function and behavior.”
Among the many public and private organizations involved in the effort are the National Institutes of Health (NIH), the Defense Advanced Research Projects Agency (DARPA), and the National Science Foundation (NSF). NSF in particular is leading the charge in applying the technologies and techniques of Big Data to the initiative.
“The National Science Foundation will play an important role in the BRAIN Initiative because of its ability to support research that spans biology, the physical sciences, engineering, computer science, and the social and behavioral sciences,” according to the White House release. “The National Science Foundation intends to support approximately $20 million in FY 2014 in research that will advance this initiative, such as the development of molecular-scale probes that can sense and record the activity of neural networks; advances in ‘Big Data’ that are necessary to analyze the huge amounts of information that will be generated; and increased understanding of how thoughts, emotions, actions, and memories are represented in the brain.”
In a story in Information Week posted the same day, senior editor J. Nicholas Hoover writes, “On a conference call with reporters after the President’s announcement, National Institutes of Health director Francis Collins said that the brain-mapping initiative might eventually require the handling of yottabytes of data. A yottabyte is equal to a billion petabytes.”
That’s Big Data at its mind-boggling best.
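For a sense of scale, here is a quick back-of-the-envelope check of the conversion Hoover cites (a minimal Python sanity check, not from the article):

```python
# Unit sanity check: a petabyte is 10**15 bytes and a yottabyte
# is 10**24 bytes, so one yottabyte is indeed a billion petabytes.
PETABYTE = 10**15
YOTTABYTE = 10**24

print(YOTTABYTE // PETABYTE)  # 1000000000 -- one billion
```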
Read the Full Story.
In this video from the 2013 National HPCC Conference, Rich Brueckner from inside-BigData moderates a panel discussion on How to Talk to Your CFO about HPC and Big Data.
- John C. Morris – Pfizer
- Dr. George Ball – Raytheon
- Henry Tufo – University of Colorado, Boulder
- Dr. Flavio Villanustre – LexisNexis
“As members of the HPC community, we spend a good share of our time sharing our work and best practices with our colleagues. But how do we communicate the business value of high performance computing and Big Data analytics to CFOs who have little affinity for discussions of things like cores, Hadoop, and MPI? In this panel discussion, experts in Big Data and HPC will come together to share best practices and communication strategies that have proven effective when talking to CFOs and other C-level executives.”
In this video from the 2013 National HPCC Conference, Bob Feldman moderates a panel discussion entitled: Big Systems, Big Data, Better Products.
- Devin Jensen – Altair
- Rene Copeland – SGI
- Dr. Stephen Wheat – Intel
- Sanjay Umarji – HP
How will enormous data sets and an endless stream of ever-more granular variables drive supercomputing in the coming years? Will it be like a dust storm that buries us, or flood waters we can redirect and manage? How will it alter the evolution of architecture and subsystems? How will it change computer science education, development tools and job descriptions? And will gargantuan data form a barrier to our evolution to Exascale and beyond by sapping the shrinking resources for funding and creativity?
In this video from the 2013 National HPCC Conference, Bradford Spiers from Bank of America presents: Big Data in Banking.
“To some people, Big Data in Banking might bring to mind calls from their credit card company when a charge seems unusual. To others, it might mean the calculations behind low-latency trading. Initially, it seemed to mean just simple Hadoop. Now we see specialization according to the problem we are solving. This talk will discuss the different types of Big Data seen in Banking and how one might tie them together to form viable workflows that solve our business and infrastructure challenges.”
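To make the first of those use cases concrete, here is a toy sketch of flagging an “unusual” charge against a cardholder’s history using a simple z-score test. This is an illustrative assumption on our part, not anything Bank of America has described:

```python
import statistics

def flag_unusual(history, new_charge, threshold=3.0):
    """Flag a charge whose z-score against past charges exceeds a threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_charge != mean
    return abs(new_charge - mean) / stdev > threshold

past_charges = [12.50, 8.75, 15.00, 9.99, 11.20]
print(flag_unusual(past_charges, 950.00))  # True: far outside the norm
print(flag_unusual(past_charges, 10.00))   # False: in line with history
```

At bank scale, the interesting problems are doing this across hundreds of millions of accounts in real time, which is where the Hadoop-style and low-latency platforms the talk contrasts come in.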
HPCC Systems from LexisNexis Risk Solutions works with clients in various industries to manage different types of risk by helping them derive insight from massive data sets. To do this, we have developed our High Performance Computing Cluster (HPCC) technology, making it possible to process and analyze complex, massive data sets in a matter of seconds.
“The internet, sensors, and high performance computing are some of the top producers of Big Data. Recently, there has been increased focus on extracting more value from these data. Analysis of Big Data sets ranges along a spectrum from ‘looking for a needle in a haystack’ on one end to ‘looking for relationships between hay in a stack’ on the other. We will discuss the architectural platforms and tools suitable for different parts of this spectrum.”
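As a rough illustration of that spectrum (a hypothetical sketch, not code from the talk), the “needle in a haystack” end is a cheap linear filter over records, while the “relationships between hay” end requires comparing records to one another, an O(n²) problem that is what pushes workloads onto distributed platforms like HPCC:

```python
from itertools import combinations

records = [
    {"id": 1, "tags": {"hpc", "mpi"}},
    {"id": 2, "tags": {"hadoop", "hpc"}},
    {"id": 3, "tags": {"sensors"}},
]

# "Needle in a haystack": scan once for the few records matching a predicate.
needles = [r for r in records if "sensors" in r["tags"]]

# "Relationships between hay": compare records pairwise for shared attributes.
related = [
    (a["id"], b["id"])
    for a, b in combinations(records, 2)
    if a["tags"] & b["tags"]
]

print(needles)  # [{'id': 3, 'tags': {'sensors'}}]
print(related)  # [(1, 2)] -- records 1 and 2 share the "hpc" tag
```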
In the midst of all the ballyhoo surrounding Big Data and how it’s going to “transform how we live, work, and think” (a borrowing from the subtitle of the excellent book Big Data by Viktor Mayer-Schönberger and Kenneth Cukier), it’s encouraging to hear about applications that are actually living up to all the hype.
Case in point: Rip Empson, writing in TechCrunch this week, chronicles the rise of Bina Technologies, a Silicon Valley startup that makes it possible to analyze genomic data that, because of its sheer volume, has until now been gathering digital dust.
The cost of genomic sequencing has been dropping, reports Empson, and we are well on the way to the $1000 genome and a new era of personalized medicine. Bina plans to be part of that era.
Although still in startup mode, Bina has already fielded a number of Big Data-based applications. For example, the company is working with the Medical College of Wisconsin to implement whole genome sequencing for newborns in its neonatal intensive care unit. And back in the Valley, the Stanford Genetics Department is using the Bina platform to analyze several hundred whole human genomes in less than five hours, a task that normally takes several days.
Bina is poised to become a significant player in the $15 billion genomic research industry.
In this RichReport video, Narges Bani Asadi presents: Bina – Accelerating Data-Driven Healthcare.
“Founded in 2011 by a group of Ph.D.s, big data junkies, and bioinformaticians from Stanford and the University of California, Berkeley, Bina picks up and analyzes this genomic data that has been, until now, almost unusable,” comments Empson. “Through Bina, research universities, pharmaceutical companies, and clinicians can get access to data that focuses on the rare variants in our genetics — in other words, those that cause our predispositions to cancer, newborn disorders, Down syndrome, sickle cell, and so on.
Through the ability to better parse and make use of this data, the idea is that these downstream players can then facilitate significant improvements in patient care, treatment and, really, basic understanding of how the body works via insights at the molecular level.”
Read the Full Story.