The huge volume of Big Data produced by sensors, genomic sequencers, electronic exchanges, and connected devices continues to generate headlines, but it’s the diverse types of data, not the sheer volume, that pose the bigger challenge to data scientists and cause them to “leave data on the table.”
The debate over which statistical platform reigns supreme for data science applications rages on. The discussion often turns to the popular R and SAS environments. To focus the dialog squarely on performance, commercial R provider Revolution Analytics has just completed a new benchmark study.
Seventy-five percent of businesses have yet to successfully deploy big data analytics solutions to gain business-impacting insights, despite 65 percent increasing their investment in analytic services and technologies in 2014. These findings are part of “Analytics 2014,” Lavastorm’s second annual survey on analytic usage, trends, and future initiatives.
Big Data has mostly been considered the realm of big enterprise and not the midmarket segment. Dell launched a survey to study this notion and discovered that midmarket companies not only need Big Data to engender better, more competitive business practices, but many are already using data analysis. We caught up with Darin Bartik, Executive Director and GM of Database Management at Dell, to learn more about the survey and its findings.
As the primary facilitator of data science and big data, machine learning has garnered much interest from a broad range of industries as a way to increase the value of enterprise data assets. In this article series we’ll examine the principles underlying machine learning, based on the R statistical environment.
Unsupervised machine learning techniques have proven useful in identifying fake research papers submitted to the arXiv preprint server. Approximately 500 preprints are received daily by the automated repository, none of them pre-screened by humans. As a result, many nonsense papers generated by software such as SCIgen and Mathgen have turned up in the most popular repository scientists use to share research results.
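The general idea behind this kind of unsupervised screening can be sketched in a few lines. The following is a minimal illustration, not arXiv’s actual pipeline: each abstract (all text here is invented toy data) is represented as a bag-of-words vector, and the document with the lowest average cosine similarity to the rest of the corpus is flagged as the likely outlier.

```python
# Illustrative sketch only, NOT arXiv's actual screening system:
# bag-of-words vectors + cosine similarity to spot the odd document out.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy "abstracts"; the last one is SCIgen-style nonsense.
abstracts = [
    "we study the convergence of gradient descent on convex losses",
    "we analyze convergence rates of stochastic gradient methods",
    "we prove convergence bounds for gradient descent variants",
    "colorless green clouds dream furiously about spectral bananas",
]

vectors = [Counter(text.split()) for text in abstracts]

def mean_similarity(i: int) -> float:
    """Average similarity of document i to every other document."""
    others = [cosine(vectors[i], vectors[j])
              for j in range(len(vectors)) if j != i]
    return sum(others) / len(others)

scores = [mean_similarity(i) for i in range(len(vectors))]
outlier = min(range(len(scores)), key=scores.__getitem__)
print(outlier)  # → 3, the index of the nonsense abstract
```

Real systems use richer features (character n-grams, topic models, citation patterns) and proper clustering, but the principle is the same: nonsense papers sit far from the bulk of legitimate text in feature space.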
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. A new white paper that focuses on these issues is available here on insideBIGDATA.
Microsoft Research, the research arm of the software giant, is a hotbed of data science and machine learning research. Microsoft has the resources to hire the best and brightest researchers from around the globe. A recent publication is available for download (PDF): “Deep Learning: Methods and Applications” by Li Deng and Dong Yu, two prominent researchers in the field.
In an attempt to remain relevant in an increasingly data-driven world, many traditional news publications are embracing the sweeping changes in their industry by employing a broad swath of new technologies. A good case in point: the Los Angeles Times Data Desk, which offers content such as maps, databases, analysis, and visualizations.