Teradata (NYSE: TDC), a leading analytic data platforms, applications, and services company, announced today that Siemens AG will enhance its manufacturing processes and product quality by deploying Teradata technology.
“This is the first truly disruptive advancement in high-end server technology in decades, with radical technology changes and the full support of an open server ecosystem that will seamlessly lead our clients into this world of massive data volumes and complexity,” said Tom Rosamilia, Senior Vice President, IBM Systems and Technology Group. “There no longer is a one-size-fits-all approach to scale out a data center. With our membership in the OpenPOWER Foundation, IBM’s POWER8 processor will become a catalyst for emerging applications and an open innovation platform.”
When disaster strikes, lost data could cost you your business. LC Technology International is continually improving its data recovery products to meet these needs.
In this video from the GPU Technology Conference 2014, Ami Gal from SQream Technologies describes the company’s innovative Big Data processing technology. “Can you compare the technology of today with the technology of tomorrow? Yes, with SQream Technologies you can. This is because SQream Technologies uses GPUs to capture, store and process Big Data within seconds, resulting in 100x faster insights. Big Data analytics, once considered unattainable, can now be achieved in a matter of seconds with SQream’s hassle-free, robust analytic database.”
I’ve been monitoring an interesting discussion on the Big Data and Analytics group over on LinkedIn – “Is there a difference between big data and small data?” It’s a question I’ve heard before during my travels down in the trenches of our industry.
Last night I attended the Los Angeles Hadoop Users Group (LA-HUG) meeting hosted by Shopzilla. The topic for the evening was “An Overview of Hulu’s Data Platform,” presented by Prasan Samtani and Tristan Reid of Hulu. From all indications, Hulu is a significant player in the Hadoop user community, and this talk documented the team’s command of big data technology.
“Machine logs contain simple and complex data – some logs contain time-stamped data (e.g., syslogs) that are tactical events or errors used by sys admins to troubleshoot IT infrastructure. But other logs have more complex, unstructured or multi-structured text with sections on configuration info, statistics and other non-time-stamped data. To make sense of the data in these logs, one needs a powerful language and processing engine to provide meaning and structure to the information. Once structure is defined, complex analytics and trend reporting can be performed.”
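To make the idea of "providing structure" to a time-stamped log line concrete, here is a minimal Python sketch. It is only an illustration of the general technique, not any vendor's actual engine: the regex approximates the classic BSD syslog line format, and the sample hostname, process, and message are hypothetical.

```python
import re

# Simplified pattern for a classic BSD-style syslog line:
#   "Feb 11 14:32:07 web01 sshd[4321]: Failed password for root ..."
# Real-world logs vary widely; this is a teaching sketch, not a full parser.
SYSLOG_PATTERN = re.compile(
    r"(?P<timestamp>\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2})\s"  # e.g. "Feb 11 14:32:07"
    r"(?P<host>\S+)\s"                                       # originating host
    r"(?P<process>[\w\-/]+)(?:\[(?P<pid>\d+)\])?:\s"         # process name and optional PID
    r"(?P<message>.*)"                                       # free-text message
)

def parse_syslog_line(line):
    """Return a dict of named fields, or None if the line doesn't match."""
    match = SYSLOG_PATTERN.match(line)
    return match.groupdict() if match else None

record = parse_syslog_line(
    "Feb 11 14:32:07 web01 sshd[4321]: Failed password for root from 10.0.0.5"
)
# record now maps field names to values, e.g. record["host"] == "web01"
```

Once every line is reduced to named fields like this, the "complex analytics and trend reporting" the quote mentions become ordinary queries over structured records (count failures per host, plot events over time, and so on).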