
25 Years of Big Data: From SQL To The Cloud

Three generations of tools: SQL, MapReduce, Cloudcel


Back in 1985, the world was pre-web, data volumes were small, and no one was grappling with information overload. Relational databases and the shiny new SQL query language were just about perfect for this era. At work, 100% of the data required by employees was internal business data, the data was highly structured, and was organized in simple tables. Users would pull data from the database when they realized they needed it.
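To make that pull model concrete, here is a minimal sketch in Python using an in-memory SQLite database. The orders table, its columns, and its rows are hypothetical, purely for illustration of how users queried structured tables on demand.

```python
# A minimal sketch of the 1985-era "pull" model: structured rows in a simple
# table, queried with SQL only when the user decides they need the data.
# The table name, columns, and rows here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "Acme", 120.0), (2, "Globex", 75.5), (3, "Acme", 42.0)],
)

# The user pulls exactly the slice of internal business data they want.
for row in conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer"
):
    print(row)
```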

Fast forward to 2010. Today, everyone is grappling constantly with information overload, both at work and in their social life. Most data today is unstructured, and most of it lives in files, streams or feeds rather than in structured tables. Many of the data streams are realtime and constantly changing. At work, most of the data required by employees is now external, coming from the web, from analytics tools, and from monitoring systems of all kinds: data about customers, partners, employees, competitors, marketing, advertising, pricing, infrastructure, and operations. What's needed today are smart IT systems that can automatically analyze, filter and push exactly the right data to users in realtime, just when they need it. Oh, and since no one wants to own data processing hardware and software any more, those IT systems should be in the cloud.

So how has the IT industry responded to the dramatic changes brought about first by the web, and more recently by the realtime social web and the cloud? And what tools are now available to users in this new era of Big Data, where data volumes are growing exponentially?

From 1985 to 2004, SQL was essentially the only game in town. Around 2004, a number of companies, led by Google, and including eBay, Yahoo and later Facebook, realized that they required levels of scalability, parallelism, performance and data flexibility that went way beyond what relational databases and SQL could provide. Their solution was to adopt a simple parallel programming framework, MapReduce, in place of SQL. MapReduce and its open source implementation, Hadoop, are now widely used to analyze very large data sets.
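The programming model itself is simple: a map phase emits (key, value) pairs, a shuffle groups the pairs by key, and a reduce phase aggregates each group. The single-machine sketch below only illustrates that shape with a toy word count; real MapReduce and Hadoop distribute these phases across thousands of servers.

```python
# A toy, single-machine sketch of the MapReduce programming model:
# map -> shuffle (group by key) -> reduce. The input documents are
# made up; real Hadoop runs these phases in parallel across a cluster.
from collections import defaultdict

def map_phase(document):
    # Emit a (word, 1) pair for every word in the input split.
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(word, counts):
    # Aggregate all values that were shuffled to the same key.
    return word, sum(counts)

documents = ["Big Data Big Clusters", "data beats SQL at scale"]

# Shuffle: group intermediate pairs by key.
grouped = defaultdict(list)
for doc in documents:
    for word, count in map_phase(doc):
        grouped[word].append(count)

results = dict(reduce_phase(w, c) for w, c in grouped.items())
print(results)  # e.g. {'big': 2, 'data': 2, 'clusters': 1, ...}
```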

So what's next? If SQL was the first-generation Big Data tool, and MapReduce/Hadoop was the second-generation tool, what might a third-generation tool look like? To answer this, we need to look at the areas in which MapReduce/Hadoop are weak: (a) realtime, and (b) ease of use. The MapReduce model is optimized for large-scale batch processing. As such, it is not a good fit for the growing number of applications requiring realtime stream processing. The model is also designed for use by experienced programmers - in the case of Hadoop, experienced Java programmers. Unfortunately, the vast majority of those grappling with Big Data challenges today are "non-programmers": individuals or business users who rely on tools like Excel spreadsheets for processing their data. And there are a lot of them - several hundred million Excel users alone.
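The realtime gap is easy to see in code. A batch job only produces answers after it has scanned the whole data set, whereas a stream processor updates its results incrementally as each event arrives. The sketch below is a minimal illustration of that difference; the event feed is a hypothetical stand-in for a realtime stream.

```python
# A minimal sketch of incremental stream processing: results are updated
# and available after every event, not only after a batch job completes.
# The "feed" below is a made-up stand-in for a realtime data stream.
from collections import Counter

def process_stream(events):
    running_totals = Counter()
    for event in events:                 # events arrive one at a time
        running_totals[event["topic"]] += 1
        # A fresh snapshot of the results is available continuously.
        yield dict(running_totals)

feed = [{"topic": "pricing"}, {"topic": "ads"}, {"topic": "pricing"}]
for snapshot in process_stream(feed):
    print(snapshot)
```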

The third generation of tools for Big Data will therefore need to offer the scalability, parallelism, performance and data flexibility of tools like Hadoop, but also be able to continuously process realtime data streams, and be as easy to use as a spreadsheet. At Cloudscale we've been tackling this challenge. Our Cloudcel service provides the first example of such a third generation Big Data tool.

SQL remains a great tool for handling structured, tabular data, and for transactional applications. MapReduce and Hadoop are great tools if you are a programmer and your task is to process two petabytes of historical data across three thousand servers in less than 24 hours. We now also have a third type of Big Data tool aimed at the much larger number of people who need a simple and easy-to-use, but powerful and scalable cloud-based service for analyzing the huge volumes of data that are now continuously bombarding them in their life and their work.

More Stories By Bill McColl

Bill McColl left Oxford University to found Cloudscale. At Oxford he was Professor of Computer Science, Head of the Parallel Computing Research Center, and Chairman of the Computer Science Faculty. Along with Les Valiant of Harvard, he developed the BSP approach to parallel programming. He has led research, product, and business teams, in a number of areas: massively parallel algorithms and architectures, parallel programming languages and tools, datacenter virtualization, realtime stream processing, big data analytics, and cloud computing. He lives in Palo Alto, CA.
