
A new era for data: What’s possible with as-a-service

But the right amount of data, clean and properly channeled, can quench a business’s thirst for insights, power its growth, and carry it to success, says Matt Baker, senior vice president of corporate strategy at Dell Technologies. Like water, data is not good or bad. The question is whether it’s useful for the purpose at hand. “What’s difficult is getting the data to align properly, in an inclusive way, in a common format,” Baker says. “It has to be purified and organized in some way to make it usable, secure, and reliable in creating good outcomes.”

Many organizations are overwhelmed by data, according to a recently commissioned study of more than 4,000 decision-makers conducted on Dell Technologies’ behalf by Forrester Consulting. During the past three years, 66% have seen an increase in the amount of data they generate—sometimes doubling or even tripling—and 75% say demand for data within their organizations has also increased.

The research company IDC estimates that the world generated 64.2 zettabytes of data in 2020, and that number is growing at 23% per year. A zettabyte is a trillion gigabytes—to put that in perspective, that’s enough storage for 60 billion video games or 7.5 trillion MP3 songs.

The Forrester study showed that 70% of business leaders are accumulating data faster than they can effectively analyze and use it. Although executives have enormous amounts of data, they don’t have the means to extract insights or value from it—what Baker calls the “Ancient Mariner” paradox, after the famous line from Samuel Taylor Coleridge’s poem “The Rime of the Ancient Mariner”: “Water, water everywhere and not a drop to drink.”

Data streams turn to data floods 

It’s easy to see why the amount and complexity of data are growing so fast. Every app, gadget, and digital transaction generates a data stream, and those streams flow together to generate even more data streams. Baker offers a potential future scenario in brick-and-mortar retailing. A loyalty app on a customer’s phone tracks her visit to an electronics store. The app uses the camera or a Bluetooth proximity sensor to understand where it is and taps the information the retailer already has about the customer’s demographics and past purchasing behavior to predict what she might buy. As she passes a particular aisle, the app generates a special offer on ink cartridges for the customer’s printer or an upgraded controller for her game box. It notes which offers result in sales, remembers for the next time, and adds the whole interaction to the retailer’s ever-growing pile of sales and promotion data, which then may entice other shoppers with smart targeting.

Adding to the complexity is an often-unwieldy mass of legacy data. Most organizations don’t have the luxury of building data systems from scratch. They may have years’ worth of accumulated data that must be cleaned to be “potable,” Baker says. Even something as simple as a customer’s birth date could be stored in half a dozen different and incompatible formats. Multiply that “contamination” by hundreds of data fields and achieving clean, useful data suddenly seems impossible.
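The birth-date example above is a classic normalization problem: try each known legacy format until one parses, and emit a single canonical form. The sketch below uses Python’s standard `datetime.strptime`; the particular format list is illustrative, not drawn from the article.

```python
from datetime import date, datetime

# Illustrative list of legacy formats the same field might arrive in.
# Note that "07/02/1984" is ambiguous (US vs. European order); a real
# cleanup job would resolve that per source system, not per record.
LEGACY_FORMATS = [
    "%Y-%m-%d",   # 1984-07-02
    "%m/%d/%Y",   # 07/02/1984
    "%d.%m.%Y",   # 02.07.1984
    "%b %d, %Y",  # Jul 02, 1984
    "%Y%m%d",     # 19840702
]

def normalize_birth_date(raw: str) -> date:
    """Parse a birth date stored in any known legacy format into one
    canonical ISO date, raising if no format matches."""
    for fmt in LEGACY_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {raw!r}")

print(normalize_birth_date("07/02/1984"))  # 1984-07-02
```

Multiplied across hundreds of fields, this is exactly the “purification” work Baker describes: each field needs its own catalog of legacy representations and a single target format.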

But abandoning old data means abandoning potentially invaluable insights, Baker says. For example, historical data on warehouse stocking levels and customer ordering patterns could be pivotal for a company trying to create a more efficient supply chain. Advanced extract, transform, load (ETL) capabilities—designed to tidy up disparate data sources and make them compatible—are essential tools.
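In miniature, an ETL pass over two incompatible legacy sources looks like the sketch below. The sources, field names, and records are hypothetical; the point is the shape of the pipeline: extract raw records, transform them onto one schema and one date format, then load the clean result.

```python
import csv
import io
import json

# Two hypothetical legacy sources with incompatible schemas:
# a CRM export (CSV, US-style dates) and an orders feed (JSON, ISO dates).
crm_csv = "customer_id,birth_date\n42,07/02/1984\n"
orders_json = '[{"cust": 42, "dob": "1984-07-02"}]'

def extract():
    """Extract: read raw records from each source as-is."""
    crm = list(csv.DictReader(io.StringIO(crm_csv)))
    orders = json.loads(orders_json)
    return crm, orders

def transform(crm, orders):
    """Transform: map both schemas onto common field names and
    normalize dates to ISO 8601."""
    rows = []
    for r in crm:
        m, d, y = r["birth_date"].split("/")
        rows.append({"customer_id": int(r["customer_id"]),
                     "birth_date": f"{y}-{m}-{d}"})
    for r in orders:
        rows.append({"customer_id": r["cust"], "birth_date": r["dob"]})
    return rows

def load(rows):
    """Load: deduplicate on customer_id; a real pipeline would write
    these rows to a warehouse table instead of returning them."""
    seen = {}
    for r in rows:
        seen[r["customer_id"]] = r
    return list(seen.values())

clean = load(transform(*extract()))
print(clean)  # [{'customer_id': 42, 'birth_date': '1984-07-02'}]
```

Production ETL tools add what this sketch omits—schema discovery, error quarantining, incremental loads—but the extract/transform/load structure is the same.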

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
