
Embracing the promise of a compute-everywhere future

The internet of things and smart devices are everywhere, which means computing needs to be everywhere, too. This is where edge computing comes in: as companies pursue faster, more efficient decision-making, the data those devices generate needs to be processed locally, in real time, on the device at the edge.

“The type of processing that needs to happen in near real time is not something that can be hauled all the way back to the cloud in order to make a decision,” says Sandra Rivera, executive vice president and general manager of the Datacenter and AI Group at Intel.

The benefits of implementing an edge-computing architecture are operationally significant. Although larger AI and machine learning models will still require the compute power of the cloud or a data center, smaller models can be trained and deployed at the edge. Not having to move around large amounts of data, explains Rivera, results in enhanced security, lower latency, and increased reliability. Reliability can prove to be more of a requirement than a benefit when users have unreliable connections, for example, or when applications are deployed in hostile environments, such as severe weather or dangerous locations.

Edge-computing technologies and approaches can also help companies modernize legacy applications and infrastructure. “It makes it much more accessible for customers in the market to evolve and transform their infrastructure,” says Rivera, “while working through the issues and the challenges they have around needing to be more productive and more effective moving forward.”

A compute-everywhere future promises opportunities that companies historically have been unable to realize, or even imagine. “We’re eventually going to see a world where edge and cloud aren’t perceived as separate domains,” says Rivera, “where compute is ubiquitous from the edge to the cloud to the client devices.”

Full transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. And this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is edge-to-cloud computing. Data is now collected on billions of distributed devices, from sensors to oil rigs. And it has to be processed in real time, right where it is, to create the most benefit and the most insights. The need is urgent: according to Gartner, by 2025, 75% of data will be created outside of central data centers. And that changes everything.

Two words for you: compute everywhere.

My guest is Sandra Rivera, who is the executive vice president and general manager of the Datacenter and AI Group at Intel. Sandra is on the board of directors for Equinix. She’s a member of University of California, Berkeley’s Engineering Advisory Board, as well as a member of the Intel Foundation Board. Sandra is also part of Intel’s Latinx Leadership Council.

This episode of Business Lab is produced in association with Intel.

Welcome Sandra.

Sandra Rivera: Thank you so much. Hello, Laurel.

Laurel: So, edge computing allows for massive computing power on a device at the edge of the network. As we mentioned, from oil rigs to handheld retail devices. How is Intel thinking about the ubiquity of computing?

Sandra: Well, I think you said it best when you said computing everywhere, because we do see the continued exponential growth of data, accelerated by 5G. So much data is being created; in fact, half of the world’s data has been created in just the past two years, but we know that less than 10% of it has been used to do anything useful. The idea that data is being created and computing needs to happen everywhere is true and powerful and correct, but I think we’ve been evolving our thought process around what happens with that data. For many years we’ve been trying to move the data to a centralized compute cluster, primarily in the cloud, and now we’re seeing that if you want to, or need to, process data in real time, you actually have to bring the compute to the data, to the point of data creation and data consumption.

And that is what we call the build-out of edge computing, and that continuum between what is processed in the cloud and what needs to be, or is better, processed at the edge, much, much closer to where that data is created and consumed.

Laurel: So the internet of things has been an early driver of edge computing; we can understand that, and like you said, it brings the compute closer to the data. But that’s just one use case. What does the edge-to-cloud computing landscape look like today, because it does exist? And how has it evolved in the past couple of years?

Sandra: Well, as you pointed out, when you have installations, or when you have applications that need to compute locally, you don’t have the time, or the bandwidth to go all the way up to the cloud. And the internet of things really brought that to the forefront, when you look at the many billions of devices that are computing and that are in fact needing to process data and inform some type of action. You can think about a factory floor where we have deployed computer vision to do inspections of products coming down the assembly line to identify defects, or to help the manufacturing process in terms of just the fidelity of the parts that are going through that assembly line. That type of response time is measured in single digit milliseconds, and it really cannot be something that is processed up in the cloud.
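To make this concrete, here is a minimal sketch of what such an on-device inspection loop might look like, assuming a pretrained classifier exported to ONNX and run with the onnxruntime and opencv-python packages. The model file, input size, and “defect” class index are hypothetical placeholders, not Intel’s actual implementation.

```python
# Minimal sketch: on-device defect inspection at the edge.
# Assumes a pretrained classifier exported to ONNX ("defect_model.onnx" is a
# hypothetical file) and frames arriving from a camera over the assembly line.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("defect_model.onnx")  # model runs locally, not in the cloud
input_name = session.get_inputs()[0].name

def looks_defective(frame: np.ndarray) -> bool:
    """Classify one frame; returns True if the part appears defective."""
    resized = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    batch = resized.transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> NCHW
    logits = session.run(None, {input_name: batch})[0]
    return int(np.argmax(logits)) == 1  # class 1 = "defect" in this sketch

camera = cv2.VideoCapture(0)  # camera watching the line
while True:
    ok, frame = camera.read()
    if not ok:
        break
    if looks_defective(frame):
        print("defect detected: divert part")  # in practice, trigger an actuator
```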

And so while you may have a model that you’ve trained in the cloud, the actual deployment of that model in near real time happens at the edge. And that’s just one example. We also know that when we look at retail as another opportunity, particularly when we saw what happened with the pandemic as we started to invite guests back into retail shops, computer vision and edge inference were used to identify: Were customers maintaining a safe distance apart? Were they practicing the safety protocols being required in order to get back to some kind of new normal where you actually can invite guests back into a retail organization? So all of that type of processing that needs to happen in near real time really is not something that can be hauled all the way back to the cloud in order to make a decision.

So, we do have that continuum, Laurel, where there is training happening in the cloud, especially the deep learning training of the very, very large models, but the real-time decision-making happens at the edge, and the metadata collected there can be sent back to the cloud for the models to be, frankly, retrained, because what you find in practical implementations may not be the way that the models and the algorithms were designed in the cloud. There is that continuous loop of learning and relearning happening between the models and the actual deployment of those models at the edge.

Laurel: OK. That’s really interesting. So the data processing that has to be done immediately is done at the edge, but the more intensive, more complicated processing is done in the cloud. So it’s really a partnership: you need both for it to be successful.

Sandra: Indeed. It is that continuum of learning and relearning and training and deployment. And you can imagine that at the edge, you often are dealing with much more power-constrained devices and platforms, and model training, especially large model training, takes a lot of compute; you will not often have that amount of compute and power and cooling at the edge. So, there’s clearly a role for the data centers and the cloud to train models. But at the edge, you’re needing to make decisions in real time, and there’s also the benefit of not necessarily hauling all of that data back to the cloud, because much of it is not necessarily valuable. You’re really just wanting to send the metadata back to the cloud or the data center. So there are some real TCO, total cost of operations, benefits to not paying the price of hauling all of that data back and forth, which is also a benefit of being able to compute and deploy at the edge, and which we see our customers really opting for.
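A hedged sketch of the edge side of that loop: run inference locally, keep the raw data on the device, and send back only a small metadata record, for example, a low-confidence prediction flagged for retraining. The endpoint URL and record schema below are illustrative assumptions, not a real Intel or cloud-provider API.

```python
# Sketch: send only metadata (not raw frames) back to the cloud for retraining.
# The endpoint URL and record schema are hypothetical.
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://example.com/retraining/metadata"  # placeholder
CONFIDENCE_THRESHOLD = 0.80

def report_low_confidence(prediction: str, confidence: float, device_id: str) -> None:
    """Upload a small metadata record; the raw sensor data stays on the device."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return  # confident result: nothing worth retraining on
    record = {
        "device": device_id,
        "prediction": prediction,
        "confidence": confidence,
        "timestamp": time.time(),
    }
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # a few hundred bytes instead of megabytes of video
```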

Laurel: What are some of the other benefits of an edge-to-cloud architecture? You mentioned that cost was one of them for sure, as well as time, and not having to send data back and forth between the two. Are there others?

Sandra: Yeah. Another reason we see customers wanting to train the smaller models and deploy at the edge is enhanced security. There is the desire to have more control over your data, to not necessarily be moving large amounts of data and transmitting it over the internet. So, enhanced security tends to be a value proposition. And frankly, in some countries, there’s a data sovereignty directive: you have to keep that data local, and you’re not necessarily allowed to take it off premises, or outside national borders. So enhanced security is another benefit. We also know from a reliability standpoint that there are intermittent connections when you’re transmitting large amounts of data. Not everybody has a great connection. So being able to capture the data, process it locally, and store it locally, rather than transmitting all of it, gives you a consistency and reliability that you may not have if you’re hauling all of that traffic back and forth.

So, we do see security, we see that reliability, and then, as I mentioned, the lower latency and the increased speed is certainly one of the big benefits. Actually, it’s not just a benefit sometimes, Laurel, it’s a requirement. If you think about an example like an autonomous vehicle, all of the camera information and the LIDAR information being processed needs to be processed locally; there really isn’t time for you to go back to the cloud. There are safety requirements for implementing any new technology in automated vehicles of any type: cars and drones and robots. And so sometimes it isn’t really driven as much by cost as by the security and safety requirements of implementing that particular platform at the edge.

Laurel: And with that many data points, if we take, for example, an autonomous vehicle, there’s more data to collect. So does that increase the risk of safely transmitting that data back and forth? Are there more opportunities to secure data, as you said, locally versus transmitting it back and forth?

Sandra: Well, security is a huge factor in the design of any computing platform, and the more disaggregated the architecture, the more endpoints with the internet of things, the more autonomous vehicles of every type, the more smart factories and smart cities and smart retail that you deploy, the more you do, in fact, increase that surface area for attacks. The good news is that modern computing has many layers of security for ensuring that devices and platforms are added to networks in a secure fashion. And that can be done both in software and in hardware. In software you have a number of different schemes and capabilities around keys and encryption, and around isolating access to those keys so that you’re not centralizing access to software keys, where someone may be able to hack in and unlock a number of different customers’ encrypted keys. But there’s also hardware-based encryption and hardware-based isolation, if you will.

And certainly the technologies that we’ve been working on at Intel have been a combination of both: software innovations that run on our hardware and can define these secure enclaves, if you will, so that you can attest that you have a trusted execution environment, one that is quite sensitive to any perturbation of that environment and can lock out, or at least isolate, a potential bad actor. In the future, what we’re working on is much more hardware-isolated enclaves and environments for our customers, particularly when you look at virtualized infrastructure and virtual machines that are shared among different customers or applications. This will be yet another level of protection of the IP for a tenant sharing that infrastructure, while we ensure that they have a fast and good experience in terms of processing the application, but do it in a way that’s safe and isolated and secure.
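Hardware enclaves like the trusted execution environments described above cannot be shown in a few lines of Python, but the software half of the idea, keeping data encrypted and keys local rather than shipping plaintext over the internet, can be sketched with the widely used cryptography package. This is an analogy under stated assumptions, not Intel’s enclave technology.

```python
# Software-level sketch of "keep the keys and the data local."
# Real trusted execution environments enforce this isolation in hardware;
# the cryptography package here only illustrates the principle.
from cryptography.fernet import Fernet

local_key = Fernet.generate_key()  # generated and held on the edge device, never transmitted
cipher = Fernet(local_key)

reading = b'{"sensor": "line-3", "vibration_mm_s": 4.2}'  # hypothetical payload
ciphertext = cipher.encrypt(reading)   # only ciphertext would ever leave the device
assert cipher.decrypt(ciphertext) == reading  # decryption requires the local key
```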

Laurel: So, thinking about all of this together, there’s obviously a lot of opportunity for companies to deploy edge computing and really make great use of it to do all sorts of different things. How are companies using edge computing to drive digital transformation?

Sandra: Yeah, edge computing is just this idea that has taken off: I have all of this infrastructure, I have all of these applications, many of them legacy applications, and I’m trying to make better, smarter decisions in my operation around efficiency and productivity and safety and security. And we see this combination of compute platforms that are disaggregated and available everywhere, all the time, and AI as a learning tool to improve that productivity and that effectiveness and efficiency; it’s a combination in which the machines help humans do better.

So, in many ways we see customers with legacy applications wanting to modernize their infrastructure, moving away from what have been black-box, bespoke, single-application platforms to a much more virtualized, flexible, scalable, programmable infrastructure that is largely based on the type of CPU technologies we’ve brought to the world. The CPU is the most ubiquitous computing platform on the planet, and all of these retailers and manufacturing sites and sports venues and any number of endpoints can look at that infrastructure, evolve those applications to run on general-purpose computing platforms, and then insert AI capability through the software stack and through some of the AI acceleration features that we have in the underlying platform.

It just makes it much more accessible for customers in the market to evolve and transform their infrastructure while working through the issues and the challenges they have around needing to be more productive and more effective moving forward. And so this move from fixed-function, hardware-based solutions to a virtualized, general-purpose compute platform with AI capabilities infused into it, and then having a software-based approach to adding features, doing upgrades, and applying software patches to the infrastructure, really is the promise of the future: the software-defined-everything environment, with AI as part of that platform for learning and for deployment of the models that improve the effectiveness of that operation.

And so for us, we know that AI will continue to be a growth area of computing, building out on the computing platform that is already there and quite ubiquitous across the globe. I think about this as the AI you need on the CPU you have, because almost everyone in the world has some type of Intel CPU platform, or a computing platform from which to build out their AI models.
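One concrete expression of “the AI you need on the CPU you have” is Intel’s OpenVINO toolkit, which compiles a trained model for whatever Intel hardware is present. Here is a minimal sketch, assuming OpenVINO is installed and a model has already been converted to its IR format; the file name and input shape are hypothetical.

```python
# Minimal OpenVINO sketch: run an existing model on the CPU you already have.
# "model.xml" (with its companion "model.bin") is a hypothetical model in
# OpenVINO's IR format, produced beforehand by the toolkit's conversion tools.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")
compiled = core.compile_model(model, device_name="CPU")  # general-purpose CPU target

# Input shape assumed for illustration; match it to the real model's input.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([dummy])[compiled.output(0)]  # inference runs on the local CPU
print(result.shape)
```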

Laurel: So, the AI that you need with the CPU that you have certainly is attractive to companies who are thinking about how much this may cost. But what are the potential return-on-investment benefits of implementing an edge architecture?

Sandra: As I mentioned, the companies and customers that we work with are looking for faster and better-quality decision-making. I mentioned the factory line: we are working with automotive companies now that are doing that visual inspection in real time on the factory floor, identifying the defects and taking the defective material off the line. Any highly repetitive task where humans are involved is truly an opportunity for human error to be inserted. So, automating those functions for faster and higher-quality decision-making is clearly a benefit of moving to more AI-based computing platforms. As I mentioned, there is also reducing the overall TCO: moving all of that data, whether or not you’ve concluded it’s even valuable, to a centralized data center or cloud, hauling it back or processing it there, and then figuring out what was valuable before applying that to the edge-computing platform is just a lot of wasted bandwidth and network traffic and time. So the attraction of the edge-computing build-out is definitely driven by the latency issues, as well as the TCO issues.

And as I mentioned, there’s the increased security and privacy. We have a lot of very sensitive data in our own manufacturing sites, the process technology that we drive, and we don’t necessarily want to move that off premises; we prefer to have that level of control and that safety and security onsite. But we do see that the industrial sector, the manufacturing sites, being able to automate their operations and provide a much more safe and stable and efficient operation, is one of the big areas of opportunity, and currently where we’re working with a number of customers, whether that’s in, as you mentioned, oil refineries; in health care and medical applications on edge devices and instrumentation; or in dangerous areas of the world where you’re sending in robots or drones to perform visual inspections, or to take some type of action. All of these are benefits that customers are seeing in the application of edge computing and AI combined.

Laurel: So lots of opportunities, but what are the obstacles to edge computing? Why aren’t all companies looking at this as the wave of the future? Is it also device limitations? For example, your phone does run out of battery. And then there could also be environmental factors for industrial applications that need to be taken into consideration.

Sandra: Yes, it’s a couple of things. One, as you mentioned, computing takes power. And we know that we have to work within restricted power envelopes when we’re deploying at the edge, on small-form-factor computing devices, or in areas where you have a hostile environment. For example, if you think about wireless infrastructure deployed across the globe, that connectivity will exist in the coldest places on earth and the hottest places on earth. And so you do have those limitations, which for us means working through all our materials and components research, our process technology, and the way that we design and develop our products, on our own as well as together with customers, to build much more power-efficient platforms that address that particular set of issues. And there’s always more work to do, because there’s always more computing you want to do on an ever-limited power budget.

The other big limitation we see is in legacy applications. You brought up the internet of things earlier; the internet of things is really just a very, very broad range of different market segments and verticals and specific implementations in a customer’s environment. And our challenge is: how do we give application developers an easy way to migrate and integrate AI into their legacy applications? When we look at how to do that, first of all, we have to understand each vertical by working closely with customers. What is important to the financial sector? What is important to the educational sector? What is important to the health care sector, or the transportation sector? Understanding those workloads and applications, and the types of developers that are going to be deploying edge platforms, informs how high in the stack we may need to abstract the underlying infrastructure, or how low in the stack some customers may want to go for that last level of fine-tuning and optimization of the infrastructure.

So that software stack and the onboarding of developers become both the challenge and the opportunity: to unlock as much innovation and capability as possible, and to really meet developers where they are. Some are the ninjas that want to, and are able to, program for those last few percentage points of optimization, and others really just want a very easy low-code or no-code, one-touch deployment of an edge-inference application, which you can do with the various tools that we and others offer in the market. And maybe the last limitation is meeting safety standards. That is true for robotics on a factory floor; that is true for automotive, in terms of meeting the safety standards required by transportation authorities across the globe before you put anything in a car; and that is true in manufacturing and in the oil and gas industry, where there are a lot of safety requirements you have to meet, either for regulatory reasons or simply for the overall safety promise that companies make to their employees.

Laurel: Yeah. That’s a very important point to reinforce: we are talking about hardware and software working together. As much as software has eaten the world, there are still really important hardware applications that need to be considered. And even with something like AI and machine learning from the edge to the cloud, you still have to consider your hardware.

Sandra: Yeah. I often think that while, to your point, software is eating the world, software truly is the big unlock of the underlying hardware: it takes the complexity out of accessing virtually unlimited compute and an extraordinary amount of innovation in AI and computing technology. That is the big unlock in the democratization of computing and AI for everyone. But somebody does need to know how the hardware works. And somebody does need to ensure that that hardware is safe, is performant, is doing what we need it to do, and that in cases where you may have some errors or some defects, it’s going to shut itself down. That’s particularly true if you think about edge robots and autonomous devices of all sorts. So, our job is to make that very, very complex interaction between the hardware and the software simple, and to offer, if you will, the easy button for onboarding developers, where we take care of the complexity underneath.

Laurel: So, speaking of artificial intelligence and machine learning technologies, how do they improve that edge-to-cloud capability?

Sandra: It’s a continuous process of iterative learning. If you look at that whole continuum of pre-processing and packaging the data, then training on that data to develop the models, then deploying the models at the edge, and then, of course, maintaining and operating that entire fleet, if you will, that you’ve deployed, it is a circular loop of learning. And that is the beauty of computing and AI: the reinforcement of that learning, the iterative enhancements and improvements you get across that entire loop, and the retraining of the models to be more accurate and more precise and to drive the outcomes we’re trying to drive when we deploy new technologies.

Laurel: As we think about those capabilities, machine learning and artificial intelligence, and everything we’ve just spoken about, as you look to the future, what opportunities will edge computing enable companies to create?

Sandra: Well, I think we go back to where we started, which is computing everywhere. We believe we’re eventually going to see a world where edge and cloud aren’t perceived as separate domains, where compute is ubiquitous from the edge to the cloud, out to the client devices; where you have a compute fabric that’s intelligent and dynamic; where applications and services run seamlessly as needed; and where you’re meeting the service-level requirements of those applications in real time, or near real time. The computing behind all that will be infinitely flexible to support the service-level agreements and the requirements of the applications. And when we look to the future, we are quite focused on research and development and on working with universities on a lot of the innovations that they’re bringing. It’s quite exciting to see what’s happening in neuromorphic computing.

We have our own Intel Labs leading research efforts toward the goal of neuromorphic computing: enabling the next generation of intelligent devices and autonomous systems. These are guided by the principles of biological neural computation; in neuromorphic computing, we use algorithmic approaches that emulate the way the human brain interacts with the world to deliver capabilities closer to human cognition. So, we are quite excited about the partnerships with universities and academia around neuromorphic computing and the innovative approaches that will power the future autonomous AI solutions that will make the way we live, work, and play better.

Laurel: Excellent. Sandra, thank you so much for joining us today on the Business Lab.

Sandra: Thank you for having me.

Laurel: That was Sandra Rivera, the executive vice president and general manager of the Datacenter and AI Group at Intel, who we spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts. If you enjoy this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary. Performance varies by use, configuration and other factors.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
