IT strategies for hybrid cloud


One of the most significant effects of the 2020 coronavirus pandemic from an information technology (IT) perspective has been the sudden, unplanned migration of applications to the cloud, as organizations moved quickly to accommodate remote workers and the surge of online shoppers. Today, companies find themselves with one foot in the cloud and the other still in the on-premises world, facing significant challenges in how to manage this mixed IT environment, how to secure it, and how to keep costs under control.

A hybrid cloud IT infrastructure, in which resources are distributed across on-premises, private cloud, and public cloud environments, enables companies to accelerate time to market, spur innovation, and increase the efficiency of business processes. And companies are keen on its promises: more than a third (37%) say hybrid is an investment priority over the next year and a half, according to a 2021 survey of 372 IT professionals by research firm Enterprise Strategy Group (ESG).

“If we think about a large modern enterprise, we may have two, three, four data centers; three, four, five public cloud providers; dozens, if not hundreds of edge locations,” says Scott Sinclair, a senior analyst at ESG. “And we have data moving and apps moving everywhere all the time.”

For example, the London Stock Exchange Group has dozens of data centers, hundreds of applications, and a presence in Amazon Web Services, Google Cloud, and Microsoft Azure, according to Nikolay Plaunov, a director and technologist in the infrastructure and cloud division of LSEG, the diversified company that runs the stock exchange and also provides data-based financial services. Its portfolio includes virtualized applications running on-premises, containerized apps running in the cloud, and legacy apps running on mainframes.

“What is really hitting people today, versus probably five or 10 years ago, is this idea of, ‘I have these things in my data center, and I have these things I’ve moved to the public cloud and I need to manage a lot more things,’” adds Sinclair. “Now, I’m living in a world where not only do I have to manage a lot more things, but I am constantly dealing with data and apps moving in all directions.”

But the complexity of managing a hybrid cloud presents challenges that can bedevil chief information officers, including compatibility with legacy equipment, cybersecurity concerns, and cost issues associated with moving data and managing data access.

To successfully manage a hybrid cloud environment, organizations need a specially designed hybrid cloud management plan that includes the right tools and strategies. These approaches can be as varied as the businesses that adopt them, but some guidelines apply across industries: the need for a central control plane, the use of automation to manage IT operations, and a transition from managing infrastructure to managing service-level agreements with vendors.

It all starts with applications

Russell Skingsley, chief technology officer for digital infrastructure at Hitachi Vantara, says most customers started their cloud journeys with somewhat unrealistic expectations. They initially believed that all apps would eventually end up in the cloud.

What they’re finding is “there are things we can move, there are things we might move, and there are things we definitely can’t move,” Skingsley says.

Sinclair adds that while the rising tide is certainly lifting enterprise apps from the data center to the public cloud, there’s a countercurrent in which organizations are moving some applications from the cloud back to the data center. Some of the reasons cited by organizations speak to the complexity of hybrid cloud management: these include data sensitivity, performance, and availability requirements.

To effectively move applications to the public cloud, organizations need to set up a systematic methodology, almost a factory-style assembly line that analyzes each application in its portfolio and then decides which ones to “lift and shift” as-is to the cloud, which ones to re-factor or rewrite to take full advantage of the cloud, and which to keep on-premises.

The first step is conducting an inventory of the application portfolio. This can help organizations eliminate duplication and identify apps that no longer serve a business purpose and can be decommissioned. The next step is to analyze applications through the lens of business outcomes. Then, organizations need to make decisions based on factors like time, risk, cost, and value.
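To make the assembly-line idea concrete, here is a minimal sketch of how such a triage rubric might be encoded. It is illustrative only: the factor names, scales, and thresholds are assumptions, not criteria from the report.

```python
from dataclasses import dataclass

# Hypothetical triage rubric for a migration "assembly line".
# Factor names, scales, and thresholds are illustrative assumptions.

@dataclass
class App:
    name: str
    in_use: bool              # does it still serve a business purpose?
    business_value: int       # 1 (low) to 5 (high)
    cloud_compatibility: int  # 1 (mainframe-bound) to 5 (cloud-ready)
    data_sensitivity: int     # 1 (public) to 5 (highly regulated)

def triage(app: App) -> str:
    """Classify an application as retire, retain, rehost, or refactor."""
    if not app.in_use:
        return "retire"    # no business purpose: decommission it
    if app.data_sensitivity >= 4:
        return "retain"    # regulatory or sensitivity limits: keep on-premises
    if app.cloud_compatibility >= 4:
        return "rehost"    # lift and shift as-is
    if app.business_value >= 4:
        return "refactor"  # worth rewriting to exploit the cloud
    return "retain"        # low value, low compatibility: leave it alone

portfolio = [
    App("order-portal", True, 5, 4, 2),
    App("legacy-ledger", True, 5, 1, 5),
    App("old-reporting", False, 1, 2, 1),
]
for app in portfolio:
    print(f"{app.name}: {triage(app)}")
```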

At London Stock Exchange Group, Plaunov is constantly balancing cost with business criticality. Every application is different and requires its own specific calculation. “I’ve seen several applications that were lifted and shifted to the cloud, and in some cases, it’s relatively simple to optimize them and to optimize their costs.” In other cases, it can be expensive to convert a monolithic app to the public cloud because it entails breaking the app into smaller components.

The company’s risk management team analyzed its application portfolio and identified 14 high-priority apps in one of the business units. “If the application is business-critical and yet is running on obsolete infrastructure, then it’s an obvious choice to do something about it,” says Plaunov. “And if you’re already budgeting for some changes to an application, if there are no regulatory or technological limits, then it’s a candidate to go to the public cloud.”

As more businesses deploy more internet-connected devices and sensors, they find themselves performing initial processing of some data at the edge, then moving relevant data to the cloud or a data center. Organizations need to deploy a data strategy that determines which data should be processed where, and how to most efficiently move data between nodes.
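As a rough illustration of such a placement policy (not a description of any specific product), the sketch below keeps routine sensor readings at the edge and forwards only statistical outliers upstream; the window size, the three-sigma rule, and the send_to_cloud stub are all assumptions.

```python
import json
import statistics
from collections import deque

# Hypothetical edge-side filter: process readings locally and forward
# only outliers to the cloud. Window size, the three-sigma rule, and
# the transport stub below are illustrative assumptions.

window = deque(maxlen=100)  # rolling window of recent readings

def send_to_cloud(record: dict) -> None:
    # Stand-in for a real uplink (MQTT, HTTPS, and so on).
    print("forwarding:", json.dumps(record))

def handle_reading(sensor_id: str, value: float) -> None:
    window.append(value)
    if len(window) < 10:
        return  # not enough history yet to judge what is "normal"
    mean = statistics.fmean(window)
    stdev = statistics.pstdev(window)
    # Forward readings more than three standard deviations from the norm;
    # everything else stays at the edge for local aggregation.
    if stdev and abs(value - mean) > 3 * stdev:
        send_to_cloud({"sensor": sensor_id, "value": value, "mean": mean})
```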

Ultimately, says Skingsley, a hybrid cloud needs to become a flexible, resilient fabric that can accommodate shifting business requirements and react on the fly, spinning up new application instances as needed while the underlying storage, data-processing, and analytics resources respond automatically to business needs.


This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The Download: DeepMind’s AI shortcomings, and China’s social media translation problem

The hype around DeepMind’s new AI model misses what’s actually cool about it


Earlier this month, DeepMind presented a new “generalist” AI model called Gato. The model can play Atari video games, caption images, chat, and stack blocks with a real robot arm, the Alphabet-owned AI lab announced. All in all, Gato can do hundreds of different tasks.

But while Gato is undeniably fascinating, in the week since its release some researchers have got a bit carried away.

One of DeepMind’s top researchers and a coauthor of the Gato paper, Nando de Freitas, couldn’t contain his excitement. “The game is over!” he tweeted, suggesting that there is now a clear path from Gato to artificial general intelligence, or AGI, a vague concept of human- or superhuman-level AI. The way to build AGI, he claimed, is mostly a question of scale: making models such as Gato bigger and better.

Unsurprisingly, de Freitas’s announcement triggered breathless press coverage that DeepMind is “on the verge” of human-level artificial intelligence. This is not the first time hype has outstripped reality. Other exciting new AI models, such as OpenAI’s text generator GPT-3 and image generator DALL-E, have prompted similar grand claims.

For many in the field, this kind of feverish discourse overshadows other important research areas in AI. Read the full story.

—Melissa Heikkilä 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Volunteers are translating Chinese social media posts into English
Even though the posts have passed China’s internet censorship regime, Beijing is unhappy. (The Atlantic $)
+ WeChat wants people to use its video platform. So they did, for digital protests. (TR)

2 Ukraine’s startup community is resuming business as usual
Many workers are juggling their day jobs with after-hours war effort volunteering. (WP $)
+ Russian-speaking tech bosses living in the US are cutting ties with pro-war workers. (NYT $)
+ YouTube has taken down more than 9,000 channels linked to the war. (The Guardian)

3 The Buffalo shooting highlighted the failings of tech’s anti-terrorism accord
Critics say platforms haven’t done enough to tackle the root causes of extremism. (WSJ $)
+ America has experienced more than 3,500 mass shootings since Sandy Hook. (WP $)

4 Crypto appears to have an insider trading problem
Just like the banking system its supporters rail against. (WSJ $)
+ Christine Lagarde thinks crypto is worth “nothing.” (Bloomberg $)
+ Crypto is weathering a bitter storm. Some still hold on for dear life. (TR)
+ The crypto industry has lost around $1.5 trillion since November. (The Atlantic $)
+ Stablecoin Tether has paid out $10 billion in withdrawals since the crash started. (The Guardian)

5 The nuclear fusion industry is in turmoil
It isn’t even up and running yet, but fuel supplies are already running low. (Wired $)
+ A hole in the ground could be the future of fusion power. (TR)
+ The US Midwest could be facing power grid failure this summer. (Motherboard)

6 Big Tech isn’t worried about the economic downturn
Even if it drops some of its market valuation along the way. (NYT $)
+ But lawmakers are determined to rein them in with antitrust legislation. (Recode)
+ Their carbon emissions are spiraling out of control, too. (New Yorker $)

7 The US military wants to build a flying ship
The Liberty Lifer X-plane would be independent of fixed airfields and ports. (IEEE Spectrum)

8 We need to change how we recycle plastic
The good news is that the technology to overhaul it exists—it just needs refining. (Wired $)
+ A French company is using enzymes to recycle one of the most common single-use plastics. (TR)

9 Why you should treat using your phone like drinking wine
It’s about striking a delicate balance that keeps the positive from tipping into the negative. (The Guardian $)

10 Inside the wholesome world of internet knitting 🧶
Its favorite knitter’s creations have gained a cult following. (Input)
+ How a ban on pro-Trump patterns unraveled the online knitting world. (TR)

Quote of the day

“I like the instant gratification of making the internet better.”

—Jason Moore, who is credited with creating more than 50,000 Wikipedia pages, tells CNN about his motivations for taking on the unpaid work.



The hype around DeepMind’s new AI model misses what’s actually cool about it


“Nature is trying to tell us something here, which is, this doesn’t really work, but the field is so believing its own press clippings, that it just can’t see that,” he adds. 

Even de Freitas’s DeepMind colleagues Jackie Kay and Scott Reed, who worked with him on Gato, were more circumspect when I asked them directly about his claims. Asked whether Gato was heading toward AGI, they wouldn’t be drawn. “I don’t actually think it’s really feasible to make predictions with these kinds of things. I try to avoid that. It’s like predicting the stock market,” said Kay.

Reed said the question was a difficult one. “I think most machine learning people will studiously avoid answering. Very hard to predict, but, you know, hopefully we get there someday.”

In a way, the fact that DeepMind called Gato a “generalist” might have made it a victim of the AI sector’s excessive hype around AGI. The AI systems of today are called “narrow” AI, meaning they can only do a specific, restricted set of tasks such as generating text.

Some technologists, including some at DeepMind, think that one day humans will develop “broader” AI systems that will be able to function as well as or even better than humans. Some call this artificial “general” intelligence. Others say it is like “belief in magic.” Many top researchers, such as Meta’s chief AI scientist Yann LeCun, question whether it is even possible at all.

Gato is a “generalist” in the sense that it can do many different things at the same time. But that is a world apart from a “general” AI that can meaningfully adapt to new tasks that are different from what the model was trained on, says MIT’s Jacob Andreas. “We’re still quite far from being able to do that.”

Making models bigger will also not address the issue that models lack “lifelong learning”: the ability to be taught something once and then grasp all of its implications, drawing on them to inform every subsequent decision, he says.

The hype around tools like Gato is harmful for the general development of AI, argues Emmanuel Kahembwe, an AI/robotics researcher and part of the Black in AI organization co-founded by Timnit Gebru. “There are many interesting topics that are left to the side, that are underfunded, that deserve more attention, but that’s not what the big tech companies and the bulk of researchers in such tech companies are interested in,” he says.

Tech companies ought to take a step back and take stock of why they are building what they are building, says Vilas Dhar, president of the Patrick J. McGovern Foundation, a charity that funds AI projects “for good.” 

“AGI speaks to something deeply human—the idea that we can become more than we are, by building tools that propel us to greatness,” he says. “And that’s really nice, except it also is a way to distract us from the fact that we have real problems that face us today that we should be trying to address using AI.”



Equipment management and sustainability


One area Castrip has been working on for the last two years is using machine intelligence to improve process efficiency and yield. “This is quite affected by the skill of the operator, which sets the points for automation, so we are using reinforcement learning-based neural networks to increase the precision of that setting to create a self-driving casting machine. This is certainly going to create more energy-efficiency gains—nothing like the earlier big-step changes, but they’re still measurable.”

Reuse, recycle, remanufacture: design for circular manufacturing

Growth in the use of digital technologies to automate machinery and monitor and analyze manufacturing processes—a suite of capabilities commonly referred to as Industry 4.0—is primarily driven by the need to increase efficiency and reduce waste. Firms are extending the productive capabilities of tools and machinery through monitoring and management technologies that assess performance and proactively predict optimum repair and refurbishment cycles. This operational strategy, known as condition-based maintenance, can extend the lifespan of manufacturing assets and reduce failures and downtime. That not only creates greater operational efficiency but also directly improves energy efficiency and optimizes material usage, helping to decrease a production facility’s carbon footprint.
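As a bare-bones sketch of the principle (not any vendor’s actual system), the fragment below flags an asset for inspection when a monitored vibration signal drifts past a tolerance band; the baseline, tolerance, and readings are invented for illustration.

```python
import statistics

# Minimal condition-based maintenance check. The baseline, tolerance,
# and readings below are invented for illustration.

BASELINE_MM_S = 2.8  # assumed healthy vibration velocity (mm/s)
TOLERANCE = 1.25     # flag when the recent average drifts 25% above baseline

def needs_inspection(recent_vibration_mm_s: list[float]) -> bool:
    """Schedule maintenance by measured condition, not by the calendar."""
    return statistics.fmean(recent_vibration_mm_s) > BASELINE_MM_S * TOLERANCE

readings = [3.0, 3.2, 3.5, 3.7, 3.9, 4.1]  # hypothetical hourly samples
if needs_inspection(readings):
    print("schedule inspection: vibration trending above tolerance band")
```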

The use of such tools can also set a firm on the first steps of a journey toward a business defined by “circular economy” principles, whereby a firm not only produces goods in a carbon-neutral fashion, but relies on refurbished or recycled inputs to manufacture them. Circularity is a progressive journey of many steps. Each step requires a viable long-term business plan for managing materials and energy in the short term, and “design-for-sustainability” manufacturing in the future.

IoT monitoring and measurement sensors deployed on manufacturing assets, and in production and assembly lines, represent a critical element of a firm’s efforts to implement circularity. Through condition-based maintenance initiatives, a company is able to reduce its energy expenditure and increase the lifespan and efficiency of its machinery and other production assets. “Performance and condition data gathered by IoT sensors and analyzed by management systems provides a ‘next level’ of real-time, factory-floor insight, which allows much greater precision in maintenance assessments and condition-refurbishment schedules,” notes Pierre Sagrafena, circularity program leader at Schneider Electric’s energy management business.

Global food manufacturer Nestle is undergoing digital transformation through its Connected Worker initiative, which focuses on improving operations by increasing paperless information flow to facilitate better decision-making. José Luis Buela Salazar, Nestle’s eurozone maintenance manager, oversees an effort to increase process-control capabilities and maintenance performance for the company’s 120 factories in Europe.

“Condition monitoring is a long journey,” he says. “We used to rely on a lengthy ‘Level One’ process: knowledge experts on the shop floor reviewing performance and writing reports to establish alarm system settings and maintenance schedules. We are now coming onto a ‘4.0’ process, where data sensors are online and our maintenance scheduling processes are predictive, using artificial intelligence to predict failures based on historical data that is gathered from hundreds of sensors often on an hourly basis.” About 80% of Nestle’s global facilities use advanced condition and process-parameter monitoring, which Buela Salazar estimates has cut maintenance costs by 5% and raised equipment performance by 5% to 7%.
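As a simplified stand-in for the kind of AI-based prediction Buela Salazar describes (the real systems learn from hundreds of sensors), the sketch below fits a straight-line trend to hourly sensor history and projects when the signal will cross its alarm threshold; the sensor, threshold, and values are hypothetical.

```python
import statistics

# Toy predictive-maintenance estimate: fit a least-squares trend to hourly
# sensor history and project when it will cross the alarm threshold.
# A simplified stand-in for the AI described above; all values invented.

def hours_until_alarm(history: list[float], threshold: float) -> float | None:
    """Return projected hours until the trend crosses the threshold,
    or None if the signal is stable or improving."""
    n = len(history)
    x_mean = (n - 1) / 2
    y_mean = statistics.fmean(history)
    slope = (
        sum((x - x_mean) * (y - y_mean) for x, y in enumerate(history))
        / sum((x - x_mean) ** 2 for x in range(n))
    )
    if slope <= 0:
        return None
    return (threshold - history[-1]) / slope

bearing_temp_c = [61.0, 61.4, 61.9, 62.5, 63.2]  # hypothetical hourly samples
eta = hours_until_alarm(bearing_temp_c, threshold=70.0)
if eta is not None:
    print(f"projected alarm in ~{eta:.0f} hours; schedule maintenance first")
```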

Buela Salazar says much of this improvement is due to an increasingly dense array of IoT-based sensors (each factory has between 150 and 300), “which collect more and more reliable data, allowing us to detect even slight deteriorations at early stages, giving us more time to react, and reducing our need for external maintenance solutions.” Currently, Buela Salazar explains, the carbon-reduction benefits of condition-based maintenance are implicit, but this is fast changing.

“We have a major energy-intensive equipment initiative to install IoT sensors for all such machines in 500 facilities globally to monitor water, gas, and energy consumption for each, and make correlations with its respective process performance data,” he says. This will help Nestle lower manufacturing energy consumption by 5% in 2023. In the future, such correlation analysis will help Nestle conduct “big data analysis to carbon-optimize production-line configurations at an integrated level” by combining insights on material usage measurements, energy efficiency of machines, rotation schedules for motors and gearboxes, and as many as 100 other parameters in a complex food-production facility, adds Buela Salazar. “Integrating all this data with IoT and machine learning will allow us to see what we have not been able to see to date.”

