The great chip crisis threatens the promise of Moore’s Law

Even as microchips have become essential in so many products, their development and manufacturing have come to be dominated by a small number of producers with limited capacity—and appetite—for churning out the commodity chips that are a staple for today’s technologies. And because making chips requires hundreds of manufacturing steps and months of production time, the semiconductor industry cannot quickly pivot to satisfy the pandemic-fueled surge in demand. 

After decades of fretting about how we will carve out features as small as a few nanometers on silicon wafers, the spirit of Moore’s Law—the expectation that cheap, powerful chips will be readily available—is now being threatened by something far more mundane: inflexible supply chains. 

A lonely frontier

Twenty years ago, the world had 25 manufacturers making leading-edge chips. Today, only Taiwan Semiconductor Manufacturing Company (TSMC) in Taiwan, Intel in the United States, and Samsung in South Korea have the facilities, or fabs, that produce the most advanced chips. And Intel, long a technology leader, is struggling to keep up, having repeatedly missed deadlines for producing its latest generations. 

One reason for the consolidation is that building a facility to make the most advanced chips costs between $5 billion and $20 billion. These fabs make chips with features as small as a few nanometers; in industry jargon they’re called 5-nanometer and 7-nanometer nodes. Much of the cost of new fabs goes toward buying the latest equipment, such as a tool called an extreme ultraviolet lithography (EUV) machine that costs more than $100 million. Made solely by ASML in the Netherlands, EUV machines are used to etch detailed circuit patterns with nanometer-size features.

Chipmakers have been working on EUV technology for more than two decades. After billions of dollars of investment, EUV machines were first used in commercial chip production in 2018. “That tool is 20 years late, 10x over budget, because it’s amazing,” says David Kanter, executive director of an open engineering consortium focused on machine learning. “It’s almost magical that it even works. It’s totally like science fiction.”

Such gargantuan effort made it possible to create the billions of tiny transistors in Apple’s M1 chip, which was made by TSMC; it’s among the first generation of leading-edge chips to rely fully on EUV. 

Paying for the best chips makes sense for Apple because these chips go into the latest MacBook and iPhone models, which sell by the millions at luxury-brand prices. “The only company that is actually using EUV in high volume is Apple, and they sell $1,000 smartphones for which they have insane margin,” Kanter says.

Not only are the fabs for manufacturing such chips expensive, but the cost of designing the immensely complex circuits is now beyond the reach of many companies. In addition to Apple, only the largest tech companies that require the highest computing performance, such as Qualcomm, AMD, and Nvidia, are willing to pay hundreds of millions of dollars to design a chip for leading-edge nodes, says Sri Samavedam, senior vice president of CMOS technologies at Imec, an international research institute based in Leuven, Belgium.

Many more companies are producing laptops, TVs, and cars that use chips made with older technologies, and a spike in demand for these is at the heart of the current chip shortage. Simply put, a majority of chip customers can’t afford—or don’t want to pay for—the latest chips. A typical car today uses dozens of microchips, and an electric vehicle uses many more, so the cost quickly adds up. Instead, makers of things like cars have stuck with chips made using older technologies.

What’s more, many of today’s most popular electronics simply don’t require leading-edge chips. “It doesn’t make sense to put, for example, an A14 [iPhone and iPad] chip in every single computer that we have in the world,” says Hassan Khan, a former doctoral researcher at Carnegie Mellon University who studied the public policy implications of the end of Moore’s Law and currently works at Apple. “You don’t need it in your smart thermometer at home, and you don’t need 15 of them in your car, because it’s very power hungry and it’s very expensive.”

The problem is that even as more users rely on older and cheaper chip technologies, the giants of the semiconductor industry have focused on building new leading-edge fabs. TSMC, Samsung, and Intel have all recently announced billions of dollars in investments for the latest manufacturing facilities. Yes, they’re expensive, but that’s where the profits are—and for the last 50 years, it has been where the future is. 

TSMC, the world’s largest contract manufacturer for chips, earned almost 60% of its 2020 revenue from making leading-edge chips with features 16 nanometers and smaller, including Apple’s M1 chip made with the 5-nanometer manufacturing process.

Making the problem worse is that “nobody is building semiconductor manufacturing equipment to support older technologies,” says Dale Ford, chief analyst at the Electronic Components Industry Association, a trade association based in Alpharetta, Georgia. “And so we’re kind of stuck between a rock and a hard spot here.”

Low-end chips

All this matters to users of technology not only because of the supply disruption it’s causing today, but also because it threatens the development of many potential innovations. In addition to being harder to come by, cheaper commodity chips are also becoming relatively more expensive, since each chip generation has required more costly equipment and facilities than the generations before. 

Some consumer products will simply demand more powerful chips. The buildout of faster 5G mobile networks and the rise of computing applications reliant on 5G speeds could compel investment in specialized chips designed for networking equipment that talks to dozens or hundreds of Internet-connected devices. Automotive features such as advanced driver-assistance systems and in-vehicle “infotainment” systems may also benefit from leading-edge chips, as evidenced by electric-vehicle maker Tesla’s reported partnerships with both TSMC and Samsung on chip development for future self-driving cars.

But buying the latest leading-edge chips or investing in specialized chip designs may not be practical for many companies when developing products for an “intelligence everywhere” future. Makers of consumer devices such as a Wi-Fi-enabled sous vide machine are unlikely to spend the money to develop specialized chips on their own for the sake of adding even fancier features, Kanter says. Instead, they will likely fall back on whatever chips made using older technologies can provide.

And lower-cost items such as clothing, he says, have “razor-thin margins” that leave little wiggle room for more expensive chips that would add a dollar—let alone $10 or $20—to each item’s price tag. That means the climbing price of computing power may prevent the development of clothing that could, for example, detect and respond to voice commands or changes in the weather.

The world can probably live without fancier sous vide machines, but the lack of ever cheaper and more powerful chips would come with a real cost: the end of an era of inventions fueled by Moore’s Law and its decades-old promise that increasingly affordable computation power will be available for the next innovation. 

The majority of today’s chip customers make do with the cheaper commodity chips that represent a trade-off between cost and performance. And it’s the supply of such commodity chips that appears far from adequate as the global demand for computing power grows. 

“It is still the case that semiconductor usage in vehicles is going up, semiconductor usage in your toaster oven and for all kinds of things is going up,” says Willy Shih, a professor of management practice at Harvard Business School. “So then the question is, where is the shortage going to hit next?”

A global concern

In early 2021, President Joe Biden signed an executive order mandating supply chain reviews for chips and threw his support behind a bipartisan push in Congress to approve at least $50 billion for semiconductor manufacturing and research. Biden also held two White House summits with leaders from the semiconductor and auto industries, including an April 12 meeting during which he prominently displayed a silicon wafer.

The actions won’t solve the imbalance between chip demand and supply anytime soon. But at the very least, experts say, today’s crisis represents an opportunity for the US government to try to finally fix the supply chain and reverse the overall slowdown in semiconductor innovation—and perhaps shore up the US’s capacity to make the badly needed chips.

An estimated 75% of all chip manufacturing capacity was based in East Asia as of 2019, with the US share sitting at approximately 13%. Taiwan’s TSMC alone has nearly 55% of the foundry market that handles consumer chip manufacturing orders.

Looming over everything is the US-China rivalry. China’s national champion firm SMIC has been building fabs that are still five or six years behind the cutting edge in chip technologies. But it’s possible that Chinese foundries could help meet the global demand for chips built on older nodes in the coming years.  “Given the state subsidies they receive, it’s possible Chinese foundries will be the lowest-cost manufacturers as they stand up fabs at the 22-nanometer and 14-nanometer nodes,” Khan says. “Chinese fabs may not be competitive at the frontier, but they could supply a growing portion of demand.”

Deep learning can almost perfectly predict how ice forms

Researchers have used deep learning techniques to model how ice crystals form in the atmosphere with much higher precision than ever before. Their paper, published this week in PNAS, hints at the potential for the new method to significantly increase the accuracy of weather and climate forecasting.

The researchers used deep learning to predict how atoms and molecules behave. First, deep learning models were trained on small-scale simulations of 64 water molecules to help them predict how electrons in atoms interact. The models then replicated those interactions on a larger scale, with more atoms and molecules. It’s this ability to precisely simulate electron interactions that allowed the team to accurately predict physical and chemical behavior. 
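
To make that "train on small quantum-level simulations, then apply to bigger systems" idea concrete, here is a minimal, self-contained sketch in PyTorch. The descriptor, network, and "energy" labels below are illustrative assumptions invented for the sketch, not the study's actual method or data (the real work uses far more sophisticated atomic-environment descriptors and electronic-structure reference calculations).

    import numpy as np
    import torch
    from torch import nn

    def local_descriptor(positions, cutoff=6.0, bins=16):
        """Histogram of pairwise distances within a cutoff: a crude,
        size-independent stand-in for real atomic-environment descriptors."""
        diffs = positions[:, None, :] - positions[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        dists = dists[(dists > 0) & (dists < cutoff)]
        hist, _ = np.histogram(dists, bins=bins, range=(0.0, cutoff))
        return hist / len(positions)  # normalize per molecule

    # A small network mapping a 16-bin descriptor to an energy per molecule.
    model = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Placeholder "training set": random 64-molecule boxes with fabricated
    # target values standing in for expensive electronic-structure results.
    rng = np.random.default_rng(0)
    xs, ys = [], []
    for _ in range(200):
        pos = rng.uniform(0.0, 12.0, size=(64, 3))
        xs.append(local_descriptor(pos))
        ys.append([pos.std()])  # fake energy label, for illustration only
    x = torch.tensor(np.array(xs), dtype=torch.float32)
    y = torch.tensor(np.array(ys), dtype=torch.float32)

    for _ in range(500):  # fit the cheap surrogate to the small systems
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

    # The trained surrogate now scores a far larger box at negligible cost.
    big_box = rng.uniform(0.0, 40.0, size=(2048, 3))
    desc = torch.tensor(local_descriptor(big_box), dtype=torch.float32)
    print("predicted energy per molecule:", model(desc.unsqueeze(0)).item())

The point of the sketch is transferability: the surrogate is trained only on small systems, but because its input does not depend on system size, it can be evaluated on much larger ones, which is the property the researchers exploit.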

“The properties of matter emerge from how electrons behave,” says Pablo Piaggi, a research fellow at Princeton University and the lead author on the study. “Simulating explicitly what happens at that level is a way to capture much more rich physical phenomena.”

It’s the first time this method has been used to model something as complex as the formation of ice crystals, also known as ice nucleation. This development may eventually improve the accuracy of weather and climate forecasting, because the formation of ice crystals is one of the first steps in the formation of clouds, which is where all precipitation comes from. 

Xiaohong Liu, professor of atmospheric sciences at Texas A&M University, who was not involved in the study, says half of all precipitation events—whether it’s snow or rain or sleet—begin as ice crystals, which then grow larger and result in precipitation. If researchers can model ice nucleation more accurately, it could give a big boost to weather prediction overall.

Ice nucleation is currently predicted based on laboratory experiments. Researchers collect data on ice formation under different laboratory conditions, and that data is fed into weather prediction models under similar real-world conditions. This method works well enough sometimes, but often ends up being inaccurate because of the sheer number of variables in real-world conditions. If even a few factors vary between the lab and actual conditions, the results can be quite different.

“Your data is only valid for a certain region, temperature, or kind of laboratory setting,” Liu says.

Basing ice nucleation on how electrons interact is much more precise, but it’s also extremely computationally expensive. Predicting ice nucleation requires researchers to model at least 4,000 to 100,000 water molecules, which even on supercomputers could take years to run. And even that would only be able to model the interactions for 100 picoseconds, or 10⁻¹⁰ seconds, not enough to observe the ice nucleation process.

Using deep learning, however, researchers were able to run the calculations in just 10 days. The simulated time span was also 1,000 times longer—still a fraction of a second, but just enough to see the ice nucleation process.
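
As a quick sanity check on those figures, using only the numbers quoted above, the arithmetic behind the timescales works out as follows:

    picosecond = 1e-12                      # seconds
    conventional_span = 100 * picosecond    # ~100 ps reachable with electron-level methods
    deep_learning_span = 1_000 * conventional_span

    print(f"{conventional_span:.0e} s")     # 1e-10 s
    print(f"{deep_learning_span:.0e} s")    # 1e-07 s, i.e. 100 nanoseconds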

Of course, more accurate ice nucleation models alone won’t make weather forecasting perfect, says Liu. Ice nucleation is only a small but critical component of weather modeling. Other aspects, like understanding how water droplets and ice crystals grow, and how they move and interact under different conditions, are also important.

Still, the ability to more accurately model how ice crystals form in the atmosphere would significantly improve weather predictions, especially forecasts of whether it’s likely to rain or snow, and by how much. It could also improve climate forecasting by making it easier to model clouds, which are vital players in the absorption of sunlight and the abundance of greenhouse gases.

Piaggi says future research could model ice nucleation when there are substances like smoke in the air, which can improve the accuracy of models even more. Because of deep learning techniques, it’s now possible to use electron interactions to model larger systems for longer periods of time.

“That has opened essentially a new field,” Piaggi says. “It’s already having and will have an even greater role in simulations in chemistry and in our simulations of materials.”

How to craft effective AI policy

Nicol Turner Lee: So to your first question, I think you’re right that policymakers should actually define the guardrails, but I don’t think they need to do it for everything. I think we need to pick those areas that are most sensitive. The EU has called them high risk. And maybe we might take from that some models that help us think about what’s high risk and where should we spend more time, and where, potentially, should we and policymakers spend time together?

I’m a huge fan of regulatory sandboxes when it comes to co-design and co-evolution of feedback. Uh, I have an article coming out in an Oxford University Press book on an incentive-based rating system that I could talk about in just a moment. But I also think, on the flip side, that all of you have to account for your reputational risk.

As we move into a much more digitally advanced society, it is incumbent upon developers to do their due diligence too. You can’t afford as a company to go out and put out an algorithm, or an autonomous system, that you think is the best idea, and then land on the front page of the newspaper. Because what that does is degrade your consumers’ trust in your product.

And so what I tell, you know, both sides is that I think it’s worth a conversation where we have certain guardrails when it comes to facial recognition technology, because we don’t have the technical accuracy when it applies to all populations. When it comes to disparate impact on financial products and services, there are great models that I’ve found in my work in the banking industry, where they actually have triggers because they have regulatory bodies that help them understand what proxies actually deliver disparate impact. There are areas where we just saw this, in the housing and appraisal market, where AI is being used to sort of, um, replace subjective decision-making but is contributing more to the type of discrimination and predatory appraisals that we see. There are certain cases where we actually need policymakers to impose guardrails, but more so to be proactive. I tell policymakers all the time: you can’t blame data scientists if the data is horrible.

Anthony Green: Right.

Nicol Turner Lee: Put more money into R&D. Help us create better data sets that are overrepresented in certain areas or underrepresented in terms of minority populations. The key thing is, it has to work together. I don’t think that we’ll have a good winning solution if policymakers actually, you know, lead this, or data scientists lead it by themselves in certain areas. I think you really need people working together and collaborating on what those principles are. We create these models. Computers don’t. We know what we’re doing with these models when we’re creating algorithms or autonomous systems or ad targeting. We know! We in this room, we cannot sit back and say we don’t understand why we use these technologies. We know, because they actually have a precedent for how they’ve been expanded in our society. But we need some accountability. And that’s really what I’m trying to get at: who’s making us accountable for these systems that we’re creating?

It’s so interesting, Anthony, these last few, uh, weeks, as many of us have watched the, uh, conflict in Ukraine. My daughter, because I have a 15-year-old, has come to me with a variety of TikToks and other things that she’s seen, to sort of say, “Hey mom, did you know that this is happening?” And I’ve had to sort of pull myself back, because I’ve gotten really involved in the conversation, not knowing that in some ways, once I go down that path with her, I’m going deeper and deeper and deeper into that well.

Anthony Green: Yeah.

A bioengineered cornea can restore sight to blind people

One unexpected bonus was that the implant changed the shape of the cornea enough for its recipients to wear contact lenses for the best possible vision, even though they had been previously unable to tolerate them.

The cornea helps focus light rays on the retina at the back of the eye and protects the eye from dirt and germs. When damaged by infection or injury, it can prevent light from reaching the retina, making it difficult to see.

Corneal blindness is a big problem: around 12.7 million people are estimated to be affected by the condition, and cases are rising at a rate of around a million each year. Iran, India, China, and various countries in Africa have particularly high levels of corneal blindness, and specifically keratoconus.

Because pig skin is a by-product of the food industry, using this bioengineered implant should cost a fraction as much as transplanting a human donor cornea, said Neil Lagali, a professor at the Department of Biomedical and Clinical Sciences at Linköping University and one of the researchers behind the study.

“It will be affordable, even to people in low-income countries,” he said. “There’s a much bigger cost saving compared to the way traditional corneal transplantation is being done today.”

The team is hoping to run a larger clinical trial of at least 100 patients in Europe and the US. In the meantime, they plan to kick-start the regulatory process required for the US Food and Drug Administration to eventually approve the device for the market.
