It will soon be easy for self-driving cars to hide in plain sight. We shouldn’t let them.

It will soon become easy for self-driving cars to hide in plain sight. The rooftop lidar sensors that currently mark many of them out are likely to become smaller. Mercedes vehicles with the new, partially automated Drive Pilot system, which carries its lidar sensors behind the car’s front grille, are already indistinguishable to the naked eye from ordinary human-operated vehicles.

Is this a good thing? As part of our Driverless Futures project at University College London, my colleagues and I recently concluded the largest and most comprehensive survey of citizens’ attitudes to self-driving vehicles and the rules of the road. One of the questions we decided to ask, after conducting more than 50 in-depth interviews with experts, was whether autonomous cars should be labeled. The consensus from our sample of 4,800 UK citizens is clear: 87% agreed with the statement “It must be clear to other road users if a vehicle is driving itself” (just 4% disagreed, with the rest unsure).

We sent the same survey to a smaller group of experts. They were less convinced: 44% agreed and 28% disagreed that a vehicle’s status should be advertised. The question isn’t straightforward. There are valid arguments on both sides. 

We could argue that, on principle, humans should know when they are interacting with robots. That was the argument put forth in 2017, in a report commissioned by the UK’s Engineering and Physical Sciences Research Council. “Robots are manufactured artefacts,” it said. “They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.” If self-driving cars on public roads are genuinely being tested, then other road users could be considered subjects in that experiment and should give something like informed consent. Another argument in favor of labeling, this one practical, is that—as with a car operated by a student driver—it is safer to give a wide berth to a vehicle that may not behave like one driven by a well-practiced human.

There are arguments against labeling too. A label could be seen as an abdication of innovators’ responsibilities, implying that others should acknowledge and accommodate a self-driving vehicle. And it could be argued that a new label, without a clear shared sense of the technology’s limits, would only add confusion to roads that are already replete with distractions. 

From a scientific perspective, labels also affect data collection. If a self-driving car is learning to drive and others know this and behave differently, this could taint the data it gathers. Something like that seemed to be on the mind of a Volvo executive who told a reporter in 2016 that “just to be on the safe side,” the company would be using unmarked cars for its proposed self-driving trial on UK roads. “I’m pretty sure that people will challenge them if they are marked by doing really harsh braking in front of a self-driving car or putting themselves in the way,” he said.

On balance, the arguments for labeling, at least in the short term, are more persuasive. This debate is about more than just self-driving cars. It cuts to the heart of the question of how novel technologies should be regulated. The developers of emerging technologies, who often portray them as disruptive and world-changing at first, are apt to paint them as merely incremental and unproblematic once regulators come knocking. But novel technologies do not just fit right into the world as it is. They reshape worlds. If we are to realize their benefits and make good decisions about their risks, we need to be honest about them. 

To better understand and manage the deployment of autonomous cars, we need to dispel the myth that computers will drive just like humans, but better. Management professor Ajay Agrawal, for example, has argued that self-driving cars basically just do what drivers do, but more efficiently: “Humans have data coming in through the sensors—the cameras on our face and the microphones on the sides of our heads—and the data comes in, we process the data with our monkey brains and then we take actions and our actions are very limited: we can turn left, we can turn right, we can brake, we can accelerate.”

The Download: DeepMind’s AI shortcomings, and China’s social media translation problem

The hype around DeepMind’s new AI model misses what’s actually cool about it


Earlier this month, DeepMind presented a new “generalist” AI model called Gato. The model can play Atari video games, caption images, chat, and stack blocks with a real robot arm, the Alphabet-owned AI lab announced. All in all, Gato can do hundreds of different tasks.

But while Gato is undeniably fascinating, in the week since its release some researchers have got a bit carried away.

One of DeepMind’s top researchers and a coauthor of the Gato paper, Nando de Freitas, couldn’t contain his excitement. “The game is over!” he tweeted, suggesting that there is now a clear path from Gato to artificial general intelligence, or AGI, a vague concept of human- or superhuman-level AI. The way to build AGI, he claimed, is mostly a question of scale: making models such as Gato bigger and better.

Unsurprisingly, de Freitas’s announcement triggered breathless press coverage that DeepMind is “on the verge” of human-level artificial intelligence. This is not the first time hype has outstripped reality. Other exciting new AI models, such as OpenAI’s text generator GPT-3 and image generator DALL-E, have generated similar grand claims.

For many in the field, this kind of feverish discourse overshadows other important research areas in AI. Read the full story.

—Melissa Heikkilä 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Volunteers are translating Chinese social media posts into English
Even though the posts have passed China’s internet censorship regime, Beijing is unhappy. (The Atlantic $)
+ WeChat wants people to use its video platform. So they did, for digital protests. (TR)

2 Ukraine’s startup community is resuming business as usual
Many workers are juggling their day jobs with after-hours war effort volunteering. (WP $)
+ Russian-speaking tech bosses living in the US are cutting ties with pro-war workers. (NYT $)
+ YouTube has taken down more than 9,000 channels linked to the war. (The Guardian)

3 The Buffalo shooting highlighted the failings of tech’s anti-terrorism accord
Critics say platforms haven’t done enough to tackle the root causes of extremism. (WSJ $)
+ America has experienced more than 3,500 mass shootings since Sandy Hook. (WP $)

4 Crypto appears to have an insider trading problem
Just like the banking system its supporters rail against. (WSJ $)
+ Christine Lagarde thinks crypto is worth “nothing.” (Bloomberg $)
+ Crypto is weathering a bitter storm. Some still hold on for dear life. (TR)
+ The crypto industry has lost around $1.5 trillion since November. (The Atlantic $)
+ Stablecoin Tether has paid out $10 billion in withdrawals since the crash started. (The Guardian)

5 The nuclear fusion industry is in turmoil
It isn’t even up and running yet, but fuel supplies are already running low. (Wired $)
+ A hole in the ground could be the future of fusion power. (TR)
+ The US midwest could be facing power grid failure this summer. (Motherboard)

6 Big Tech isn’t worried about the economic downturn
Even if it drops some of its market valuation along the way. (NYT $)
+ But lawmakers are determined to rein them in with antitrust legislation. (Recode)
+ Their carbon emissions are spiraling out of control, too. (New Yorker $)

7 The US military wants to build a flying ship
The Liberty Lifer X-plane would be independent of fixed airfields and ports. (IEEE Spectrum)

8 We need to change how we recycle plastic
The good news is that the technology to overhaul it exists—it just needs refining. (Wired $)
+ A French company is using enzymes to recycle one of the most common single-use plastics. (TR)

9 Why you should treat using your phone like drinking wine
Striking the delicate balance that keeps the positive from tipping into the negative. (The Guardian $)

10 Inside the wholesome world of internet knitting 🧶
Its favorite knitter’s creations have gained a cult following. (Input)
+ How a ban on pro-Trump patterns unraveled the online knitting world. (TR)

Quote of the day

“I like the instant gratification of making the internet better.”

—Jason Moore, who is credited with creating more than 50,000 Wikipedia pages, tells CNN about his motivations for taking on the unpaid work.

The hype around DeepMind’s new AI model misses what’s actually cool about it

“Nature is trying to tell us something here, which is, this doesn’t really work, but the field is so believing its own press clippings, that it just can’t see that,” he adds. 

Even de Freitas’s DeepMind colleagues, Jackie Kay and Scott Reed, who worked with him on Gato, were more circumspect when I asked them directly about his claims. When asked whether Gato was heading towards AGI, they wouldn’t be drawn. “I don’t actually think it’s really feasible to make predictions with these kinds of things. I try to avoid that. It’s like predicting the stock market,” said Kay.

Reed said the question was a difficult one. “I think most machine learning people will studiously avoid answering. Very hard to predict, but, you know, hopefully we get there someday.”

In a way, the fact that DeepMind called Gato a “generalist” might have made it a victim of the AI sector’s excessive hype around AGI. The AI systems of today are called “narrow” AI, meaning they can only do a specific, restricted set of tasks such as generating text.

Some technologists, including some at DeepMind, think that one day humans will develop “broader” AI systems that will be able to function as well as or even better than humans. Some call this artificial “general” intelligence. Others say it is like “belief in magic.” Many top researchers, such as Meta’s chief AI scientist Yann LeCun, question whether it is even possible at all.

Gato is a “generalist” in the sense that it can do many different things at the same time. But that is a world apart from a “general” AI that can meaningfully adapt to new tasks that are different from what the model was trained on, says MIT’s Andreas. “We’re still quite far from being able to do that.”

Making models bigger will also not address the problem that models lack “lifelong learning”: the ability to be taught something once and then apply all of its implications to the other decisions they go on to make, he says.

The hype around tools like Gato is harmful for the general development of AI, argues Emmanuel Kahembwe, an AI/robotics researcher and part of the Black in AI organization co-founded by Timnit Gebru. “There are many interesting topics that are left to the side, that are underfunded, that deserve more attention, but that’s not what the big tech companies and the bulk of researchers in such tech companies are interested in,” he says.

Tech companies ought to take a step back and take stock of why they are building what they are building, says Vilas Dhar, president of the Patrick J. McGovern Foundation, a charity that funds AI projects “for good.” 

“AGI speaks to something deeply human—the idea that we can become more than we are, by building tools that propel us to greatness,” he says. “And that’s really nice, except it also is a way to distract us from the fact that we have real problems that face us today that we should be trying to address using AI.”

Equipment management and sustainability

One area that Castrip has been working on for the last two years is expanding the use of machine intelligence to improve process efficiency and yield. “This is quite affected by the skill of the operator, which sets the points for automation, so we are using reinforcement learning-based neural networks to increase the precision of that setting to create a self-driving casting machine. This is certainly going to create more energy-efficiency gains—nothing like the earlier big-step changes, but they’re still measurable.”

Reuse, recycle, remanufacture: design for circular manufacturing

Growth in the use of digital technologies to automate machinery and monitor and analyze manufacturing processes—a suite of capabilities commonly referred to as Industry 4.0—is primarily driven by the need to increase efficiency and reduce waste. Firms are extending the productive capabilities of tools and machinery in manufacturing processes through monitoring and management technologies that can assess performance and proactively predict optimum repair and refurbishment cycles. Such an operational strategy, known as condition-based maintenance, can extend the lifespan of manufacturing assets and reduce failure and downtime. This not only creates greater operational efficiency but also directly improves energy efficiency and optimizes material usage, helping to decrease a production facility’s carbon footprint.
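
To make the idea concrete, here is a minimal sketch of condition-based maintenance logic in Python. The signal, window size, and alert threshold are hypothetical, chosen only to illustrate the principle of triggering service from a measured condition rather than a fixed calendar interval; it is not drawn from any vendor's actual system.

```python
# Illustrative sketch of condition-based maintenance (hypothetical values).
from statistics import mean

def maintenance_due(vibration_mm_s: list[float], window: int = 5,
                    alert_level: float = 4.5) -> bool:
    """Flag a machine for service when the recent average of a condition
    signal (here, bearing vibration in mm/s) drifts above an alert level,
    instead of waiting for a fixed service interval."""
    if len(vibration_mm_s) < window:
        return False  # not enough history yet to judge the trend
    return mean(vibration_mm_s[-window:]) > alert_level

# Example: hourly vibration readings trending upward as a bearing wears.
readings = [2.1, 2.2, 2.3, 2.9, 3.8, 4.6, 4.9, 5.1]
if maintenance_due(readings):
    print("Schedule refurbishment before failure causes unplanned downtime.")
```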

The use of such tools can also set a firm on the first steps of a journey toward a business defined by “circular economy” principles, whereby a firm not only produces goods in a carbon-neutral fashion, but relies on refurbished or recycled inputs to manufacture them. Circularity is a progressive journey of many steps. Each step requires a viable long-term business plan for managing materials and energy in the short term, and “design-for-sustainability” manufacturing in the future.

IoT monitoring and measurement sensors deployed on manufacturing assets, and in production and assembly lines, represent a critical element of a firm’s efforts to implement circularity. Through condition-based maintenance initiatives, a company is able to reduce its energy expenditure and increase the lifespan and efficiency of its machinery and other production assets. “Performance and condition data gathered by IoT sensors and analyzed by management systems provides a ‘next level’ of real-time, factory-floor insight, which allows much greater precision in maintenance assessments and condition-refurbishment schedules,” notes Pierre Sagrafena, circularity program leader at Schneider Electric’s energy management business.

Global food manufacturer Nestle is undergoing digital transformation through its Connected Worker initiative, which focuses on improving operations by increasing paperless information flow to facilitate better decision-making. José Luis Buela Salazar, Nestle’s eurozone maintenance manager, oversees an effort to increase process-control capabilities and maintenance performance for the company’s 120 factories in Europe.

“Condition monitoring is a long journey,” he says. “We used to rely on a lengthy ‘Level One’ process: knowledge experts on the shop floor reviewing performance and writing reports to establish alarm system settings and maintenance schedules. We are now coming onto a ‘4.0’ process, where data sensors are online and our maintenance scheduling processes are predictive, using artificial intelligence to predict failures based on historical data that is gathered from hundreds of sensors often on an hourly basis.” About 80% of Nestle’s global facilities use advanced condition and process-parameter monitoring, which Buela Salazar estimates has cut maintenance costs by 5% and raised equipment performance by 5% to 7%.

Buela Salazar says much of this improvement is due to an increasingly dense array of IoT-based sensors (each factory has between 150 and 300), “which collect more and more reliable data, allowing us to detect even slight deteriorations at early stages, giving us more time to react, and reducing our need for external maintenance solutions.” Currently, Buela Salazar explains, the carbon-reduction benefits of condition-based maintenance are implicit, but this is fast changing.

“We have a major energy-intensive equipment initiative to install IoT sensors for all such machines in 500 facilities globally to monitor water, gas, and energy consumption for each, and make correlations with its respective process performance data,” he says. This will help Nestle lower manufacturing energy consumption by 5% in 2023. In the future, such correlation analysis will help Nestle conduct “big data analysis to carbon-optimize production-line configurations at an integrated level” by combining insights on material usage measurements, energy efficiency of machines, rotation schedules for motors and gearboxes, and as many as 100 other parameters in a complex food-production facility, adds Buela Salazar. “Integrating all this data with IoT and machine learning will allow us to see what we have not been able to see to date.”
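
As a rough, hypothetical illustration of the kind of correlation analysis Buela Salazar describes, the snippet below pairs a machine's hourly energy draw with one process parameter and measures how tightly the two track each other; the figures and variable names are invented for illustration and are not Nestle data.

```python
# Hypothetical sketch: correlating energy use with a process parameter.
import numpy as np

# Hourly readings: line throughput (units/hour) and energy draw (kWh).
throughput = np.array([110, 120, 118, 135, 140, 150, 148, 160])
energy_kwh = np.array([210, 228, 224, 251, 260, 283, 280, 298])

# Pearson correlation indicates how closely energy use tracks throughput;
# a weak or shifting correlation can flag an inefficient machine or setting.
r = np.corrcoef(throughput, energy_kwh)[0, 1]
print(f"Correlation between throughput and energy use: {r:.2f}")
```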
