Podcast: How pricing algorithms learn to collude

Algorithms now determine how much things cost. It’s called dynamic pricing: prices adjust to current market conditions in order to increase profits. The rise of e-commerce has made pricing algorithms an everyday presence—whether you’re shopping on Amazon, booking a flight or hotel, or ordering an Uber. In this continuation of our series on automation and your wallet, we explore what happens when a machine determines the price you pay.

In this episode we meet: 

  • Lisa Wilkins, UX designer 
  • Gabe Smith, chief evangelist, PriceFX
  • Aylin Caliskan, assistant professor, University of Washington
  • Joseph Harrington, professor of business, economics and public policy, University of Pennsylvania
  • Maxime Cohen, Scale AI Chair professor, McGill University

Credits:

This episode was reported by Anthony Green and produced by Jennifer Strong and Emma Cillekens. We’re edited by Mat Honan and our mix engineer is Garret Lang, with sound design and music by Jacob Gorski.

Full transcript:

[TR ID]

Jennifer: Alright, so I’m in an airport just outside New York City, looking at the departures board and seeing all these flights going different places… It makes me think about how we decide how much something should cost… like a ticket for one of these flights. Because where the plane is going is just part of the puzzle. The price of airfare is highly personalized, built on massive amounts of consumer data. Prices also change in real time based on things like our booking patterns, competitor prices, even the weather…

Jennifer: But it wasn’t always that way. There was a time… we could rely on the notion that “what you see is what you get”.

These days, prices are decided by algorithms. It’s called dynamic pricing… adjusting what things cost according to current market conditions in order to increase profits.

And it’s not just airlines that use this technique.

[SOT: Retailers Adopt ‘Dynamic Pricing’ – via YouTube]

TV news reporter: A practice started by the airlines, dynamic pricing has now been adopted by retailers, thanks to some new technology. 

[SOT: Amazon accused of surge pricing WCPO ABC 9, via YouTube]

TV news reporter: …and it’s becoming more and more common thanks to computer algorithms. You’ll find it with Disney World tickets, hotel rooms, Major League Baseball seats… and now, Amazon.

Jennifer: E-commerce has made these algorithms an everyday occurrence…

But what does that mean for consumers?

[SOT: ANTITRUST AND COMPETITION CONFERENCE Part 12 Day Two Panel Three “Amazon Phenomenon” – via YouTube]

Lina Khan, Director, Legal Policy, Open Markets Institute: Amazon changes prices two million times a day, you know, so what is a stable price for any of us and how will we know that we’re paying different prices? I think that’s going to be a key question going forward. 

Jennifer: I’m Jennifer Strong and this episode, what happens when a machine determines the price you pay. 

[SHOW ID] 

OC:…you have reached your destination.

[MUSIC]

[SOT: KIRO7 Seattle – Via web]

News Anchor 2: When gunfire rang out last night, people were looking for any way out. Tonight, some are saying safety went to the highest bidder. 

Jennifer: It was the middle of the evening commute, last January, when there was a shooting in downtown Seattle.

News Anchor 1: Rideshare companies are under fire tonight for raising prices while people were trying to flee the gunfire. Some riders say they were gouged. 

Lisa Wilkins: The bus that I would normally take would go down the street that the shooting happened on. So all of the buses that were going down that street, they all stopped. They didn’t get rerouted or anything, they just stopped. 

Jennifer: Lisa Wilkins works in tech, and her office is less than a block away from where that shooting happened.

Lisa Wilkins: I just decided I’ll grab an Uber or Lyft and, you know, take it home or take it back to my car, which is at a Park and Ride, which was about 17 miles away. And then when I opened the app, I then saw it was like a hundred dollars or something to get there when normally it would have been maybe 30 dollars.

Jennifer: When demand is high, the price of a ride with Lyft or Uber automatically gets more expensive. In emergencies, companies cap those prices once it’s clear what’s going on, and in this case they offered to reimburse riders who paid higher fares.
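To make that concrete, here’s a minimal sketch of how a surge multiplier with an emergency cap might work. The formula, thresholds, and cap value are illustrative assumptions, not Uber’s or Lyft’s actual logic.

```python
# A toy surge multiplier with an emergency cap. Illustrative only; not any
# ride-hail company's actual formula.

def surge_multiplier(ride_requests, available_drivers, cap=None):
    demand_ratio = ride_requests / max(available_drivers, 1)
    multiplier = max(1.0, demand_ratio)   # surge only when demand outstrips supply
    if cap is not None:                   # applied once an emergency is declared
        multiplier = min(multiplier, cap)
    return round(multiplier, 2)

print(surge_multiplier(300, 90))            # 3.33x during a demand spike
print(surge_multiplier(300, 90, cap=1.5))   # 1.5x once prices are capped
```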

But even though Lisa Wilkins’ job is to design apps with an eye on user experience, she says it still took her a moment to realize that what was happening to her was the work of a pricing algorithm.

Lisa Wilkins: At first, I was really angry because you want to take it personally, like they’re intentionally doing this. This is a shooting and they’re taking advantage of it. And then when I kind of was talking to another coworker about it. You know, we were still upset that it was going to cost so much to get anywhere, but we realized, like, this is price surging. This is a bot basically saying what the prices are going to be. And being a UX designer, I understand like there’s a lot of edge cases that you might not plan for that happen in your product.

Jennifer: And this can have some unintended results.

Gabe Smith: There was a book about fly genetics on Amazon. There were two competing algorithms that just kept looking at each other; one would increase the price a little bit, and the other one would increase the price a little bit on top of that. And they just kept going back and forth unchecked for, you know, many days. And it ended up with the price of this book being like $1.2 million, right.

Gabe Smith: My name is Gabe Smith and I’m the chief evangelist for PriceFX. And I have about 14 years of experience in price optimization and management. 

Jennifer: He uses AI and other tools to help companies decide what something should cost. He also thinks about how to avoid outliers like that million-dollar book about bugs.
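The failure mode is easy to reproduce. Here’s a minimal sketch of two naive repricing rules chasing each other, using multipliers close to those reported in the Amazon incident; the starting prices are made up.

```python
# Two naive repricing bots on the same listing. The multipliers are close to
# those reported in the Amazon fly-genetics-book incident; prices are made up.
price_a = 30.00   # seller A slightly undercuts seller B
price_b = 35.00   # seller B anchors well above seller A

for day in range(1, 31):
    price_a = round(0.998 * price_b, 2)   # A reprices against B
    price_b = round(1.270 * price_a, 2)   # B reprices against A
    print(f"day {day}: A=${price_a:,.2f}  B=${price_b:,.2f}")

# Each round the pair grows by about 0.998 * 1.270 ≈ 1.27, so prices climb
# exponentially until a human (or a sanity check) intervenes.
```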

Gabe Smith: So in the eighties really is when the computing power and the data availability got to the point where these techniques could start being leveraged. And really, it appeared first in the airline industries and then followed on in the other travel and leisure industries such as rental cars and hotels.  

Jennifer: Dynamic pricing can help companies know what to charge for products that expire, or are limited in supply. Like when a plane takes off… there’s no changing how many of those seats are filled. So, to drive the most revenue, airlines need to sell the greatest number of seats for the highest possible price. And to learn what that price is? They need to understand the nuances of passenger behavior and market demand. 
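As a toy version of that problem: given a demand estimate, pick the fare that maximizes expected revenue for the seats that remain. The linear demand curve and every number below are illustrative assumptions, not any airline’s model.

```python
# Toy revenue maximization for perishable inventory (airline seats).
# The demand model and parameters are illustrative assumptions.

def expected_demand(price, base=200.0, sensitivity=0.8):
    """Simple linear demand curve: the higher the price, the fewer bookings."""
    return max(0.0, base - sensitivity * price)

def best_price(seats_left, candidate_prices):
    """Choose the candidate fare that maximizes expected revenue,
    capped by the seats actually remaining."""
    def revenue(price):
        return price * min(seats_left, expected_demand(price))
    return max(candidate_prices, key=revenue)

print(best_price(seats_left=120, candidate_prices=range(50, 401, 10)))  # -> 120
```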

Gabe Smith: So that was really the first use of pricing optimization and artificial intelligence to drive pricing into a market. And since then, it’s you know really expanded in use across many different industries. We have a company, for example, that does dynamic pricing for their ski tickets based on the upcoming events, weather conditions, snow conditions, but we also have other customers that are selling electronics, chemicals. We have industrial manufacturing companies, distribution companies. Really, these techniques are gaining adoption in a wide variety of industries.

Jennifer: The key to making this all work is a rich data set on customers and what drives their willingness to pay. The more data… The more targeted prices can be for individuals. 

Gabe Smith: How they behave. What product you’re offering. Things like, what is the nature of the transaction or the quote that you’re doing? All those can be factored into your pricing optimization algorithms and influence what you’re going to offer. So if you have data like that, it can be actually fairly straightforward to implement pricing optimization. So we have customers where we’ve implemented things in as little as a couple months.

Jennifer: And he says these systems are getting better at managing complexity and balancing competing goals. 

Gabe Smith: So maybe I want to make sure that I’m always positioned in a certain way versus my competition, right? Or maybe I want to say, ‘Hey, I never want to increase pricing by more than 5% on anyone.’ Am I trying to maximize revenue, am I trying to maximize profit? Am I trying to maximize volume throughput? I could balance between those. So, what happens in organizations, you know, there’s competing objectives a lot of times. And so you can be guiding not only, okay, what’s my list price, but what’s the, you know, the negotiated price or promotion based on a customer product combination.
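A minimal sketch of those guardrails: an optimizer proposes a price, and business rules clamp it before anything is published. The specific rules and numbers are illustrative, not PriceFX’s implementation.

```python
# Guardrails applied on top of an optimizer's proposed price. The rules and
# numbers are illustrative, not any vendor's actual logic.

def apply_guardrails(proposed, current, competitor, max_increase=0.05):
    price = proposed
    # Never raise anyone's price by more than 5% in one step.
    price = min(price, current * (1 + max_increase))
    # Stay positioned just below the competition.
    price = min(price, competitor * 0.99)
    return round(price, 2)

# The optimizer wants $119, but the 5% rule caps the move at $105.
print(apply_guardrails(proposed=119.00, current=100.00, competitor=112.00))
```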

Jennifer: These constraints are important because left unconstrained, pricing algorithms can simply push prices higher.

Another issue? Making sure those prices don’t reinforce systemic bias. 

But this isn’t so straightforward. 

Gabe Smith: It could be that, you know, you don’t see one of those things explicitly, but they could be just beneath the surface in another attribute that you’re using. So if you’re using a zip code or you’re using the demographics in terms of income levels, you know, there might be systemic bias that’s in that data. So you really need to be thoughtful about how you build these things out and make sure you’re doing the right thing from an ethics perspective. And I think part of the acceptance is: Do I feel like as a consumer, I’m getting a good deal or a better deal in some cases as a result of this, or is it always to the provider’s benefit?

[MUSIC TRANSITION]

Aylin Caliskan: We know that big tech uses these individualized pricing algorithms widely and we don’t necessarily understand what is going on behind these systems or algorithms because they are black boxes. We only see the outcomes on an individual basis, basically the price we receive. And we don’t really have methods or data sets to systematically study price discrimination algorithms. 

Aylin Caliskan: I am Aylin Caliskan. I’m currently an assistant professor at the University of Washington and my research focuses on machine learning and artificial intelligence bias. 

Jennifer: A couple of years ago, the city of Chicago mandated that companies like Uber and Lyft release fare data to the public. This gave researchers access to millions of anonymized trips throughout the city. Caliskan compared prices against the demographics of each neighborhood, and what she found surprised her.

Aylin Caliskan: Our results show that neighborhoods that have younger residents or highly educated residents were paying significantly higher fare prices. And neighborhoods that have higher proportions of nonwhite residents, as well as impoverished neighborhoods, were also paying higher fare prices that were determined by these price discrimination algorithms.

Jennifer: Her team wants to know why this happens, but that’s hard without details about supply and demand, which aren’t made public.

Researchers are only able to get a subset of this data. 

Aylin Caliskan: Are residents in disadvantaged neighborhoods paying higher fare prices because of the characteristics of their neighborhoods? Or does the supply of drivers have an impact on fare pricing in these neighborhoods, where demand seems relatively low? If supply is even lower, relative demand would look higher, which might be increasing fare pricing. And the more transparency we have, the better the methods we can develop to study the disparate impact of these algorithms and their dynamics, and how they are learning from neighborhood transportation and traffic patterns.
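For a sense of what this kind of study involves, here’s a minimal sketch: join trip fares to neighborhood demographics and compare average fare per mile across groups. The file and column names are hypothetical placeholders, not the actual schema of Chicago’s public trip dataset.

```python
# Sketch of a fare-disparity analysis on public ride-hail data. File and
# column names are hypothetical placeholders, not the real dataset's schema.
import pandas as pd

trips = pd.read_csv("chicago_rideshare_trips.csv")        # hypothetical file
census = pd.read_csv("community_area_demographics.csv")   # hypothetical file

df = trips.merge(census, on="pickup_community_area")
df["fare_per_mile"] = df["fare"] / df["trip_miles"]

# Compare average fare per mile across neighborhood demographic quartiles.
df["nonwhite_quartile"] = pd.qcut(df["pct_nonwhite"], 4, labels=False)
print(df.groupby("nonwhite_quartile")["fare_per_mile"].mean())
```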

Jennifer: Which brings up another thorny issue: there aren’t really rules about this.

Aylin Caliskan: We need more policy and regulations so that we can get access to this dataset and keep studying this and understand how this might be impacting smart city planning as well as resource allocation, because if such data sets are used, for example, in driverless cars or resource allocation in smart cities, these biases might end up being perpetuated or potentially amplified in the future, causing all kinds of unexpected side effects that we would need to deal with in the future.

Jennifer: After the break, we find out what regulation might look like… and we learn how these algorithms might work in a grocery store.

But first, I want to tell you about an event called CyberSecure. It’s Tech Review’s cybersecurity conference and I’ll be there with my colleagues talking about ransomware and other important issues. You can learn more at Cyber Secure M-I-T dot com.

We’ll be right back… after this.

[MIDROLL] 

[MUSIC] 

Jennifer: Pricing algorithms can also help consumers… by personalizing products and recommendations… or providing insights to companies that help them design better products and services.

But these systems also present new challenges for those who regulate competition.  

Congress passed the first antitrust law over a century ago, but it wasn’t until 2015 that the government prosecuted its first antitrust case specifically targeting e-commerce. In that case, a man pleaded guilty to conspiring with other sellers to illegally fix the prices of posters he sold on Amazon, using an algorithm designed to coordinate price changes.

Joseph Harrington: The pricing algorithm would look around for the best or the lowest price of competing sellers, that is, competitors to those two online sellers. And then the two online sellers would set a slightly lower common price. So the two sellers were still competing against other firms in the market, but just weren’t competing against each other.  So instead of coordinating on a common price, they coordinated on a common pricing algorithm and that had the same effect of reducing competition.

Joseph Harrington: So I’m Joe Harrington. I’m professor of business, economics and public policy at the Wharton School, University of Pennsylvania. My research is in the area of collusion and cartels. 

Jennifer: The case involving the Amazon poster sellers is something that’s pretty close to traditional collusion, where otherwise competing businesses coordinate prices via direct, human-to-human communication.
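The scheme Harrington describes can be captured in a few lines: every cartel member runs the same rule, undercutting the cheapest outside rival while never undercutting a co-conspirator. This is an illustrative reconstruction, not the actual code from the case.

```python
# Sketch of the poster-sellers' scheme: undercut outside rivals, never each
# other. An illustrative reconstruction, not the code from the actual case.

CARTEL = {"seller_a", "seller_b"}   # the coordinating sellers

def cartel_price(all_offers: dict[str, float]) -> float:
    """Price slightly below the cheapest seller *outside* the cartel."""
    outside = [p for seller, p in all_offers.items() if seller not in CARTEL]
    return round(min(outside) * 0.99, 2)

offers = {"seller_a": 11.50, "seller_b": 11.50, "rival_x": 12.00, "rival_y": 12.80}
# Both cartel members land on the same price: below the rivals, but no longer
# competing with each other.
print(cartel_price(offers))   # -> 11.88
```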

But there’s growing research that pricing algorithms themselves could learn to form a kind of digital cartel of their own… and collude to raise prices without any human involvement. 

Joseph Harrington: Now, well let’s think about a manager deciding that they’re going to delegate the pricing decision to a self-learning algorithm. That self-learning algorithm is going to experiment with different pricing algorithms or pricing rules in the hope of finding ones that are more profitable. So they do end up with more profitable pricing rules. And the reason why they’re more profitable is that the self-learning algorithms have learned not to compete against one another.

Jennifer: And researchers in Italy have already found evidence of that happening in a simulated environment. 

Joseph Harrington: So they considered a very standard economic model of a market, one that’s been used by many economists for both theoretical and empirical work. And the question was whether the algorithms would be able to learn to collude in a fairly sophisticated and complex simulated environment. And the answer is very clearly yes: prices were routinely well above competitive prices, sometimes quite close to monopoly prices.

Jennifer: He says these self-learning algorithms behave in a way that mirrors human cartels. 

Joseph Harrington: Algorithms are setting a high price, above competitive prices, which then creates an incentive, at least in the short run, to set a lower price in order to pick up more market share and higher profits. What the self-learning algorithms have learned about the consequences of deviating by setting a lower price is that the other self-learning algorithm has adopted a pricing algorithm that will punish that behavior. So specifically, if one of them was to all of a sudden drop its price, the other self-learning algorithm’s pricing algorithm was trained to respond with a very low price. The prices would remain low for some time, but they would tend to work their way back up to the high collusive prices. So what we have here really is these self-learning algorithms learning that, okay, we’re going to set a high price, and the reason they don’t veer from that is they’ve learned that there’s going to be a retaliatory punishment by the other self-learning algorithm. And that’s exactly what we think of as collusion.
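The simulations Harrington describes used simple reinforcement learners. Below is a heavily stripped-down sketch in that spirit: two Q-learning sellers repeatedly pick prices, with each one’s state being the previous round’s price pair. The price grid, demand rule, and parameters are illustrative assumptions, and whether the agents actually settle above the competitive price is sensitive to such choices.

```python
# A stripped-down, illustrative Q-learning pricing duopoly. Whether it learns
# supra-competitive prices depends heavily on these (made-up) parameters.
import random

PRICES = [1.0, 1.5, 2.0]            # discrete price grid
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def profit(my_price, rival_price):
    """Toy demand: the cheaper seller takes the whole market; ties split it."""
    if my_price < rival_price:
        return my_price
    if my_price == rival_price:
        return 0.5 * my_price
    return 0.0

Q = [{}, {}]   # one Q-table per agent: state -> {price: estimated value}

def q_table(agent, state):
    return Q[agent].setdefault(state, {p: 0.0 for p in PRICES})

def choose(agent, state):
    if random.random() < EPS:         # explore
        return random.choice(PRICES)
    q = q_table(agent, state)         # exploit the best known price
    return max(q, key=q.get)

state = (1.0, 1.0)   # state = last round's pair of prices
for _ in range(200_000):
    p0, p1 = choose(0, state), choose(1, state)
    nxt = (p0, p1)
    for agent, mine, theirs in ((0, p0, p1), (1, p1, p0)):
        q, nq = q_table(agent, state), q_table(agent, nxt)
        q[mine] += ALPHA * (profit(mine, theirs) + GAMMA * max(nq.values()) - q[mine])
    state = nxt

print(state)   # the prices the pair settled into after training
```

In the published simulations, agents like these converged to prices well above the competitive level and answered a rival’s price cut with a temporary punishment phase, which is exactly the pattern Harrington describes.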

Jennifer: It’s still an open question as to whether this kind of thing could happen in a real market, with all its additional complexity. 

But the concept of automated collusion raises all sorts of legal questions. 

Joseph Harrington: If we go back to the example of the Amazon marketplace and the online poster sellers, well, it’s that type of collusion for which the legal framework is well designed. It’s designed for conspiracy, where competitors communicate and coordinate their conduct. The law is defined in terms of a meeting of minds, a conscious commitment to a common scheme. The idea is that there has been this communication, which has led to some mutual understanding among competitors to no longer compete. All of that is absent when competitors have adopted self-learning algorithms, as long as they did so independently. These self-learning algorithms don’t have understanding, much less mutual understanding, which is really what’s required in the context of the law.

Jennifer: And for now… there’s no one in charge of monitoring if these systems are playing by rules we deem fair.

Joseph Harrington: I mean, I think the potential legal response in the future would be to prohibit certain properties of pricing algorithms. If those were prohibited, there’d be an incentive for the firms themselves to monitor their pricing algorithms so as not to expose themselves to legal liability. But as of right now, there really is no one monitoring them. And certainly the firms have no incentive, I would say, to monitor them.

Jennifer: He says anti-competitive pricing algorithms could also come embedded in software… which might be used by companies competing against each other… without those companies even realizing it.

Joseph Harrington: And then the question is, well, what can be done about it? And here we are, once again, in somewhat murky legal territory, because conspiracy requires two or more actors, traditionally two or more competitors who have decided to no longer compete. But now we’re imagining that it’s really one actor, the third-party developer, who might design a pricing algorithm that is not very competitive. And if it can convince many firms in a market to adopt it, it will perform well for those firms, because it will result in higher prices and less price competition. Now, once again, that’s bad, but there’s no conspiracy, because there’s really just that one actor, the third-party developer, who’s promoting this.

Jennifer: And there is an example of that in the real world… in a study of German gas stations that began adopting a pricing algorithm.

Joseph Harrington: And the evidence is that average price-cost margins did go up in response to this, on the order of about 12%. But what was really striking was, if you looked at markets where there were just two stations, so just imagine a geographic market where there are just two stations competing. And what the study found was that if one of them adopted the pricing algorithm, there was really no noticeable effect on prices. But if both adopted, then there was a significant increase in price-cost margins, on the order of around 29%. So this is informative in terms of what these pricing algorithms are doing. If they were leading to just more efficient dynamic pricing, then you would’ve expected to see some effect even when just one station operator adopted it. But that’s not what’s found in the study. It’s only when both competitors adopted that you see an effect. And it’s a sizeable increase in price. So I think that’s something which is happening. And it’s a bit more concrete, and there are potentially more policy options for dealing with it. As opposed to the case of self-learning algorithms, which I think is a potential problem that we want to get ahead of.

Maxime Cohen: We used to be able to change prices every day or every month, but now prices can change every hour or in some applications, even every minute.

Maxime Cohen: My name is Maxime Cohen. I’m the Scale AI Chair professor at McGill University in Montreal, Canada and I’m also the co-director of the Retail Innovation lab.  

Jennifer: The past few years have seen an explosion of dynamic pricing practices… And personalized pricing is also increasingly common. 

In the future, dynamic pricing systems could be fully autonomous… and applied at an even larger scale. 

Which raises the question: How do we protect our privacy when our data is being used to determine how much we pay for things?

Maxime Cohen: So, the pricing algorithm at the end of the day should be based on non-personal attributes. For example, you can collect purchasing history, you can collect, potentially, the location of the users, the actions they took in the past, but you don’t want to use any type of personal attributes like names or gender or anything that is more personal.

Jennifer: Another question… where do we draw the line between fair and unfair pricing? 

Maxime Cohen: One needs to ask the question: Is it fair to offer different prices to different customers for the same product or the same service? And the answer to that question is actually not simple. These two topics of privacy and fairness are very delicate and, in my opinion, need careful regulation moving forward.

Jennifer: He says regulators should come together and make clear what data can be collected, stored and used to make pricing decisions. 

Maxime Cohen: For example, if Uber starts showing different prices based on the percent of battery you have in your phone when you order a ride, would that be okay? Would that not be okay? So regulators should come to the table and make a list of attributes that are reasonable to use for pricing decisions, and a blacklist of other attributes that should not be used for pricing decisions.

Jennifer: And it’s not just our online shopping carts at stake. Dynamic pricing algorithms could soon find a home in physical retail as well… in the form of electronic shelf labels. 

Maxime Cohen: You can actually change the price of specific products at specific times by simply modifying a single line of code and pressing one button. You change one line of code, and then you can deploy a price change at virtually zero cost. Now the only remaining question in physical retail is how customers will react to surge and dynamic pricing practices. If you think about it, prices will start going up in supermarkets during busy hours. If there is a time of the day when there are a lot of people in the supermarket, prices will go up. Similarly, prices will start going up when you have very low inventory for specific products. If you have less stock, prices will go up in order to, like, make sure that you optimize your profits. Now it’s not clear whether customers will be happy with and accepting of those types of practices that are already in place in the online world. It may be definitely profitable in the short run, but it may generate long-run losses, especially in terms of customer loyalty. So we need to do a lot of research to try to understand the power and the potential benefits of dynamic pricing for physical retail.
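A minimal sketch of the in-store logic Cohen is describing, where an electronic shelf label can be repriced per product per hour; the triggers and thresholds are illustrative assumptions.

```python
# Illustrative shelf-label repricing rule: raise prices when the store is busy
# or stock is low. Thresholds and surcharges are made-up assumptions.

def shelf_price(base_price, shoppers_in_store, units_in_stock):
    price = base_price
    if shoppers_in_store > 150:   # busy hours: demand surge
        price *= 1.10
    if units_in_stock < 20:       # scarce stock: ration by price
        price *= 1.15
    return round(price, 2)

# Busy store, nearly sold out: $3.50 becomes $4.43 on the shelf label.
print(shelf_price(base_price=3.50, shoppers_in_store=200, units_in_stock=12))
```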

[CREDITS]

Jennifer: This episode was reported by Anthony Green and produced by the two of us with Emma Cillekens. We’re edited by Mat Honan and our mix engineer is Garret Lang, with sound design and music by Jacob Gorski. 

Thanks for listening, I’m Jennifer Strong. 

[TR ID]

Sounds from:


Why I became a TechTrekker


[Photo: a group jumps into the air with snowy mountains in the background]


My senior spring in high school, I decided to defer my MIT enrollment by a year. I had always planned to take a gap year, but after receiving the silver tube in the mail and seeing all my college-bound friends plan out their classes and dorm decor, I got cold feet. Every time I mentioned my plans, I was met with questions like “But what about school?” and “MIT is cool with this?”

Yeah. MIT totally is. Postponing your MIT start date is as simple as clicking a checkbox. 

Sofia Pronina (right) was among those who hiked to the Katla Glacier during this year’s TechTrek to Iceland.

COURTESY PHOTO

Now, having finished my first year of classes, I’m really grateful that I stuck with my decision to delay MIT, as I realized that having a full year of unstructured time is a gift. I could let my creative juices run. Pick up hobbies for fun. Do cool things like work at an AI startup and teach myself how to create latte art. My favorite part of the year, however, was backpacking across Europe. I traveled through Austria, Slovakia, Russia, Spain, France, the UK, Greece, Italy, Germany, Poland, Romania, and Hungary. 

Moreover, despite my fear that I’d be losing a valuable year, traveling turned out to be the most productive thing I could have done with my time. I got to explore different cultures, meet new people from all over the world, and gain unique perspectives that I couldn’t have gotten otherwise. My travels throughout Europe allowed me to leave my comfort zone and expand my understanding of the greater human experience. 

“In Iceland there’s less focus on hustle culture, and this relaxed approach to work-life balance ends up fostering creativity. This was a wild revelation to a bunch of MIT students.”

When I became a full-time student last fall, I realized that StartLabs, the premier undergraduate entrepreneurship club on campus, gives MIT undergrads a similar opportunity to expand their horizons and experience new things. I immediately signed up. At StartLabs, we host fireside chats and ideathons throughout the year. But our flagship event is our annual TechTrek over spring break. In previous years, StartLabs has gone on TechTrek trips to Germany, Switzerland, and Israel. On these fully funded trips, StartLabs members have visited and collaborated with industry leaders, incubators, startups, and academic institutions. They take these treks both to connect with the global startup sphere and to build closer relationships within the club itself.

Most important, however, the process of organizing the TechTrek is itself an expedited introduction to entrepreneurship. The trip is entirely planned by StartLabs members; we figure out travel logistics, find sponsors, and then discover ways to optimize our funding. 

[Photo: two students soaking in a hot spring in Iceland]

COURTESY PHOTO

In organizing this year’s trip to Iceland, we had to learn how to delegate roles to all the planners and how to maintain morale when making this trip a reality seemed to be an impossible task. We woke up extra early to take 6 a.m. calls with Icelandic founders and sponsors. We came up with options for different levels of sponsorship, used pattern recognition to deduce the email addresses of hundreds of potential contacts at organizations we wanted to visit, and all got scrappy with utilizing our LinkedIn connections.

And as any good entrepreneur must, we had to learn how to be lean and maximize our resources. To stretch our food budget, we planned all our incubator and company visits around lunchtime in hopes of getting fed, played human Tetris as we fit 16 people into a six-person Airbnb, and emailed grocery stores to get their nearly expired foods for a discount. We even made a deal with the local bus company to give us free tickets in exchange for a story post on our Instagram account. 



The Download: spying keyboard software, and why boring AI is best



This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How ubiquitous keyboard software puts hundreds of millions of Chinese users at risk

For millions of Chinese people, the first software they download onto devices is always the same: a keyboard app. Yet few of them are aware that it may make everything they type vulnerable to spying eyes. 

QWERTY keyboards are inefficient for typing Chinese, as many Chinese characters share the same latinized spelling. As a result, many people switch to smart, localized keyboard apps to save time and frustration. Today, over 800 million Chinese people use third-party keyboard apps on their PCs, laptops, and mobile phones.

But a recent report by the Citizen Lab, a University of Toronto–affiliated research group, revealed that Sogou, one of the most popular Chinese keyboard apps, had a massive security loophole. Read the full story. 

—Zeyi Yang

Why we should all be rooting for boring AI

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. It hopes they could improve intelligence and operational planning. 

But those might not be the right use cases, writes our senior AI reporter Melissa Heikkila. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases. 

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. The DoD’s best bet is to apply generative AI to more mundane things like Excel, email, or word processing. Read the full story. 

This story is from The Algorithm, Melissa’s weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The ice cores that will let us look 1.5 million years into the past

To better understand the role atmospheric carbon dioxide plays in Earth’s climate cycles, scientists have long turned to ice cores drilled in Antarctica, where snow layers accumulate and compact over hundreds of thousands of years, trapping samples of ancient air in a lattice of bubbles that serve as tiny time capsules. 

By analyzing those cores, scientists can connect greenhouse-gas concentrations with temperatures going back 800,000 years. Now, a new European-led initiative hopes to eventually retrieve the oldest core yet, dating back 1.5 million years. But that impressive feat is still only the first step. Once they’ve done that, they’ll have to figure out how they’re going to extract the air from the ice. Read the full story.

—Christian Elliott

This story is from the latest edition of our print magazine, set to go live tomorrow. Subscribe today for as low as $8/month to ensure you receive full access to the new Ethics issue and in-depth stories on experimental drugs, AI assisted warfare, microfinance, and more.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How AI got dragged into the culture wars
Fears about ‘woke’ AI fundamentally misunderstand how it works. Yet they’re gaining traction. (The Guardian)
+ Why it’s impossible to build an unbiased AI language model. (MIT Technology Review)
 
2 Researchers are racing to understand a new coronavirus variant 
It’s unlikely to be cause for concern, but it shows this virus still has plenty of tricks up its sleeve. (Nature)
+ Covid hasn’t entirely gone away—here’s where we stand. (MIT Technology Review)
+ Why we can’t afford to stop monitoring it. (Ars Technica)
 
3 How Hilary became such a monster storm
Much of it is down to unusually hot sea surface temperatures. (Wired $)
+ The era of simultaneous climate disasters is here to stay. (Axios)
+ People are donning cooling vests so they can work through the heat. (Wired $)
 
4 Brain privacy is set to become important 
Scientists are getting better at decoding our brain data. It’s surely only a matter of time before others want a peek. (The Atlantic $)
+ How your brain data could be used against you. (MIT Technology Review)
 
5 How Nvidia built such a big competitive advantage in AI chips
Today it accounts for 70% of all AI chip sales—and an even greater share for training generative models. (NYT $)
+ The chips it’s selling to China are less effective due to US export controls. (Ars Technica)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)
 
6 Inside the complex world of dissociative identity disorder on TikTok 
Reducing stigma is great, but doctors fear people are self-diagnosing or even imitating the disorder. (The Verge)
 
7 What TikTok might have to give up to keep operating in the US
This shows just how hollow the authorities’ purported data-collection concerns really are. (Forbes)
 
8 Soldiers in Ukraine are playing World of Tanks on their phones
It’s eerily similar to the war they are themselves fighting, but they say it helps them to dissociate from the horror. (NYT $)
 
9 Conspiracy theorists are sharing mad ideas on what causes wildfires
But it’s all just a convoluted way to try to avoid having to tackle climate change. (Slate $)
 
10 Christie’s accidentally leaked the location of tons of valuable art 🖼📍
Seemingly thanks to the metadata that often automatically attaches to smartphone photos. (WP $)

Quote of the day

“Is it going to take people dying for something to move forward?”

—An anonymous air traffic controller warns that staffing shortages in their industry, plus other factors, are starting to threaten passenger safety, the New York Times reports.

The big story

Inside effective altruism, where the far future counts a lot more than the present


VICTOR KERLOW

October 2022

Since its birth in the late 2000s, effective altruism has aimed to answer the question “How can those with means have the most impact on the world in a quantifiable way?”—and supplied methods for calculating the answer.

It’s no surprise that effective altruism’s ideas have long faced criticism for reflecting white Western saviorism, alongside an avoidance of structural problems in favor of abstract math. And as believers pour even greater amounts of money into the movement’s increasingly sci-fi ideals, such charges are only intensifying. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Watch Andrew Scott’s electrifying reading of the 1965 commencement address ‘Choose One of Five’ by Edith Sampson.
+ Here’s how Metallica makes sure its live performances ROCK. ($)
+ Cannot deal with this utterly ludicrous wooden vehicle.
+ Learn about a weird and wonderful new instrument called a harpejji.





Why we should all be rooting for boring AI



This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’m back from a wholesome week off picking blueberries in a forest. So this story we published last week about the messy ethics of AI in warfare is just the antidote, bringing my blood pressure right back up again. 

Arthur Holland Michel does a great job looking at the complicated and nuanced ethical questions around warfare and the military’s increasing use of artificial-intelligence tools. There are myriad ways AI could fail catastrophically or be abused in conflict situations, and there don’t seem to be any real rules constraining it yet. Holland Michel’s story illustrates how little there is to hold people accountable when things go wrong.  

Last year I wrote about how the war in Ukraine kick-started a new boom in business for defense AI startups. The latest hype cycle has only added to that, as companies—and now the military too—race to embed generative AI in products and services. 

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. 

The department sees tons of potential to “improve intelligence, operational planning, and administrative and business processes.” 

But Holland Michel’s story highlights why the first two use cases might be a bad idea. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases.  

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. Everyone agrees that humans should make the final call, but that is made harder by technology that acts unpredictably, especially in fast-moving conflict situations. 

Some worry that the people lowest on the hierarchy will pay the highest price when things go wrong: “In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the ‘decision’ will absorb the blame and protect everyone else along the chain of command from the full impact of accountability,” Holland Michel writes. 

The only ones who seem likely to face no consequences when AI fails in war are the companies supplying the technology.

It helps companies when the rules the US has set to govern AI in warfare are mere recommendations, not laws. That makes it really hard to hold anyone accountable. Even the AI Act, the EU’s sweeping upcoming regulation for high-risk AI systems, exempts military uses, which arguably are the highest-risk applications of them all. 

While everyone is looking for exciting new uses for generative AI, I personally can’t wait for it to become boring. 

Amid early signs that people are starting to lose interest in the technology, companies might find that these sorts of tools are better suited for mundane, low-risk applications than solving humanity’s biggest problems.

Applying AI in, for example, productivity software such as Excel, email, or word processing might not be the sexiest idea, but compared to warfare it’s a relatively low-stakes application, and simple enough to have the potential to actually work as advertised. It could help us do the tedious bits of our jobs faster and better.

Boring AI is unlikely to break as easily and, most important, won’t kill anyone. Hopefully, soon we’ll forget we’re interacting with AI at all. (It wasn’t that long ago when machine translation was an exciting new thing in AI. Now most people don’t even think about its role in powering Google Translate.) 

That’s why I’m more confident that organizations like the DoD will find success applying generative AI in administrative and business processes. 

Boring AI is not morally complex. It’s not magic. But it works. 

Deeper Learning

AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

Amid all the chatter about ChatGPT, artificial general intelligence, and the prospect of robots taking people’s jobs, regulators in the EU and the US have been ramping up warnings against AI and emotion recognition. Emotion recognition is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings. 

But why is this a top concern? Western regulators are particularly concerned about China’s use of the technology, and its potential to enable social control. And there’s also evidence that it simply does not work properly. Tate Ryan-Mosley dissected the thorny questions around the technology in last week’s edition of The Technocrat, our weekly newsletter on tech policy.

Bits and Bytes

Meta is preparing to launch free code-generating software
A version of its new LLaMA 2 language model that is able to generate programming code will pose a stiff challenge to similar proprietary code-generating programs from rivals such as OpenAI, Microsoft, and Google. The open-source program is called Code Llama, and its launch is imminent, according to The Information. (The Information)

OpenAI is testing GPT-4 for content moderation
Using the language model to moderate online content could really help alleviate the mental toll content moderation takes on humans. OpenAI says it’s seen some promising first results, although the tech does not outperform highly trained humans. A lot of big, open questions remain, such as whether the tool can be attuned to different cultures and pick up context and nuance. (OpenAI)

Google is working on an AI assistant that offers life advice
The generative AI tools could function as a life coach, offering up ideas, planning instructions, and tutoring tips. (The New York Times)

Two tech luminaries have quit their jobs to build AI systems inspired by bees
Sakana, a new AI research lab, draws inspiration from the animal kingdom. Founded by two prominent industry researchers and former Googlers, the company plans to make multiple smaller AI models that work together, the idea being that a “swarm” of programs could be as powerful as a single large AI model. (Bloomberg)

