
Taxing digital advertising could help break up big tech



For the past several years, economists and government leaders have regularly sounded alarms about the dangers of big tech monopolies. On her 2020 campaign website, for example, Senator Elizabeth Warren said “big tech companies have too much power, too much power over our economy, our society, our democracy.” In the months since the election, politicians on both the left and the right have expressed concerns about how to encourage competition and innovation among the big tech leaders, and even how to hold onto democratic ideals in the face of digital misinformation and conspiracy theories.

The challenge with a company like Facebook is that its business model actively encourages tribalism and anger, which is not the way markets usually work, says Paul Romer, an economics professor at New York University who previously served as the chief economist of the World Bank and was the co-recipient of the 2018 Nobel Prize in Economic Sciences. “When economists defend the market, we have this very simple idea in mind, where I as a buyer give something and get some good back,” he says. “None of those features are characteristic of this new market for digital services, where advertising is like the hidden method of capturing compensation for these firms.”

Users, he says, “are being manipulated in ways that they don’t fully understand.”

Regulation won’t work because big tech firms are too powerful, Romer maintains, and traditional antitrust laws are not well suited to dealing with this problem. But a progressive tax on digital advertising revenue, passed by state legislatures, could create a unique incentive for companies such as Google and Facebook to split up their businesses, and could discourage growth by acquisition.

Such a progressive tax model, however, needs to be aggressive: “The kind of tax that I think would create a big incentive to change at, say, Google and Facebook, the two biggest firms in this market, has to be a tax where the average tax rate they pay right now, given their size, is 35% of their revenue.”

Show notes and links:

“Taxing Digital Advertising,” Paul Romer, May 1, 2021

“Maryland Breaks Ground with Digital Advertising Tax,” National Law Review, March 17, 2021

“Once Tech’s Favorite Economist, Now a Thorn in Its Side,” Steve Lohr, New York Times, May 20, 2021

Full transcript:

Laurel Ruma: I’m Laurel Ruma from MIT Technology Review, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is taxing digital advertising. Can taxes specifically aimed at breaking up big tech be levied to encourage competition and innovation, and help democracy? The five largest tech companies (Facebook, Amazon, Apple, Alphabet/Google, and Microsoft) are worth a combined $7 trillion. What economic efficiencies can be gained in the fight for fairness? Two words for you: rethinking capitalism.

My guest is Paul Romer, an economics professor at New York University who served as the chief economist of the World Bank. Paul was the co-recipient of the 2018 Nobel Prize in Economic Sciences for his work integrating technological innovations into long-run macroeconomic analysis. For the first time, this work integrated ideas and innovation into economic models and clarified the societal benefits that are possible when people come together to collaborate in new ways.

This episode of Business Lab is produced in association with Omidyar Network.

Welcome to The Business Lab, Paul.

Paul Romer: It’s good to be here.

Laurel: United States Senator Elizabeth Warren said, and I quote, “Big tech companies have too much power, too much power over our economy, our society, our democracy.” What is the danger of monopolies, of these large powerful companies?

Paul: That’s a well-crafted sentence by Senator Warren because it ends on the most important point. The real danger here is the threat to our democracy. The second most important one is the threat to the social fabric that determines our quality of life. One of the problems with economics and the way it has approached antitrust is that it has neglected those two issues and focused on very narrow questions: Are firms charging too much for some service? And does that mean that some people aren’t using as much of it as they could? But that captures only a small fraction of the damage that’s being done by having firms that are so large, and firms that are using a particular business model, this model based on targeted digital advertising, which has created so many bad incentives, and which creates such unusual risks for our democratic system.

Laurel: What are some of those risks?

Paul: The nature of the advertising model is that these firms want to keep people engaged watching the screen, so that they see more ads. Facebook discovered, and their research has been published on this, that if they could create more contention, more animosity, more anger, people would stay engaged for a longer period of time. And so we’ve got a business model which is actively encouraging some of the most damaging sides of human nature, this tribalism, this anger, this tendency to treat your opponent as an enemy who’s almost inhuman. So this is not the way markets usually work. When economists defend the market, we have this very simple idea in mind, where I as a buyer give something, I give money to a seller. I get some good back. And then if I don’t like what I get back, I can take my business elsewhere. None of those features are characteristic of this new market for digital services, where advertising is like the hidden method of capturing compensation for these firms. And users are being manipulated in ways that they don’t fully understand.

Laurel: So what kind of regulatory actions could have or should have been taken to confront the growth of some of these enormous companies?

Paul: To be honest, push back if you don’t like this answer, but I tend to like to look forward. We could look at decisions that we made in the past that were a mistake. But I think the really important ones are: What should we do now?

Laurel: To go ahead and challenge that, is it something that needs to be looked at perhaps more frequently? I mean, do we have to wait until something really bad happens, until an election is almost overthrown?

Paul: Well, I will say I think we’ve been negligent. Economists and people who shape opinion, people who worry about policy, I think we’re guilty of gross negligence in letting this problem fester and become so bad. So I think it’s very clear to me that we need to do something to stop the trajectory that we’re on. And I think it’s a huge mistake on all of our parts that we didn’t act sooner. But the real question is: What do we do now?

Laurel: There are two issues here, right? One is the way these enormous companies make their money, and the other is the sheer size of these companies.

Paul: Well, of those two, I think this business model, based on targeted digital advertising, has created these enormous incentives for spying on people and collecting information. A few years ago, I started saying that these firms know more about me than the Stasi knew about people in East Germany. And that was kind of like a controversial thing to say back then. Now everybody just accepts that. They think this is just the inevitable consequence of the market and technology. But they’ve lost the outrage, and they’ve lost the sense of how dangerous it is to let any small group of people have that much information that they can use to manipulate us.

Laurel: We’ve fallen into this trap of thinking, “Well, we use these services for free, so giving them a little bit of my data, I’m okay with.” But that’s not really what we’re talking about anymore, is it?

Paul: I think this one is a tricky one because by and large, the cost from, say, each person letting these companies have all this information is not something that each individual bears. It’s really a cost to society, so letting them have information from all of us means that they have enormous monopoly power. They can collect enormous returns and accumulate this enormous amount of wealth that you described. But it also gives them the ability to, for example, display targeted political ads, where one demographic group is being shown a message from one candidate that the rest of us never see. And those ads, just like the strategy for engagement, those ads often appeal to animosity, tribalism, anger. Again, we’re using advertising to enhance, to develop the worst side of human nature. And you don’t have to look very far in history to see how bad things can turn out when you amplify and normalize this very ugly, angry side of our instincts about us versus them.

Laurel: A slight shift: It seems as soon as we as a society identify something as too big to fail, it fails, causing unknown and often catastrophic outcomes. I’m thinking of Boeing as an example. So what do you think about Boeing and how large it’s become and what that actually means?

Paul: After the 2008 financial crisis, I wrote a paper saying that the FAA, combined with the NTSB, the National Transportation Safety Board, those two agencies were the gold standard for regulation. We should be trying to have a similar kind of structure for regulating financial markets. Well, fast forward a decade and a half, and what’s happened is that Boeing, as this concentrated interest, was able to work through the Congress and cite the messages from economists about how regulation slows down innovation. And Boeing managed to eviscerate what used to be this very effective regulatory system at the FAA with some oversight by the NTSB.

And then Boeing, as a result, because there was no regulatory oversight, built a real kludge of an airplane that turned out to be incredibly dangerous and killed people. So it’s a story of the erosion of regulatory capacity that was achieved through pretty straightforward means, for example, just cutting the budget or limiting the budget at the FAA, so they couldn’t hire enough people to do the job they were assigned to do, to regulate Boeing. So this was a case where, by undercutting the regulation, Boeing hurt its workers, hurt its shareholders, killed people. It was a really terrible turn of events, but I think it’s a caution for us, because people like Facebook are saying, “Well, let’s just have some regulators that regulate the tech firms.”

What the Boeing episode tells us is that a firm that’s strong enough can actually corrupt and eviscerate any regulatory system, and can often capture those regulators. So I’m very pessimistic that any regulatory body can actually rein in and control these firms. And of course, I think that’s why Facebook is advocating for regulation because they know that’s the measure that would leave them in the strongest position. So when I started thinking, well, what can we do about these firms? I started from the very beginning and said, “We’ve got a system with checks and balances, with a kind of executive branch, where regulators sit. You’ve got the judiciary that hears antitrust cases. And you’ve got the legislature.” Which of these three systems is the one to use to try and deal with the problems that we’re facing?

I concluded that I think regulators would just not work because the firms we’re dealing with are already way too powerful. And I also, this is a separate point that we could explore, but I also think that the judiciary and antitrust, traditional antitrust laws, are not well suited to dealing with this problem. So the way forward, it seemed to me, was for us as voters to say to our legislators, “We don’t want to live in a society like this, where a few individuals have so much power, and where they’re using that power to kind of undermine the quality of social life and threaten our democracy.” So if we said that to our legislators, we’d tell the legislators, “Pass a law that stops this bad behavior.” And then the tax that I proposed was a measure that legislatures could pass that could do a lot to solve the problems that we’re facing.

Laurel: Let’s talk a little bit about that. You mentioned a progressive tax on advertising. How would that work?

Paul: When you impose a tax, you have to anticipate that people will do things to avoid paying tax. So I designed a tax where the things they would do to try and avoid paying tax are exactly the things we want them to do. So we want this tax to be progressive. The bigger the total advertising revenue the firm collects, the higher the tax rate. So if one of these firms splits itself in two, like if Facebook were to spin Instagram out, the total tax bill for the two firms would be smaller when they’re separate compared to when it’s part of one combined entity. So the progressivity in the tax encourages split ups, spin outs. It discourages growth by acquisition.

The other thing is that I suggested it be a tax imposed on revenue from digital advertising. So if these firms don’t want to pay this tax, they could shift to a subscription model, the kind of model that Netflix uses, or a service like Duolingo uses, so that people actually pay something to get access to some valuable service. So you can do this, but this tax has to be big enough to create a real stick that if you don’t do something to change, you’re going to pay a lot of tax to the government if you stick with this very damaging model.
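To make the split-up incentive concrete, here is a minimal sketch of how such a bracket schedule could work. The brackets and dollar figures below are purely hypothetical, chosen only to illustrate the mechanism Romer describes; no actual bill uses these numbers.

```python
# A minimal sketch of a progressive tax on digital ad revenue.
# The brackets are hypothetical: (threshold in $B, marginal rate above it).
BRACKETS = [
    (0, 0.00),   # first $10B of ad revenue: untaxed
    (10, 0.10),  # $10B to $30B: 10% marginal rate
    (30, 0.30),  # $30B to $60B: 30% marginal rate
    (60, 0.60),  # above $60B: 60% marginal rate
]

def ad_tax(revenue_b: float) -> float:
    """Tax owed (in $ billions) on ad revenue under the bracket schedule."""
    tax = 0.0
    for i, (lo, rate) in enumerate(BRACKETS):
        hi = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if revenue_b > lo:
            tax += (min(revenue_b, hi) - lo) * rate
    return tax

one_firm = ad_tax(100)       # a single firm with $100B in ad revenue
two_firms = 2 * ad_tax(50)   # the same revenue split across two firms
four_firms = 4 * ad_tax(25)  # ...or across four independent firms
print(f"one: ${one_firm:.0f}B  two: ${two_firms:.0f}B  four: ${four_firms:.0f}B")
# -> one: $35B  two: $16B  four: $6B
```

Because the top rates apply only to revenue above the thresholds, splitting the same revenue across more firms keeps it out of the highest brackets and the combined bill falls; the tax code itself, rather than a judge, supplies the incentive to break up.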

Laurel: I was absolutely captivated by this model and the fact that it’s real in the US state of Maryland. The state legislature is considering legislation, Senate Bill 2, to create an advertising tax on tech companies, and it works like this: a tax of somewhere between 2.5% and 10% would be applied to digital ad sales in the state of Maryland, based on IP addresses. And that would raise a huge amount of money, something like $250 million annually. So you were part of that effort to really push this through the legislature. What did you say in your testimony to support this idea?

Paul: Just to recap where we are, they’ve actually passed this bill. The governor vetoed it at the end of last year, but the legislature overrode the veto, so this bill is now law in Maryland. It is going to be challenged by these tech companies, usually operating through some front organizations that they’ll use to challenge it in court. So we have some ways to go in this fight; the fight’s not over. As for the message I gave to the legislators: first, I wrote an op-ed in the New York Times, which somebody there read, and then they reached out to me about pursuing this idea. They were interested in this partly because they had made a commitment to significantly improve their educational system and they were looking for sources of revenue.

But they also understood the problems with big tech, and understood the appeal of going after a tax which actually targets harmful behavior. To set expectations, I think there’s a chance that the current bill will be overturned in court. There are going to be a lot of legal resources deployed to try and fight this. And one of the things I told the legislators in private is to just expect that the first bill might be overturned. Watch and see what this somewhat politicized federal judiciary is going to say is wrong with the bill, and be ready to pass a new version that avoids the problems that they complain about. So this is a longer-term battle plan we have to have, and we shouldn’t be worried about setbacks along the way.

The other point I made to them was that most taxes discourage good things. If you imposed a tax on going to school, fewer people would go to school. That’d be a bad tax. But this is a tax which discourages a bad thing, and that’s the most important kind of tax to pursue: it raises the revenue you need while discouraging bad behavior. I liken it to my co-recipient for the prize, Bill Nordhaus’ idea of a tax on carbon emissions, which has the same motivation, which is to stop people from doing something which is very harmful for all of us.

The other thing is that the tax rates that they thought were politically feasible in Maryland are frankly too low to make much difference for these tech firms. Even if every state in the United States, or the federal government adopted a tax at the rates that they’re looking at, progressive from 0%, to 2%, to 10%, this would be kind of small change for these tech companies. So I have a new proposal that I’m about to launch for the national government, where we impose taxes that get much higher and which I think really are strong enough to change behavior in these tech firms. And one other thing we might want to talk about is why it’s so important to tax revenue rather than corporate income because the corporate income tax is a deeply flawed and failing way to try and tax corporations.

Laurel: That seems to be an issue in the United States that’s coming up more and more, as companies look for creative ways to avoid paying taxes on their corporate income.

Paul: It’s really a losing battle because conceptually, income is the difference between revenue and cost. Revenue and cost are incurred in different places, so you can’t say, “Where is income earned?” That creates problems at the level of principle; I mean, forget about how hard it is to get the information you need to impose this tax. Even if you had all the information you wanted, reasonable people can differ about where income is earned because it’s a difference of two things. That creates all this opportunity for firms to shift the legal location of income and to move income to low-tax jurisdictions, so you get a race to the bottom, where different jurisdictions compete by offering lower and lower corporate tax rates.

Some people think you can patch this and try and limit this behavior. I think you’re just fighting a losing battle, and we really need to switch to something like taxing revenue because we know where revenue is collected. We know that there are ads that these firms get paid to serve up, that are shown to people in Maryland, or in Massachusetts, or California. And so this empowers each of those states to tax revenue that is incurred in those states. And they don’t face this issue of a race to the bottom.
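A sketch of what that apportionment might look like in practice, with hypothetical per-state rates and made-up revenue figures; the mechanism, attributing each ad impression’s revenue to the state where the viewer saw it, is the point, not the numbers:

```python
# Hypothetical apportionment sketch: each ad impression's revenue is
# attributed to the state where the ad was shown (e.g., inferred from the
# viewer's IP address), and each state applies its own chosen rate.
# All figures below are made up for illustration.
ad_revenue_by_state = {"MD": 0.8, "MA": 1.1, "CA": 9.5}  # $B of ads shown in each state
state_rates = {"MD": 0.06, "MA": 0.05, "CA": 0.08}       # each state's own rate

for state, revenue in ad_revenue_by_state.items():
    owed = revenue * state_rates[state]
    print(f"{state}: ${owed:.3f}B owed to that state")
```

Unlike income, which can be booked wherever taxes are lowest, the location of an ad impression is fixed, which is why this design avoids the race to the bottom Romer describes.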

Laurel: We’re increasing taxes, but we’re doing it for a good reason: education needs more money. We’re also doing it because these large companies aren’t paying their fair share. 10% may sound like a large number, but not when you’re talking about hundreds of billions of dollars. But this is a start, right? So the Omidyar Network is looking at how you actually implement various policy ideas to rebalance this inequity in the data economy. This is one solution. Can you think of others? Are you looking at others?

Paul: It’s important to emphasize that this will not address all of the issues we face associated with firms that are so large and so powerful. Apple, for example, does not capture much revenue through advertising, but it’s got a very strong market position, so people may want to think about other measures that might limit its power. I frankly am not as worried about Apple because Apple isn’t destroying our democracy and undermining the quality of life. But there are traditional reasons why you might not want firms that are so powerful.

Amazon, for example, is now collecting a growing share of its revenue through advertising, but it also has a very strong position in just being the platform for matching buyers and sellers. So it would still be a very powerful force, even if it just abandoned digital advertising revenue. So in both of these cases, there’s room to think about other measures that could deal with the traditional problems of firms that are too large. In terms of the specific measures that one could employ, the one part of antitrust law that’s been significantly underutilized and should be brought back is merger review. It should be much harder for one of these dominant firms to acquire a new firm that could potentially grow into a competitor, such as the Facebook purchase of Instagram or WhatsApp.

In a properly functioning system, those mergers and acquisitions should not have been allowed, so that’s an easy thing to do. The part of antitrust which I think is just doomed is trying to bring a lawsuit and charge them with committing a crime, and then get a judge to agree to break them up based on the “crime” they’ve committed. This is a very crude way to try and limit size, and it puts judges in a position which is really untenable for them. It is a very complicated type of penalty to impose, and so the tendency has been, even in cases where there’s a clearly demonstrated violation of the antitrust law, like there was with Microsoft, for judges to balk. In that case, the appeals courts overturned the breakup remedy that the Justice Department had proposed.

And to be clear, I worked with the Justice Department in crafting this remedy. The appeals courts refused to implement something that they felt was so aggressive and so intrusive. And I think that’s the problem we’ll face with any lawsuit that tries to now force Facebook to spin out Instagram. So the only way I see to get those two things separate now is to create a very strong incentive, so that they’ll save $10 billion a year in taxes if they split it into two companies instead of running it as one company.

Laurel: So perhaps we should get down into these details about a progressive tax on advertising. If that is one possible lever, how does that progressive tax work? And would it necessarily be federal, or could it be state by state, by municipality?

Paul: I think that it could be either. And this is why it’s so important to pick revenue because different jurisdictions could make their own decisions on this. This has implications internationally as well. The US could decide how much it wants to tax ad revenue, but Canada could make its own decision on that. Germany and France could make their own decisions. So we want to empower all of these different jurisdictions to make their own decisions in response to the wishes of their citizens and voters. So we want to get away from a system where you have to have these international tax treaties where everybody’s agreeing to do the same thing to have the tax system work, and that’s really where we are with the corporate income tax.

But in terms of the level of taxation, I want to be clear about this. The kind of tax that I think would create a big incentive to change at, say, Google and Facebook, the two biggest firms in this market, has to be a tax where the average tax rate they pay right now, given their size, is on the order of 35%. So 35% of their revenue would be collected by the government if they don’t change, if they just stick with business as usual. And to get to that average tax rate, if your tax rate is gradually increasing as revenue goes up, you start with a big bracket where there’s no tax at all, and then it’s a 5% tax, a 10% tax, and so on. To get an average tax rate of 35%, you need marginal tax rates, the tax on the highest bracket of revenue, of 50%, 60%, even approaching 70%.
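To see why the marginal rates must run so far ahead of the 35% average, here is the arithmetic under one hypothetical schedule (the same illustrative brackets as in the sketch above, not rates from any actual proposal):

```python
# Arithmetic check with the illustrative brackets used earlier:
# 0% on the first $10B, 10% up to $30B, 30% up to $60B, 60% beyond.
revenue = 100.0  # $B, roughly the scale Romer describes for the biggest firms
tax = (30 - 10) * 0.10 + (60 - 30) * 0.30 + (revenue - 60) * 0.60
print(tax, tax / revenue)  # -> 35.0 0.35
```

The untaxed and low-rate brackets dilute the average, so hitting a 35% average at $100 billion of revenue already requires a 60% top marginal rate; a wider untaxed bracket or a higher average target pushes the required top rate toward 70%.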

So this needs to be a very aggressive tax. People will scream like stuck pigs when I go public, as I guess I’m doing right now, about what these tax rates need to be. But there are a couple of easy ways to respond to this. I mean, one is, these companies will say, “If you took 30% or 40% of our revenue, you would kill us.” Well, that’s actually not true: taking 30% or 40% of their revenue would just move them back to what they were earning in 2019, 2020. They’ve experienced enormous growth. Everybody thought they were viable in 2018, 2019, 2020, so it can’t be true that if you take away 30% of their revenue, revenue that was great three years ago is suddenly impossible to live on in this new model. And of course, this is because their costs are mainly fixed costs. They can just scale up how many of these ads they serve up without incurring a lot more cost.

So they could certainly be viable if they had to pay 30% or 40% of their revenue to the government. And this would actually collect a reasonable amount of revenue that could be used, say, to finance the infrastructure bill: $50 billion, $60 billion and growing per year in tax revenue. The other thing about a tax this aggressive is that a firm at the scale of Google or Facebook might pay $12 billion to $15 billion in tax a year. If they split themselves in half, that’ll go down dramatically, maybe from $12 billion or $15 billion down to $6 billion. And if they split themselves into four pieces, the total tax bill across all of the surviving firms could be as low as $2 billion.

And the reason to be so aggressive about this is that if these companies scream as they will, the answer is just, listen, guys, if you don’t want to pay the tax, just switch to a subscription model. Just don’t use the ads. Or if you don’t want to pay the tax, just split yourself up into independent companies. So I think we have to be ready to tolerate and remain firm in the face of these screams of outrage about high marginal tax rates and just insist that, listen, we are the citizens in this country. And in a democracy, we get to decide what kind of society we’re going to live in. And we don’t want to live in a society that lets you continue to do what you’re doing right now.

Laurel: And those are certainly unique characteristics of the data economy. So we now have these issues of: How do we reduce disinformation? How do we increase privacy? Rebalancing the wealth and reducing the economic dependency on these large firms, to think that you could break up one of them into four different companies and still have each one be worth $2 billion at least is quite something else.

Paul: Worth probably, I don’t know, $25 billion or more. But they’d collectively still be paying $2 billion a year, say, in tax.

Laurel: I’m sorry. You’re correct. Thank you.

Paul: There’s a movie I like, Chinatown, with Jack Nicholson, where at the very end of the movie something terrible happens: an innocent woman is killed. And Nicholson is devastated. And some friend says to him, “Forget it, Jake. It’s Chinatown.” The message is, you can’t do anything. This is so complicated. The forces you’re fighting are so powerful. You can’t do anything about this. Well, this is kind of the message economists have been sending for decades now. It’s the market, forget it. It’s the market. You can’t control what the market does. If you’ve got these firms that are now dominating political advertising, forget about it. Forget it. You can’t do anything.

That’s just so false. As citizens, we can decide we don’t want them to have that kind of power in our markets for political advertising. We don’t want all of these secret targeted ads that are inflaming the passions. And so the economists need to stop encouraging this learned helplessness amongst the citizenry, and we need to be saying, “It is up to us to decide what kind of a society we want to live in.” And if we make a decision, we get our legislators to make a change.

And by the way, I think that despite the polarization we’re seeing right now, this issue might be one where you could attract some attention from both the left and the right because the right has been keenly aware of the enormous power, say, that Mark Zuckerberg possesses, or Jack Dorsey possesses at Twitter. And so they are now kind of shifting away from their usual defense of, well, it’s the market, so it must be good, and recognizing, no, there’s some aspects of this market equilibrium that we think are really bad, that are kind of inconsistent with the principles of freedom and free speech that this country was founded on. So I’m mildly optimistic that this is something where we could reach some kind of a consensus and actually do something.

Laurel: Speaking of representation, on which America is founded, there have been rumblings in Congress about holding these firms accountable. Are you hopeful that might actually happen?

Paul: Well, I think those rumblings have been somewhat useful in raising attention. But they’re mostly, so far at least, theater. There’s really no consensus around an agenda for what we could do. There are people, like Senator Warren and Senator Warner, who’ve been thinking about measures we could adopt. But there’s been no coalescing around some practical measure. So we need to move beyond these showpieces, where we express outrage and try to watch these executives squirm. We need to get to the point where we actually do something that will make a difference.

Laurel: And what a great call to action that is. Thank you, Paul, for joining us today on The Business Lab.

Paul: Thank you. This is the first time I’ve actually told people, no, I mean marginal tax rates as high as 65%, 75%, so you may get some animated responses when this goes live. But people should also go look at my blog because I’ll actually have analytics behind this available on my blog. And anybody who’s interested can learn more there.

Laurel: That was Paul Romer, Nobel Prize-winning economist and professor at New York University, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at dozens of events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.


Why I became a TechTrekker


A group jumps into the air with snowy mountains in the background.


My senior spring in high school, I decided to defer my MIT enrollment by a year. I had always planned to take a gap year, but after receiving the silver tube in the mail and seeing all my college-bound friends plan out their classes and dorm decor, I got cold feet. Every time I mentioned my plans, I was met with questions like “But what about school?” and “MIT is cool with this?”

Yeah. MIT totally is. Postponing your MIT start date is as simple as clicking a checkbox. 

Sofia Pronina (right) was among those who hiked to the Katla Glacier during this year’s TechTrek to Iceland.

COURTESY PHOTO

Now, having finished my first year of classes, I’m really grateful that I stuck with my decision to delay MIT, as I realized that having a full year of unstructured time is a gift. I could let my creative juices run. Pick up hobbies for fun. Do cool things like work at an AI startup and teach myself how to create latte art. My favorite part of the year, however, was backpacking across Europe. I traveled through Austria, Slovakia, Russia, Spain, France, the UK, Greece, Italy, Germany, Poland, Romania, and Hungary. 

Moreover, despite my fear that I’d be losing a valuable year, traveling turned out to be the most productive thing I could have done with my time. I got to explore different cultures, meet new people from all over the world, and gain unique perspectives that I couldn’t have gotten otherwise. My travels throughout Europe allowed me to leave my comfort zone and expand my understanding of the greater human experience. 

“In Iceland there’s less focus on hustle culture, and this relaxed approach to work-life balance ends up fostering creativity. This was a wild revelation to a bunch of MIT students.”

When I became a full-time student last fall, I realized that StartLabs, the premier undergraduate entrepreneurship club on campus, gives MIT undergrads a similar opportunity to expand their horizons and experience new things. I immediately signed up. At StartLabs, we host fireside chats and ideathons throughout the year. But our flagship event is our annual TechTrek over spring break. In previous years, StartLabs has gone on TechTrek trips to Germany, Switzerland, and Israel. On these fully funded trips, StartLabs members have visited and collaborated with industry leaders, incubators, startups, and academic institutions. They take these treks both to connect with the global startup sphere and to build closer relationships within the club itself.

Most important, however, the process of organizing the TechTrek is itself an expedited introduction to entrepreneurship. The trip is entirely planned by StartLabs members; we figure out travel logistics, find sponsors, and then discover ways to optimize our funding. 

Two students soak in a hot spring in Iceland.

COURTESY PHOTO

In organizing this year’s trip to Iceland, we had to learn how to delegate roles to all the planners and how to maintain morale when making this trip a reality seemed to be an impossible task. We woke up extra early to take 6 a.m. calls with Icelandic founders and sponsors. We came up with options for different levels of sponsorship, used pattern recognition to deduce the email addresses of hundreds of potential contacts at organizations we wanted to visit, and all got scrappy with utilizing our LinkedIn connections.

And as any good entrepreneur must, we had to learn how to be lean and maximize our resources. To stretch our food budget, we planned all our incubator and company visits around lunchtime in hopes of getting fed, played human Tetris as we fit 16 people into a six-person Airbnb, and emailed grocery stores to get their nearly expired foods for a discount. We even made a deal with the local bus company to give us free tickets in exchange for a story post on our Instagram account. 



The Download: spying keyboard software, and why boring AI is best



This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How ubiquitous keyboard software puts hundreds of millions of Chinese users at risk

For millions of Chinese people, the first software they download onto devices is always the same: a keyboard app. Yet few of them are aware that it may make everything they type vulnerable to prying eyes. 

QWERTY keyboards are inefficient for typing Chinese, as many Chinese characters share the same latinized spelling. As a result, many people switch to smart, localized keyboard apps to save time and frustration. Today, over 800 million Chinese people use third-party keyboard apps on their PCs, laptops, and mobile phones. 

But a recent report by the Citizen Lab, a University of Toronto–affiliated research group, revealed that Sogou, one of the most popular Chinese keyboard apps, had a massive security loophole. Read the full story. 

—Zeyi Yang

Why we should all be rooting for boring AI

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. It hopes they could improve intelligence and operational planning. 

But those might not be the right use cases, writes our senior AI reporter Melissa Heikkila. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases. 

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. The DoD’s best bet is to apply generative AI to more mundane things like Excel, email, or word processing. Read the full story. 

This story is from The Algorithm, Melissa’s weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The ice cores that will let us look 1.5 million years into the past

To better understand the role atmospheric carbon dioxide plays in Earth’s climate cycles, scientists have long turned to ice cores drilled in Antarctica, where snow layers accumulate and compact over hundreds of thousands of years, trapping samples of ancient air in a lattice of bubbles that serve as tiny time capsules. 

By analyzing those cores, scientists can connect greenhouse-gas concentrations with temperatures going back 800,000 years. Now, a new European-led initiative hopes to eventually retrieve the oldest core yet, dating back 1.5 million years. But that impressive feat is still only the first step. Once they’ve done that, they’ll have to figure out how they’re going to extract the air from the ice. Read the full story.

—Christian Elliott

This story is from the latest edition of our print magazine, set to go live tomorrow. Subscribe today for as low as $8/month to ensure you receive full access to the new Ethics issue and in-depth stories on experimental drugs, AI-assisted warfare, microfinance, and more.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How AI got dragged into the culture wars
Fears about ‘woke’ AI fundamentally misunderstand how it works. Yet they’re gaining traction. (The Guardian)
+ Why it’s impossible to build an unbiased AI language model. (MIT Technology Review)
 
2 Researchers are racing to understand a new coronavirus variant 
It’s unlikely to be cause for concern, but it shows this virus still has plenty of tricks up its sleeve. (Nature)
+ Covid hasn’t entirely gone away—here’s where we stand. (MIT Technology Review)
+ Why we can’t afford to stop monitoring it. (Ars Technica)
 
3 How Hilary became such a monster storm
Much of it is down to unusually hot sea surface temperatures. (Wired $)
+ The era of simultaneous climate disasters is here to stay. (Axios)
+ People are donning cooling vests so they can work through the heat. (Wired $)
 
4 Brain privacy is set to become important 
Scientists are getting better at decoding our brain data. It’s surely only a matter of time before others want a peek. (The Atlantic $)
+ How your brain data could be used against you. (MIT Technology Review)
 
5 How Nvidia built such a big competitive advantage in AI chips
Today it accounts for 70% of all AI chip sales—and an even greater share for training generative models. (NYT $)
+ The chips it’s selling to China are less effective due to US export controls. (Ars Technica)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)
 
6 Inside the complex world of dissociative identity disorder on TikTok 
Reducing stigma is great, but doctors fear people are self-diagnosing or even imitating the disorder. (The Verge)
 
7 What TikTok might have to give up to keep operating in the US
This shows just how hollow the authorities’ purported data-collection concerns really are. (Forbes)
 
8 Soldiers in Ukraine are playing World of Tanks on their phones
It’s eerily similar to the war they are themselves fighting, but they say it helps them to dissociate from the horror. (NYT $)
 
9 Conspiracy theorists are sharing mad ideas on what causes wildfires
But it’s all just a convoluted way to try to avoid having to tackle climate change. (Slate $)
 
10 Christie’s accidentally leaked the location of tons of valuable art 🖼📍
Seemingly thanks to the metadata that often automatically attaches to smartphone photos. (WP $)

Quote of the day

“Is it going to take people dying for something to move forward?”

—An anonymous air traffic controller warns that staffing shortages in their industry, plus other factors, are starting to threaten passenger safety, the New York Times reports.

The big story

Inside effective altruism, where the far future counts a lot more than the present

VICTOR KERLOW

October 2022

Since its birth in the late 2000s, effective altruism has aimed to answer the question “How can those with means have the most impact on the world in a quantifiable way?”—and supplied methods for calculating the answer.

It’s no surprise that effective altruism’s ideas have long faced criticism for reflecting white Western saviorism, alongside an avoidance of structural problems in favor of abstract math. And as believers pour even greater amounts of money into the movement’s increasingly sci-fi ideals, such charges are only intensifying. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Watch Andrew Scott’s electrifying reading of the 1965 commencement address ‘Choose One of Five’ by Edith Sampson.
+ Here’s how Metallica makes sure its live performances ROCK. ($)
+ Cannot deal with this utterly ludicrous wooden vehicle.
+ Learn about a weird and wonderful new instrument called a harpejji.





Why we should all be rooting for boring AI



This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’m back from a wholesome week off picking blueberries in a forest. So this story we published last week about the messy ethics of AI in warfare is just the antidote, bringing my blood pressure right back up again. 

Arthur Holland Michel does a great job looking at the complicated and nuanced ethical questions around warfare and the military’s increasing use of artificial-intelligence tools. There are myriad ways AI could fail catastrophically or be abused in conflict situations, and there don’t seem to be any real rules constraining it yet. Holland Michel’s story illustrates how little there is to hold people accountable when things go wrong.  

Last year I wrote about how the war in Ukraine kick-started a new boom in business for defense AI startups. The latest hype cycle has only added to that, as companies—and now the military too—race to embed generative AI in products and services. 

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. 

The department sees tons of potential to “improve intelligence, operational planning, and administrative and business processes.” 

But Holland Michel’s story highlights why the first two use cases might be a bad idea. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases.  

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. Everyone agrees that humans should make the final call, but that is made harder by technology that acts unpredictably, especially in fast-moving conflict situations. 

Some worry that the people lowest on the hierarchy will pay the highest price when things go wrong: “In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the ‘decision’ will absorb the blame and protect everyone else along the chain of command from the full impact of accountability,” Holland Michel writes. 

The only ones who seem likely to face no consequences when AI fails in war are the companies supplying the technology.

It helps companies when the rules the US has set to govern AI in warfare are mere recommendations, not laws. That makes it really hard to hold anyone accountable. Even the AI Act, the EU’s sweeping upcoming regulation for high-risk AI systems, exempts military uses, which arguably are the highest-risk applications of them all. 

While everyone is looking for exciting new uses for generative AI, I personally can’t wait for it to become boring. 

Amid early signs that people are starting to lose interest in the technology, companies might find that these sorts of tools are better suited for mundane, low-risk applications than solving humanity’s biggest problems.

Applying AI in, for example, productivity software such as Excel, email, or word processing might not be the sexiest idea, but compared to warfare it’s a relatively low-stakes application, and simple enough to have the potential to actually work as advertised. It could help us do the tedious bits of our jobs faster and better.

Boring AI is unlikely to break as easily and, most important, won’t kill anyone. Hopefully, soon we’ll forget we’re interacting with AI at all. (It wasn’t that long ago when machine translation was an exciting new thing in AI. Now most people don’t even think about its role in powering Google Translate.) 

That’s why I’m more confident that organizations like the DoD will find success applying generative AI in administrative and business processes. 

Boring AI is not morally complex. It’s not magic. But it works. 

Deeper Learning

AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

Amid all the chatter about ChatGPT, artificial general intelligence, and the prospect of robots taking people’s jobs, regulators in the EU and the US have been ramping up warnings against AI and emotion recognition. Emotion recognition is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings. 

But why is this a top concern? Western regulators are particularly concerned about China’s use of the technology, and its potential to enable social control. And there’s also evidence that it simply does not work properly. Tate Ryan-Mosley dissected the thorny questions around the technology in last week’s edition of The Technocrat, our weekly newsletter on tech policy.

Bits and Bytes

Meta is preparing to launch free code-generating software
A version of its new LLaMA 2 language model that is able to generate programming code will pose a stiff challenge to similar proprietary code-generating programs from rivals such as OpenAI, Microsoft, and Google. The open-source program is called Code Llama, and its launch is imminent, according to The Information. (The Information)

OpenAI is testing GPT-4 for content moderation
Using the language model to moderate online content could really help alleviate the mental toll content moderation takes on humans. OpenAI says it’s seen some promising first results, although the tech does not outperform highly trained humans. A lot of big, open questions remain, such as whether the tool can be attuned to different cultures and pick up context and nuance. (OpenAI)

Google is working on an AI assistant that offers life advice
The generative AI tools could function as a life coach, offering up ideas, planning instructions, and tutoring tips. (The New York Times)

Two tech luminaries have quit their jobs to build AI systems inspired by bees
Sakana, a new AI research lab, draws inspiration from the animal kingdom. Founded by two prominent industry researchers and former Googlers, the company plans to make multiple smaller AI models that work together, the idea being that a “swarm” of programs could be as powerful as a single large AI model. (Bloomberg)
