Tech
Digital inclusion and equity changes what’s possible
Published 3 years ago
By Terry Power
Democratizing data access is key to bolstering digital inclusion and equity, but it requires sophisticated data organization and sharing that doesn’t compromise privacy. Rights management governance and high levels of end-to-end security can help ensure that data is being shared without security risks, says Zdankus.
Ultimately, improving digital inclusion and equity comes down to company culture. “It can’t just be a P&L [profit and loss] decision. It has to be around thought leadership and innovation and how you can engage your employees in a way that’s meaningful in a way to build relevance for your company,” says Zdankus. Solutions need to be value-based to foster goodwill and trust among employees, other organizations, and consumers.
“If innovation for equity and inclusion were that easy, it would’ve been done already,” says Zdankus. The push for greater inclusion and equity is a long-term and full-fledged commitment. Companies need to prioritize inclusion within their workforce and offer greater visibility to marginalized voices, develop interest in technology among young people, and implement systems thinking that focuses on how to bring individual strengths together towards a common outcome.
This episode of Business Lab is produced in association with Hewlett Packard Enterprise.
Show notes and references
Full transcript:
Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. And this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is digital inclusion and equity. The pandemic made clear that access to tech isn’t the same for everyone, from broadband access, to bias in data, to who is hired. But innovation and digital transformation need to work for everyone. And that’s a challenge for the entire tech community.
Two words for you. Unconditional inclusivity.
My guest is Janice Zdankus, who is the vice president of strategy and planning and innovation for social impact at HPE.
This episode of Business Lab is produced in association with Hewlett Packard Enterprise.
Welcome Janice.
Janice Zdankus: Hi there. Great to be here.
Laurel: So, you’ve been hosting HPE’s Element podcast this season, and the episodes focus on inclusion. In your conversations with experts about digital equity—which includes balancing business and social agendas, bias in data, and how companies can use digital equity as a means of innovation—what sorts of innovative thinking and approaches stand out to you?
Janice: So, we’ve been talking a lot about ways that technology and innovative approaches can actually be useful for tackling equity and inclusion. And we’ve had a number of very interesting guests and topics ranging from thinking about how bias in media can be detected, all the way into thinking about trustworthy AI and how companies can actually build in an innovation agenda with digital equity in mind.
So, one example would be, we recently spoke to Yves Bergquist, who’s the director of the Entertainment Technology Center at the University of Southern California. And he leads a research center focusing on AI, neuroscience, and media. And he shared with us an effort to use AI to actually scan images, scan scripts, and watch movies to detect common uses of stereotypes, and also to look at how bias can be associated with stereotypes, whether intentional or not, in the creation of a media piece, for example. The idea is then to provide that information on thousands of scripts and movies back to script writers, script reviewers, and movie producers, so that they can start to increase their awareness and understanding of how the selection of certain actors, or a director’s use of certain images and approaches, can lead to an impression of bias.
And so by being able to automate that using AI, it really makes the job easier for those in the profession to actually understand how maybe, in an unconscious way they’re creating bias or creating an illusion that maybe they didn’t intend to. So that’s an example of how technology is really assisting human-centered, thinking about how we’re using media to influence.
Laurel: That’s amazing, because that’s an industry that, I mean, obviously there’s technology involved, but it might be a bit surprised that AI could actually be used in such a way.
Janice: Yeah. AI has the ability to scan and learn at a scale way beyond what the human brain can do. But I think you also have to be careful when you’re talking about AI, about how AI models are trained and the possibility of bias being introduced into those models. So, you really have to think about it end-to-end.
Laurel: So, if we dig a little deeper into the components of inclusion and digital equity issues, like starting with where we are now, what does the landscape look like at this point? And where are we falling short when it comes to digital equity?
Janice: There are three ways to think about this. The first is: is there bias within the technology itself? The example I just mentioned, around AI potentially being built on biased models, is certainly one example of that. The second is: who has access to the technology? We have quite disproportionate access to cellular, to broadband, to the technologies themselves across the world. And the third is: what is the representation of underrepresented and underserved groups in tech companies overall? All three of those factors contribute to where we could be falling short around digital equity.
Laurel: Yeah. That’s not a small amount of points there to really think about and dig through. But when we’re thinking about this through the tech lens, how has the enormous increase in the volume of data affected digital equity?
Janice: So, it’s a great thing to point out. There is a ton of data growing at what we call the edge, at the source of where information gets created. Whether it be on a manufacturing line, in an agricultural field, or from sensors detecting processes and information. In fact, more than 70% of companies say they don’t have a full grasp on the data being created in their organizations that they may have access to. So, it’s being created. The problem is: is that data useful? Is that data meaningful? How is that data organized? And how do you share that data in such a way that you can actually gain useful outcomes and insights from it? And is that data also potentially being created in a way that’s biased from the get-go?
So, an example of that might be, I think a common example that we hear about a lot is, gosh, a lot of medical testing is done on white males. And so does that mean the outcomes from that medical testing, and all the data gathered from it, should only be applied to white males? Is there a problem with it not representing females or people of color? Could data points gathered from testing across a broader, more diverse range of demographics result in different outcomes? And that’s a really important thing to consider.
The second thing is around the access to the data. So yes, data is being generated in increasing volumes far more than we predicted, but how is that data being shared and are the people collecting or the machines or the organizations collecting that data willing to share it?
I think we see today that there’s not an equitable exchange of data and those producing data aren’t always seeing the value back to them for sharing their data. So, an example of that would be smallholder farmers around the world of which 70% are women, they may be producing a lot of information about what they’re growing and how they’re growing it. And if they share that to various members along the food system or the food supply chain, is there a benefit back to them for sharing that data, for example? So, there are other examples of this in the medical or health field. So there might be private information about your body, your images, your health results. How do you share that for the benefit in an aggregated way of society or for research without compromising privacy?
I mean, an example of addressing this is the introduction of swarm learning, where data can be shared but also held private. So, I think this really highlights the need for rights management governance, high levels of end-to-end security, and trust, ensuring that the data being shared is being used in the way it was intended to be used. I think the third challenge around all this is that the volume of data is almost too unwieldy to work with unless you have a really sophisticated technology system. In many cases there’s an increasing demand for high performance computing and GPUs. At HPE, for example, we offer high performance computing as a service through GreenLake, and that’s a way to help create greater access to, or democratize access to, data. But having systems and ways, or what I’ll call data spaces, to share distributed and diverse data sets is going to be more and more important as we look at the possibilities of sharing not just within a company, but across companies and across governments and NGOs to actually drive the benefit.
Laurel: Yeah, and across research bodies and hospitals and schools, as the pandemic has shown us as well. That sort of sharing is really important, but so is keeping the privacy protections on.
Janice: That’s right. And that’s not widely available today. That’s an area of innovation that really needs to be applied across all of the data sharing concepts.
Laurel: There’s a lot to this, but is there a return on investment for enterprises that actually invest in digital equity?
Janice: So, I have a problem with the question, and that’s because we shouldn’t be thinking about digital equity only in terms of whether it improves the P&L [profit and loss]. I think there’s been a lot of effort recently to try to make that argument, to bring the discussion back to the purpose. But ultimately to me, this is about the culture and purpose of a company or an organization. It can’t just be a P&L decision. It has to be around thought leadership and innovation, and how you can engage your employees in a way that’s meaningful, in a way that builds relevance for your company. I think one of the phrases NCWIT, the National Center for Women & Information Technology, uses to describe the need for equity and inclusion is that inclusion changes what’s possible.
So, when you start to think about innovation and addressing problems over the long term, you really need to stretch your thinking away from just the immediate product you’re creating next quarter and selling for the rest of the year. It needs to be a values-based set of activities, and that oftentimes can bring goodwill and trust. It leads to new partnerships, it grows new pipelines.
And the recent Trust Barometer published by Edelman had a couple of really interesting data points. One being that 86% of consumers expect brands to act beyond their product and business. And they believe that trust pays dividends: 61% of consumers will advocate for a brand that they trust, and 43% will remain loyal to that brand even through a crisis. And it’s true for investors too. They also found that 90% of investors believe that a strong ESG [Environmental, Social and Governance] performance makes for better long-term investments in a company. And then I think what we’ve seen really in spades here at Hewlett Packard Enterprise is that our employees really want to be a part of these projects because it’s rewarding, it’s value-aligned, and it gives them exposure to sometimes very difficult problems to solve. If innovation for equity and inclusion were that easy, it would’ve been done already.
So, some of the challenges in the world today that align to the United Nations SDGs [Sustainable Development Goals], for example, are very difficult problems, and they are stretching the boundaries of technology innovation today. I think the Edelman Barometer also found that 59% of people who are thinking about leaving their jobs are doing so for better alignment with their personal values. So having programs and activities like this in your company or in your organization really can impact all of these aspects, not just your P&L. And I think you have to think about it systematically like that.
Laurel: And ESG stands for Environmental, Social, and Governance ideas or aspects, standards, et cetera. And SDG is the UN’s initiative on Sustainable Development Goals. So, this is a lot, because we’re not actually assigning a dollar amount to what is possible here. It’s more like: if an enterprise wants to be socially conscious, or not even socially conscious, just a player, and attract the right talent and have their customers trust them, they really have to invest in other ways of making digital equity real for everyone. Maybe not just for their customers, but for tomorrow’s customers as well.
Janice: That’s right. The thing, though, is that it’s not just a one-and-done activity. It’s not like, ‘Oh, I want my company to do better at digital equity, so let’s go do this project.’ It really has to be a full-fledged commitment around a culture change, or an enhancement to a comprehensive approach around this. And so one way to do this is: don’t expect to go too fast. This is long term; you’re in it for the long haul. And you really need to think across industries, with your customers, with your partners, and take into account that innovation around achieving digital equity needs to be inclusive in and of itself. So, you can’t move too fast. You actually need to include those who provide a voice to ideas that maybe you don’t have.
I think another great comment or slogan from NCWIT is: the idea you don’t have is the voice you haven’t heard. So how do you hear those voices you haven’t heard? And how do you learn from the experts, or from those you’re trying to serve? Expect that you don’t know what you don’t know. Expect that you don’t necessarily have the right awareness at the ready in your company. And you need to really bring that in so that you have representation to help drive that innovation. And then that innovation will drive inclusivity.
Laurel: Yeah. And I think that’s probably so crucial, especially with what we’ve learned in the last few years of the pandemic. If customers don’t trust brands, and employees don’t trust the company they work for, they’ll find other opportunities. So, this is a real thing. This is affecting companies’ bottom lines. This is not a touchy-feely, pie-in-the-sky thing, but it is ongoing. As you mentioned, inclusivity changes what’s possible. That’s not a one-time thing; it’s ongoing. But there are still obstacles. So maybe the first obstacle is just understanding that this is a long process. It’s ongoing. The company is changing. So digital transformation is important, as is digital equity transformation. So, what other things do companies have to think about when they’re working toward digital equity?
Janice: So as I said, I think you have to include voices that you don’t presently have. You have to have the voice of those you’re trying to serve in your work on innovation to drive digital equity. You need to build the expectation that this is not a one-and-done thing. This is a culture shift. This is a long-term commitment that has to be in place. And you can’t go too fast. You can’t just say, ‘Oh, I’m going to adopt a new’—let’s just say, for example, facial recognition technology—‘into my application so that I have more awareness.’ Well, you know what, sometimes those technologies don’t work. We know already that facial recognition technologies, which are rapidly being decommissioned, are inherently biased, and they’re not working for all skin tones.
And so that’s an example of: oh, okay, somebody had a good idea and maybe a good intention in mind, but it failed miserably in terms of addressing inclusivity and equity. So, expect to iterate, expect that there will be challenges, and learn as you go to actually achieve it. But do you have an outcome in mind? Do you have a goal or an objective around equity? Are you measuring that in some way, shape, or form over the long haul? And who are you involving to actually create that? Those are all important considerations to address as you try to achieve digital equity.
Laurel: You mentioned the example of using AI to go through screenplays to point out bias. That must be applicable in a number of different industries. So where else do AI and machine learning have such possibility for digital equity?
Janice: Many, many places. Certainly a lot of use cases in health care, but one I’ll add is agriculture and food systems. That is a very urgent problem: with the population expected to be over 9 billion by 2050, we are not on track to be able to feed the world. And that’s further complicated by the issues around climate change. So, we’ve been working with CGIAR, an academic research leader in the world around food systems, and also with a nonprofit called Digital Green in India, where they’re working with 2 million farmers in Bihar to help those farmers gain better market information about when to harvest their crops, and to understand what the market opportunity is for those crops at the different markets that they may go to. And so it’s a great AI problem around weather, transportation, crop type, and market pricing, and how those figures all come together in the hands of a farmer who can actually decide to harvest or not.
That’s one example. I think other examples with CGIAR really are around biodiversity: understanding what to plant given the changing nature of water and precipitation and soil health, and providing those insights and that information in a way that smallholder farmers in Africa can actually benefit from. When and where to fertilize, perhaps. Those are all techniques for improving profitability on the part of a smallholder farmer. And that’s an example of where AI can build those complicated insights and models over time, in concert with weather and climate data, to actually make pretty good recommendations that can be useful to these farmers. So, I mean, that’s an example.
I mean, another example we’ve been working on is one around disease prediction. So really understanding, for certain diseases that are prominent in tropical areas, what are the factors that lead up to an outbreak of a mosquito-borne disease, and how can you predict it, or can you predict it well enough in advance to actually be able to take an action, or move a therapeutic or an intervention to the area that could be susceptible to the outbreak. That’s another complicated AI problem that hasn’t been solved today. And those are great ways to address challenges that affect equity and access to treatment, for example.
Laurel: And certainly with the capabilities of compute power and AI, we’re talking about almost real-time capabilities, versus trying to go back over the history of weather maps and much more analog ways of delivering and understanding information. So, what practical actions can companies take today to address digital equity challenges?
Janice: So, I think there are a few things. First of all, build your company with an intention to have an equitable, inclusive employee population. So first of all, the actions you take around hiring, who you mentor, who you help grow and develop in your company are important. And as part of that, companies need to showcase role models. It might be a little cliché at this point, but you can’t be what you can’t see. And we know in the world of technology that there haven’t been a lot of great visible examples of women CIOs, or African American CTOs, or leaders and engineers doing really cool work that can inspire the next generation of talent to participate. So I think that’s one thing: showcase those role models, and invest in describing your efforts in inclusivity and innovation around achieving digital equity.
So really trying to explain how a particular technology innovation is leading to a better outcome around equity and inclusion is just important. Many students decide by the time they’re in fifth grade, for example, that technology is boring, or that it’s not for them, or that it doesn’t have the human impact they really desire. And that falls on us. So, we have worked with a program called Curated Pathways to Innovation, which is a free, online, personalized learning product for schools that is attempting to do exactly that: reach middle schoolers before they decide that a career in technology is not for them, by really helping them improve their awareness of and interest in careers in technology, and then helping them, in a stepwise, agency-driven approach, start to prepare with that content and that development around technology.
But you can think about children in their early elementary school days, where they’re reading books and seeing examples of what a nurse does, what a firefighter does, what a policeman does. Are those kinds of communications and examples available around what a data scientist does? What a computer engineer does? What a cybersecurity professional does? And why is that important and relevant? I do think we have a lot of work to do as companies and technologists to really showcase these examples. I mean, I would argue that technology companies have had more impact on our world globally in the last decade or two than probably any other industry. Yet we don’t tell that story. So how do we help connect the dots for students? We need to be a voice; we need to be visible in developing that interest in the field. And that’s something that everybody can do right now. So that’s my two cents on that.
Laurel: So, there’s so much opportunity here, Janice, and certainly a lot of responsibility technologists really need to take on. So how do you envision the next two or three years going with digital equity and inclusion? Do you feel like this clarion bell is just ringing all over the tech industry?
Janice: I do. In fact, I see a few key points as really essential in the future evolution of equity and inclusion. First of all, I think we need to recognize that technology advancements are actually ways that inclusion can be improved and supported. So, it’s a means to an end. Recognize that the improvements we make and the technology innovations we bring can drive inclusion more fully. Secondly, I think we need to think about the future of work, where the jobs will be, and how they’ll be developing. We need to think about education as a means to participate in what is, and will continue to be, the fastest growing sector globally. And that’s around technology: around cybersecurity, around data science, and those career fields. But yet right now some states really don’t even have a high school computer science curriculum in place.
It’s hard to believe, but it’s true. And some states that do don’t give college prep credit for it. So, if we think the majority of jobs that are going to be created will be in the technology sector, in the fields I just described, then we need to ensure that our education system is supporting that in all avenues, in order to address the future of work. First and foremost, it has to start with literacy. We still have issues around the world, and even in the United States, around literacy. So, we really have to tackle that from the get-go.
The third thing is systems thinking. These really tough problems around equity are more than just funding, or writing a check to an NGO, or doing a philanthropic lunch-packing exercise. Those are all great; I’m not saying we should stop those. But I actually think we have a lot of expertise in the technology sector around how to partner, how to work together, how to think about a system, and how to allow for outcomes where you bring the individual strengths of all the partners together toward a common outcome.
And I think now more than ever, and going into the future, being able to build systems of change for inclusion and equity is going to be essential. And then finally, I think the innovation being created through the current programs around equity and social impact is really challenging us to think about bigger, better solutions. And I’m really, really optimistic that the new ideas gained from those working on social innovation and technology innovation for social impact are just going to continue to impress us, and to continue to drive solutions to these problems.
Laurel: I love that optimism and bigger and better solutions to the problems, that’s what we all really need to focus on today. Janice, thank you so much for joining us on the Business Lab.
Janice: Thanks so much for having me.
Laurel: That was Janice Zdankus, vice president of strategy and planning and innovation for social impact at HPE, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.
This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
My senior spring in high school, I decided to defer my MIT enrollment by a year. I had always planned to take a gap year, but after receiving the silver tube in the mail and seeing all my college-bound friends plan out their classes and dorm decor, I got cold feet. Every time I mentioned my plans, I was met with questions like “But what about school?” and “MIT is cool with this?”
Yeah. MIT totally is. Postponing your MIT start date is as simple as clicking a checkbox.
Now, having finished my first year of classes, I’m really grateful that I stuck with my decision to delay MIT, as I realized that having a full year of unstructured time is a gift. I could let my creative juices run. Pick up hobbies for fun. Do cool things like work at an AI startup and teach myself how to create latte art. My favorite part of the year, however, was backpacking across Europe. I traveled through Austria, Slovakia, Russia, Spain, France, the UK, Greece, Italy, Germany, Poland, Romania, and Hungary.
Moreover, despite my fear that I’d be losing a valuable year, traveling turned out to be the most productive thing I could have done with my time. I got to explore different cultures, meet new people from all over the world, and gain unique perspectives that I couldn’t have gotten otherwise. My travels throughout Europe allowed me to leave my comfort zone and expand my understanding of the greater human experience.
“In Iceland there’s less focus on hustle culture, and this relaxed approach to work-life balance ends up fostering creativity. This was a wild revelation to a bunch of MIT students.”
When I became a full-time student last fall, I realized that StartLabs, the premier undergraduate entrepreneurship club on campus, gives MIT undergrads a similar opportunity to expand their horizons and experience new things. I immediately signed up. At StartLabs, we host fireside chats and ideathons throughout the year. But our flagship event is our annual TechTrek over spring break. In previous years, StartLabs has gone on TechTrek trips to Germany, Switzerland, and Israel. On these fully funded trips, StartLabs members have visited and collaborated with industry leaders, incubators, startups, and academic institutions. They take these treks both to connect with the global startup sphere and to build closer relationships within the club itself.
Most important, however, the process of organizing the TechTrek is itself an expedited introduction to entrepreneurship. The trip is entirely planned by StartLabs members; we figure out travel logistics, find sponsors, and then discover ways to optimize our funding.
In organizing this year’s trip to Iceland, we had to learn how to delegate roles to all the planners and how to maintain morale when making this trip a reality seemed to be an impossible task. We woke up extra early to take 6 a.m. calls with Icelandic founders and sponsors. We came up with options for different levels of sponsorship, used pattern recognition to deduce the email addresses of hundreds of potential contacts at organizations we wanted to visit, and all got scrappy with utilizing our LinkedIn connections.
And as any good entrepreneur must, we had to learn how to be lean and maximize our resources. To stretch our food budget, we planned all our incubator and company visits around lunchtime in hopes of getting fed, played human Tetris as we fit 16 people into a six-person Airbnb, and emailed grocery stores to get their nearly expired foods for a discount. We even made a deal with the local bus company to give us free tickets in exchange for a story post on our Instagram account.
Tech
The Download: spying keyboard software, and why boring AI is best
Published 1 year ago
22 August 2023
By Terry Power
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
How ubiquitous keyboard software puts hundreds of millions of Chinese users at risk
For millions of Chinese people, the first software they download onto devices is always the same: a keyboard app. Yet few of them are aware that it may make everything they type vulnerable to spying eyes.
QWERTY keyboards are inefficient for typing Chinese, as many Chinese characters share the same latinized spelling. As a result, many people switch to smart, localized keyboard apps to save time and frustration. Today, over 800 million Chinese people use third-party keyboard apps on their PCs, laptops, and mobile phones.
But a recent report by the Citizen Lab, a University of Toronto–affiliated research group, revealed that Sogou, one of the most popular Chinese keyboard apps, had a massive security loophole. Read the full story.
—Zeyi Yang
Why we should all be rooting for boring AI
Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. It hopes they could improve intelligence and operational planning.
But those might not be the right use cases, writes our senior AI reporter Melissa Heikkilä. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases.
Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. The DoD’s best bet is to apply generative AI to more mundane things like Excel, email, or word processing. Read the full story.
This story is from The Algorithm, Melissa’s weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.
The ice cores that will let us look 1.5 million years into the past
To better understand the role atmospheric carbon dioxide plays in Earth’s climate cycles, scientists have long turned to ice cores drilled in Antarctica, where snow layers accumulate and compact over hundreds of thousands of years, trapping samples of ancient air in a lattice of bubbles that serve as tiny time capsules.
By analyzing those cores, scientists can connect greenhouse-gas concentrations with temperatures going back 800,000 years. Now, a new European-led initiative hopes to eventually retrieve the oldest core yet, dating back 1.5 million years. But that impressive feat is still only the first step. Once they’ve done that, they’ll have to figure out how they’re going to extract the air from the ice. Read the full story.
—Christian Elliott
This story is from the latest edition of our print magazine, set to go live tomorrow. Subscribe today for as low as $8/month to ensure you receive full access to the new Ethics issue and in-depth stories on experimental drugs, AI assisted warfare, microfinance, and more.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 How AI got dragged into the culture wars
Fears about ‘woke’ AI fundamentally misunderstand how it works. Yet they’re gaining traction. (The Guardian)
+ Why it’s impossible to build an unbiased AI language model. (MIT Technology Review)
2 Researchers are racing to understand a new coronavirus variant
It’s unlikely to be cause for concern, but it shows this virus still has plenty of tricks up its sleeve. (Nature)
+ Covid hasn’t entirely gone away—here’s where we stand. (MIT Technology Review)
+ Why we can’t afford to stop monitoring it. (Ars Technica)
3 How Hilary became such a monster storm
Much of it is down to unusually hot sea surface temperatures. (Wired $)
+ The era of simultaneous climate disasters is here to stay. (Axios)
+ People are donning cooling vests so they can work through the heat. (Wired $)
4 Brain privacy is set to become important
Scientists are getting better at decoding our brain data. It’s surely only a matter of time before others want a peek. (The Atlantic $)
+ How your brain data could be used against you. (MIT Technology Review)
5 How Nvidia built such a big competitive advantage in AI chips
Today it accounts for 70% of all AI chip sales—and an even greater share for training generative models. (NYT $)
+ The chips it’s selling to China are less effective due to US export controls. (Ars Technica)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)
6 Inside the complex world of dissociative identity disorder on TikTok
Reducing stigma is great, but doctors fear people are self-diagnosing or even imitating the disorder. (The Verge)
7 What TikTok might have to give up to keep operating in the US
This shows just how hollow the authorities’ purported data-collection concerns really are. (Forbes)
8 Soldiers in Ukraine are playing World of Tanks on their phones
It’s eerily similar to the war they are themselves fighting, but they say it helps them to dissociate from the horror. (NYT $)
9 Conspiracy theorists are sharing mad ideas on what causes wildfires
But it’s all just a convoluted way to try to avoid having to tackle climate change. (Slate $)
10 Christie’s accidentally leaked the location of tons of valuable art
Seemingly thanks to the metadata that often automatically attaches to smartphone photos. (WP $)
Quote of the day
“Is it going to take people dying for something to move forward?”
—An anonymous air traffic controller warns that staffing shortages in their industry, plus other factors, are starting to threaten passenger safety, the New York Times reports.
The big story
Inside effective altruism, where the far future counts a lot more than the present
October 2022
Since its birth in the late 2000s, effective altruism has aimed to answer the question “How can those with means have the most impact on the world in a quantifiable way?”—and supplied methods for calculating the answer.
It’s no surprise that effective altruism’s ideas have long faced criticism for reflecting white Western saviorism, alongside an avoidance of structural problems in favor of abstract math. And as believers pour even greater amounts of money into the movement’s increasingly sci-fi ideals, such charges are only intensifying. Read the full story.
—Rebecca Ackermann
We can still have nice things
A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)
+ Watch Andrew Scott’s electrifying reading of the 1965 commencement address ‘Choose One of Five’ by Edith Sampson.
+ Here’s how Metallica makes sure its live performances ROCK. ($)
+ Cannot deal with this utterly ludicrous wooden vehicle.
+ Learn about a weird and wonderful new instrument called a harpejji.
Tech
Why we should all be rooting for boring AI
Published
1 year ago on 22 August 2023
By
Terry Power
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
I’m back from a wholesome week off picking blueberries in a forest. So this story we published last week about the messy ethics of AI in warfare is just the antidote, bringing my blood pressure right back up again.
Arthur Holland Michel does a great job looking at the complicated and nuanced ethical questions around warfare and the military’s increasing use of artificial-intelligence tools. There are myriad ways AI could fail catastrophically or be abused in conflict situations, and there don’t seem to be any real rules constraining it yet. Holland Michel’s story illustrates how little there is to hold people accountable when things go wrong.
Last year I wrote about how the war in Ukraine kick-started a new boom in business for defense AI startups. The latest hype cycle has only added to that, as companies—and now the military too—race to embed generative AI in products and services.
Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department.
The department sees tons of potential to “improve intelligence, operational planning, and administrative and business processes.”
But Holland Michel’s story highlights why the first two use cases might be a bad idea. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases.
Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. Everyone agrees that humans should make the final call, but that is made harder by technology that acts unpredictably, especially in fast-moving conflict situations.
Some worry that the people lowest on the hierarchy will pay the highest price when things go wrong: “In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the ‘decision’ will absorb the blame and protect everyone else along the chain of command from the full impact of accountability,” Holland Michel writes.
The only ones who seem likely to face no consequences when AI fails in war are the companies supplying the technology.
It helps companies when the rules the US has set to govern AI in warfare are mere recommendations, not laws. That makes it really hard to hold anyone accountable. Even the AI Act, the EU’s sweeping upcoming regulation for high-risk AI systems, exempts military uses, which arguably are the highest-risk applications of them all.
While everyone is looking for exciting new uses for generative AI, I personally can’t wait for it to become boring.
Amid early signs that people are starting to lose interest in the technology, companies might find that these sorts of tools are better suited for mundane, low-risk applications than solving humanity’s biggest problems.
Applying AI in, for example, productivity software such as Excel, email, or word processing might not be the sexiest idea, but compared to warfare it’s a relatively low-stakes application, and simple enough to have the potential to actually work as advertised. It could help us do the tedious bits of our jobs faster and better.
Boring AI is unlikely to break as easily and, most important, won’t kill anyone. Hopefully, soon we’ll forget we’re interacting with AI at all. (It wasn’t that long ago that machine translation was an exciting new thing in AI. Now most people don’t even think about its role in powering Google Translate.)
That’s why I’m more confident that organizations like the DoD will find success applying generative AI in administrative and business processes.
Boring AI is not morally complex. It’s not magic. But it works.
Deeper Learning
AI isn’t great at decoding human emotions. So why are regulators targeting the tech?
Amid all the chatter about ChatGPT, artificial general intelligence, and the prospect of robots taking people’s jobs, regulators in the EU and the US have been ramping up warnings against AI and emotion recognition. Emotion recognition is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings.
But why is this a top concern? Western regulators are particularly concerned about China’s use of the technology, and its potential to enable social control. And there’s also evidence that it simply does not work properly. Tate Ryan-Mosley dissected the thorny questions around the technology in last week’s edition of The Technocrat, our weekly newsletter on tech policy.
Bits and Bytes
Meta is preparing to launch free code-generating software
A version of its new LLaMA 2 language model that is able to generate programming code will pose a stiff challenge to similar proprietary code-generating programs from rivals such as OpenAI, Microsoft, and Google. The open-source program is called Code Llama, and its launch is imminent, according to The Information. (The Information)
OpenAI is testing GPT-4 for content moderation
Using the language model to moderate online content could really help alleviate the mental toll content moderation takes on humans. OpenAI says it’s seen some promising first results, although the tech does not outperform highly trained humans. A lot of big, open questions remain, such as whether the tool can be attuned to different cultures and pick up context and nuance. (OpenAI)
Google is working on an AI assistant that offers life advice
The generative AI tools could function as a life coach, offering up ideas, planning instructions, and tutoring tips. (The New York Times)
Two tech luminaries have quit their jobs to build AI systems inspired by bees
Sakana, a new AI research lab, draws inspiration from the animal kingdom. Founded by two prominent industry researchers and former Googlers, the company plans to make multiple smaller AI models that work together, the idea being that a “swarm” of programs could be as powerful as a single large AI model. (Bloomberg)