Creating a better human experience at work starts with trust


What if managers and leaders at companies focused on a new goal: to elevate the human experience?

This paradigm shift is something Amelia Dunlop, chief experience officer at Deloitte Digital, advocates for. She and her team have worked hard to measure the amount of humanity in the workplace—a measurement that often depends on how much trust exists between workers and leaders.

Dunlop’s team focused on four signals of trust that leaders can track: capability, reliability, humanity, and transparency. Using these four measurements, which make up Deloitte’s HX TrustID solution, the team was able to predict future behaviors with high accuracy.

It can appear far-fetched to measure seemingly intangible concepts with hard data, and Dunlop acknowledges that many remain skeptical about her use of the word “love” when it comes to work.

“There was part of me that wanted to be deliberately provocative, to say that there is, in fact, a role for love in the workplace. And the way it connects is that worth can be either intrinsic or extrinsic. So, there are extrinsic measures of worth, such as titles and promotions, how much someone is paid, or who has the awesome corner office. Intrinsic worth is much more about how you feel before you give a presentation, or before you get a job promotion. And do you feel like you are ‘enough’ in a workplace that’s constantly evaluating you?”

Especially post-pandemic, Dunlop argues that workers and leaders need to embrace this kind of love and worth so that companies can move into the future successfully.

“There’s something about humanizing leadership that I’ve been thinking a lot about. When we, as leaders, are willing to make ourselves vulnerable, to show up authentically, drop the professional masks we all wear, be transparent, demonstrate that we care—these are all signals that foster trust.”

Show notes and links:

· Elevating the human experience: The imperative of forging deep human connections, Deloitte Perspectives

· Elevating the Human Experience: Three Paths to Love and Worth at Work, Amelia Dunlop, Wiley, October 2021

· Navigating Uncertainty: The Protector, the Pragmatist, and the Prevailer, Deloitte Digital, June 30, 2020

· HX™ in times of uncertainty, Deloitte Digital

· A new measure of trust, Deloitte Digital

Full transcript

From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is trust. The pandemic has taught us many hard lessons, but it also brought us back to talking about humanity in the workplace. How can we best establish trust in the workplace for customers and employees? How much does it cost companies in reputation and market cap when they don’t?

Two words for you: human experience.

My guest is Amelia Dunlop. She’s the chief experience officer at Deloitte Digital and leader of the US customer strategy and applied design practice for Deloitte Consulting LLP. Her upcoming book, Elevating the Human Experience: Three Paths to Love and Worth at Work, is available now for pre-order; it launches in October. Amelia regularly writes and speaks about human experience, creativity, customer strategy, and trust. This episode of Business Lab is produced in association with Deloitte Digital.

Laurel Ruma: Welcome Amelia.

Amelia Dunlop: Thank you for having me.

Laurel: I really like this perspective of yours, and I’ll quote you right here: “We begin and end our days as humans. Amidst uncertainty, organizations need to take steps to become more human themselves.” That certainly has been at the forefront of work during the pandemic.

Amelia: Absolutely. We set this aspiration to elevate the human experience here at Deloitte Digital about three years ago. Since then, we’ve been trying to make it come to life and mean something for our employees and for our customers. We realized when the pandemic struck that the whole human experience was shifting in a time of uncertainty. So, we led some research—at the time, about 28,000 people across the US. We realized that what businesses most needed to get right was trust, safety, and human connection.

I found that fascinating, Laurel, because even as we found ourselves to be more digitally connected than ever, we were still in need of that human connection, which was grounded in the need for empathy, the need for psychological safety, the need for authenticity—these fundamental drivers of what it means to elevate the human experience.

Laurel: Yeah. It’s funny, after all the time we’ve spent online, how amazing it is to actually be in public with someone else, even at work; having that relationship in person is really something I think folks missed. It’s one thing to work from home, and it’s a whole different thing to not be able to see other people.

Amelia: Oh, totally. I’m definitely one of those extroverts who has been languishing. I sometimes joke, Laurel, that I’m now a call center operator, because we all have the earbuds in and the mouthpieces in, and we’re learning with a great deal of empathy what it’s like to talk to a computer for 10 to 12 hours a day.

Laurel: Yeah. Not an easy job. As you mentioned, part of being human is how much trust we have in each other. We have connections at work, etc., but also with companies. This is another aspect that’s been challenged, not just with the pandemic this year, but with other issues like societal disruption and even ethical AI. How important is trust to all of this?

Amelia: Oh, my goodness. You cannot elevate anyone’s human experience if they don’t trust you. Trust is absolutely foundational. When we conducted our research, we found some things that were startling, which is perhaps not surprising. We found that 60% of Americans don’t trust each other to social distance. We also found that only 4% of us trust businesses when they tell us it’s safe to reenter, it’s safe to get back on the airplane, or it’s safe to return to the hotel.

Many of us are navigating requirements for our own business, for our schooling, healthcare, and banking. Every time you walk into a store, you need to ask yourself the question: what are the protocols for this particular store? Are we doing the right thing to maintain safety?

Laurel: How do you define human experience? Is it an evolution of customer experience?

Amelia: I’ll take the second part of that first, because we do get that question a lot. The field of customer experience and even employee experience has been around for decades now. What we wanted to do was take more of a human-centered design perspective. We don’t wake up in the morning as a customer of a particularly awesome cup of coffee. I didn’t wake up this morning as an employee, and I’m sure you didn’t either. We wake up as humans. We show up with all of our messy humanness when we come to work. What we’re trying to do is acknowledge that more human focus really is important in the business world. The definition of elevating the human experience really is about investing in humans and their growth, recognizing their potential through love and worth.

Laurel: How do you actually measure trust? At some point, you have to be able to quantify this idea, right?

Amelia: Trust is one of those interesting topics. What is it? I think the expression is: ‘trust is earned in drops and lost in buckets.’ I’m not sure who to attribute that to, but trust is really easy to lose and really hard to gain. There are measures of trust, barometers of trust, which are very rear-view-mirror looking. What we wanted to do is ask, is there a way in which we can predict trust, and then tie that to organizational performance? My colleagues and I set out to do just that with something we call the HX TrustID—human experience, obviously.

We came up with four signals of trust that are pretty universal across organizations. The first is capability. Can I actually do the thing I said I would do in exchange for your money or time? Reliability: can I do it consistently when I say that I’m going to? Humanity: how do I make you feel when you interact with me? Then, transparency: how cleanly and clearly do I communicate with you about whether it’s going well or not well? Together, these four signals of trust are predictive of future behaviors. We found that it’s actually 74% accurate, which, in the field of social sciences, is significant.

Laurel: That you can actually predict to that extent of accuracy is amazing, right?

Amelia: Well, the reason we can do it, Laurel, is that it’s based on customers’ and employees’ actual behavior. It’s not based on what I tell you I’m going to do, but on what I’ve already done, and that’s a predictor of what your future behavior will be.

Laurel: Which is important when we’re talking about businesses getting back to business.

Amelia: Totally. One of the things we realized is that employees who believe their company is humane, or has a high humanity score, are two and a half times more likely to be motivated to work. That’s huge, right? Particularly now, as we’re facing what I think is lovingly called ‘the great resignation,’ humanity in the workplace is so tied to motivation.

Laurel: A humane workplace is not pool tables and endless snacks necessarily, right?

Amelia: Totally agree.

Laurel: There’s actually a work-life balance, right? Not just in words, but leadership actually following it. I also imagine generous—or just any kind of—family leave when there are sicknesses or pregnancies, etc. There actually are other measurable, tangible ideas here.

Amelia: Some of the things we measure in humanity are things like, to what extent do you believe your boss actually cares about you? To what extent do you actually care about your boss? To what extent do you believe that your peers in your organization care about you, and vice versa? We’re always looking for that reciprocal relationship and that reciprocal measure of trust with employees.

Laurel: Of those four integrated signals—capability, reliability, humanity, and transparency—which are the most difficult for companies to embrace? Does it vary by industry?

Amelia: I will start by telling you that all four matter. When you have all four, and a high composite score, that’s when you’re most likely to drive employee behavior, customer behavior, and long-term loyalty. We have noticed across industries that capability can be the highest predictor of loyalty, and that’s intuitive, right? If I’m going to give you my money or my time, I want you to have the ability to capably deliver on the thing that you said you were going to do. I wanted to buy a car, and you sold me a car, right?

Next up is reliability, that you did it at the time and place that you said you would. Again, those two make sense across industries. Then we notice that humanity and, in some cases, transparency can be the most difficult to get right. They are particularly important in the fields of healthcare with patients, obviously, but also in travel and hospitality—the humanity we expect when we show up at a hotel, when we show up at a restaurant, or any of the service industries, is an important predictor of loyalty.

Laurel: It’s an interesting time to think about that now, because a lot of that trust, which should be reciprocal, perhaps isn’t being found, because the travel and hospitality industries are also relying on their customers to have this kind of humanity.

Amelia: Absolutely. Like you said before, I feel like we’re all in a period of renegotiating what it means to build human connection. What does it mean to trust? What does it mean to feel safe? It is a great period of uncertainty when we’re renegotiating those things on a daily basis.

Laurel: Were there other industries that were particularly high scoring on one of the four signals?

Amelia: I would say the service industries, which have a longer track record of focusing on things like customer experience, do tend to score higher. Some of the industries which are more product centric, more technology centric, or more engineering centric tend to have lower scores.

Laurel: How can bringing back trust affect the profitability of a company?

Amelia: That’s the ultimate question. We can intuitively state that trust matters and trust builds long-term loyalty. One of the things we noticed in our research was that those companies and organizations that had the highest trust scores were twice as likely to be resilient in the face of a downturn relative to their competitors. We also know that the companies in a sector with the highest composite HX TrustID also tend to have the highest total shareholder return. That’s correlation. We can’t prove causality, but there is definitely an interesting correlation that the most trusted companies are also the most profitable ones.

Laurel: I would imagine people would sit up and take great interest at that.

Amelia: We all have to, because we all are in the business of trying to foster trust with our employees and with the customers we serve. Trust is our reputation.

Laurel: Do you have advice or best practices for companies trying to turn that boat around, to become one of those highly trusted companies?

Amelia: One of the things we do first is identify an individual organization’s actual trust score, broken down by the four signals and then relative to their peer set, to understand the table stakes versus what would actually need to be differentiated. Then we dig in a little bit deeper.

For example, if your relative humanity score is low, are there specific things you need to be doing to show up authentically with your employees? This connects back to the conversation we were having earlier about social unrest, a focus on purpose, a focus on diversity, equity, and inclusion. A lot of organizations are being taken to task right now to demonstrate their humanity in meaningful ways across those topics.

Laurel: Absolutely. The other aspect of trust here is how it affects employee satisfaction and motivation. There must be a number of companies actually behaving differently in the light of the pandemic.

Amelia: Some of the things we look at are, on the employee side, that 48% of employees who highly trust their employer almost never seek outside opportunities. I feel like it’s worth repeating. Again, as we think about the mobility in the employee workforce these days, if you establish high levels of trust with your employees, they’re much more likely to stick with you, versus the 66% who don’t trust you; they’re going to be looking for their next job.

Laurel: As you’ve brought your work together, tell me more about the research that led to your book, Elevating the Human Experience: Three Paths to Love and Worth at Work.

Amelia: I guess I should start by saying, Laurel, I wrote this book because I needed it. I needed a book that was equal parts head and heart, equal parts 20-plus-year management consultant and mother of three. I was really curious about what it meant to show up as fully human in the workplace with my authentic self. So, I led a study of 6,000 people in the US on the topic of love and worth. We asked questions like, to what extent do you feel worthy? To what extent does it matter to you to feel worthy? To what extent do you feel like you love yourself? To what extent do you feel like you speak to yourself with kindness? To what extent do you feel like you are spoken over in the workplace? We asked these types of questions to understand people’s experience of love and worth.

Obviously, we geeked out across sectors and age and different demographic indicators. The thing we found most startling was the fact that nine out of 10 people said it matters to them to feel worthy, but about half say they struggle sometimes, often, or always to feel worthy, particularly when they show up at work. That gap between how much it matters to us to feel worthy and how much we struggle to do so is what I call the worthiness gap. I wrote about that in the book.

Laurel: Why is it, in general, that important to find worth at work?

Amelia: My research for the book showed that we [in the US] now spend more time working than any other culture and any other time in history. Some of the data from the independent labor organizations verifies that the workday is longer. What is the expression? We no longer work from home; we live at work.

Laurel: Yes.

Amelia: But the days are even longer, so the amount of social capital we’re getting from our colleagues matters even more.

Laurel: How do you differentiate between love and worth?

Amelia: The way I think about defining love is important because I think we have immediate thought bubbles that are going to pop up when we hear the word love, particularly in the context of work. My definition of love is adapted from Erich Fromm’s book The Art of Loving, from the 1950s. It’s the will to extend ourselves to care for ourselves, or one another, to foster growth. It’s a growth mindset—to say that I care enough about you to invest in your growth, or I care enough about myself to invest in my growth. That definition of love is related to the Greek eudaimonia, which is much more akin to ‘flourishing.’

Laurel: Which is interesting, because if you had tried to take a shortcut and instead said growth and worth in the workplace, I think people would have thought you were talking about shares and how to get the most out of a startup experience.

Amelia: I realize I could have not used the word ‘love.’ Sometimes people said, well, why don’t you just use the word care? Or is there another word that might be less provocative? There was part of me that wanted it to be deliberately provocative, to say that there is, in fact, a role for love in the workplace. The way it connects to worth is that worth can be either intrinsic or extrinsic. There are extrinsic measures of worth, which include titles, promotions, how much someone is paid, or who has the awesome corner office. Intrinsic worth is more about how you feel before you give a presentation, or before you get a job promotion. Do you feel like you are ‘enough’ in a workplace that’s constantly evaluating you?

Laurel: I like that tension because I find that the word “love” did challenge me. What does “love” mean, especially in our highly charged, litigious society? Then I came to that same realization, that not only do you have to love yourself and love your coworkers in that broad sense, but you need to love the work you do, which I know is not simple for everyone.

Amelia: Sometimes I get asked for examples to illustrate what it means to love yourself, or love your colleagues in the workplace. Is there a time you have stayed late or spent that extra two hours to teach a more junior colleague how to do something they didn’t know how to do, or you gave your time to listen to somebody who was facing a challenge in the office? In those examples, you didn’t have to give your time to either of those people, but you did because in some way you cared enough about them and their growth to give of your own time and energy.

Laurel: Could you talk a bit more about those three paths to love and worth in the workplace?

Amelia: As I was wrestling with the question of how we go on this journey to understanding love and worth in the workplace, I realized that first and foremost, it’s a journey of the self. That, for me, is a very personal one: to understand what it means to love myself and see myself as worthy before I say or do anything. The second path is what I then do to recognize that worth and love another as a colleague, as a mentor, a sponsor, or even a benefactor, and to serve as an ally to help them in their career. The third path is what you and I can do to help change the systems that we all participate in, to recognize people’s fundamental worth.

Laurel: That’s interesting to think about as a manager, as you participate in your team’s growth. As a leader of a division, you encounter many people. It’s interesting to think about keeping each of their value in mind when you speak to them, bringing your whole self to these conversations, and also expecting that kind of response from them. When you do have those moments you can spend with someone to talk about their future, their worth, and their growth with the company, it’s important for you both to have a back-and-forth to help define what that path is.

Amelia: I love the way you characterize that, Laurel. I’ve been thinking a lot about what happens when we as leaders are willing to make ourselves vulnerable, to show up authentically, drop the professional masks that we all wear, to be transparent, demonstrate that we care—exhibit all these signals that foster trust. I’ve noticed there’s a reciprocal equation: when we humanize ourselves as leaders, our employees are much more likely to humanize themselves. That’s what creates a more positive human experience in the workplace.

Laurel: It certainly has ongoing effects that you can feel in your team and across your department. It’s not just one drop in the pond; it’s definitely a ripple.

Amelia: I think about the fact that you know when you feel loved—you don’t have to explain it or describe it; it very much is a feeling when you feel supported at the workplace, when you feel loved and cared for. It’s just something that you know.

Laurel: We may have covered this already, but when you do hold in mind these principles from your book, how have they made a difference with your team and with a client? I’m assuming everyone’s expecting you to walk the walk.

Amelia: Yes. One of the things I’ve found is, as soon as you declare the aspiration to elevate the human experience, you will get comments like, “this pricing review did not elevate my human experience.” It does put a high bar out there, and I’m okay with that because, again, part of humanizing ourselves is acknowledging we’re not perfect. We have to recognize that not everything is going to elevate your experience, such as a pricing approval call or a call to review the quarterly results. That being said, it does allow for an intentionality where we ask ourselves, what can we do to elevate the experience of this particular call, of this town hall, of this particular meeting? I would definitely encourage folks to understand that there is no one way to elevate the human experience authentically; it’s important to experiment. This is where the innovator in me comes out, in trying different things.

One of my favorite examples occurred in the midst of the pandemic, around January of last year, when we’d all been at it for about 10 months of quarantine. I live in Boston, where the days are particularly gray and snowy. I found Wednesdays to be the hardest to summon myself to go through yet another day of Zoom. So, I started something called “joy days.” Wednesdays are now joy days on our team. Every Wednesday I send a note out to my entire practice with the things that brought me joy that week. Then I encourage the team to write in with what brings them joy.

It has been such an awesome way both to connect with our team as humans and to remind ourselves that we can cultivate joy. Even if the notes were just about buying my kids a packet of M&M’s and giving it to them while they were in their own respective Zoom schooling, it was these small ways of connecting, and these small ways of reminding ourselves that we can bring joy, that made a big difference for our employees. On the client side—as you can probably tell, I believe clients are humans, too—any way in which we can treat our customers or clients as humans matters. I’ve definitely had the experience where, in competitive bids or competitive situations, clients have told us that we show up with equal parts EQ and IQ, and that’s what made the difference for them.

Laurel: That’s a huge compliment and a practice that has to be carried out throughout the entire team, and that really does make a difference.

Amelia: I like to say that this is the kind of world I want to live in, the type of organization I want to be a part of and to lead, so why not try to be a positive influence for what better might look like?

Laurel: Why, other than the pandemic, are these topics so important right now?

Amelia: I believe these topics are important right now because we’re seeing what I would describe as related topics: social unrest, and the focus on Me Too and on diversity, equity, and inclusion. We’re seeing the conversations around wellbeing and the topic of burnout. We’re seeing the focus on purpose and social justice almost as though they’re unrelated topics, but from my perspective, they all add up to the fact that we are demanding that organizations see us as fully human, whether we are an employee or a customer. The pandemic has just accelerated our desire for greater humanity from the organizations that we give our time and our money to.

Laurel: I’m behind that a hundred percent. Today’s conversation has been a highlight of joy in my week, so thank you very much, Amelia.

Amelia: Awesome. I will add it to my joy list for the week.

Laurel: Thank you, Amelia, for such a fantastic, joyful conversation with me today. That was Amelia Dunlop, chief experience officer at Deloitte Digital, who I spoke with from Cambridge, Massachusetts, the home of MIT, and MIT Technology Review, overlooking the Charles River.

Laurel: That’s it for this episode of Business Lab—I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at Massachusetts Institute of Technology. You can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. The show is available wherever you get your podcasts. If you enjoyed this episode, I hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.


Why I became a TechTrekker

[Photo: a group jumps into the air with snowy mountains in the background]


My senior spring in high school, I decided to defer my MIT enrollment by a year. I had always planned to take a gap year, but after receiving the silver tube in the mail and seeing all my college-bound friends plan out their classes and dorm decor, I got cold feet. Every time I mentioned my plans, I was met with questions like “But what about school?” and “MIT is cool with this?”

Yeah. MIT totally is. Postponing your MIT start date is as simple as clicking a checkbox. 

Sofia Pronina (right) was among those who hiked to the Katla Glacier during this year’s TechTrek to Iceland. (Courtesy photo)

Now, having finished my first year of classes, I’m really grateful that I stuck with my decision to delay MIT, as I realized that having a full year of unstructured time is a gift. I could let my creative juices run. Pick up hobbies for fun. Do cool things like work at an AI startup and teach myself how to create latte art. My favorite part of the year, however, was backpacking across Europe. I traveled through Austria, Slovakia, Russia, Spain, France, the UK, Greece, Italy, Germany, Poland, Romania, and Hungary. 

Moreover, despite my fear that I’d be losing a valuable year, traveling turned out to be the most productive thing I could have done with my time. I got to explore different cultures, meet new people from all over the world, and gain unique perspectives that I couldn’t have gotten otherwise. My travels throughout Europe allowed me to leave my comfort zone and expand my understanding of the greater human experience. 

“In Iceland there’s less focus on hustle culture, and this relaxed approach to work-life balance ends up fostering creativity. This was a wild revelation to a bunch of MIT students.”

When I became a full-time student last fall, I realized that StartLabs, the premier undergraduate entrepreneurship club on campus, gives MIT undergrads a similar opportunity to expand their horizons and experience new things. I immediately signed up. At StartLabs, we host fireside chats and ideathons throughout the year. But our flagship event is our annual TechTrek over spring break. In previous years, StartLabs has gone on TechTrek trips to Germany, Switzerland, and Israel. On these fully funded trips, StartLabs members have visited and collaborated with industry leaders, incubators, startups, and academic institutions. They take these treks both to connect with the global startup sphere and to build closer relationships within the club itself.

Most important, however, the process of organizing the TechTrek is itself an expedited introduction to entrepreneurship. The trip is entirely planned by StartLabs members; we figure out travel logistics, find sponsors, and then discover ways to optimize our funding. 

[Photo: two students soaking in a hot spring in Iceland. (Courtesy photo)]

In organizing this year’s trip to Iceland, we had to learn how to delegate roles to all the planners and how to maintain morale when making this trip a reality seemed to be an impossible task. We woke up extra early to take 6 a.m. calls with Icelandic founders and sponsors. We came up with options for different levels of sponsorship, used pattern recognition to deduce the email addresses of hundreds of potential contacts at organizations we wanted to visit, and all got scrappy with utilizing our LinkedIn connections.

And as any good entrepreneur must, we had to learn how to be lean and maximize our resources. To stretch our food budget, we planned all our incubator and company visits around lunchtime in hopes of getting fed, played human Tetris as we fit 16 people into a six-person Airbnb, and emailed grocery stores to get their nearly expired foods for a discount. We even made a deal with the local bus company to give us free tickets in exchange for a story post on our Instagram account. 



The Download: spying keyboard software, and why boring AI is best



This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How ubiquitous keyboard software puts hundreds of millions of Chinese users at risk

For millions of Chinese people, the first software they download onto devices is always the same: a keyboard app. Yet few of them are aware that it may make everything they type vulnerable to spying eyes. 

QWERTY keyboards are inefficient for typing Chinese, as many Chinese characters share the same latinized spelling. As a result, many people switch to smart, localized keyboard apps to save time and frustration. Today, over 800 million Chinese people use third-party keyboard apps on their PCs, laptops, and mobile phones.

But a recent report by the Citizen Lab, a University of Toronto–affiliated research group, revealed that Sogou, one of the most popular Chinese keyboard apps, had a massive security loophole. Read the full story. 

—Zeyi Yang

Why we should all be rooting for boring AI

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. It hopes they could improve intelligence and operational planning. 

But those might not be the right use cases, writes our senior AI reporter Melissa Heikkila. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases. 

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. The DoD’s best bet is to apply generative AI to more mundane things like Excel, email, or word processing. Read the full story. 

This story is from The Algorithm, Melissa’s weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The ice cores that will let us look 1.5 million years into the past

To better understand the role atmospheric carbon dioxide plays in Earth’s climate cycles, scientists have long turned to ice cores drilled in Antarctica, where snow layers accumulate and compact over hundreds of thousands of years, trapping samples of ancient air in a lattice of bubbles that serve as tiny time capsules. 

By analyzing those cores, scientists can connect greenhouse-gas concentrations with temperatures going back 800,000 years. Now, a new European-led initiative hopes to eventually retrieve the oldest core yet, dating back 1.5 million years. But that impressive feat is still only the first step. Once they’ve done that, they’ll have to figure out how they’re going to extract the air from the ice. Read the full story.

—Christian Elliott

This story is from the latest edition of our print magazine, set to go live tomorrow. Subscribe today for as low as $8/month to ensure you receive full access to the new Ethics issue and in-depth stories on experimental drugs, AI-assisted warfare, microfinance, and more.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How AI got dragged into the culture wars
Fears about ‘woke’ AI fundamentally misunderstand how it works. Yet they’re gaining traction. (The Guardian)
+ Why it’s impossible to build an unbiased AI language model. (MIT Technology Review)
 
2 Researchers are racing to understand a new coronavirus variant 
It’s unlikely to be cause for concern, but it shows this virus still has plenty of tricks up its sleeve. (Nature)
+ Covid hasn’t entirely gone away—here’s where we stand. (MIT Technology Review)
+ Why we can’t afford to stop monitoring it. (Ars Technica)
 
3 How Hilary became such a monster storm
Much of it is down to unusually hot sea surface temperatures. (Wired $)
+ The era of simultaneous climate disasters is here to stay. (Axios)
+ People are donning cooling vests so they can work through the heat. (Wired $)
 
4 Brain privacy is set to become important 
Scientists are getting better at decoding our brain data. It’s surely only a matter of time before others want a peek. (The Atlantic $)
+ How your brain data could be used against you. (MIT Technology Review)
 
5 How Nvidia built such a big competitive advantage in AI chips
Today it accounts for 70% of all AI chip sales—and an even greater share for training generative models. (NYT $)
+ The chips it’s selling to China are less effective due to US export controls. (Ars Technica)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)
 
6 Inside the complex world of dissociative identity disorder on TikTok 
Reducing stigma is great, but doctors fear people are self-diagnosing or even imitating the disorder. (The Verge)
 
7 What TikTok might have to give up to keep operating in the US
This shows just how hollow the authorities’ purported data-collection concerns really are. (Forbes)
 
8 Soldiers in Ukraine are playing World of Tanks on their phones
It’s eerily similar to the war they are themselves fighting, but they say it helps them to dissociate from the horror. (NYT $)
 
9 Conspiracy theorists are sharing mad ideas on what causes wildfires
But it’s all just a convoluted way to try to avoid having to tackle climate change. (Slate $)
 
10 Christie’s accidentally leaked the location of tons of valuable art 🖼📍
Seemingly thanks to the metadata that often automatically attaches to smartphone photos. (WP $)

Quote of the day

“Is it going to take people dying for something to move forward?”

—An anonymous air traffic controller warns that staffing shortages in their industry, plus other factors, are starting to threaten passenger safety, the New York Times reports.

The big story

Inside effective altruism, where the far future counts a lot more than the present

October 2022

Since its birth in the late 2000s, effective altruism has aimed to answer the question “How can those with means have the most impact on the world in a quantifiable way?”—and supplied methods for calculating the answer.

It’s no surprise that effective altruism’s ideas have long faced criticism for reflecting white Western saviorism, alongside an avoidance of structural problems in favor of abstract math. And as believers pour even greater amounts of money into the movement’s increasingly sci-fi ideals, such charges are only intensifying. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Watch Andrew Scott’s electrifying reading of the 1965 commencement address ‘Choose One of Five’ by Edith Sampson.
+ Here’s how Metallica makes sure its live performances ROCK. ($)
+ Cannot deal with this utterly ludicrous wooden vehicle.
+ Learn about a weird and wonderful new instrument called a harpejji.





Why we should all be rooting for boring AI



This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’m back from a wholesome week off picking blueberries in a forest. So this story we published last week about the messy ethics of AI in warfare is just the antidote, bringing my blood pressure right back up again. 

Arthur Holland Michel does a great job looking at the complicated and nuanced ethical questions around warfare and the military’s increasing use of artificial-intelligence tools. There are myriad ways AI could fail catastrophically or be abused in conflict situations, and there don’t seem to be any real rules constraining it yet. Holland Michel’s story illustrates how little there is to hold people accountable when things go wrong.  

Last year I wrote about how the war in Ukraine kick-started a new boom in business for defense AI startups. The latest hype cycle has only added to that, as companies—and now the military too—race to embed generative AI in products and services. 

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. 

The department sees tons of potential to “improve intelligence, operational planning, and administrative and business processes.” 

But Holland Michel’s story highlights why the first two use cases might be a bad idea. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases.  

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. Everyone agrees that humans should make the final call, but that is made harder by technology that acts unpredictably, especially in fast-moving conflict situations. 

Some worry that the people lowest on the hierarchy will pay the highest price when things go wrong: “In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the ‘decision’ will absorb the blame and protect everyone else along the chain of command from the full impact of accountability,” Holland Michel writes. 

The only ones who seem likely to face no consequences when AI fails in war are the companies supplying the technology.

It helps companies when the rules the US has set to govern AI in warfare are mere recommendations, not laws. That makes it really hard to hold anyone accountable. Even the AI Act, the EU’s sweeping upcoming regulation for high-risk AI systems, exempts military uses, which arguably are the highest-risk applications of them all. 

While everyone is looking for exciting new uses for generative AI, I personally can’t wait for it to become boring. 

Amid early signs that people are starting to lose interest in the technology, companies might find that these sorts of tools are better suited for mundane, low-risk applications than solving humanity’s biggest problems.

Applying AI in, for example, productivity software such as Excel, email, or word processing might not be the sexiest idea, but compared to warfare it’s a relatively low-stakes application, and simple enough to have the potential to actually work as advertised. It could help us do the tedious bits of our jobs faster and better.

Boring AI is unlikely to break as easily and, most important, won’t kill anyone. Hopefully, soon we’ll forget we’re interacting with AI at all. (It wasn’t that long ago when machine translation was an exciting new thing in AI. Now most people don’t even think about its role in powering Google Translate.) 

That’s why I’m more confident that organizations like the DoD will find success applying generative AI in administrative and business processes. 

Boring AI is not morally complex. It’s not magic. But it works. 

Deeper Learning

AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

Amid all the chatter about ChatGPT, artificial general intelligence, and the prospect of robots taking people’s jobs, regulators in the EU and the US have been ramping up warnings against AI and emotion recognition. Emotion recognition is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings. 

But why is this a top concern? Western regulators are particularly concerned about China’s use of the technology, and its potential to enable social control. And there’s also evidence that it simply does not work properly. Tate Ryan-Mosley dissected the thorny questions around the technology in last week’s edition of The Technocrat, our weekly newsletter on tech policy.

Bits and Bytes

Meta is preparing to launch free code-generating software
A version of its new LLaMA 2 language model that is able to generate programming code will pose a stiff challenge to similar proprietary code-generating programs from rivals such as OpenAI, Microsoft, and Google. The open-source program is called Code Llama, and its launch is imminent, according to The Information. (The Information)

OpenAI is testing GPT-4 for content moderation
Using the language model to moderate online content could really help alleviate the mental toll content moderation takes on humans. OpenAI says it’s seen some promising first results, although the tech does not outperform highly trained humans. A lot of big, open questions remain, such as whether the tool can be attuned to different cultures and pick up context and nuance. (OpenAI)

Google is working on an AI assistant that offers life advice
The generative AI tool could function as a life coach, offering up ideas, planning instructions, and tutoring tips. (The New York Times)

Two tech luminaries have quit their jobs to build AI systems inspired by bees
Sakana, a new AI research lab, draws inspiration from the animal kingdom. Founded by two prominent industry researchers and former Googlers, the company plans to make multiple smaller AI models that work together, the idea being that a “swarm” of programs could be as powerful as a single large AI model. (Bloomberg)

