Embracing culture change on the path to digital transformation
Meanwhile, young financial services companies were coming to market with innovative products and services and NAB was finding it difficult to compete. “Many customers today are expecting an Amazon experience, a Google experience, a Meta experience, but we were still operating in the 1990s,” says Day. “We stood back, and we looked at it, and we decided that our entire culture needed to change.”

What ensued was nothing less than an internal transformation. “Our original teams didn’t have a lot of tech skills, so to tell them that they were going to have to take on all of this technical accountability, an operational task that had previously been handed to our outsourcers, was daunting,” says Day.

Day and his team rolled out a number of initiatives to instill confidence across the organization and train people in the necessary technical skills. “We built confidence through education, through a lot of cultural work, a lot of explaining the strategy, a lot of explaining to people what good looked like in 2020, and how we were going to get to that place,” says Day.

This episode of Business Lab is produced in association with Infosys Cobalt.

Full transcript:

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. And this is Business Lab. The show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is digital transformation. Most organizations have begun the journey to digitize their services and operations, and some are further along than others in bringing disruption to the marketplace. How do you bring transformation to organizations that are in highly regulated, service-based industries where competitive differentiation requires innovation?

Two words for you: internal transformation.

My guest is Steve Day, the chief technology officer of enterprise technology at National Australia Bank.

This podcast is produced in partnership with Infosys Cobalt.

Welcome, Steve.

Steve: Thank you, Laurel. It’s a pleasure to be here.

Laurel: National Australia Bank, or NAB, is undergoing a significant digital transformation. Gartner recently found that IT executives see the talent shortage as the largest barrier to deploying emerging technologies, specifically cloud-based technologies, but NAB uses insourcing. Most listeners are familiar with outsourcing; what exactly is insourcing, and how does it relate to outsourcing?

Steve: Yeah. I think it’s all in the name. Insourcing would be the exact opposite of outsourcing. And to give you a little bit of history, National Australia Bank, like many banks, decided to outsource a large part of its operations in the 1990s. We basically pushed all our operations and a large part of our development capability out to third parties with the intent of lowering costs and making our operations far more process driven. I think those two objectives were achieved, but we did have an unintended consequence: we basically froze our operations in time. If you roll forward to 2018, we realized that we were still operating like it was the 1990s. We were very waterfall driven. Our systems were highly process driven, but in a very manual way, and it took us a very long time to roll out new products and services that our customers really needed.

It was about at that time that we realized we needed to do something different. We spoke with our outsourcers, of course, but to be honest, they weren’t motivated to reduce our internal costs and to help us become far more agile. They were very happy for us to be paying them large amounts of money to do large amounts of work. So at that point, we decided to bring our capability back into the business.

Laurel: So waterfall being the opposite of agile, right? You were finding that was hindering your progress as a company, correct?

Steve: It really was hindering our progress. We were very slow. It took us years to roll out new products and services. We had some young financial services companies knocking on the doors, startups and the like, that were agile and able to compete really quickly, and we needed to change. We needed to look at a different way to roll out our products so that we could give customers what they’re expecting. Many customers today are expecting an Amazon experience, a Google experience, a Meta experience, but we were still operating in the 1990s. That’s when we really made the call. We stood back and we looked at it, and we decided that our entire culture needed to change.

We did that by building a series of tech guilds. We built a cloud guild, a data guild, and an insourcing framework. We built our NAB Engineering Foundation with the goal of building a culture of innovation, of cloud, of agile, and of being able to deliver great products and services to our customers in a cost-effective but very safe way. And as part of that, we started on our cloud migrations, and that is really moving at pace now.

Laurel: Insourcing seems to be working so far, but it didn’t happen overnight, as you said. And even though 2018 wasn’t that long ago, what was the journey like to first realize that you had to change the way you were working and then convince everyone to work in a very different way?

Steve: We did realize that if we didn’t get the culture embedded that we would not be successful. So building that capability and building the culture was number one on the list. It was five years ago. It feels like a very long time ago to me. But we started that process and through the cloud guild we trained 7,000 people in cloud and 2,700 of those today are industry certified and working in our teams. So we’ve made really good progress. We’ve actually moved a lot of the original teams that were a bit hesitant, a bit concerned about having to move to this whole new way of working. And remember that our original teams didn’t have a lot of tech skills, so to tell them that they were going to have to take on all of this technical accountability, an operational task that had previously been handed to our outsourcers, was daunting. And the only way we were going to overcome that was to build confidence. And we built confidence through education, through a lot of cultural work, a lot of explaining the strategy, a lot of explaining to people what good looked like in 2020, and how we were going to get to that place.

Laurel: NAB’s proportion of apps on public cloud will move from one third to about 80% by 2025, but security and regulatory compliance have been primary concerns for organizations in regulated industries like healthcare and financial services. How has NAB addressed these concerns in the cloud?

Steve: Initially, there was a lot of concern. People were not sure about whether cloud was resilient, whether it was secure, whether it could meet the compliance requirements of our regulators, or whether the board and our senior leadership team would be happy to take such a large change to the way we did business. We actually flew the board over to meet with many of the companies in the Valley to give them an idea of what was going on. We did a huge education program for our own teams. We created a new thing called The Executive Guild, so that middle management would have a good feel for what we were doing and why we were doing it. And as part of that, we created a set of tools that would help us move safely.

One of those was CAST, a framework that we use to migrate applications to cloud. CAST stands for Cloud Adoption Standards and Techniques. And it really covers all the controls we use and how we apply those controls in our environment to make sure that when we migrate applications to cloud, they are the absolute safest they can be. It’s safe to say that when we built CAST, we actually did an uplift in our requirements. That enabled a lot of people to see that we were taking it very seriously, and that it was actually quite a high bar to achieve this compliance. But we were willing to invest, and we invested a lot in getting the applications to that level.

Another thing we did was build compliance as code. Now, infrastructure as code, what cloud is built on, allows you to then create compliance as code. So all of the checks and balances that used to be done manually by people with check boards, I used to say, are now being done in the code itself. And because a server is no longer a piece of tin in the corner, it’s an actual piece of code itself, a piece of software, you can run a lot of compliance checks on that, also from software.
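As a purely illustrative sketch (this is not NAB’s actual tooling, and the resource fields and rule names below are assumptions), compliance as code means expressing those formerly manual checks as rules evaluated directly against infrastructure definitions, which are themselves just data:

```python
# Hypothetical compliance-as-code check: because infrastructure is declared
# as code/data rather than physical "tin in the corner", policy rules can be
# evaluated against it automatically instead of via manual checklists.

RULES = [
    ("encryption_at_rest", lambda r: r.get("encrypted", False)),
    ("no_public_access", lambda r: not r.get("public", False)),
]

def check_compliance(resources):
    """Return a list of (resource_name, failed_rule) violations."""
    violations = []
    for resource in resources:
        for rule_name, rule in RULES:
            if not rule(resource):
                violations.append((resource["name"], rule_name))
    return violations

# Illustrative infrastructure definitions.
infra = [
    {"name": "customer-db", "encrypted": True, "public": False},
    {"name": "logs-bucket", "encrypted": False, "public": True},
]

print(check_compliance(infra))
# [('logs-bucket', 'encryption_at_rest'), ('logs-bucket', 'no_public_access')]
```

In production settings this pattern is usually implemented with dedicated policy engines rather than ad hoc scripts, but the principle is the same: the checks run as software, continuously, over the same definitions that build the infrastructure.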

A third thing that we did to give everyone a sense of comfort is we didn’t pin the success of NAB to the success of any one cloud company. We came up with a public, multi-cloud strategy, and that meant that at least for all our significant applications, we would run them on two different cloud providers. Now that would be expensive if you did every cloud in the most robust way, which would be active-active across both clouds. So we created our multi-cloud framework, which was about categorizing each application across multiple dimensions, and then assigning that workload to one of six multi-cloud treatments. Multi-cloud treatment one being basically no multi-cloud; it’s an app for convenience, and it doesn’t really matter if that application goes away. We allow that to sit in one cloud, all the way through to our most critical applications, which we insist on running active-active across both clouds. And in our case, that would be MCT6. So given all of those frameworks, the tools, and the focus that we put on that, I think we gave the organization and the leadership at the organization some confidence that what we were doing was the right move, and that it would give us the ability to serve customers well while also remaining safe.
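The treatment assignment Day describes could be sketched as follows. This is an illustration only: the six MCT levels and the MCT1/MCT6 endpoints come from the interview, but the single criticality score and the intermediate descriptions are assumptions for the example.

```python
# Hypothetical sketch of a multi-cloud treatment (MCT) framework: each
# application is scored, then mapped to one of six treatments, from
# single-cloud (MCT1) up to active-active across two providers (MCT6).

def assign_treatment(criticality: int) -> str:
    """Map an application criticality score (1-6) to a multi-cloud treatment."""
    if not 1 <= criticality <= 6:
        raise ValueError("criticality must be between 1 and 6")
    if criticality == 1:
        return "MCT1: single cloud; app of convenience, loss is acceptable"
    if criticality == 6:
        return "MCT6: active-active across two cloud providers"
    # Intermediate treatments (illustrative, e.g. backups or warm standby
    # in a second cloud) trade resilience against cost.
    return f"MCT{criticality}: intermediate treatment (e.g. warm standby)"

print(assign_treatment(6))  # MCT6: active-active across two cloud providers
```

The design point is cost control: only the most critical workloads pay for full active-active duplication, while everything else gets a cheaper treatment matched to its importance.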

Laurel: How has cloud enabled innovation across NAB? I can see it in the teams and you’ve even upskilled executives to be comfortable with technology and what agile means and how you’re going to change the way that things are done. But what else are you seeing that’s just brought some kind of a particular efficiency that is a particularly proud moment for you?

Steve: I think I would go back to that description I just gave you about infrastructure as code being an incredible enabler of innovation. I mentioned compliance as code, but there’s also all kinds of operational innovation that you can perform when your infrastructure is software rather than hardware. Just being able to replicate things very quickly. The fact that you can have as many development environments as you need to develop your applications quickly and efficiently, because when you’re finished with them, you just turn them off and stop paying for them. The fact that we can move to serverless-type applications now that don’t actually require any infrastructure sitting below them, and enable our application teams to get on and develop their applications without having to interact with anyone. Things like grid computing, which creates massive computing power for a short burst of time. You pay a lot, but only for a very short amount of time, so you end up paying not very much at all, yet you achieve massive things, like predicting what the market’s going to do at times of concern. Infrastructure-aware apps, some of the amazing things we are doing in cyber at the moment to understand cyberattacks and thwart them in a much more elegant way than we have in the past, financial operations that enable us to take control of the elasticity of that cloud environment. All of those things add up to a platform of innovation that people can build on, one that really fosters creative innovation.

Laurel: And how does that turn into benefits for customers? Because user experience is always an important consideration when building out tech services and as you mentioned, customers certainly expect Google- or Meta-like experiences. They want online, fast, convenient, anywhere they are, on any device, so how is something like artificial intelligence at an ATM serving both the need for improved security and improved user experience?

Steve: Great question. I think for improved security, fraud is a great one. There are so many scams going on right now, and AI has really enabled us to be able to detect fraud and to work with our customers, to prevent it in many cases. We’re seeing patterns of fraud or the ways that fraudsters actually approach their victims, and we’re able to pick that up and intervene in many cases. Operational predictions on things that are going to fail or break. And then things that are just better for customers like faster home loans. A large number of our home loans are approved in under an hour now because the AI allows us to take calculated risks, basically to do risk management in a really fast and efficient way. And then there are small things. There’s some great stuff like if I get a check, I just take a picture of it from my banking app on the iPhone and it’s instantly processed. Those sorts of things are really leading to better customer experiences.

Laurel: That’s my favorite as well, but a home loan under an hour, that’s pretty amazing.

Steve: And that’s because we have a history of what that customer’s done with us. We no longer have to have that customer fill in large surveys of what their monthly spending is and what their salary is and all of that. We have all that data. We know all that about the customer, and to have to ask them again is just silly, to be frank. We can take all that information and process it directly out of their account. All we need is the customer’s permission. The open banking legislation and things that have come through at the moment that allow us to gain access to information with the customer’s permission through their other financial services, that also enables us to have a good understanding of that customer’s ability to meet their repayments.

We also do a lot of AI on things like valuations. The amount of AI going into valuing property now is absolutely incredible. In the past, you’ve had to send somebody out to a house to do the valuation so that they can appreciate things like road noise, right? How much road noise does that property have? What are the aspects of that house? And through being able to look at, say, Google Maps and see how many cars per hour are flowing past that house, and what the topology of the landscape is around that house, we can actually do calculations and tell exactly what the road noise is at that property. And we’re able to use layers and layers and layers of information such as that, and that goes along with: is the house on a flood plain? Is the house overflown by aircraft? What material is the house made of? We can pick all of that from satellite imagery. Does it have a swimming pool? Does it have solar panels? We can gather a lot of that and actually do the valuation on the property as well, much faster than we have in the past. And that enables us to then provide these really fast turnarounds on things like home loans.

Laurel: That’s amazing. And of course, all of that helps keep innovation up at the bank, but then also improve your own efficiencies and money. Making money is part of being a business. And then you put the money back into making better experiences for your customers. So it’s sort of a win-win for everyone.

Steve: Yeah, I think so. I haven’t loaned money for a house since all of that has been put into place, but I’m really looking forward to the next time I do and having such a good experience.

Laurel: Collaborating with your customers is very important and collaborating with your competitors could be as well. So NAB teamed up with cloud providers and other global banks on an open digital finance challenge to prototype new banking services on a global scale. Why did NAB decide to do this? And what are some of the global financial challenges this initiative was looking to solve?

Steve: I think creating great partnerships to encourage innovation is a path forward. Like everything, we don’t have a monopoly on great ideas. And I think if we limited ourselves to the ideas we came up with, we wouldn’t be serving our customers’ best interests. Searching globally for great ideas and then going through a process of looking to see whether they can actually be productionized is a great way of bringing innovation into the bank.

My favorite at the moment is Project Carbon, which is seven banks around the world all getting together to create a secure clearinghouse for voluntary carbon credits, which if you think about that and where the world’s going and how important that will be going forward, it’s just absolutely wonderful that we’ve got this situation being built today. But yeah, there’ll be things that create more secure payments, faster payments, more convenient payments, more resilient ledgers, and I mentioned faster home loans, etc. It’s just an exciting time to be in the industry.

Laurel: And to be so open and willing to work with other folks as well. What else are you excited about? There’s so much innovation happening at NAB and across the financial services industry, what are you seeing in the next three to five years?

Steve: I’m seeing a faster pace of change. One of the things I’m aware of at the moment, things are changing so fast, that it’s really hard to predict what is going to come up in the near future. But one thing we know for sure is we will need a platform that enables us to pivot quickly to whatever that is. So I’m actually most excited about the opportunity to build a platform that is incredibly agile and allows us to pivot and to move and to exploit some of these great ideas that are coming in from global partners, or internally or wherever they’re coming from. Our new graduates come up with quite a few themselves. How do we get those ideas to production really quickly in a safe way? And I think that is what really excites me is the opportunity to build such a platform.

Laurel: Steve, thank you so much for joining us on the Business Lab. This has been a fantastic conversation.

Steve: Thank you, Laurel.

Laurel: That was Steve Day, the chief technology officer of enterprise technology at National Australia Bank, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review overlooking the Charles River. That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Why I became a TechTrekker
My senior spring in high school, I decided to defer my MIT enrollment by a year. I had always planned to take a gap year, but after receiving the silver tube in the mail and seeing all my college-bound friends plan out their classes and dorm decor, I got cold feet. Every time I mentioned my plans, I was met with questions like “But what about school?” and “MIT is cool with this?”

Yeah. MIT totally is. Postponing your MIT start date is as simple as clicking a checkbox. 

Sofia Pronina (right) was among those who hiked to the Katla Glacier during this year’s TechTrek to Iceland.

COURTESY PHOTO

Now, having finished my first year of classes, I’m really grateful that I stuck with my decision to delay MIT, as I realized that having a full year of unstructured time is a gift. I could let my creative juices run. Pick up hobbies for fun. Do cool things like work at an AI startup and teach myself how to create latte art. My favorite part of the year, however, was backpacking across Europe. I traveled through Austria, Slovakia, Russia, Spain, France, the UK, Greece, Italy, Germany, Poland, Romania, and Hungary. 

Moreover, despite my fear that I’d be losing a valuable year, traveling turned out to be the most productive thing I could have done with my time. I got to explore different cultures, meet new people from all over the world, and gain unique perspectives that I couldn’t have gotten otherwise. My travels throughout Europe allowed me to leave my comfort zone and expand my understanding of the greater human experience. 

“In Iceland there’s less focus on hustle culture, and this relaxed approach to work-life balance ends up fostering creativity. This was a wild revelation to a bunch of MIT students.”

When I became a full-time student last fall, I realized that StartLabs, the premier undergraduate entrepreneurship club on campus, gives MIT undergrads a similar opportunity to expand their horizons and experience new things. I immediately signed up. At StartLabs, we host fireside chats and ideathons throughout the year. But our flagship event is our annual TechTrek over spring break. In previous years, StartLabs has gone on TechTrek trips to Germany, Switzerland, and Israel. On these fully funded trips, StartLabs members have visited and collaborated with industry leaders, incubators, startups, and academic institutions. They take these treks both to connect with the global startup sphere and to build closer relationships within the club itself.

Most important, however, the process of organizing the TechTrek is itself an expedited introduction to entrepreneurship. The trip is entirely planned by StartLabs members; we figure out travel logistics, find sponsors, and then discover ways to optimize our funding. 

two students soaking in a hot spring in Iceland

COURTESY PHOTO

In organizing this year’s trip to Iceland, we had to learn how to delegate roles to all the planners and how to maintain morale when making this trip a reality seemed to be an impossible task. We woke up extra early to take 6 a.m. calls with Icelandic founders and sponsors. We came up with options for different levels of sponsorship, used pattern recognition to deduce the email addresses of hundreds of potential contacts at organizations we wanted to visit, and all got scrappy with utilizing our LinkedIn connections.

And as any good entrepreneur must, we had to learn how to be lean and maximize our resources. To stretch our food budget, we planned all our incubator and company visits around lunchtime in hopes of getting fed, played human Tetris as we fit 16 people into a six-person Airbnb, and emailed grocery stores to get their nearly expired foods for a discount. We even made a deal with the local bus company to give us free tickets in exchange for a story post on our Instagram account. 


The Download: spying keyboard software, and why boring AI is best
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How ubiquitous keyboard software puts hundreds of millions of Chinese users at risk

For millions of Chinese people, the first software they download onto devices is always the same: a keyboard app. Yet few of them are aware that it may make everything they type vulnerable to spying eyes. 

QWERTY keyboards are inefficient for typing Chinese, as many Chinese characters share the same latinized spelling. As a result, many people switch to smart, localized keyboard apps to save time and frustration. Today, over 800 million Chinese people use third-party keyboard apps on their PCs, laptops, and mobile phones. 

But a recent report by the Citizen Lab, a University of Toronto–affiliated research group, revealed that Sogou, one of the most popular Chinese keyboard apps, had a massive security loophole. Read the full story. 

—Zeyi Yang

Why we should all be rooting for boring AI

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. It hopes they could improve intelligence and operational planning. 

But those might not be the right use cases, writes our senior AI reporter Melissa Heikkila. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases. 

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. The DoD’s best bet is to apply generative AI to more mundane things like Excel, email, or word processing. Read the full story. 

This story is from The Algorithm, Melissa’s weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The ice cores that will let us look 1.5 million years into the past

To better understand the role atmospheric carbon dioxide plays in Earth’s climate cycles, scientists have long turned to ice cores drilled in Antarctica, where snow layers accumulate and compact over hundreds of thousands of years, trapping samples of ancient air in a lattice of bubbles that serve as tiny time capsules. 

By analyzing those cores, scientists can connect greenhouse-gas concentrations with temperatures going back 800,000 years. Now, a new European-led initiative hopes to eventually retrieve the oldest core yet, dating back 1.5 million years. But that impressive feat is still only the first step. Once they’ve done that, they’ll have to figure out how they’re going to extract the air from the ice. Read the full story.

—Christian Elliott

This story is from the latest edition of our print magazine, set to go live tomorrow. Subscribe today for as low as $8/month to ensure you receive full access to the new Ethics issue and in-depth stories on experimental drugs, AI assisted warfare, microfinance, and more.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How AI got dragged into the culture wars
Fears about ‘woke’ AI fundamentally misunderstand how it works. Yet they’re gaining traction. (The Guardian)
+ Why it’s impossible to build an unbiased AI language model. (MIT Technology Review)
 
2 Researchers are racing to understand a new coronavirus variant 
It’s unlikely to be cause for concern, but it shows this virus still has plenty of tricks up its sleeve. (Nature)
+ Covid hasn’t entirely gone away—here’s where we stand. (MIT Technology Review)
+ Why we can’t afford to stop monitoring it. (Ars Technica)
 
3 How Hilary became such a monster storm
Much of it is down to unusually hot sea surface temperatures. (Wired $)
+ The era of simultaneous climate disasters is here to stay. (Axios)
+ People are donning cooling vests so they can work through the heat. (Wired $)
 
4 Brain privacy is set to become important 
Scientists are getting better at decoding our brain data. It’s surely only a matter of time before others want a peek. (The Atlantic $)
+ How your brain data could be used against you. (MIT Technology Review)
 
5 How Nvidia built such a big competitive advantage in AI chips
Today it accounts for 70% of all AI chip sales—and an even greater share for training generative models. (NYT $)
+ The chips it’s selling to China are less effective due to US export controls. (Ars Technica)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)
 
6 Inside the complex world of dissociative identity disorder on TikTok 
Reducing stigma is great, but doctors fear people are self-diagnosing or even imitating the disorder. (The Verge)
 
7 What TikTok might have to give up to keep operating in the US
This shows just how hollow the authorities’ purported data-collection concerns really are. (Forbes)
 
8 Soldiers in Ukraine are playing World of Tanks on their phones
It’s eerily similar to the war they are themselves fighting, but they say it helps them to dissociate from the horror. (NYT $)
 
9 Conspiracy theorists are sharing mad ideas on what causes wildfires
But it’s all just a convoluted way to try to avoid having to tackle climate change. (Slate $)
 
10 Christie’s accidentally leaked the location of tons of valuable art 🖼📍
Seemingly thanks to the metadata that often automatically attaches to smartphone photos. (WP $)

Quote of the day

“Is it going to take people dying for something to move forward?”

—An anonymous air traffic controller warns that staffing shortages in their industry, plus other factors, are starting to threaten passenger safety, the New York Times reports.

The big story

Inside effective altruism, where the far future counts a lot more than the present


VICTOR KERLOW

October 2022

Since its birth in the late 2000s, effective altruism has aimed to answer the question “How can those with means have the most impact on the world in a quantifiable way?”—and supplied methods for calculating the answer.

It’s no surprise that effective altruism’s ideas have long faced criticism for reflecting white Western saviorism, alongside an avoidance of structural problems in favor of abstract math. And as believers pour even greater amounts of money into the movement’s increasingly sci-fi ideals, such charges are only intensifying. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Watch Andrew Scott’s electrifying reading of the 1965 commencement address ‘Choose One of Five’ by Edith Sampson.
+ Here’s how Metallica makes sure its live performances ROCK. ($)
+ Cannot deal with this utterly ludicrous wooden vehicle.
+ Learn about a weird and wonderful new instrument called a harpejji.




Why we should all be rooting for boring AI


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’m back from a wholesome week off picking blueberries in a forest. So this story we published last week about the messy ethics of AI in warfare is just the antidote, bringing my blood pressure right back up again. 

Arthur Holland Michel does a great job looking at the complicated and nuanced ethical questions around warfare and the military’s increasing use of artificial-intelligence tools. There are myriad ways AI could fail catastrophically or be abused in conflict situations, and there don’t seem to be any real rules constraining it yet. Holland Michel’s story illustrates how little there is to hold people accountable when things go wrong.  

Last year I wrote about how the war in Ukraine kick-started a new boom in business for defense AI startups. The latest hype cycle has only added to that, as companies—and now the military too—race to embed generative AI in products and services. 

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. 

The department sees tons of potential to “improve intelligence, operational planning, and administrative and business processes.” 

But Holland Michel’s story highlights why the first two use cases might be a bad idea. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases.  

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. Everyone agrees that humans should make the final call, but that is made harder by technology that acts unpredictably, especially in fast-moving conflict situations. 

Some worry that the people lowest on the hierarchy will pay the highest price when things go wrong: “In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the ‘decision’ will absorb the blame and protect everyone else along the chain of command from the full impact of accountability,” Holland Michel writes. 

The only ones who seem likely to face no consequences when AI fails in war are the companies supplying the technology.

It helps companies that the rules the US has set to govern AI in warfare are mere recommendations, not laws. That makes it really hard to hold anyone accountable. Even the AI Act, the EU’s sweeping upcoming regulation for high-risk AI systems, exempts military uses, which arguably are the highest-risk applications of them all. 

While everyone is looking for exciting new uses for generative AI, I personally can’t wait for it to become boring. 

Amid early signs that people are starting to lose interest in the technology, companies might find that these sorts of tools are better suited for mundane, low-risk applications than solving humanity’s biggest problems.

Applying AI in, for example, productivity software such as Excel, email, or word processing might not be the sexiest idea, but compared to warfare it’s a relatively low-stakes application, and simple enough to have the potential to actually work as advertised. It could help us do the tedious bits of our jobs faster and better.

Boring AI is unlikely to break as easily and, most important, won’t kill anyone. Hopefully, soon we’ll forget we’re interacting with AI at all. (It wasn’t that long ago when machine translation was an exciting new thing in AI. Now most people don’t even think about its role in powering Google Translate.) 

That’s why I’m more confident that organizations like the DoD will find success applying generative AI in administrative and business processes. 

Boring AI is not morally complex. It’s not magic. But it works. 

Deeper Learning

AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

Amid all the chatter about ChatGPT, artificial general intelligence, and the prospect of robots taking people’s jobs, regulators in the EU and the US have been ramping up warnings against AI and emotion recognition. Emotion recognition is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings. 

But why is this a top concern? Western regulators are particularly concerned about China’s use of the technology, and its potential to enable social control. And there’s also evidence that it simply does not work properly. Tate Ryan-Mosley dissected the thorny questions around the technology in last week’s edition of The Technocrat, our weekly newsletter on tech policy.

Bits and Bytes

Meta is preparing to launch free code-generating software
A version of its new LLaMA 2 language model that is able to generate programming code will pose a stiff challenge to similar proprietary code-generating programs from rivals such as OpenAI, Microsoft, and Google. The open-source program is called Code Llama, and its launch is imminent, according to The Information. (The Information)

OpenAI is testing GPT-4 for content moderation
Using the language model to moderate online content could really help alleviate the mental toll content moderation takes on humans. OpenAI says it’s seen some promising first results, although the tech does not outperform highly trained humans. A lot of big, open questions remain, such as whether the tool can be attuned to different cultures and pick up context and nuance. (OpenAI)

Google is working on an AI assistant that offers life advice
The generative AI tool could function as a life coach, offering up ideas, planning instructions, and tutoring tips. (The New York Times)

Two tech luminaries have quit their jobs to build AI systems inspired by bees
Sakana, a new AI research lab, draws inspiration from the animal kingdom. Founded by two prominent industry researchers and former Googlers, the company plans to make multiple smaller AI models that work together, the idea being that a “swarm” of programs could be as powerful as a single large AI model. (Bloomberg)
