
To accelerate business, build better human-machine partnerships



Businesses that want to be digital leaders in their markets need to embrace automation, not only to augment existing capabilities or to reduce costs but to position themselves to successfully maneuver the rapid expansion of IT demand ushered in through digital innovation. “It’s a scale issue,” says John Roese, global chief technology officer at Dell Technologies. “Without autonomous operations, it becomes impossible to keep up with the growing opportunity to become a more digital business using human effort alone.”

The main hurdle to autonomous operations, says Roese, is more psychological than technological. “You have got to be open-minded to this concept of rebalancing the work between human beings and the machine environments that exist both logically and physically,” he says. “If you’re not embracing and wanting it to happen and you’re resisting it, all the products and solutions we can deliver to you will not help.”

Technology and infrastructure-driven AI and machine-learning discussions are expanding beyond IT into finance and sales—meaning, technology has direct business implications. “Selling is a relationship between you and your customer, but there’s a third party—data and artificial intelligence—that can give you better insights and the ability to be more contextually aware and more responsive to your customer,” says Roese. “Data, AI, and ML technologies can ultimately change the economics and the performance of all parts of the business, whether it be sales or services or engineering or IT.”

And as companies gather, analyze, and use data at the edge, autonomous operations become even more of a business necessity. “Seventy percent of the world’s data is probably going to be created and acted upon outside of data centers in the future, meaning in edges,” says Roese. “Edge and distributed topologies have huge impacts on digital transformation, but we also see that having a strong investment in autonomous systems, autonomous operations at the edge is actually almost as big of a prerequisite … to make it work.”

Show notes and references

What is autonomous operations?

Perspectives on the impact of autonomous operations

Full transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is increasing innovation through operations. If autonomous operations are the next step in technology’s evolutionary arc, then organizations need to re-examine their IT strategies and determine how to level up human-machine partnerships, not only to improve workflows and augment existing capabilities but to increase innovation and transformation. Two words for you: operational opportunity.

My guest is John Roese, who is the global chief technology officer at Dell Technologies. John is responsible for establishing the company’s future-looking technology strategy. He is a published author and holds more than 20 pending and granted patents in areas such as policy-based networking, location-based services and security. This episode of Business Lab is produced in association with Dell Technologies. Welcome, John.

John Roese: Hi, great to be here.

Laurel: John, when we spoke last year, you clearly defined two possible paths for innovation and AI. One was a huge jump in capabilities, something that would revolutionize society, but the other path was a bit more realistic, more measured improvement with machine intelligence. That is, and I quote you, “An augmentation to the cognitive tasks that human beings typically do.” Does that still hold true?

John: Yeah, I think the evidence supports my view. We don’t have self-driving cars everywhere. There are no terminators running loose in the streets, and by and large, while we may not realize it, there’s been a progressive incorporation of machine intelligence into our lives, improving everything from how our homes operate to how batteries are maintained and kept efficient. Even in the automotive industry, our cars over the last two years have become safer because of incremental improvements, mostly driven by machine intelligence, the ability to detect objects, to make sure that you as a human don’t run into something. And so that path of incremental improvement seems to be the path that we’re on. And it’s kind of fun to pause after two years and ask how’s the world different? And if you look carefully, you will see that the world is much more autonomous today than it was two years ago. However, that change in autonomy has not resulted in dramatic changes in society that were disruptive and sudden and abrupt, which is actually the way technology should roll out most of the time.

Laurel: Exploring that concept of autonomous operations, how do you define autonomous? And why is that important for business today?

John: Yeah, yeah. The concept of autonomy or an autonomous system really is just saying it’s a function that happens below the level of human effort. The idea is that things that can be done without humans exerting effort or being directly involved are generally things that have been absorbed into the realm of autonomy. And that applies to everything from IT technology, to cars, to any other example. And in general, most people understand that, but they don’t see it very often. It’s much like the commodity curves that we deal with, where one day a technology or a product is highly differentiated and a couple years later the commodity line has moved up and it’s no longer all that interesting. Imagine the world where you saw your first flat-panel, high-definition TV. That was completely unique, you were willing to pay a premium, and here we are a decade later and quite frankly, it’s just accepted as the norm. The same principles apply when we start shifting things into or below that line of human effort, into the autonomous infrastructure, into the autonomous operations world.

Laurel: Why are autonomous operations sort of the next model for modern IT? How do they help relieve overextended IT resources?

John: Yeah, well in the IT world, our biggest challenge right now is always correlated to, quite frankly, scale and demand. We’re in a cycle where there isn’t a business in the world or an industry in the world that isn’t in a digital transformation, isn’t trying to become a more digital business, to move faster, to use technology, to use data. And all of those things correlate to just a dramatic expansion of demand on the people and the organizations and the budgets that are able to deliver technology to those enterprises. And so we really have only two choices if the demand for IT capacity, for technology adoption, is growing effectively exponentially: we could either try to hire exponentially more people and do it the same way, or we could do it in a different way, which is to divide up the work between people and machines in a more creative and effective way.

And so for most enterprises, I think the consensus is you want to be a digital leader. You want to go through digital transformation, you want to use data to your advantage. And if that’s true, the sheer scale of those tasks exceeds the human capacity of your IT organizations and the budget that you have to use just pure human effort to solve those problems, which inevitably leads you to looking for ways to shift the work into autonomous systems, into the infrastructure, into the technology so that that scarce resource of human capacity can still keep up with the high-level objectives, the decision-making and the things that you want the human beings to do but yet the industry that you’re in or the business that you’re a part of can actually move fast enough to be at the front end of the digital transformation. It’s a scale issue and without autonomous operations, without autonomy, without automation, it becomes impossible to keep up with the growing opportunity to become a more digital business using just human effort.

Laurel: And how much has the last two years affected this, being in this pandemic time where everyone and everything is now online and digital first?

John: Yeah, well, we sometimes talk about the fact that there were a lot of downsides to the last two years with covid, lots of human loss, lots of disruption, but one of the things that happened that people may not have been aware of is that we think the path to digital transformation of most industries accelerated by anywhere between three and five years. It just moved faster. Suddenly you found yourself in an environment where you didn’t have the luxury of using human beings to do the work. You didn’t have the ability to do it the same way. And so while you had been evaluating, as a business, using things like robotics and automation and AI tools to make decisions faster, to process data more quickly, to reach your customers more effectively, you didn’t need to do it when it was easy to just use humans to accomplish the task in the same way. When that was suddenly taken away from you, when you didn’t have that luxury, you started to look at technology as a vehicle to accomplish those tasks.

And fundamentally what we found is technology works. It is available. And because of that, the adoption cycle of using technology within our businesses accelerated dramatically. Sometimes I tell people: do word association on the phrases drone, robot, AI, before covid and then today. Two years ago, the reaction would’ve been negative on all three of those, maybe slightly positive over the long term. Today when we use those terms in business at least, drones are great. They can deliver things. They can analyze power lines, they can do all kinds of fantastic things that humans can’t do easily. Robots are critically important. And even as a consumer it’s okay if a robot delivers your food or your package, as long as your package shows up.

And AI is something that we now view as an augmentation, a positive aspect and not a threatening thing, because we’ve started to see how it’s transformed health care, how it’s made our communication systems more intelligent, our transportation networks work better. And so there’s been a very, very big shift in the last two years in terms of open-mindedness to technology and the overall adoption rate of the technology. And like I said, we think it’s been a three-to-five-year acceleration of the digital transformation journey that most people were on before covid.

Laurel: Three to five years is pretty amazing. That is quite an acceleration but not every company was kind of maybe ready for it. How tech-forward does a company have to be to adopt autonomous operations?

John: Yeah. One of the bonus prizes of the last two years was that before covid, digital transformation definitely had a bell curve: there were digital leaders and digital laggards, and most people were somewhere in the middle but more towards the back of the pack. You had industries where there was just one digital disrupter, Uber initially, and everybody else was behind the curve. The reason for that is several years ago, in order to execute a digital transformation successfully, you had to do most of the work. There were not turnkey products available. Companies were not necessarily set up to do it for you in a way that was easy to consume without tremendous amounts of expertise inside of your company, in your organization. During the last two years, because of the demand cycle, almost every company that supplies technology or can help you navigate that journey, even those that maybe weren’t delivering easy-to-consume products before, suddenly showed up in force.

Even at Dell, one of the biggest changes in our portfolio over the last two years has been moving more and more of it to be delivered as a service, which means we take the responsibility, and with it we use tremendous amounts of automation to make it easy and cost effective, but we shift the burden away from the end customer and towards the supplier or the technology itself. That shift occurred over the last two years because, quite frankly, there was huge demand for it. Smarter products materialized because, candidly, we needed to have more scale and better economics, and pushing the burden into the technology takes huge cost and complexity out of the system, and on and on. And so covid and this period of aggressive digital transformation actually resulted in a better supply base.

And the result of that, to get to your question, is that you don’t have to be digitally forward in terms of your capability set. You do not need a giant data science team. You do not need to develop your own software. You do not need to build your own infrastructure. You quite frankly can consume it from any number of sources of supply that are actually delivering to you highly advanced and almost turnkey outcomes for many of these situations. However, the one thing you have to be, which quite frankly still is a problem in some environments, is open-minded to this concept of a rebalancing of the work between human beings and the machine environments that exist both logically and physically. If you’re not embracing and wanting it to happen and you’re resisting it, all the products and solutions that we can deliver to you will not help.

And so the one kind of last threshold to cross, I think, to really accelerate the entire ecosystem forward is people have to start to get comfortable and lean into this idea that inevitably the future is a much different balance between the work that people and the work that machines will do. And so the minute you start to accept that as inevitable and you start to look at how to live in that world, then you can start to tap into a far expanded supply base of technology and capabilities delivered from industry that are actually significantly easier to consume than anything we had two years ago.

Laurel: And that shift to autonomous operations and embracing them, granted, was accelerated over the last two years, but it was sort of nagging in the background, wasn’t it? Because there was a lack of skills, a lack of employees, kind of an inability to find people to do the work, to keep everyone moving as quickly as possible.

John: Yeah, yeah. No, absolutely. Again, going back a couple of years, we would go and have a conversation with customer A in a particular industry, let’s say insurance or financial services, and you would see these spectacular things that they were doing, but it was them doing them. And it was because that particular company had the resources and people in house. They were able to capture the talent pool to really develop their own technology, to really be down in the weeds. And then you’d go to another company in the same industry who wasn’t able to find that skillset or didn’t have the same level of human competence, and they were doing nothing. And you just kind of looked and thought, boy, this is a have-and-have-not scenario. Fast forward to today: clearly, we still need smart people. That’s very helpful and important, but you have examples now where customers with much smaller software development teams, using low-code applications and containerization and automation tools, can develop really interesting software assets with a much smaller footprint.

Instead of having to have a giant data science team to develop your entire tool chain, a much smaller data science and analytics team can actually use the platforms and capabilities that exist out there to, quite frankly, get almost better work done than what companies could do two years ago. And then from an infrastructure perspective, a company today that quite frankly has a small IT organization, but is embracing the autonomous operations of the infrastructures they can consume today, can actually deliver a much bigger, more scalable infrastructure, can extend it to the edge, can have a multi-cloud strategy, and can do it probably faster and better than a giant organization of experts two years ago. And so you’re right, it was definitely kind of lingering out there as a theory because it was gated on human capacity. And I think the progressive shift towards smarter systems, more autonomy, different consumption models, ways to shift the burden away from the customer and towards technology and the providers of that technology, has actually unlocked a tremendous amount of democratization, of moving forward together as opposed to having haves and have-nots.

Laurel: And that moving forward together also includes bringing in internal business operations, so other benefits from autonomous operations include benefits for the business as well as IT, things like cost savings and monitoring for cybersecurity threats.

John: Yeah. Yeah. Those are two very good examples. It’s funny, even at Dell, we have hundreds and hundreds of AI and ML projects going on at any given time across the businesses. And what we’ve found is, again, several years ago it was mostly a technology- and infrastructure-driven discussion. Now it’s a discussion in finance and sales; it has direct business implications. In fact, some of the hallmark projects that we talk about are things like improving our time to repair, our ability to service customers, putting our sales force on target, improving revenue performance and the ability to close deals. These are totally business driven, but today the people who are embracing them and benefiting from them understand that the reason they’re able to do that is because of advanced technology adoption.

It’s really interesting to hear the head of sales talk about AI, and that’s actually fairly common these days in companies. And if it isn’t happening in your company, you probably ought to ask why because selling is a relationship between you and your customer, but there’s a third party that can help you and that third party is data and artificial intelligence that can actually give you better insights and be more contextually aware and more responsive to your customer. And so it is fascinating to see how these technical terms like AI and machine learning and autonomous operations are now part of the business dialogue because I think most business leaders understand there’s that third party in the relationship. It’s not just them and their customer, it’s the technology that they use that can ultimately change the economics and the performance of their part of the business, whether it be sales or services or engineering or IT.

The second part of your question, though, is around security, which for us is probably the first area where autonomy was not just nice to have but was existentially necessary. And the reason I say that is that over the last four or five years, as digital transformation created digital value, meaning it created a target, the threat landscape has dramatically expanded, exponentially expanded. You see the statistics: in the course of this conversation, there have probably been dozens of ransomware attacks, and massive amounts of cyber threats have occurred. And the reality of it is that years ago we realized that there was absolutely no way you could protect an enterprise and run a security environment without a tremendous investment in and adoption of machine intelligence, autonomy in the systems, automation throughout the stack. And today, more and more we find that it’s just the status quo.

If you look at certain industries like security event and information management, people like Secureworks, a part of the Dell family, you cannot have a competent offering detecting threats if a human being has to look through the billions or trillions of threat events that are coming in. You have to outsource that to a machine, and quite frankly, that’s already done. But now we’re seeing it move into the other parts of security. That’s the detection piece. The prevention and response pieces are now becoming highly autonomous. Prevention is about, well, let’s make sure that we don’t create a vulnerability. Well, it turns out that human error is probably the single biggest source of vulnerabilities, whether that’s not having enough human capacity to keep your software patched, to properly inspect your code as you’re creating it, or to be able to move fast and move fast with security. It turns out there are tremendous security tools in the prevention space that are allowing us to better understand our environments, make sure they meet our compliance obligations, make sure our software is developed in a secure way.

And then lastly, on the response side: when an event occurs, it turns out that it doesn’t cause damage instantaneously. Even if someone clicks on an email and opens it up and a ransomware attack begins, if you can move faster than the attack to mitigate it, it really doesn’t cause problems. But moving faster than an automated attack requires an automated response, which means the ability to push a button and change the behavior of your network, or to push a button and isolate users, or maybe not even push a button and have an AI just do it for you. And so across the security landscape, unlike the other topics, where an intelligent car is a nice-to-have and it’s very valuable, in the security world the absence of autonomy, the absence of AIs as a full participant in the end-to-end stack, means that you’re probably at a disadvantaged security posture and at extreme risk. It’s definitely the lead horse in this shift, because of necessity more than anything else.

Laurel: Yeah. And I think you just phrased it absolutely perfectly. If it’s an automated attack, then you need an automated response. That does bring tension into the relationship between humans and machines, though. Sometimes you call it keeping the human in the loop, but what is that conversation like with humans, staff, employees who are thinking about autonomous operations coming through and also wondering what their job looks like? How does that conversation start?

John: Yeah. There’s two questions there. One is, how do people embrace autonomy within their existing job in a way that isn’t threatening? And the other is, when autonomy takes over certain jobs, what’s left? On the first one, quite frankly, back to the opening question, we really truly believe that most adoption of machine intelligence, autonomy, and other technologies is really a function of incremental improvement. It’s shedding things that you as a human being just simply can’t keep up on. But what that means is that you are still in the loop, you are still expressing intent, you are still authorizing the behavior to happen. It’s just that instead of understanding and dealing with the micro-behaviors, you’re dealing with the macro-behaviors.

Imagine a scenario in the security world where today a ransomware attack shows up and a human being has to sift through logs to figure it out. And a human being has to figure out where the attack’s coming from, and a human being has to figure out where they could potentially disintermediate the attacker from the attack surface. And then finally, a human being has to go out and manually reconfigure everything to make the attack go away. That is an awful experience. It’s probably not even tenable these days. Then look at it as: you’re still the security operations person, but now a machine told you there’s an attack happening. You authorize that it should do something about it, but in order to know what to do, you ask a machine to tell you where the attack is coming from and to give you options about how you might react to it. And then once you’ve decided that it’s worth reacting to, you have a machine go and do the automatic reconfiguration.

Number one, you’re going to move a heck of a lot faster and you’re going to be able to move ahead of the attack, but number two, both scenarios effectively involve the same security operations team in terms of the number of people. It’s just that in one of them, the security operations team gets to go home in the evening and see their family and sleep at night, and in the other one, they work all night and barely keep up. In fact, they probably fall behind and their business fails. To me, it’s a very positive thing if you’re in an environment where the scale is exceeding human potential. There are no job losses; the work changes, but human beings, having ultimate authority over intent and decision-making, continue to be very important pieces in any kind of autonomous operations system in IT.
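The division of labor described here, machines detecting and executing while the human supplies only the macro-level intent, can be sketched in a few lines of code. This is a minimal illustration, not any real security product's API; every name in it (detect_threat, propose_mitigations, respond) is hypothetical.

```python
# Hypothetical sketch of human-in-the-loop security response:
# the machine detects, proposes, and executes; the human only chooses.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    kind: str

def detect_threat(log_events):
    """Machine-side detection: scan event stream, surface an alert."""
    for event in log_events:
        if "ransomware" in event:
            return Alert(source="workstation-42", kind="ransomware")
    return None

def propose_mitigations(alert):
    """Machine proposes options so the human only expresses intent."""
    return [f"isolate {alert.source}", f"block traffic from {alert.source}"]

def respond(log_events, choose):
    """`choose` is the human's single macro-decision; the machine
    handles every micro-step before and after it."""
    alert = detect_threat(log_events)
    if alert is None:
        return []
    options = propose_mitigations(alert)
    selected = choose(options)                # human authorizes intent
    return [f"applied: {action}" for action in selected]  # machine reconfigures
```

For example, `respond(["login ok", "ransomware beacon"], choose=lambda opts: opts[:1])` executes the first proposed mitigation; the human never touches a log line or a device configuration.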

The second part, though, is if you had an entire team of people whose job was to run around and manually reconfigure the infrastructures you run on, guess what? Those jobs are going away. They’re not going to be necessary because, candidly, those teams just can’t do it as fast or as effectively, and they actually create risk if you don’t move fast and shift this to autonomy. And so in those cases, you have to have a very different discussion. You have to ask the question: if those jobs go away, is something coming that replaces them that’s better? And it turns out there are a lot of new jobs being created. They might not be exactly the same skillsets, but for instance, there’s a job, I think Google coined the term, called an SRE, a site reliability engineer, and essentially the idea behind it is, think of it as the person who takes care of and monitors the autonomous infrastructure. Even an autonomous infrastructure needs care and feeding.

I’ll give you an example. If you have a Roomba, you’ve noticed it’s an autonomous vacuum cleaner. Guess what? If you just let it run by itself for a month, eventually it will fail, because a human being occasionally has to intervene, has to basically clean it, has to support it. An SRE in an autonomous infrastructure is kind of like that. Even an autonomous system needs to be tuned, it needs to be managed, it needs to be maintained, it needs to be upgraded occasionally. And so we’ve created entirely new skillsets, which are the caretakers of the autonomous systems. In fact, we know that already in manufacturing, where we moved to autonomous robotic manufacturing, we created all kinds of new jobs. The new jobs are: who writes the software for the autonomous systems, the machines? Who actually maintains them? And this pattern in manufacturing is already well underway.

And in the IT world, we will see the same pattern, new jobs being created, because autonomous systems are not free of human beings; they still need human beings to tell them what to do, to tune them, to basically maintain them. And that creates a number of jobs that aren’t necessarily super high-skilled jobs. They’re within the realm of retraining: someone who used to manually provision storage arrays can now maybe be an SRE who maintains the automated storage environment. And so I’m very bullish about the fact that as these systems scale, even though the amount of human effort per unit of whatever drops, the amount of human effort in aggregate is probably actually larger because of the scaling of the IT systems.

And that means that there will absolutely be more simplified jobs, and there will be new jobs, and there will be some jobs that go away, but when these kinds of trends occur, it usually is a net positive in terms of employment and requirements for human effort. We do not have an abundance of technical people in our industries right now, and my prediction is that five years from now we will still need more trained people, more people working in our industry, because every dimension, the amount of data, the amount of compute, the amount of connected devices, is growing exponentially faster than the number of people we have on the planet.

Laurel: And I was just going to say, with the adoption of cloud and edge technologies growing, the ability to work from anywhere is definitely part of it. That means the data collected is increasing, and IT operations also have to decentralize to capture that data from anywhere. What does that mean for IT? More autonomous operations, correct?

John: Yeah. In fact, edge is a great example of this. In the world where all your IT was sitting in a data center or in a cloud environment, it was pretty easy to put your people nearby. And then even if you used a lot of advanced automation technology, you could scale human effort pretty easily in an environment where everything was kind of co-located with each other. The minute you start putting things out into the real world with edge, deploying your technology back out into your stores, your hospitals, your schools, your factories, which is absolutely happening. In fact, 70% of the world’s data is probably going to be created and acted upon outside of data centers in the future, meaning in edges. The minute you start doing that, you have only two choices about how you’ll make that work.

The first is human effort. You’re going to need human beings to potentially go out there and deploy the stuff, though you can actually use robotics and other services to help there, and more importantly, to operate it. If it requires human intervention and human presence to touch the devices, to interact with the devices manually, then based on the sheer scale of them and the fact that they’re not all in one place, not only will we simply not have enough people, we won’t have them in the right place at the right time to be able to do the work. One of the principles of an edge platform, something Dell’s very focused on, is you have to start looking at the characteristics of the platform. And some of those characteristics are things like zero-touch provisioning: the system can be deployed and it can automatically provision itself with no human intervention, so that it can come up and be in production. Zero-touch administration: it can self-upgrade, it can manage and operate itself.

And even zero-trust environments, where you actually do not want anyone to have privilege, where you want to lock the system down and have almost no human intervention, are characteristics of a properly well-formed edge environment. And all of them result in an environment that doesn’t need a lot of human touch, doesn’t need a lot of human intervention. And because of that, as we start to think about an enterprise topology that is no longer a couple of data centers and maybe some cloud services, but that plus, I don’t know, we have one customer that’s got 9,000 retail stores across the world. If that’s the topology, we clearly do not want to provide a human footprint to cover 9,000 sites. We will cover maybe 8,995 of them with autonomy, and the remaining five will actually have human beings.

And so we’re pretty excited about edge and these new distributed topologies because they change where data can be processed. They have huge impacts on digital transformation, but we also see that having a strong investment in autonomous systems, autonomous operations at the edge is actually almost as big of a prerequisite as it is in the security ecosystem to make it work.
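The zero-touch provisioning Roese describes can be sketched as a boot-time sequence in which a node pulls its own desired state and an unrecognized node is simply locked down. This is a hypothetical illustration, not Dell's edge platform or any real product; the names (CONFIG_SERVICE, fetch_profile, provision) are invented for the sketch.

```python
# Hypothetical sketch of zero-touch provisioning at the edge: each node,
# on first boot, provisions itself with no human intervention.

CONFIG_SERVICE = {
    # stand-in for a central provisioning service, keyed by node identity
    "store-0001": {"role": "retail-edge", "version": "2.3"},
}

def fetch_profile(node_id):
    """The node phones home for its desired state; no operator involved."""
    return CONFIG_SERVICE.get(node_id)

def provision(node_id):
    """Boot-time sequence: discover the profile, apply it, report status."""
    profile = fetch_profile(node_id)
    if profile is None:
        # Zero trust: an unknown node is quarantined, not visited by a human.
        return {"node": node_id, "status": "quarantined"}
    applied = dict(profile)  # placeholder for the real configuration steps
    return {"node": node_id, "status": "in-production", "config": applied}
```

Called for a known node, `provision("store-0001")` brings it into production on its own; an unrecognized node is quarantined rather than waiting for a technician, which is what makes a 9,000-site topology operable without a 9,000-site human footprint.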

Laurel: Rolling all of these technologies together, how do they all help a company’s digital transformation and just that goal to always improve innovation?

John: Speed. There’s just one. We’re in a race. Every company is in a race with somebody or themselves. And it’s a race to see who can build the more intelligent, more efficient, more effective business. And it turns out that one of the assets we have in that race is technology and specifically technology that improves the speed in which we can do things, whatever those things are. Build a product, sell a product, support a customer. And so when we think about autonomous operations and infrastructure, autonomy in general, the measure of success is does it make you move faster? Does it allow you to do the things that make your business profitable or effective or impactful at a speed faster than you could do it without it?

And whether it’s the speed at which you understand and operate the things in your business, teach a student, or build a product; or the speed at which you gather information and insights from those things and learn what they’re doing well and how they could be improved, i.e., analytics; or the speed at which you decide you know a way to improve them and can rapidly build new software, put it out into production, change the infrastructure behavior, deploy it rapidly, and actually change the real world based on those insights by changing the digital world that runs them. It’s all about speed. And so, if you want to understand why you need a strong partnership with autonomous systems and AIs and MLs, it’s not because they’re friendly and nice. It’s not because they’re interesting technology. It’s because they fundamentally allow you to move faster. And if you move faster than your competitors, you are in the race and you’re likely to win it.

Laurel: Speed and scale. John, thank you so much for joining us today on what’s been a great conversation on the Business Lab.

John: No, my pleasure. Great, great discussion.

Laurel: That was John Roese, chief technology officer at Dell Technologies, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology and you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Why I became a TechTrekker
group jumps into the air with snowy mountains in the background


My senior spring in high school, I decided to defer my MIT enrollment by a year. I had always planned to take a gap year, but after receiving the silver tube in the mail and seeing all my college-bound friends plan out their classes and dorm decor, I got cold feet. Every time I mentioned my plans, I was met with questions like “But what about school?” and “MIT is cool with this?”

Yeah. MIT totally is. Postponing your MIT start date is as simple as clicking a checkbox. 

Sofia Pronina (right) was among those who hiked to the Katla Glacier during this year’s TechTrek to Iceland.

COURTESY PHOTO

Now, having finished my first year of classes, I’m really grateful that I stuck with my decision to delay MIT, as I realized that having a full year of unstructured time is a gift. I could let my creative juices flow. Pick up hobbies for fun. Do cool things like work at an AI startup and teach myself how to create latte art. My favorite part of the year, however, was backpacking across Europe. I traveled through Austria, Slovakia, Russia, Spain, France, the UK, Greece, Italy, Germany, Poland, Romania, and Hungary.

Moreover, despite my fear that I’d be losing a valuable year, traveling turned out to be the most productive thing I could have done with my time. I got to explore different cultures, meet new people from all over the world, and gain unique perspectives that I couldn’t have gotten otherwise. My travels throughout Europe allowed me to leave my comfort zone and expand my understanding of the greater human experience. 

“In Iceland there’s less focus on hustle culture, and this relaxed approach to work-life balance ends up fostering creativity. This was a wild revelation to a bunch of MIT students.”

When I became a full-time student last fall, I realized that StartLabs, the premier undergraduate entrepreneurship club on campus, gives MIT undergrads a similar opportunity to expand their horizons and experience new things. I immediately signed up. At StartLabs, we host fireside chats and ideathons throughout the year. But our flagship event is our annual TechTrek over spring break. In previous years, StartLabs has gone on TechTrek trips to Germany, Switzerland, and Israel. On these fully funded trips, StartLabs members have visited and collaborated with industry leaders, incubators, startups, and academic institutions. They take these treks both to connect with the global startup sphere and to build closer relationships within the club itself.

Most important, however, the process of organizing the TechTrek is itself an expedited introduction to entrepreneurship. The trip is entirely planned by StartLabs members; we figure out travel logistics, find sponsors, and then discover ways to optimize our funding. 

two students soaking in a hot spring in Iceland

COURTESY PHOTO

In organizing this year’s trip to Iceland, we had to learn how to delegate roles to all the planners and how to maintain morale when making the trip a reality seemed an impossible task. We woke up extra early to take 6 a.m. calls with Icelandic founders and sponsors. We came up with options for different levels of sponsorship, used pattern recognition to deduce the email addresses of hundreds of potential contacts at organizations we wanted to visit, and got scrappy with our LinkedIn connections.
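
That email-pattern step can be sketched in a few lines. This is a hypothetical illustration only: the function, the sample name, and the domain are invented here, not StartLabs’ actual process or contacts.

```python
# Hypothetical sketch: guess a contact's address from common corporate
# naming patterns, given a name and a company domain (both invented).

def candidate_emails(first: str, last: str, domain: str) -> list[str]:
    """Return common email-address guesses for one person at one company."""
    f, l = first.lower(), last.lower()
    local_parts = [
        f"{f}.{l}",    # jane.doe
        f"{f}{l}",     # janedoe
        f"{f[0]}{l}",  # jdoe
        f,             # jane
    ]
    return [f"{p}@{domain}" for p in local_parts]

print(candidate_emails("Jane", "Doe", "example.is"))
# → ['jane.doe@example.is', 'janedoe@example.is', 'jdoe@example.is', 'jane@example.is']
```

In practice, one confirmed address at an organization usually reveals which pattern the whole organization uses.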

And as any good entrepreneur must, we had to learn how to be lean and maximize our resources. To stretch our food budget, we planned all our incubator and company visits around lunchtime in hopes of getting fed, played human Tetris as we fit 16 people into a six-person Airbnb, and emailed grocery stores to get their nearly expired foods for a discount. We even made a deal with the local bus company to give us free tickets in exchange for a story post on our Instagram account. 

The Download: spying keyboard software, and why boring AI is best


This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How ubiquitous keyboard software puts hundreds of millions of Chinese users at risk

For millions of Chinese people, the first software they download onto devices is always the same: a keyboard app. Yet few of them are aware that it may make everything they type vulnerable to spying eyes. 

QWERTY keyboards are inefficient for typing Chinese, because many characters share the same romanized spelling. As a result, many users switch to smart, localized keyboard apps to save time and frustration. Today, over 800 million Chinese people use third-party keyboard apps on their PCs, laptops, and mobile phones.
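
A toy example makes the inefficiency concrete: one romanized (pinyin) spelling maps to many characters, so a naive lookup forces the user to scan a long candidate list, which is the problem smart keyboard apps solve with prediction. The tiny dictionary below is an invented sample, not data from any real keyboard app.

```python
# Toy illustration (not a real input-method engine): one pinyin spelling
# corresponds to many Chinese characters, so a plain QWERTY lookup
# returns a long candidate list instead of a single character.
PINYIN_TO_CHARS = {
    "shi": ["是", "十", "时", "事", "市", "识"],
    "ma": ["妈", "马", "吗", "码"],
}

def candidates(pinyin: str) -> list[str]:
    """Return every character a user might mean by one spelling."""
    return PINYIN_TO_CHARS.get(pinyin, [])

print(len(candidates("shi")))  # → 6: one spelling, six possible characters
```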

But a recent report by the Citizen Lab, a University of Toronto–affiliated research group, revealed that Sogou, one of the most popular Chinese keyboard apps, had a massive security loophole. Read the full story. 

—Zeyi Yang

Why we should all be rooting for boring AI

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. It hopes they could improve intelligence and operational planning. 

But those might not be the right use cases, writes our senior AI reporter Melissa Heikkila. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases. 

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. The DoD’s best bet is to apply generative AI to more mundane things like Excel, email, or word processing. Read the full story. 

This story is from The Algorithm, Melissa’s weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The ice cores that will let us look 1.5 million years into the past

To better understand the role atmospheric carbon dioxide plays in Earth’s climate cycles, scientists have long turned to ice cores drilled in Antarctica, where snow layers accumulate and compact over hundreds of thousands of years, trapping samples of ancient air in a lattice of bubbles that serve as tiny time capsules. 

By analyzing those cores, scientists can connect greenhouse-gas concentrations with temperatures going back 800,000 years. Now, a new European-led initiative hopes to eventually retrieve the oldest core yet, dating back 1.5 million years. But that impressive feat is still only the first step. Once they’ve done that, they’ll have to figure out how they’re going to extract the air from the ice. Read the full story.

—Christian Elliott

This story is from the latest edition of our print magazine, set to go live tomorrow. Subscribe today for as low as $8/month to ensure you receive full access to the new Ethics issue and in-depth stories on experimental drugs, AI assisted warfare, microfinance, and more.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How AI got dragged into the culture wars
Fears about ‘woke’ AI fundamentally misunderstand how it works. Yet they’re gaining traction. (The Guardian)
+ Why it’s impossible to build an unbiased AI language model. (MIT Technology Review)
 
2 Researchers are racing to understand a new coronavirus variant 
It’s unlikely to be cause for concern, but it shows this virus still has plenty of tricks up its sleeve. (Nature)
+ Covid hasn’t entirely gone away—here’s where we stand. (MIT Technology Review)
+ Why we can’t afford to stop monitoring it. (Ars Technica)
 
3 How Hilary became such a monster storm
Much of it is down to unusually hot sea surface temperatures. (Wired $)
+ The era of simultaneous climate disasters is here to stay. (Axios)
+ People are donning cooling vests so they can work through the heat. (Wired $)
 
4 Brain privacy is set to become important 
Scientists are getting better at decoding our brain data. It’s surely only a matter of time before others want a peek. (The Atlantic $)
+ How your brain data could be used against you. (MIT Technology Review)
 
5 How Nvidia built such a big competitive advantage in AI chips
Today it accounts for 70% of all AI chip sales—and an even greater share for training generative models. (NYT $)
+ The chips it’s selling to China are less effective due to US export controls. (Ars Technica)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)
 
6 Inside the complex world of dissociative identity disorder on TikTok 
Reducing stigma is great, but doctors fear people are self-diagnosing or even imitating the disorder. (The Verge)
 
7 What TikTok might have to give up to keep operating in the US
This shows just how hollow the authorities’ purported data-collection concerns really are. (Forbes)
 
8 Soldiers in Ukraine are playing World of Tanks on their phones
It’s eerily similar to the war they are themselves fighting, but they say it helps them to dissociate from the horror. (NYT $)
 
9 Conspiracy theorists are sharing mad ideas on what causes wildfires
But it’s all just a convoluted way to try to avoid having to tackle climate change. (Slate $)
 
10 Christie’s accidentally leaked the location of tons of valuable art 🖼📍
Seemingly thanks to the metadata that often automatically attaches to smartphone photos. (WP $)

Quote of the day

“Is it going to take people dying for something to move forward?”

—An anonymous air traffic controller warns that staffing shortages in their industry, plus other factors, are starting to threaten passenger safety, the New York Times reports.

The big story

Inside effective altruism, where the far future counts a lot more than the present

VICTOR KERLOW

October 2022

Since its birth in the late 2000s, effective altruism has aimed to answer the question “How can those with means have the most impact on the world in a quantifiable way?”—and supplied methods for calculating the answer.

It’s no surprise that effective altruism’s ideas have long faced criticism for reflecting white Western saviorism, alongside an avoidance of structural problems in favor of abstract math. And as believers pour even greater amounts of money into the movement’s increasingly sci-fi ideals, such charges are only intensifying. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Watch Andrew Scott’s electrifying reading of the 1965 commencement address ‘Choose One of Five’ by Edith Sampson.
+ Here’s how Metallica makes sure its live performances ROCK. ($)
+ Cannot deal with this utterly ludicrous wooden vehicle.
+ Learn about a weird and wonderful new instrument called a harpejji.



Why we should all be rooting for boring AI


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’m back from a wholesome week off picking blueberries in a forest. So this story we published last week about the messy ethics of AI in warfare is just the antidote, bringing my blood pressure right back up again. 

Arthur Holland Michel does a great job looking at the complicated and nuanced ethical questions around warfare and the military’s increasing use of artificial-intelligence tools. There are myriad ways AI could fail catastrophically or be abused in conflict situations, and there don’t seem to be any real rules constraining it yet. Holland Michel’s story illustrates how little there is to hold people accountable when things go wrong.  

Last year I wrote about how the war in Ukraine kick-started a new boom in business for defense AI startups. The latest hype cycle has only added to that, as companies—and now the military too—race to embed generative AI in products and services. 

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. 

The department sees tons of potential to “improve intelligence, operational planning, and administrative and business processes.” 

But Holland Michel’s story highlights why the first two use cases might be a bad idea. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases.  

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. Everyone agrees that humans should make the final call, but that is made harder by technology that acts unpredictably, especially in fast-moving conflict situations. 

Some worry that the people lowest on the hierarchy will pay the highest price when things go wrong: “In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the ‘decision’ will absorb the blame and protect everyone else along the chain of command from the full impact of accountability,” Holland Michel writes. 

The only ones who seem likely to face no consequences when AI fails in war are the companies supplying the technology.

It helps companies when the rules the US has set to govern AI in warfare are mere recommendations, not laws. That makes it really hard to hold anyone accountable. Even the AI Act, the EU’s sweeping upcoming regulation for high-risk AI systems, exempts military uses, which arguably are the highest-risk applications of them all. 

While everyone is looking for exciting new uses for generative AI, I personally can’t wait for it to become boring. 

Amid early signs that people are starting to lose interest in the technology, companies might find that these sorts of tools are better suited for mundane, low-risk applications than solving humanity’s biggest problems.

Applying AI in, for example, productivity software such as Excel, email, or word processing might not be the sexiest idea, but compared to warfare it’s a relatively low-stakes application, and simple enough to have the potential to actually work as advertised. It could help us do the tedious bits of our jobs faster and better.

Boring AI is unlikely to break as easily and, most important, won’t kill anyone. Hopefully, soon we’ll forget we’re interacting with AI at all. (It wasn’t that long ago when machine translation was an exciting new thing in AI. Now most people don’t even think about its role in powering Google Translate.) 

That’s why I’m more confident that organizations like the DoD will find success applying generative AI in administrative and business processes. 

Boring AI is not morally complex. It’s not magic. But it works. 

Deeper Learning

AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

Amid all the chatter about ChatGPT, artificial general intelligence, and the prospect of robots taking people’s jobs, regulators in the EU and the US have been ramping up warnings against AI and emotion recognition. Emotion recognition is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings. 

But why is this a top concern? Western regulators are particularly concerned about China’s use of the technology, and its potential to enable social control. And there’s also evidence that it simply does not work properly. Tate Ryan-Mosley dissected the thorny questions around the technology in last week’s edition of The Technocrat, our weekly newsletter on tech policy.

Bits and Bytes

Meta is preparing to launch free code-generating software
A version of its new LLaMA 2 language model that is able to generate programming code will pose a stiff challenge to similar proprietary code-generating programs from rivals such as OpenAI, Microsoft, and Google. The open-source program is called Code Llama, and its launch is imminent, according to The Information. (The Information)

OpenAI is testing GPT-4 for content moderation
Using the language model to moderate online content could really help alleviate the mental toll content moderation takes on humans. OpenAI says it’s seen some promising first results, although the tech does not outperform highly trained humans. A lot of big, open questions remain, such as whether the tool can be attuned to different cultures and pick up context and nuance. (OpenAI)

Google is working on an AI assistant that offers life advice
The generative AI tools could function as a life coach, offering up ideas, planning instructions, and tutoring tips. (The New York Times)

Two tech luminaries have quit their jobs to build AI systems inspired by bees
Sakana, a new AI research lab, draws inspiration from the animal kingdom. Founded by two prominent industry researchers and former Googlers, the company plans to make multiple smaller AI models that work together, the idea being that a “swarm” of programs could be as powerful as a single large AI model. (Bloomberg)
