
A new age of data means embracing the edge



Artificial intelligence holds enormous promise, but to be effective, it must learn from massive sets of data—and the more diverse the better. By learning patterns, AI tools can uncover insights and help decision-making not just in technology, but also in pharmaceuticals, medicine, manufacturing, and more. However, data can’t always be shared—whether because it’s personally identifiable, contains proprietary information, or sharing it would pose a security risk—until now.

“It’s going to be a new age,” says Dr. Eng Lim Goh, senior vice president and CTO of artificial intelligence at Hewlett Packard Enterprise. “The world will shift from one where you have centralized data, what we’ve been used to for decades, to one where you have to be comfortable with data being everywhere.”

Data everywhere means the edge, where each device, server, and cloud instance collects massive amounts of data. One estimate puts the number of connected devices at the edge at 50 billion by 2022. The conundrum: how to keep collected data secure while still sharing the learnings from that data, which, in turn, helps teach AI to be smarter. Enter swarm learning.

Swarm learning, or swarm intelligence, is how swarms of bees or birds move in response to their environment. When applied to data, Goh explains, there is “more peer-to-peer communications, more peer-to-peer collaboration, more peer-to-peer learning.” He continues, “That’s the reason why swarm learning will become more and more important as the center of gravity shifts” from centralized to decentralized data.

Consider this example, says Goh. “A hospital trains their machine learning models on chest X-rays and sees a lot of tuberculosis cases, but very few lung collapse cases. So this neural network model, when trained, will be very sensitive to detecting tuberculosis and less sensitive to detecting lung collapse.” Goh continues, “However, we get the converse of it in another hospital. So what you really want is to have these two hospitals combine their data so that the resulting neural network model can predict both situations better. But since you can’t share that data, swarm learning comes in to help reduce the bias of both hospitals.”

And this means, “each hospital is able to predict outcomes, with accuracy and with reduced bias, as though you have collected all the patient data globally in one place and learned from it,” says Goh.

And it’s not just hospital and patient data that must be kept secure. Goh emphasizes: “What swarm learning does is to try to avoid that sharing of data, or totally prevent the sharing of data, to [a model] where you only share the insights, you share the learnings. And that’s why it is fundamentally more secure.”


Full transcript:

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. And this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is decentralized data. Whether it’s from devices, sensors, cars, the edge, if you will, the amount of data collected is growing. It can be personal and it must be protected. But is there a way to share insights and algorithms securely to help other companies and organizations and even vaccine researchers?

Two words for you: swarm learning.

My guest is Dr. Eng Lim Goh, who’s the senior vice president and CTO of artificial intelligence at Hewlett Packard Enterprise. Prior to this role, he was CTO for a majority of his 27 years at Silicon Graphics, now an HPE company. Dr. Goh was awarded NASA’s Exceptional Technology Achievement Medal for his work on AI in the International Space Station. He has also worked on numerous artificial intelligence research projects from F1 racing, to poker bots, to brain simulations. Dr. Goh holds a number of patents and had a publication land on the cover of Nature. This episode of Business Lab is produced in association with Hewlett Packard Enterprise. Welcome Dr. Goh.

Dr. Eng Lim Goh: Thank you for having me.

Laurel: So, we’ve started a new decade with a global pandemic. The urgency of finding a vaccine has allowed for greater information sharing between researchers, governments and companies. For example, the World Health Organization made the Pfizer vaccine’s mRNA sequence public to help researchers. How are you thinking about opportunities like this coming out of the pandemic?

Eng Lim: In science, medicine, and other fields, sharing of findings is an important part of advancing science. The traditional way is publications. The thing is, in the year, year and a half, of covid-19, there has been a surge of publications related to covid-19. One aggregator had, for example, on the order of 300,000 such documents out there. It gets difficult, because of the sheer amount of material, to find what you need.

So a number of companies and organizations started to build these natural language processing tools, AI tools, to allow you to ask very specific questions, not just search for keywords, so that you can get the answer you need from this corpus of documents out there. A scientist could ask, or a researcher could ask: what is the binding energy of the SARS-CoV-2 spike protein to our ACE-2 receptor? And they can be even more specific, saying: I want it in units of kcal per mol. And the NLP system would go through this corpus of documents, come up with an answer specific to that question, and even point to the area of the documents where the answer could be. So this is one area: to help with sharing, you could build AI tools to help go through this enormous amount of data that has been generated.
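To make the idea concrete, here is a minimal sketch of this kind of extractive question-answering tool, using the open-source Hugging Face transformers library. The model name, the hard-coded passage, and the binding-energy figure inside it are illustrative stand-ins, not the actual system or corpus Goh describes; a real deployment would add a retrieval step over the full document set.

```python
# A hedged sketch of an extractive QA tool over a document corpus.
# Assumptions: the model name and the passage below are illustrative;
# the binding-energy number is made up for this demo.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

# A real system would first retrieve candidate passages from the
# ~300,000-document corpus; here we hard-code a single passage.
passage = (
    "One study estimated the binding free energy of the SARS-CoV-2 "
    "spike protein to the human ACE2 receptor at about -10.8 kcal/mol."
)

result = qa(
    question=(
        "What is the binding energy of the SARS-CoV-2 spike protein "
        "to the ACE2 receptor, in kcal per mol?"
    ),
    context=passage,
)
# Prints the extracted answer span and its position in the passage.
print(result["answer"], result["start"], result["end"])
```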

The other area of sharing is the sharing of clinical trial data, as you have mentioned. Early last year, before any of the SARS-CoV-2 vaccine clinical trials had started, we were given the yellow fever vaccine clinical trial data, and more specifically, the gene expression data from the volunteers of the clinical trial. And one of the goals is: can you analyze the tens of thousands of genes being expressed by the volunteers and help predict, for each volunteer, whether he or she would get side effects from this vaccine, and whether he or she would have a good antibody response to it? So we were building predictive tools by sharing this clinical trial data, albeit anonymized and in a restricted way.
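A minimal sketch of the kind of predictive model described might look like the following, assuming synthetic stand-in data (the real work used anonymized gene-expression data from the yellow fever trial volunteers, which is not available here):

```python
# Sketch: predict antibody response from gene-expression profiles.
# All data here is randomly generated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_volunteers, n_genes = 120, 20_000            # tens of thousands of genes
X = rng.normal(size=(n_volunteers, n_genes))   # expression levels (synthetic)
y = rng.integers(0, 2, size=n_volunteers)      # 1 = good antibody response

# An L1 penalty drives most gene weights to zero, one common way to
# cope with having far more genes than volunteers.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```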

Laurel: When we talk about natural language processing, I think the two takeaways from that very specific example are that you can build better AI tools to help researchers, and that it also helps build predictive tools and models.

Eng Lim: Yes, absolutely.

Laurel: So, as a specific example of what you’ve been working on for the past year, Nature Magazine recently published an article about how a collaborative approach to data insights can help these stakeholders, especially during a pandemic. What did you find out during that work?

Eng Lim: Yes. This is related, again, to the sharing point you brought up, how to share learning so that the community can advance faster. The Nature publication you mentioned is titled “Swarm Learning [for Decentralized and Confidential Clinical Machine Learning].” Let’s use the hospital example. There is this hospital, and it sees its patients, the hospital’s patients, of a certain demographic. And it wants to build a machine learning model to predict certain outcomes based on patient data, say for example a patient’s CT scan data. The issue with learning in isolation like this is that you start to evolve models, through this learning of your patient data, that are biased toward the demographics you are seeing, or, in other ways, biased toward the type of medical devices you have.

The solution to this is to collect data from different hospitals, maybe from different regions or even different countries, and then combine all these hospitals’ data and train the machine learning model on the combined data. The issue with this is that the privacy of patient data prevents you from sharing it. Swarm learning comes in to try and solve this, in two ways. One, instead of collecting data from these different hospitals, we allow each hospital to train their machine learning model on their own private patient data. And then, occasionally, a blockchain comes in. That’s the second way. A blockchain comes in and collects all the learnings. I emphasize: the learnings, and not the patient data. It collects only the learnings and combines them with the learnings from other hospitals in other regions and other countries, averages them, and then sends back down to all the hospitals the updated, globally combined, averaged learnings.

And by learnings I mean the parameters, for example the neural network weights, of the machine learning model. So in this case, no patient data ever leaves an individual hospital. What leaves the hospital is only the learnings: the parameters, or the neural network weights. So you send up your locally learned parameters, and what you get back from the blockchain is the globally averaged parameters. You then update your model with the global average and carry on learning locally again. After a few cycles of this sharing of learnings, we’ve tested it, each hospital is able to predict, with accuracy and with reduced bias, as though it had collected all the patient data globally in one place and learned from it.
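To make the cycle concrete, here is a minimal sketch in Python. It uses a toy least-squares model and replaces the blockchain coordination layer with a plain function; it illustrates the train-locally, average, redistribute loop described above rather than HPE’s actual implementation.

```python
# One swarm-learning cycle, schematically: train locally on private
# data, share only parameters, average them, send the average back.
import numpy as np

def local_training_step(weights, data, lr=0.1):
    """One gradient step of least-squares on one hospital's private data."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def merge_learnings(all_weights):
    """What the appointed leader does: average parameters, never data."""
    return np.mean(all_weights, axis=0)

rng = np.random.default_rng(1)
true_w = rng.normal(size=5)
# Two hospitals with private, differently biased datasets.
hospitals = []
for bias in (0.5, -0.5):
    X = rng.normal(loc=bias, size=(100, 5))
    hospitals.append((X, X @ true_w + 0.1 * rng.normal(size=100)))

weights = [np.zeros(5) for _ in hospitals]
for cycle in range(50):
    # 1. Local learning: patient data never leaves the hospital.
    weights = [local_training_step(w, d) for w, d in zip(weights, hospitals)]
    # 2. Share only the learnings, average, send the average back down.
    global_w = merge_learnings(weights)
    weights = [global_w.copy() for _ in hospitals]

print("error vs. true weights:", np.linalg.norm(global_w - true_w))
```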

Laurel: And the reason that blockchain is used is because it is actually a secure connection between various, in this case, machines, correct?

Eng Lim: There are two reasons, yes, why we use blockchain. The first reason is the security of it. And number two, we can keep that information private because, in a private blockchain, only participants, main participants or certified participants, are allowed in. Now, even if the blockchain is compromised, all that can be seen are the weights or the parameters of the learnings, not the private patient data, because the private patient data is not in the blockchain.

And the second reason for using a blockchain is that it is an alternative to having a central custodian that collects the parameters, the learnings. Because once you appoint a custodian, an entity that collects all these learnings, if one of the hospitals becomes that custodian, then you have a situation where that appointed custodian has more capability than the rest. Not so much more information, but more capability than the rest. So in order to have more equitable sharing, we use a blockchain. What the blockchain system does is randomly appoint one of the participants as the collector, as the leader, to collect the parameters, average them, and send them back down. And in the next cycle, randomly, another participant is appointed.
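The rotation itself is simple to picture. Below is a toy stand-in for what the coordination layer does each cycle; a real private blockchain would make the random appointment verifiable to all participants, which a bare random choice does not.

```python
# Toy leader rotation: each cycle, a different participant is
# randomly appointed to collect and average the learnings, so no
# single custodian accumulates extra capability.
import random

participants = ["hospital_A", "hospital_B", "hospital_C", "hospital_D"]

for cycle in range(5):
    leader = random.choice(participants)  # re-drawn every cycle
    print(f"cycle {cycle}: {leader} collects, averages, redistributes")
```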

Laurel: So, there are two interesting points here. One is, this project succeeds because you are not using only your own data. You are allowed to opt into this relationship to use the learnings from other researchers’ data as well, and that reduces bias. So that’s one kind of large problem solved. But then there’s also this other interesting issue of equity, and how even algorithms can perhaps be less equitable from time to time. But when you have an intentionally random algorithm in the blockchain assigning leadership for the collection of the learnings from each entity, that helps strip out any kind of possible bias as well, correct?

Eng Lim: Yes, yes, yes. Brilliant summary, Laurel. So there’s the first bias, which is, if a hospital is learning in isolation, training a neural network model, or a machine learning model more generally, only on its own private patient data, it will be naturally biased toward the demographics it is seeing. For example, we have an example where a hospital trains its machine learning models on chest X-rays and sees a lot of tuberculosis cases, but very few lung collapse cases. So this neural network model, when trained, will be very sensitive to detecting tuberculosis and less sensitive to detecting lung collapse, for example. However, we get the converse of it in another hospital. So what you really want is to have these two hospitals combine their data so that the resulting neural network model can predict both situations better. But since you can’t share that data, swarm learning comes in to help reduce the bias of both hospitals.

Laurel: All right. So we have an enormous amount of data, and it keeps growing exponentially as the edge, which is really any data-generating device, system, or sensor, expands. So how is decentralized data changing the way companies need to think about data?

Eng Lim: Oh, that’s a profound question. There is one estimate that says that by next year, by the year 2022, there will be 50 billion connected devices at the edge. And this is growing fast. We’re coming to a point where we have an average of about 10 connected devices potentially collecting data per person in this world. Given that situation, the center of gravity will shift from the data center being the main location generating data to one where the center of gravity, in terms of where data is generated, is at the edge. And this will change dynamics tremendously for enterprises. With such an enormous amount of data generated by so many devices out there at the edge, you’ll reach a point where you cannot afford to backhaul, or bring back, all that data to the cloud or data center anymore.

Even with 5G, 6G and so on, the growth of data will outstrip, will far exceed, the growth in bandwidth of these new telecommunication capabilities. As such, you’ll reach a point where you have no choice but to push the intelligence to the edge, so that you can decide what data to move back to the cloud or data center. So it’s going to be a new age. The world will shift from one where you have centralized data, what we’ve been used to for decades, to one where you have to be comfortable with data being everywhere. And when that’s the case, you need to do more peer-to-peer communications, more peer-to-peer collaboration, more peer-to-peer learning.

And that’s the reason why swarm learning will become more and more important as this progresses, as the center of gravity shifts out there from one where data is centralized, to one where data is everywhere.

Laurel: Could you talk a little bit more about how swarm intelligence is secure by design? In other words, it allows companies to share insights from data learnings with outside enterprises, or even between groups within a company, without actually sharing the data itself?

Eng Lim: Yes. Fundamentally, when we want to learn from each other, one way is, we share the data so that each of us can learn from each other. What swarm learning does is to try to avoid that sharing of data, or totally prevent the sharing of data, to [a model] where you only share the insights, you share the learnings. And that’s why it is fundamentally more secure, using this approach, where data stays private in the location and never leaves that private entity. What leaves that private entity are only the learnings. And in this case, the neural network weights or the parameters of those learnings.

Now, there are people researching the ability to deduce the data from the learnings. It is still in the research phase, but we are prepared if it ever works. And that is, in the blockchain, we do homomorphic encryption of the weights, of the parameters, of the learnings. By homomorphic, we mean that when the appointed leader collects all these weights and then averages them, it can average them in the encrypted form, so that if someone intercepts the blockchain, they see only encrypted learnings. They don’t see the learnings themselves. But we’ve not implemented that yet, because we don’t see it as necessary yet, until such time as reverse engineering the data from the learnings becomes feasible.
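The averaging-under-encryption step Goh sketches can be illustrated with an additively homomorphic scheme such as Paillier. The sketch below uses the open-source phe (python-paillier) library and glosses over key management among participants, which a real deployment would have to solve:

```python
# Homomorphic averaging sketch: the leader sums ciphertexts and
# rescales them without ever seeing the plaintext weights.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Each participant encrypts one of its locally learned weights.
local_weights = [0.42, -0.17, 0.88]
encrypted = [public_key.encrypt(w) for w in local_weights]

# The appointed leader averages in encrypted form; an interceptor of
# the blockchain traffic would see only ciphertexts.
encrypted_avg = sum(encrypted[1:], encrypted[0]) * (1 / len(encrypted))

# Participants, who hold the private key, decrypt the combined learning.
print(private_key.decrypt(encrypted_avg))  # ~0.3767
```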

Laurel: And so, when we think about increasing rules and legislation surrounding data, like GDPR and California’s CCPA, there needs to be some sort of solution to privacy concerns. Do you see swarm learning as one of those possible options as companies grow the amount of data they have?

Eng Lim: Yes, as an option. First, if there is a need for edge devices to learn from each other, swarm learning is there, is useful for it. And number two, as you are learning, you do not want the data from each entity or participant in swarm learning to leave that entity. It should only stay where it is. What leaves is only the parameters, the learnings. You see that not just in a hospital scenario; you see it in finance. Credit card companies, for example, of course wouldn’t want to share their customer data with a competitor credit card company. But they know that a machine learning model trained locally is not as sensitive to fraud, because each company is not seeing all the different kinds of fraud. Perhaps they’re seeing one kind of fraud, but a different credit card company might be seeing another kind.

Swarm learning could be used here. Each credit card company keeps their customer data private; there is no sharing of that. But a blockchain comes in and shares the learnings, the fraud-detection learnings: it collects all those learnings, averages them, and gives them back out to all the participating credit card companies. So this is one example. Banks could do the same. Industrial robots could do the same too.

We have an automotive customer that has tens of thousands of industrial robots, in different countries. Industrial robots today follow instructions. But the next generation of robots, with AI, will also learn locally, say for example, to avoid certain mistakes and not repeat them. Using swarm learning, if these robots are in different countries where you cannot share sensor data from the local environment across country borders, but you are allowed to share the learnings of avoiding those mistakes, swarm learning can be applied. So you can now imagine a swarm of industrial robots, across different countries, sharing learnings so that they don’t repeat the same mistakes.

So yes. In enterprise, you can see different applications of swarm learning. Finance, engineering, and of course, in healthcare, as we’ve discussed.

Laurel: How do you think companies need to start thinking differently about their actual data architecture to encourage the ability to share these insights, but not actually share the data?

Eng Lim: First and foremost, we need to be comfortable with the fact that devices that collect data will proliferate, and they will be at the edge, where the data first lands. What’s the edge? The edge is where you have a device, and where the data first lands electronically. If you imagine 50 billion of them next year, by one estimate, and growing, we need to be comfortable with the fact that data will be everywhere, and design your organization, the way you use data, and the way you access data with that concept in mind, that is, moving from the one we are used to, where data is centralized most of the time, to one where data is everywhere. So the way you access data needs to be different now. You cannot think of first aggregating all the data, pulling all the data, backhauling all the data from the edge to a centralized location, and then working with it. We may need to switch to a scenario where we operate on the data, and learn from the data, while the data is still out there.

Laurel: So, we talked a bit about healthcare and manufacturing. How do you also envision the big ideas of smart cities and autonomous vehicles fitting in with the ideas of swarm intelligence?

Eng Lim: Yes, yes, yes. These are two big, big items, and very similar, too. Think of a smart city: it is full of sensors, full of connected devices. Think of autonomous cars: one estimate puts it at something like 300 sensing devices in a car, all collecting data. It’s a similar way of thinking about it: data is going to be everywhere, collected in real time at these edge devices. For smart cities, it could be street lights. We work with one city with 200,000 street lights, and they want to make every one of these street lights smart. By smart, I mean the ability to recommend decisions or even make decisions. You get to a point where, as I’ve said before, you cannot backhaul all the data all the time to the data center and make decisions after you’ve done the aggregation. A lot of times you have to make decisions where the data is collected. Therefore, things have to be smart at the edge, number one.

And if we take a step further beyond acting on instructions, or acting on neural network models that have been pre-trained and then sent to the edge, you want the edge devices to also learn on their own from the data they have collected. However, knowing that the data collected is biased toward what each device alone is seeing, swarm learning will be needed in a peer-to-peer way for these devices to learn from each other.

So this interconnectedness, the peer-to-peer interconnectedness of these edge devices, requires us to rethink or change the way we think about computing. Just take, for example, two autonomous cars. We call them connected cars to start with. Two connected cars, one in front of the other by 300 yards or 300 meters. The one in front, with lots of sensors in it, say for example in the shock absorbers, senses a pothole. It can then offer that sensed data, that there is a pothole coming up, to the cars behind. And if the cars behind switch on to automatically accept it, that pothole shows up on the dashboard of the car behind. And the car behind just pays maybe 0.10 cent for that information to the car in front.

So, you get a situation of peer-to-peer sharing, in real time, without needing to send all that data first back to some central location and then send the new information back down to the car behind. So, you want it to be peer-to-peer. I’m not saying this is implemented yet, but this gives you an idea of how thinking can change going forward: a lot more peer-to-peer sharing, and a lot more peer-to-peer learning.
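Since Goh stresses this is not implemented yet, the sketch below only makes the idea concrete; the message format, field names, and price are all hypothetical.

```python
# Hypothetical car-to-car hazard message: sensed locally, sold
# directly to the car behind, with no cloud round trip.
import json, time
from dataclasses import dataclass, asdict

@dataclass
class HazardReport:
    kind: str           # e.g. "pothole"
    lat: float
    lon: float
    price_cents: float  # asking price for the information
    reported_at: float

def broadcast(report: HazardReport) -> str:
    """Serialize for direct peer-to-peer radio transmission."""
    return json.dumps(asdict(report))

# Car in front senses a pothole through its shock absorbers...
msg = broadcast(HazardReport("pothole", 42.3601, -71.0589, 0.10, time.time()))

# ...the opted-in car behind pays and shows the alert on its dashboard.
incoming = json.loads(msg)
print(f"Dashboard alert: {incoming['kind']} ahead "
      f"(paid {incoming['price_cents']} cents)")
```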

Laurel: It’s interesting, when you think about how long we’ve worked in the technology industry, that peer-to-peer as a phrase has come back around. It used to mean people, or even computers, sharing various bits of information over the internet. Now it is devices and sensors sharing bits of information with each other. Sort of a different definition of peer-to-peer.

Eng Lim: Yeah, thinking is changing. And the word peer, in peer-to-peer, carries the connotation of more equitable sharing. That’s the reason why a blockchain is needed in some of these cases, so that there is no central custodian to average the learnings, to combine the learnings. You want a true peer-to-peer environment, and that’s what swarm learning is built for. And the reason for that is not that we feel peer-to-peer is the next big thing and therefore we should do it. It is because of data, and the proliferation of the devices that are collecting data.

Imagine tens of billions of these devices out there, every one of them getting smarter and consuming less energy to be that smart, moving from following instructions, or inferring from a pre-trained neural network model given to them, to even advancing toward learning on their own. But there are so many of these devices out there that each of them is only seeing a small portion of the data. Small is still big if you combine all of them, 50 billion of them, but each one sees only a small portion. And therefore, if they just learn in isolation, they’ll be highly biased toward what they’re seeing. As such, there must be some way they can share their learnings without having to share their private data. And therefore, swarm learning, as opposed to backhauling all that data from the 50 billion edge devices back to the cloud and data center locations so they can do the combined learning.

Laurel: Which would cost certainly more than a fraction of a cent.

Eng Lim: Oh yeah. There is a saying: bandwidth, you pay for; latency, you sweat for. So it’s cost. Bandwidth is cost.

Laurel: So as an expert in artificial intelligence, while we have you here: what are you most excited about in the coming years? What are you seeing that you think is going to be something big in the next five, 10 years?

Eng Lim: Thank you, Laurel. I don’t see myself as an expert in AI, but as a person who is tasked with, and excited about, working with customers on AI use cases and learning from their diversity, sometimes leading teams working directly on the projects, and overseeing others. But what excites me may actually seem mundane. The exciting part is that I see AI, the ability for smart systems to learn and adapt and, in many cases, provide decision support to humans (and in other, more limited cases, make decisions in support of humans), proliferating into many things we do. Certain things maybe we should limit, but it will be in many things we do.

I mean, let’s just use the most basic of examples of how this progression could go. Let’s take a light switch. In the early days, and even until today, the most basic light switch is manual: a human throws the switch on, and the light comes on; throws the switch off, and the light goes off. Then we move on to the next level, where we automate that switch. We pair the switch with a light meter and set the instructions to say: if the lighting in this room drops to 25% of its peak, switch on. So basically, we gave the switch an instruction, with a sensor to go with it. And then, when the lighting in the room drops to 25% of its peak, of the peak illumination, it switches on the lights. So now the switch is automated.

Now we can take that automation even a step further by making the switch smart: it can have more sensors, and through the combination of those sensors, it can make decisions as to whether to switch the light on. And to control all these sensors, we build a neural network model that has been pre-trained separately and then downloaded onto the switch. This is where we are today. The switch is now smart. Smart city, smart street lights, autonomous cars, and so on.

Now, is there another level beyond that? There is. And that is when the switch not only follows instructions, not only has a trained neural network model to decide, by combining all the different sensor data, when to switch the light on in a more precise way. It advances further, to one where it learns. That’s the key word: it learns from mistakes. What would be an example? Based on the pre-trained neural network model downloaded onto the switch, with all the settings, it turns the light on. But the human who comes in doesn’t need the light on this time around, so the human switches the light off. The switch then realizes that it made a decision the human didn’t like. After a few of these, it starts to adapt itself, to learn from them, so that it switches the light on according to changing human preferences. That’s the next step: you want edge devices that are collecting data at the edge to learn from it.
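A toy version of that last step might look like the following sketch, which starts from the pre-trained 25%-of-peak rule and nudges its threshold whenever the human overrides it; the update rule is an assumption for illustration, not a description of any shipping product.

```python
# A learning light switch: starts from a pre-trained rule, then
# adapts its threshold when the human overrides its decisions.
class LearningSwitch:
    def __init__(self, threshold=0.25):
        self.threshold = threshold  # fraction of peak illumination

    def decide(self, light_level):
        """True means switch the lights on."""
        return light_level < self.threshold

    def human_override(self, light_level, human_wanted_on):
        # Move the threshold just past what the human actually wanted.
        if human_wanted_on:
            self.threshold = max(self.threshold, light_level + 0.01)
        else:
            self.threshold = min(self.threshold, light_level - 0.01)

switch = LearningSwitch()
print(switch.decide(0.30))         # False: the old rule keeps lights off
switch.human_override(0.30, True)  # the human turns them on anyway
print(switch.decide(0.30))         # True: the switch has adapted
```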

Then of course, if you take that even further, all the switches in an office or a residential unit learn from each other. That will be swarm learning. And if you extend the switch to toasters, to fridges, to cars, to industrial robots and so on, you will see that, by doing this, we will clearly reduce energy consumption, reduce waste, and improve productivity. But the key must be: for human good.

Laurel: And what a wonderful way to end our conversation. Thank you so much for joining us on the Business Lab.

Eng Lim: Thank you Laurel. Much appreciated.

Laurel: That was Dr. Eng Lim Goh, senior vice president and CTO of artificial intelligence at Hewlett Packard Enterprise, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab, I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. The show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff.


Why I became a TechTrekker




My senior spring in high school, I decided to defer my MIT enrollment by a year. I had always planned to take a gap year, but after receiving the silver tube in the mail and seeing all my college-bound friends plan out their classes and dorm decor, I got cold feet. Every time I mentioned my plans, I was met with questions like “But what about school?” and “MIT is cool with this?”

Yeah. MIT totally is. Postponing your MIT start date is as simple as clicking a checkbox. 

Sofia Pronina (right) was among those who hiked to the Katla Glacier during this year’s TechTrek to Iceland.

COURTESY PHOTO

Now, having finished my first year of classes, I’m really grateful that I stuck with my decision to delay MIT, as I realized that having a full year of unstructured time is a gift. I could let my creative juices flow. Pick up hobbies for fun. Do cool things like work at an AI startup and teach myself how to create latte art. My favorite part of the year, however, was backpacking across Europe. I traveled through Austria, Slovakia, Russia, Spain, France, the UK, Greece, Italy, Germany, Poland, Romania, and Hungary. 

Moreover, despite my fear that I’d be losing a valuable year, traveling turned out to be the most productive thing I could have done with my time. I got to explore different cultures, meet new people from all over the world, and gain unique perspectives that I couldn’t have gotten otherwise. My travels throughout Europe allowed me to leave my comfort zone and expand my understanding of the greater human experience. 

“In Iceland there’s less focus on hustle culture, and this relaxed approach to work-life balance ends up fostering creativity. This was a wild revelation to a bunch of MIT students.”

When I became a full-time student last fall, I realized that StartLabs, the premier undergraduate entrepreneurship club on campus, gives MIT undergrads a similar opportunity to expand their horizons and experience new things. I immediately signed up. At StartLabs, we host fireside chats and ideathons throughout the year. But our flagship event is our annual TechTrek over spring break. In previous years, StartLabs has gone on TechTrek trips to Germany, Switzerland, and Israel. On these fully funded trips, StartLabs members have visited and collaborated with industry leaders, incubators, startups, and academic institutions. They take these treks both to connect with the global startup sphere and to build closer relationships within the club itself.

Most important, however, the process of organizing the TechTrek is itself an expedited introduction to entrepreneurship. The trip is entirely planned by StartLabs members; we figure out travel logistics, find sponsors, and then discover ways to optimize our funding. 

Two students soaking in a hot spring in Iceland.

COURTESY PHOTO

In organizing this year’s trip to Iceland, we had to learn how to delegate roles to all the planners and how to maintain morale when making this trip a reality seemed to be an impossible task. We woke up extra early to take 6 a.m. calls with Icelandic founders and sponsors. We came up with options for different levels of sponsorship, used pattern recognition to deduce the email addresses of hundreds of potential contacts at organizations we wanted to visit, and all got scrappy with utilizing our LinkedIn connections.

And as any good entrepreneur must, we had to learn how to be lean and maximize our resources. To stretch our food budget, we planned all our incubator and company visits around lunchtime in hopes of getting fed, played human Tetris as we fit 16 people into a six-person Airbnb, and emailed grocery stores to get their nearly expired foods for a discount. We even made a deal with the local bus company to give us free tickets in exchange for a story post on our Instagram account. 



The Download: spying keyboard software, and why boring AI is best



This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How ubiquitous keyboard software puts hundreds of millions of Chinese users at risk

For millions of Chinese people, the first software they download onto devices is always the same: a keyboard app. Yet few of them are aware that it may make everything they type vulnerable to spying eyes. 

QWERTY keyboards are inefficient as many Chinese characters share the same latinized spelling. As a result, many switch to smart, localized keyboard apps to save time and frustration. Today, over 800 million Chinese people use third-party keyboard apps on their PCs, laptops, and mobile phones. 

But a recent report by the Citizen Lab, a University of Toronto–affiliated research group, revealed that Sogou, one of the most popular Chinese keyboard apps, had a massive security loophole. Read the full story. 

—Zeyi Yang

Why we should all be rooting for boring AI

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. It hopes they could improve intelligence and operational planning. 

But those might not be the right use cases, writes our senior AI reporter Melissa Heikkila. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases. 

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. The DoD’s best bet is to apply generative AI to more mundane things like Excel, email, or word processing. Read the full story. 

This story is from The Algorithm, Melissa’s weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The ice cores that will let us look 1.5 million years into the past

To better understand the role atmospheric carbon dioxide plays in Earth’s climate cycles, scientists have long turned to ice cores drilled in Antarctica, where snow layers accumulate and compact over hundreds of thousands of years, trapping samples of ancient air in a lattice of bubbles that serve as tiny time capsules. 

By analyzing those cores, scientists can connect greenhouse-gas concentrations with temperatures going back 800,000 years. Now, a new European-led initiative hopes to eventually retrieve the oldest core yet, dating back 1.5 million years. But that impressive feat is still only the first step. Once they’ve done that, they’ll have to figure out how they’re going to extract the air from the ice. Read the full story.

—Christian Elliott

This story is from the latest edition of our print magazine, set to go live tomorrow. Subscribe today for as low as $8/month to ensure you receive full access to the new Ethics issue and in-depth stories on experimental drugs, AI-assisted warfare, microfinance, and more.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How AI got dragged into the culture wars
Fears about ‘woke’ AI fundamentally misunderstand how it works. Yet they’re gaining traction. (The Guardian)
+ Why it’s impossible to build an unbiased AI language model. (MIT Technology Review)
 
2 Researchers are racing to understand a new coronavirus variant 
It’s unlikely to be cause for concern, but it shows this virus still has plenty of tricks up its sleeve. (Nature)
+ Covid hasn’t entirely gone away—here’s where we stand. (MIT Technology Review)
+ Why we can’t afford to stop monitoring it. (Ars Technica)
 
3 How Hilary became such a monster storm
Much of it is down to unusually hot sea surface temperatures. (Wired $)
+ The era of simultaneous climate disasters is here to stay. (Axios)
+ People are donning cooling vests so they can work through the heat. (Wired $)
 
4 Brain privacy is set to become important 
Scientists are getting better at decoding our brain data. It’s surely only a matter of time before others want a peek. (The Atlantic $)
How your brain data could be used against you. (MIT Technology Review)
 
5 How Nvidia built such a big competitive advantage in AI chips
Today it accounts for 70% of all AI chip sales—and an even greater share for training generative models. (NYT $)
+ The chips it’s selling to China are less effective due to US export controls. (Ars Technica)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)
 
6 Inside the complex world of dissociative identity disorder on TikTok 
Reducing stigma is great, but doctors fear people are self-diagnosing or even imitating the disorder. (The Verge)
 
7 What TikTok might have to give up to keep operating in the US
This shows just how hollow the authorities’ purported data-collection concerns really are. (Forbes)
 
8 Soldiers in Ukraine are playing World of Tanks on their phones
It’s eerily similar to the war they are themselves fighting, but they say it helps them to dissociate from the horror. (NYT $)
 
9 Conspiracy theorists are sharing mad ideas on what causes wildfires
But it’s all just a convoluted way to try to avoid having to tackle climate change. (Slate $)
 
10 Christie’s accidentally leaked the location of tons of valuable art 🖼📍
Seemingly thanks to the metadata that often automatically attaches to smartphone photos. (WP $)

Quote of the day

“Is it going to take people dying for something to move forward?”

—An anonymous air traffic controller warns that staffing shortages in their industry, plus other factors, are starting to threaten passenger safety, the New York Times reports.

The big story

Inside effective altruism, where the far future counts a lot more than the present

" "

VICTOR KERLOW

October 2022

Since its birth in the late 2000s, effective altruism has aimed to answer the question “How can those with means have the most impact on the world in a quantifiable way?”—and supplied methods for calculating the answer.

It’s no surprise that effective altruism’s ideas have long faced criticism for reflecting white Western saviorism, alongside an avoidance of structural problems in favor of abstract math. And as believers pour even greater amounts of money into the movement’s increasingly sci-fi ideals, such charges are only intensifying. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Watch Andrew Scott’s electrifying reading of the 1965 commencement address ‘Choose One of Five’ by Edith Sampson.
+ Here’s how Metallica makes sure its live performances ROCK. ($)
+ Cannot deal with this utterly ludicrous wooden vehicle.
+ Learn about a weird and wonderful new instrument called a harpejji.





Why we should all be rooting for boring AI



This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’m back from a wholesome week off picking blueberries in a forest. So this story we published last week about the messy ethics of AI in warfare is just the antidote, bringing my blood pressure right back up again. 

Arthur Holland Michel does a great job looking at the complicated and nuanced ethical questions around warfare and the military’s increasing use of artificial-intelligence tools. There are myriad ways AI could fail catastrophically or be abused in conflict situations, and there don’t seem to be any real rules constraining it yet. Holland Michel’s story illustrates how little there is to hold people accountable when things go wrong.  

Last year I wrote about how the war in Ukraine kick-started a new boom in business for defense AI startups. The latest hype cycle has only added to that, as companies—and now the military too—race to embed generative AI in products and services. 

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. 

The department sees tons of potential to “improve intelligence, operational planning, and administrative and business processes.” 

But Holland Michel’s story highlights why the first two use cases might be a bad idea. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases.  

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. Everyone agrees that humans should make the final call, but that is made harder by technology that acts unpredictably, especially in fast-moving conflict situations. 

Some worry that the people lowest on the hierarchy will pay the highest price when things go wrong: “In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the ‘decision’ will absorb the blame and protect everyone else along the chain of command from the full impact of accountability,” Holland Michel writes. 

The only ones who seem likely to face no consequences when AI fails in war are the companies supplying the technology.

It helps companies when the rules the US has set to govern AI in warfare are mere recommendations, not laws. That makes it really hard to hold anyone accountable. Even the AI Act, the EU’s sweeping upcoming regulation for high-risk AI systems, exempts military uses, which arguably are the highest-risk applications of them all. 

While everyone is looking for exciting new uses for generative AI, I personally can’t wait for it to become boring. 

Amid early signs that people are starting to lose interest in the technology, companies might find that these sorts of tools are better suited for mundane, low-risk applications than solving humanity’s biggest problems.

Applying AI in, for example, productivity software such as Excel, email, or word processing might not be the sexiest idea, but compared to warfare it’s a relatively low-stakes application, and simple enough to have the potential to actually work as advertised. It could help us do the tedious bits of our jobs faster and better.

Boring AI is unlikely to break as easily and, most important, won’t kill anyone. Hopefully, soon we’ll forget we’re interacting with AI at all. (It wasn’t that long ago when machine translation was an exciting new thing in AI. Now most people don’t even think about its role in powering Google Translate.) 

That’s why I’m more confident that organizations like the DoD will find success applying generative AI in administrative and business processes. 

Boring AI is not morally complex. It’s not magic. But it works. 

Deeper Learning

AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

Amid all the chatter about ChatGPT, artificial general intelligence, and the prospect of robots taking people’s jobs, regulators in the EU and the US have been ramping up warnings against AI and emotion recognition. Emotion recognition is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings. 

But why is this a top concern? Western regulators are particularly concerned about China’s use of the technology, and its potential to enable social control. And there’s also evidence that it simply does not work properly. Tate Ryan-Mosley dissected the thorny questions around the technology in last week’s edition of The Technocrat, our weekly newsletter on tech policy.

Bits and Bytes

Meta is preparing to launch free code-generating software
A version of its new LLaMA 2 language model that is able to generate programming code will pose a stiff challenge to similar proprietary code-generating programs from rivals such as OpenAI, Microsoft, and Google. The open-source program is called Code Llama, and its launch is imminent, according to The Information. (The Information)

OpenAI is testing GPT-4 for content moderation
Using the language model to moderate online content could really help alleviate the mental toll content moderation takes on humans. OpenAI says it’s seen some promising first results, although the tech does not outperform highly trained humans. A lot of big, open questions remain, such as whether the tool can be attuned to different cultures and pick up context and nuance. (OpenAI)

Google is working on an AI assistant that offers life advice
The generative AI tools could function as a life coach, offering up ideas, planning instructions, and tutoring tips. (The New York Times)

Two tech luminaries have quit their jobs to build AI systems inspired by bees
Sakana, a new AI research lab, draws inspiration from the animal kingdom. Founded by two prominent industry researchers and former Googlers, the company plans to make multiple smaller AI models that work together, the idea being that a “swarm” of programs could be as powerful as a single large AI model. (Bloomberg)

