Cryptocurrency isn’t private—but with know-how, it could be


There’s probably no such thing as perfect privacy and security online. Hackers regularly breach corporate firewalls to gain customers’ private information, and scammers constantly strive to trick us into divulging our passwords. But existing tools can provide a high level of privacy—if we use them correctly, says Mashael Al Sabah, a cybersecurity researcher at the Qatar Computing Research Institute in Doha.

The trick is understanding something about the weaknesses and limitations of technologies like blockchain or digital certificates, and not using them in ways that could play into the designs of fraudsters or malware-builders. Successful privacy is “a collaboration between the tool and the user,” Al Sabah says. It requires “using the right tool in the right way.” And testing new technology for privacy and security resilience requires what she calls a “security mindset,” which is necessary when assessing any new technology. “You think of the different attacks that happened before and that can happen in the future, and you try to identify the weaknesses and threats in the technology.”

There is an urgency to better understand how these supposedly anonymous technologies actually work. “People cannot be free without their privacy,” Al Sabah argues. “Freedom’s important for the development of society.” And while that may be all well and good for folks in Silicon Valley obsessed with the latest cryptocurrency, part of her focus is the ability to build financial structures for all. “Aside from privacy, cryptocurrency can also help societies, specifically the ones with under-developed financial infrastructure,” Al Sabah explains. That matters because “there are societies that have no financial infrastructure.”

Al Sabah made a splash in the media in 2018 by co-authoring a paper demonstrating that Bitcoin transactions are a lot less anonymous than most users assume. In the study, Al Sabah and her colleagues were able to trace purchases made on the black-market “dark web” site Silk Road back to users’ real identities simply by culling through the public Bitcoin blockchain and social media accounts for matching data. More recently, Al Sabah has also been studying phishing schemes and how to detect and avoid them.

“There’s more awareness now among users of the importance of their privacy,” Al Sabah says. And that needs to now evolve into teaching security best practices. “So, while we cannot stop new attacks, we can make them less effective and harder to achieve by adhering to best practices.”

Business Lab is hosted by Laurel Ruma, editorial director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.

This podcast was produced in association with the Qatar Foundation.

Show notes and links

UNICEF Crypto Fund

“Google’s top security teams unilaterally shut down a counterterrorism operation,” MIT Technology Review, March 26, 2021

“Your Sloppy Bitcoin Drug Deals Will Haunt You For Years,” Wired, January 26, 2018

“Your early darknet drug buys are preserved forever in the blockchain, waiting to be connected to your real identity,” Boing Boing, January 26, 2018

“In the Middle East, Women Are Breaking Through the STEM Ceiling,” The New York Times, sponsored by the Qatar Foundation

Full transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma and this is Business Lab: the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is enhancing privacy and cybersecurity. Well, it’s an old saying by now, but it used to be that on the internet, nobody knows if you’re a dog. That’s not quite true: cybersecurity researchers have been able to track people through supposedly anonymous technologies like Bitcoin, blockchain, and Tor.

Is it possible to build secure and anonymous payment and communication networks?

Two words for you: digital footprints, or is it paw prints?

My guest today is Dr. Mashael Al Sabah, who’s a senior scientist at Qatar Computing Research Institute. Dr. Al Sabah researches network security, privacy-enhancing technologies, cryptocurrency, and blockchain technology. She was a computer science professor at Qatar University, and her research on the topic has been covered by Wired and Boing Boing, as well as published in academic journals. This episode of Business Lab is produced in association with Qatar Foundation. Welcome, Dr. Al Sabah.

Mashael Al Sabah: Thank you for having me.

Laurel: So, as a cybersecurity researcher, could you explain how you work? It seems that you kind of begin by identifying weaknesses, show how the vulnerabilities can be exploited and then propose defenses or countermeasures. Is that about right?

Mashael: Yeah, in general, there are multiple inspirational paths towards a certain research idea or topic. For example, you hear about a new technology and you get curious about it, and as you discuss and learn about it with your colleagues, a security mindset starts to kick in and you start having questions about its security and privacy, and whether it really delivers what it promises. This leads to experimentation to answer these questions, and based on the insights and observations gained through experimentation, you either come up with a solution or you bring people’s attention to the problem. Another path is that sometimes we conduct research based on problems raised by our stakeholders, about the difficulties and real problems that they have. For example, some of our partners have huge amounts of data, and as a national institute, it is our job and mandate to listen to their research problems and devise, and even build, in-house solutions to help them meet their requirements.

Laurel: You mentioned a security mindset. How do you define that?

Mashael: So, when you hear about a technology, you start asking questions. Does it meet the requirements it promises? Does it maintain the confidentiality of the data? Does it protect users’ privacy as it claims? And you think of the different attacks that happened before and that can happen in the future, and you try to identify the weaknesses and the threats in the technology.

Laurel: Your research has focused on parts of the internet that were built to protect users’ online privacy and anonymity like blockchain and Tor, which is the anonymous communications network, and how those protections may not be as strong as people think they are. What have you discovered?

Mashael: Successfully achieving privacy requires using the right tool in the right way, because it’s a collaboration between the tool and the user. If users are not using the tool properly, they will not get the privacy or security guarantees they are seeking. For example, if you’re browsing to a page and your browser warns against an expired certificate, but you connect anyway, then you’re at risk. In one of our research projects, we found that although Tor does indeed provide strong privacy and anonymity guarantees, using it together with Bitcoin can hinder users’ privacy, even though when Bitcoin was starting to get popular seven years ago or more, one of its selling points was that it provides strong privacy.

Laurel: Hmm. So, it’s interesting how a secure network could be compromised when you add on what seemingly was another secure network, when in fact the combination weakens both.

Mashael: Yeah, using Tor alone gives you the privacy guarantees, but when you use it with Bitcoin, you open some channels, compromised channels.

Laurel: Could you talk a bit more about your research on people using Bitcoin and their past transactions? For example, your colleague at QCRI said in a Wired article about this research that, quote, if you’re vulnerable now, you’re vulnerable in the future. What does that mean? Why is it particularly difficult to maintain privacy with Bitcoin?

Mashael: So, at a high level, we were able to show that it’s possible to link users’ previous sensitive transactions to them. A lot of people think that they are completely anonymous when they use Bitcoin, and this gives them a false sense of security. In our research, we crawled social media. There’s a popular forum for Bitcoin users called Bitcointalk.org, and we crawled Twitter as well, for Bitcoin addresses that users attributed to themselves. In some forums, people share their Bitcoin addresses along with their profile information. So, now you have the public profile information, which includes usernames, emails, age, gender, city. This can be highly identifying. And you have all this information together with the Bitcoin address, and we found that there are hundreds of people that advertise their addresses online. We also crawled dark web pages for services that use Bitcoin as a payment channel. At the time of our experiments, we found that hundreds of services expose their Bitcoin receiving addresses.

Some of them are whistleblowing services, like WikiLeaks, and they accept donations and support. But many are also illicit services. They sell weapons and fake IDs and so on. Now, we have two databases: the users and their Bitcoin addresses, and the services and their Bitcoin addresses. How did we link them? We used the Bitcoin blockchain, which is transparent and available online. Anyone can download it and analyze it. So, we downloaded it, and the structure of the Bitcoin blockchain links addresses through the transactions. So if a transaction happened at any point in time in the past between any two addresses, you will be able to find a link between them. And indeed, from our two data sets, we found links between users and hidden services, including some illicit services, like the Pirate Bay and the Silk Road. The blockchain is a transparent, append-only ledger. Historical data cannot be deleted, and these links between users and services cannot be removed.
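The linking technique Al Sabah describes can be sketched as a graph search over the public ledger: treat every transaction as an edge between two addresses, then look for a path from a known user address to a known service address. The addresses and transactions below are invented for illustration; a real analysis would parse the full Bitcoin blockchain rather than a toy list.

```python
from collections import defaultdict, deque

# Toy transaction list: (sender_address, receiver_address).
# These addresses are made up for illustration only.
transactions = [
    ("user_addr_1", "mixer_addr"),
    ("mixer_addr", "service_addr"),
    ("user_addr_2", "service_addr"),
]

def build_link_graph(txs):
    """Treat every transaction as an undirected edge between two addresses."""
    graph = defaultdict(set)
    for src, dst in txs:
        graph[src].add(dst)
        graph[dst].add(src)
    return graph

def find_link(graph, start, target):
    """Breadth-first search: return a chain of addresses connecting
    start to target through past transactions, or None if no link exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

graph = build_link_graph(transactions)
print(find_link(graph, "user_addr_1", "service_addr"))
```

Because the ledger is append-only, an edge created by a single careless transaction years ago remains searchable forever, which is exactly why a past link to a service like Silk Road cannot be erased.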

Laurel: So, we get what happens to everyone’s data now that you’ve made this link and you’ve made it clear that it’s available. Did any of these services take countermeasures to prevent that kind of identifying information from being exposed?

Mashael: I think over the years, those services realized that Bitcoin is not as anonymous as they thought it was. So, they engage in different practices that can make it harder to track down users or link users to them. For example, some of them use mixing services, and some of them use a different address per transaction, as opposed to using just one address for their service. And that makes it harder to link. There are also alternative cryptocurrencies that have been researched and shown to provide stronger anonymity, like Zcash, for example. So, there’s more awareness now. That said, a lot of payments still take place through Bitcoin, including even ransomware.

Laurel: So, QCRI is one of the Qatar Foundation’s research institutes and the Qatar Foundation’s goals are to advance pioneering research in areas of national priority for Qatar and to support sustainable development and economic diversification goals that have the potential to benefit the entire world. So, from that perspective, why is it important to have access to secure and anonymous payment and communication systems? Why is this important to society?

Mashael: Such technologies are important because they provide people with freedom online, to browse and carry out transactions freely without the feeling of being watched. Right now, when you are aware that you are being tracked and all your searches are cached, and your information is shared with advertisers, it can feel restrictive. Personally, I feel like it might make me censor myself, and it can limit the user’s options. However, when privacy tools protect you from trackers, users feel more liberated to search about personal issues, such as suspected diseases or their own sensitive private issues.

People cannot be free without their privacy. Freedom’s important for the development of society. Aside from privacy, cryptocurrency can also help societies, specifically the ones with under-developed financial infrastructure. There are societies that have no financial infrastructure and people have no bank accounts. So, cryptocurrency can play a role in easing their hardships and improving their lives. I recently heard that UNICEF has launched a Crypto Fund to receive donations in cryptocurrencies, because transferring through cryptocurrencies has a very low overhead in terms of transfer time and cost.

Laurel: That’s actually quite interesting, especially when there is an emergency and UNICEF would need funds as quickly as possible. Not only would they save money by using an alternate banking transaction, but then they would also be able to use the money as quickly as possible.

Mashael: Exactly, yeah, the overhead was low, and the money transfer was fast. And it’s all trackable.

Laurel: Do you see cryptocurrencies actually coming through as an alternative and playing a central role in banking like this, as people come to see them as a more validated way to move money from one place to another?

Mashael: I don’t think it can completely replace traditional banking systems, but it can complement them. It can meet some requirements, and it can help, as I said, the societies that have no financial infrastructure, or an underdeveloped one. So, I think it can complement existing systems.

Laurel: And I find it also interesting, as you mentioned, how important privacy is for freedom. Commercially, we’ve found that we’re tracked pretty much everywhere we go on the internet, by ads and cookies and other ways of keeping tabs on what we are interested in and what we might buy next. And there was quite a bit of controversy, a number of years ago, over how trackers could tell whether a woman was pregnant just from the various sites she visited, and would then start targeting her with specific ads. Other than for commercial purposes, do you see stricter privacy, strict meaning improved, for consumers as they go throughout the internet? Do you see privacy as being one of those things that consumers start to look for more and more?

Mashael: I think there’s definitely more awareness now among users of the importance of their privacy. There have been leaks about governments tracking their citizens and their data, and there’s information about several companies archiving and aggregating users’ data and so on. So, definitely, people are more aware. For example, recently, when WhatsApp decided to change their privacy policy, we noticed a backlash. Many users moved to other apps with better privacy policies, like Signal.

Laurel: What is the biggest challenge of keeping up with exploits, whether they come through networking infrastructure or cryptocurrencies?

Mashael: So, attacks are carried out for political or economic reasons, and as long as there is a gain or profit for the attacker, they will never stop. So, there will always be zero-day attacks. The main challenge, I think, is to get people to adhere to best practices. For example, many successful attacks and data leaks are based on default or easy passwords, or on failure to periodically patch systems. So, while we cannot stop new attacks, we can make them less effective and harder to achieve by adhering to best practices.

Laurel: How are phishing attacks evolving? What methods are cyber attackers using to trick people into giving away private information or downloading malware?

Mashael: So, recent research has shown that phishing attacks show no sign of slowing down. Although the amount of malware is going down compared to previous years, phishing is going up. Phishers use various techniques. For example, one common technique is called squatting, where attackers register domains that resemble popular domains so they can appear more legitimate to users. For example, there’s PayPal.com. So, they register something similar to that, “PayPall” with an extra L, or with a typo in it, so it can appear more legitimate to users.
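The "extra letter" trick Al Sabah describes can be caught mechanically with a string-similarity check. The sketch below uses plain Levenshtein edit distance against a small, made-up watchlist of brand domains; real squatting detectors combine this with many other signals (homoglyphs, keyboard-adjacency typos, and so on).

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical watchlist of brands to protect, for illustration only.
PROTECTED = ["paypal.com", "google.com", "amazon.com"]

def looks_like_squat(domain, max_distance=2):
    """Flag a domain that is suspiciously close to, but not equal to,
    a protected brand domain."""
    return any(0 < edit_distance(domain, brand) <= max_distance
               for brand in PROTECTED)

print(looks_like_squat("paypall.com"))  # one extra "l" away from paypal.com
print(looks_like_squat("example.com"))  # unrelated domain
```

A distance threshold of 1 or 2 catches the single-typo registrations she mentions while leaving genuinely unrelated domains alone.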

They also use social engineering tactics to be more effective. Phishers often try to trigger the fast decision-making processes of our brains, and they achieve that by sending emails containing links to offers or, in general, urgent opportunities. For example, “Sign up for the covid vaccine, limited quantities,” something like that. So, they give users a sense of urgency. And then users visit the links and are encouraged to sign up by entering private information. Sometimes through these links they also end up downloading malware, which makes the problem worse. In our research, we have also observed that the number of phishing domains obtaining TLS certificates has been increasing over the years. And again, they obtain digital certificates to appear more legitimate to users, and because browsers may refuse to connect to a domain, or may warn users, if the domain isn’t using TLS.

Laurel: So, the bad actors are making themselves look more legitimate with these digital certificates, when in fact all they’re doing is tricking the automatic systems in order to get past them and seem legitimate.

Mashael: Yeah, and now there are some browsers that have made it mandatory for domains to obtain certificates in order to connect to them. So, to reach a wider base of victims, it’s kind of mandatory now to obtain these certificates and it’s easy to get them because they’re free. There are certificate authorities that provide them in an automated way, free, like Let’s Encrypt, for example. So, it’s very easy for them to get certificates and look more legit.

Laurel: Why have phishing threats become a bigger problem during the covid-19 pandemic?

Mashael: When you have the pandemic, there is the fear element, which can trigger poor decisions, and users want to know more about a developing story. So, in that case, they are more likely to let their guard down and visit pages that claim to present new sources of information. So, the whole situation can be more fruitful for attackers. And indeed, even early in the pandemic, around the end of March 2020, tens of thousands of coronavirus-related spam attacks were observed. And we observed hundreds of thousands of newly registered domains related to the pandemic that appeared to have been registered for malicious reasons.

Laurel: So, when you publish research about vulnerabilities, are you hoping that it’ll inspire people to take more countermeasures or are you thinking it’ll lead to redesign of systems entirely to make them more secure or are you hoping both will happen?

Mashael: So, when we publish research about vulnerabilities, actually both. There’s a consensus in the cybersecurity research community that researching threats is very valuable, because it brings attention to weaknesses that could result in compromises or privacy invasions if they were discovered by attackers first. That way, people can be more cautious and can take stronger countermeasures by educating themselves better. Also, with such research, when you bring attention to a certain weakness or vulnerability, you can also start thinking of, or suggesting, countermeasures and overall enhance the system.

Laurel: So, when you do find an exploit, what’s the process for alerting the interested parties? For example, recently in the news, Google exposed Western governments’ hacking operation. But there must be a standard protocol with such sensitive issues, especially when governments are involved.

Mashael: So, in QCRI we inform our partners and we write detailed reports. We have labs and we deploy in-house built systems and tools that can help them process, analyze and discover such events themselves as well.

Laurel: And that’s definitely particularly helpful, and it ties back to the Qatar Foundation’s goals of enriching society, because cybersecurity requires massive amounts of collaboration from a number of parties, correct?

Mashael: Yeah, absolutely. I mean, like I said before, it’s our mandate to serve the community, and that’s why, since the establishment of our institute, we have worked hard on building relations with the different government agencies and stakeholders in the country, and we carefully identified the research directions that are needed, to serve the country first and to serve society.

Laurel: What are you working on right now?

Mashael: So, right now I’m working on a couple of research projects. One of them is related to phishing. We have observed, like I said before, that more and more phishing domains are obtaining digital certificates to appear more legitimate. Google runs the Certificate Transparency project, which is basically a set of servers that publish newly issued certificates and the domains they cover. So, it’s a resource for us to identify upcoming new domains and understand whether they might be used for malicious or phishing purposes.

So, we use available intelligence to identify whether they’re phishing or not. It’s been a successful approach. We’re able to use machine learning and classify, with very high accuracy, more than 97%, whether a domain will be used for phishing, sometimes even before it is available online, just from looking at its certificate and other infrastructure information.
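Al Sabah's group uses a trained machine-learning model over certificate and infrastructure data; as a stand-in, the sketch below shows only the flavor of the feature-extraction step, with a hand-weighted score instead of a learned classifier. The features, weights, keyword list, and example domains are all invented for illustration and are not the QCRI system.

```python
# Illustrative lexical features of a domain name. A real classifier would
# be trained on labeled data and would also use certificate fields
# (issuer, validity period, SAN entries) and hosting infrastructure.
SUSPICIOUS_WORDS = {"login", "secure", "verify", "account", "update"}

def extract_features(domain):
    labels = domain.split(".")
    return {
        "num_labels": len(labels),     # deep subdomain nesting is a red flag
        "has_hyphen": "-" in domain,
        "length": len(domain),
        "keyword_hits": sum(w in domain for w in SUSPICIOUS_WORDS),
    }

def phishing_score(domain):
    """Hand-weighted score: higher means more phishing-like.
    The weights here are arbitrary, chosen only to demonstrate the idea."""
    f = extract_features(domain)
    score = 0
    score += 2 * f["keyword_hits"]
    score += f["num_labels"] - 2       # penalty per extra subdomain level
    score += 1 if f["has_hyphen"] else 0
    score += 1 if f["length"] > 30 else 0
    return score

print(phishing_score("secure-login.paypal.com.verify-account.example.net"))
print(phishing_score("mit.edu"))
```

Because Certificate Transparency logs publish certificates as they are issued, features like these can be computed the moment a certificate appears, which is how a domain can be flagged before its phishing page even goes live.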

I’m also working on identifying malware that uses anonymous communication. More and more malware uses proxies or VPNs and Tor to evade detection. Usually, botnets or infected machines get their commands from a certain centralized machine, and if it’s deployed on a public IP, it would be easy for network administrators to identify it and block connections to it. That’s why botnet masters now deploy their command-and-control server as a Tor hidden service. So, it’s anonymous, and it’s easy for the infected machines to connect to it and get the commands, but it’s hard for takedown operations. So, we’re working on traffic analysis techniques in order to identify such connections, and this is based on infections that we’ve found in the logs of our stakeholders. So, it’s based on a real need and a requirement from our partners.

Laurel: It sounds like you’re using a number of new and different techniques, and as you mentioned, collaboration and partnership, which makes all the difference when you can tackle a problem with a number of partners. Do you have any suggestions for how people, consumers, can be more careful using the internet? Or are there other new technologies that could help secure communications and financial transactions?

Mashael: So, I think in general, it’s the responsibility of users to ensure that their privacy is maintained with more education and awareness. When they share data, they have to be informed on how their data will be handled and understand the possible consequences of data loss or data aggregation and processing and sharing by the different companies online. People can continue to use the available technologies, as long as they understand the privacy and security guarantees and accept them.

Laurel: And that’s always the tough part.

Mashael: Yeah, that’s true.

Laurel: Well, this has been a fantastic conversation, Dr. Al Sabah, I thank you very much.

Mashael: Thank you for having me, Laurel.

Laurel: That was Dr. Mashael Al Sabah, a senior scientist at Qatar Computing Research Institute, who I spoke with from Cambridge, Massachusetts, home of MIT and MIT Technology Review overlooking the Charles River.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology and you can find us in print, on the web and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

The show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Tech

Why I became a TechTrekker

Published

on

group jumps into the air with snowy mountains in the background


My senior spring in high school, I decided to defer my MIT enrollment by a year. I had always planned to take a gap year, but after receiving the silver tube in the mail and seeing all my college-bound friends plan out their classes and dorm decor, I got cold feet. Every time I mentioned my plans, I was met with questions like “But what about school?” and “MIT is cool with this?”

Yeah. MIT totally is. Postponing your MIT start date is as simple as clicking a checkbox. 

Sofia Pronina (right) was among those who hiked to the Katla Glacier during this year’s TechTrek to Iceland.

COURTESY PHOTO

Now, having finished my first year of classes, I’m really grateful that I stuck with my decision to delay MIT, as I realized that having a full year of unstructured time is a gift. I could let my creative juices run. Pick up hobbies for fun. Do cool things like work at an AI startup and teach myself how to create latte art. My favorite part of the year, however, was backpacking across Europe. I traveled through Austria, Slovakia, Russia, Spain, France, the UK, Greece, Italy, Germany, Poland, Romania, and Hungary. 

Moreover, despite my fear that I’d be losing a valuable year, traveling turned out to be the most productive thing I could have done with my time. I got to explore different cultures, meet new people from all over the world, and gain unique perspectives that I couldn’t have gotten otherwise. My travels throughout Europe allowed me to leave my comfort zone and expand my understanding of the greater human experience. 

“In Iceland there’s less focus on hustle culture, and this relaxed approach to work-life balance ends up fostering creativity. This was a wild revelation to a bunch of MIT students.”

When I became a full-time student last fall, I realized that StartLabs, the premier undergraduate entrepreneurship club on campus, gives MIT undergrads a similar opportunity to expand their horizons and experience new things. I immediately signed up. At StartLabs, we host fireside chats and ideathons throughout the year. But our flagship event is our annual TechTrek over spring break. In previous years, StartLabs has gone on TechTrek trips to Germany, Switzerland, and Israel. On these fully funded trips, StartLabs members have visited and collaborated with industry leaders, incubators, startups, and academic institutions. They take these treks both to connect with the global startup sphere and to build closer relationships within the club itself.

Most important, however, the process of organizing the TechTrek is itself an expedited introduction to entrepreneurship. The trip is entirely planned by StartLabs members; we figure out travel logistics, find sponsors, and then discover ways to optimize our funding. 

two students soaking in a hot spring in Iceland

COURTESY PHOTO

In organizing this year’s trip to Iceland, we had to learn how to delegate roles to all the planners and how to maintain morale when making this trip a reality seemed to be an impossible task. We woke up extra early to take 6 a.m. calls with Icelandic founders and sponsors. We came up with options for different levels of sponsorship, used pattern recognition to deduce the email addresses of hundreds of potential contacts at organizations we wanted to visit, and all got scrappy with utilizing our LinkedIn connections.

And as any good entrepreneur must, we had to learn how to be lean and maximize our resources. To stretch our food budget, we planned all our incubator and company visits around lunchtime in hopes of getting fed, played human Tetris as we fit 16 people into a six-person Airbnb, and emailed grocery stores to get their nearly expired foods for a discount. We even made a deal with the local bus company to give us free tickets in exchange for a story post on our Instagram account. 

Continue Reading

Tech

The Download: spying keyboard software, and why boring AI is best

Published

on

🧠


This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How ubiquitous keyboard software puts hundreds of millions of Chinese users at risk

For millions of Chinese people, the first software they download onto devices is always the same: a keyboard app. Yet few of them are aware that it may make everything they type vulnerable to spying eyes. 

QWERTY keyboards are inefficient as many Chinese characters share the same latinized spelling. As a result, many switch to smart, localized keyboard apps to save time and frustration. Today, over 800 million Chinese people use third-party keyboard apps on their PCs, laptops, and mobile phones. 

But a recent report by the Citizen Lab, a University of Toronto–affiliated research group, revealed that Sogou, one of the most popular Chinese keyboard apps, had a massive security loophole. Read the full story. 

—Zeyi Yang

Why we should all be rooting for boring AI

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. It hopes they could improve intelligence and operational planning. 

But those might not be the right use cases, writes our senior AI reporter Melissa Heikkila. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases. 

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. The DoD’s best bet is to apply generative AI to more mundane things like Excel, email, or word processing. Read the full story. 

This story is from The Algorithm, Melissa’s weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The ice cores that will let us look 1.5 million years into the past

To better understand the role atmospheric carbon dioxide plays in Earth’s climate cycles, scientists have long turned to ice cores drilled in Antarctica, where snow layers accumulate and compact over hundreds of thousands of years, trapping samples of ancient air in a lattice of bubbles that serve as tiny time capsules. 

By analyzing those cores, scientists can connect greenhouse-gas concentrations with temperatures going back 800,000 years. Now, a new European-led initiative hopes to eventually retrieve the oldest core yet, dating back 1.5 million years. But that impressive feat is still only the first step. Once they’ve done that, they’ll have to figure out how they’re going to extract the air from the ice. Read the full story.

—Christian Elliott

This story is from the latest edition of our print magazine, set to go live tomorrow. Subscribe today for as low as $8/month to ensure you receive full access to the new Ethics issue and in-depth stories on experimental drugs, AI assisted warfare, microfinance, and more.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How AI got dragged into the culture wars
Fears about ‘woke’ AI fundamentally misunderstand how it works. Yet they’re gaining traction. (The Guardian)
+ Why it’s impossible to build an unbiased AI language model. (MIT Technology Review)
 
2 Researchers are racing to understand a new coronavirus variant 
It’s unlikely to be cause for concern, but it shows this virus still has plenty of tricks up its sleeve. (Nature)
+ Covid hasn’t entirely gone away—here’s where we stand. (MIT Technology Review)
+ Why we can’t afford to stop monitoring it. (Ars Technica)
 
3 How Hilary became such a monster storm
Much of it is down to unusually hot sea surface temperatures. (Wired $)
+ The era of simultaneous climate disasters is here to stay. (Axios)
+ People are donning cooling vests so they can work through the heat. (Wired $)
 
4 Brain privacy is set to become important 
Scientists are getting better at decoding our brain data. It’s surely only a matter of time before others want a peek. (The Atlantic $)
+ How your brain data could be used against you. (MIT Technology Review)
 
5 How Nvidia built such a big competitive advantage in AI chips
Today it accounts for 70% of all AI chip sales—and an even greater share for training generative models. (NYT $)
+ The chips it’s selling to China are less effective due to US export controls. (Ars Technica)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)
 
6 Inside the complex world of dissociative identity disorder on TikTok 
Reducing stigma is great, but doctors fear people are self-diagnosing or even imitating the disorder. (The Verge)
 
7 What TikTok might have to give up to keep operating in the US
This shows just how hollow the authorities’ purported data-collection concerns really are. (Forbes)
 
8 Soldiers in Ukraine are playing World of Tanks on their phones
It’s eerily similar to the war they themselves are fighting, but they say it helps them to dissociate from the horror. (NYT $)
 
9 Conspiracy theorists are sharing mad ideas on what causes wildfires
But it’s all just a convoluted way to try to avoid having to tackle climate change. (Slate $)
 
10 Christie’s accidentally leaked the location of tons of valuable art 🖼📍
Seemingly thanks to the metadata that often automatically attaches to smartphone photos. (WP $)
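That last leak is worth a closer look, because it’s so easy to reproduce: smartphone cameras embed GPS coordinates in a JPEG’s EXIF (APP1) segment, and unless a site strips that segment on upload, anyone who downloads the photo can read the location back out. As a rough illustration—the `has_exif` helper below is my own sketch, not anything from the Christie’s story—here’s a stdlib-only check for whether a JPEG byte stream still carries an EXIF block:

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF (APP1) segment,
    which is where smartphone cameras typically store GPS coordinates."""
    if jpeg_bytes[:2] != b"\xff\xd8":        # SOI marker missing: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:            # every segment starts with 0xFF
            break
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):           # EOI, or compressed data begins
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2 : i + 4])
        if marker == 0xE1 and jpeg_bytes[i + 4 : i + 10] == b"Exif\x00\x00":
            return True                      # found an APP1/EXIF segment
        i += 2 + length                      # skip to the next segment
    return False
```

The simplest defense is re-encoding just the pixels (no metadata) before publishing—which is what most social platforms quietly do on upload, and what Christie’s site apparently didn’t.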

Quote of the day

“Is it going to take people dying for something to move forward?”

—An anonymous air traffic controller warns that staffing shortages in their industry, plus other factors, are starting to threaten passenger safety, the New York Times reports.

The big story

Inside effective altruism, where the far future counts a lot more than the present


VICTOR KERLOW

October 2022

Since its birth in the late 2000s, effective altruism has aimed to answer the question “How can those with means have the most impact on the world in a quantifiable way?”—and supplied methods for calculating the answer.

It’s no surprise that effective altruism’s ideas have long faced criticism for reflecting white Western saviorism, alongside an avoidance of structural problems in favor of abstract math. And as believers pour even greater amounts of money into the movement’s increasingly sci-fi ideals, such charges are only intensifying. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Watch Andrew Scott’s electrifying reading of the 1965 commencement address ‘Choose One of Five’ by Edith Sampson.
+ Here’s how Metallica makes sure its live performances ROCK. ($)
+ Cannot deal with this utterly ludicrous wooden vehicle.
+ Learn about a weird and wonderful new instrument called a harpejji.



Tech

Why we should all be rooting for boring AI


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’m back from a wholesome week off picking blueberries in a forest. So this story we published last week about the messy ethics of AI in warfare is just the antidote, bringing my blood pressure right back up again. 

Arthur Holland Michel does a great job looking at the complicated and nuanced ethical questions around warfare and the military’s increasing use of artificial-intelligence tools. There are myriad ways AI could fail catastrophically or be abused in conflict situations, and there don’t seem to be any real rules constraining it yet. Holland Michel’s story illustrates how little there is to hold people accountable when things go wrong.  

Last year I wrote about how the war in Ukraine kick-started a new boom in business for defense AI startups. The latest hype cycle has only added to that, as companies—and now the military too—race to embed generative AI in products and services. 

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. 

The department sees tons of potential to “improve intelligence, operational planning, and administrative and business processes.” 

But Holland Michel’s story highlights why the first two use cases might be a bad idea. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases.  

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. Everyone agrees that humans should make the final call, but that is made harder by technology that acts unpredictably, especially in fast-moving conflict situations. 

Some worry that the people lowest on the hierarchy will pay the highest price when things go wrong: “In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the ‘decision’ will absorb the blame and protect everyone else along the chain of command from the full impact of accountability,” Holland Michel writes. 

The only ones who seem likely to face no consequences when AI fails in war are the companies supplying the technology.

It helps companies when the rules the US has set to govern AI in warfare are mere recommendations, not laws. That makes it really hard to hold anyone accountable. Even the AI Act, the EU’s sweeping upcoming regulation for high-risk AI systems, exempts military uses, which arguably are the highest-risk applications of them all. 

While everyone is looking for exciting new uses for generative AI, I personally can’t wait for it to become boring. 

Amid early signs that people are starting to lose interest in the technology, companies might find that these sorts of tools are better suited for mundane, low-risk applications than solving humanity’s biggest problems.

Applying AI in, for example, productivity software such as Excel, email, or word processing might not be the sexiest idea, but compared to warfare it’s a relatively low-stakes application, and simple enough to have the potential to actually work as advertised. It could help us do the tedious bits of our jobs faster and better.

Boring AI is unlikely to break as easily and, most important, won’t kill anyone. Hopefully, soon we’ll forget we’re interacting with AI at all. (It wasn’t that long ago when machine translation was an exciting new thing in AI. Now most people don’t even think about its role in powering Google Translate.) 

That’s why I’m more confident that organizations like the DoD will find success applying generative AI in administrative and business processes. 

Boring AI is not morally complex. It’s not magic. But it works. 

Deeper Learning

AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

Amid all the chatter about ChatGPT, artificial general intelligence, and the prospect of robots taking people’s jobs, regulators in the EU and the US have been ramping up warnings against AI and emotion recognition. Emotion recognition is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings. 

But why is this a top concern? Western regulators are particularly concerned about China’s use of the technology, and its potential to enable social control. And there’s also evidence that it simply does not work properly. Tate Ryan-Mosley dissected the thorny questions around the technology in last week’s edition of The Technocrat, our weekly newsletter on tech policy.

Bits and Bytes

Meta is preparing to launch free code-generating software
A version of its new LLaMA 2 language model that is able to generate programming code will pose a stiff challenge to similar proprietary code-generating programs from rivals such as OpenAI, Microsoft, and Google. The open-source program is called Code Llama, and its launch is imminent, according to The Information. (The Information)

OpenAI is testing GPT-4 for content moderation
Using the language model to moderate online content could really help alleviate the mental toll content moderation takes on humans. OpenAI says it’s seen some promising first results, although the tech does not outperform highly trained humans. A lot of big, open questions remain, such as whether the tool can be attuned to different cultures and pick up context and nuance. (OpenAI)

Google is working on an AI assistant that offers life advice
The generative AI tool could function as a life coach, offering up ideas, planning instructions, and tutoring tips. (The New York Times)

Two tech luminaries have quit their jobs to build AI systems inspired by bees
Sakana, a new AI research lab, draws inspiration from the animal kingdom. Founded by two prominent industry researchers and former Googlers, the company plans to make multiple smaller AI models that work together, the idea being that a “swarm” of programs could be as powerful as a single large AI model. (Bloomberg)

Copyright © 2021 Vitamin Patches Online.