How Twitter’s “Teacher Li” became the central hub of China protest information


It’s hard to describe the feeling that came after. It’s like everyone is coming to you and all kinds of information from all over the world is converging toward you and [people are] telling you: Hey, what’s happening here; hey, what’s happening there; do you know, this is what’s happening in Guangzhou; I’m in Wuhan, Wuhan is doing this; I’m in Beijing, and I’m following the big group and walking together. Suddenly all the real-time information is being submitted to me, and I don’t know how to describe that feeling. But there was also no time to think about it. 

My heart was beating very fast, and my hands and my brain were constantly switching between several software programs—because you know, you can’t save a video with Twitter’s web version. So I was constantly switching software, editing the video, exporting it, and then posting it on Twitter. [Editor’s note: Li adds subtitles, blocks out account information, and compiles shorter videos into one.] By the end, there was no time to edit the videos anymore. If someone shot and sent over a 12-second WeChat video, I would just use it as is. That’s it. 

I got the largest amount of [private messages] around 6:00 p.m. on Sunday night. At that time, there were many people on the street in five major cities in China: Beijing, Shanghai, Chengdu, Wuhan, and Guangzhou. So I basically was receiving a dozen private messages every second. In the end, I couldn’t even screen the information anymore. I saw it, I clicked on it, and if it was worth posting, I posted it.

People all over the country were telling me about their real-time situations. To keep others out of danger, they went to the [protest] sites themselves and sent me what was going on there. Like, some followers were riding bikes near the presidential palace in Nanjing, taking pictures, and telling me about the situation in the city. And then they asked me to inform everyone to be cautious. I think that’s a really moving thing.

It’s like I have gradually become an anchor sitting in a TV studio, getting endless information from reporters on the scene all over the country. For example, on Monday in Hangzhou, there were five or six people updating me on the latest news simultaneously. But there was a break because all of them were fleeing when the police cleared the venue. 

On the importance of staying objective 

There are a lot of tweets that embellish the truth. From their point of view, they think it’s the right thing to do. They think you have to maximize the outrage so that there can be a revolt. But for me, I think we need reliable information. We need to know what’s really going on, and that’s the most important thing. If we were doing it for the emotion, then in the end I really would have been part of the “foreign influence,” right? 

But if there is a news account outside China that can record what’s happening objectively, in real time, and accurately, then people inside the Great Firewall won’t have doubts anymore. At this moment, in this quite extreme situation of a continuous news blackout, to be able to have an account that can keep posting news from all over the country at a speed of almost one tweet every few seconds is actually a morale boost for everyone. 

Chinese people grow up with patriotism, so they become shy or don’t dare to say something directly or oppose something directly. That’s why the crowd was singing the national anthem and waving the red flag, the national flag [during protests]. You have to understand that the Chinese people are patriotic. Even when they are demanding things [from the government], they do it with that sentiment. 

The Download: trapped by grief algorithms, and image AI privacy issues

When my dad was sick, I started Googling grief. Then I couldn’t escape it.


—Tate Ryan-Mosley, senior tech policy reporter

I’ve always been a super-Googler, coping with uncertainty by trying to learn as much as I can about whatever might be coming. That included my father’s throat cancer.

From the app on my iPhone, I started Googling the stages of grief, along with books and academic research about loss, and began, intentionally and unintentionally, consuming people’s experiences of grief and tragedy through Instagram videos, various newsfeeds, and Twitter testimonials.

Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself from what the algorithms were serving me. I got out—eventually. But why is it so hard to unsubscribe from and opt out of content that we don’t want, even when it’s harmful to us? Read the full story.

AI models spit out photos of real people and copyrighted images

The news: Image generation models can be prompted to produce identifiable photos of real people, medical images, and copyrighted work by artists, according to new research. 

How they did it: Researchers prompted Stable Diffusion and Google’s Imagen with captions for images, such as a person’s name, many times. Then they analyzed whether any of the generated images matched original images in the models’ training data. The group managed to extract over 100 replicas of images in the AI’s training set.
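To make the general idea concrete, here is a minimal sketch of that kind of extraction test, assuming the open-source diffusers library and a perceptual hash as a stand-in for the researchers’ actual similarity metric. The model checkpoint, prompt, file path, repetition count, and distance threshold below are all illustrative assumptions, not the paper’s pipeline.

```python
# Illustrative sketch only -- not the researchers' actual code.
# Assumes the diffusers, torch, Pillow, and imagehash packages are installed.
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image
import imagehash

# Load a public Stable Diffusion checkpoint (hypothetical choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A caption suspected to appear in the training set (hypothetical example),
# and a known training image to compare against (hypothetical path).
prompt = "Portrait of Jane Doe"
reference = imagehash.phash(Image.open("training_image.png"))

matches = []
for i in range(500):  # prompt the model many times with the same caption
    generated = pipe(prompt).images[0]
    # A small perceptual-hash distance suggests the generation is a
    # near-duplicate of the reference training image.
    if imagehash.phash(generated) - reference <= 6:
        matches.append((i, generated))

print(f"{len(matches)} near-duplicates out of 500 generations")
```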

Why it matters: The finding could strengthen the case for artists who are currently suing AI companies for copyright violations, and could threaten the privacy of people whose photos appear in training data. It could also have implications for startups wanting to use generative AI models in health care, as it shows that these systems risk leaking sensitive private information. Read the full story.

When my dad was sick, I started Googling grief. Then I couldn’t escape it.


I am a mostly visual thinker, and thoughts pose as scenes in the theater of my mind. When my many supportive family members, friends, and colleagues asked how I was doing, I’d see myself on a cliff, transfixed by an omniscient fog just past its edge. I’m there on the brink, with my parents and sisters, searching for a way down. In the scene, there is no sound or urgency and I am waiting for it to swallow me. I’m searching for shapes and navigational clues, but it’s so huge and gray and boundless. 

I wanted to take that fog and put it under a microscope. I started Googling the stages of grief, and books and academic research about loss, from the app on my iPhone, perusing personal disaster while I waited for coffee or watched Netflix. How will it feel? How will I manage it?

I started, intentionally and unintentionally, consuming people’s experiences of grief and tragedy through Instagram videos, various newsfeeds, and Twitter testimonials. It was as if the internet secretly teamed up with my compulsions and started indulging my own worst fantasies; the algorithms were a sort of priest, offering confession and communion. 

Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself. My mournful digital life was preserved in amber by the pernicious personalized algorithms that had deftly observed my mental preoccupations and offered me ever more cancer and loss. 

I got out—eventually. But why is it so hard to unsubscribe from and opt out of content that we don’t want, even when it’s harmful to us? 

I’m well aware of the power of algorithms—I’ve written about the mental-health impact of Instagram filters, the polarizing effect of Big Tech’s infatuation with engagement, and the strange ways that advertisers target specific audiences. But in my haze of panic and searching, I initially felt that my algorithms were a force for good. (Yes, I’m calling them “my” algorithms, because while I realize the code is uniform, the output is so intensely personal that they feel like mine.) They seemed to be working with me, helping me find stories of people managing tragedy, making me feel less alone and more capable. 

In reality, I was intimately and intensely experiencing the effects of an advertising-driven internet, which Ethan Zuckerman, the renowned internet ethicist and professor of public policy, information, and communication at the University of Massachusetts at Amherst, famously called “the Internet’s Original Sin” in a 2014 Atlantic piece. In the story, he explained the advertising model that brings revenue to content sites that are most equipped to target the right audience at the right time and at scale. This, of course, requires “moving deeper into the world of surveillance,” he wrote. This incentive structure is now known as “surveillance capitalism.” 

Understanding how exactly to maximize the engagement of each user on a platform is the formula for revenue, and it’s the foundation for the current economic model of the web. 

AI models spit out photos of real people and copyrighted images


Stable Diffusion is open source, meaning anyone can analyze and investigate it. Imagen is closed, but Google granted the researchers access. Singh says the work is a great example of how important it is to give research access to these models for analysis, and he argues that companies should be similarly transparent with other AI models, such as OpenAI’s ChatGPT. 

However, while the results are impressive, they come with some caveats. The images the researchers managed to extract appeared multiple times in the training data or were highly unusual relative to other images in the data set, says Florian Tramèr, an assistant professor of computer science at ETH Zürich, who was part of the group. 

People who look unusual or have unusual names are at higher risk of being memorized, says Tramèr.

The researchers were only able to extract relatively few exact copies of individuals’ photos from the AI model: roughly one in a million generated images was a copy, according to Webster.

But that’s still worrying, Tramèr says: “I really hope that no one’s going to look at these results and say ‘Oh, actually, these numbers aren’t that bad if it’s just one in a million.’” 

“The fact that they’re bigger than zero is what matters,” he adds.
