Upon arrival in a town or city, my first concern was to find the area where Black Americans lived and worked. This was almost always easy to do: drive away from rich residential and business areas toward the edges of town, where indicators of success were replaced with the stigma of neglect. If I had trouble finding these places, I would visit the town police station, telling them that I was a photographer with expensive cameras and handing over a highlighter marker, asking them to circle areas I should avoid. Of course, I did the opposite.
You can’t take pictures from your car—you have to be on foot. Walking with a big tripod-mounted camera on your shoulder allows a reciprocal process of issuing an invitation to be looked at in return for an opportunity to look. I would approach my potential subjects, explain in as detailed a manner as possible what I had seen, and ask for permission to take a photograph. Of course, small talk—where was I from, who would see the photograph, why I selected them—would sometimes ensue. Often permission was granted with no discussion at all. Looking is a two-way street. Not only is the photographer looking, but the potential subject is looking too. What the subject sees carries great weight. For some reason, people would see me positively. I am not sure if it was my race, gender, physicality, dress, demeanor, or anything else. If in a day I asked 20 people for permission to make photographs, 19 would say yes.
I must have taken, I don’t know, 30 trips, maybe more. From my first trip in the spring of ’83 onward, every chance I had, any break from teaching, I would be out on the road. So it would be typical in a given summer for me to make four or five trips.
The image in Marion, Arkansas (1985) is a really interesting distillation of some of the driving themes in your work. This type of placement of the subject against a form of architecture is quite interesting. Thinking about who we are in relation to our government, the law, and the ways in which the social and political come together, even in benign ways—like this person who is sweeping, cleaning up the parking lot and the surrounding landscape: it makes the rules of life here quite clear. Do you remember the encounter you had with this person?
It was really minimal. I had no particular reason to go to Marion; I just drove through. It was the middle of the day, maybe a little later, and I saw the words that just really stopped me. I thought of the architecture of the building, and I thought of Walker Evans’s photographs of architecture in the South. So I approached and I noticed the man out there and the oversize Cadillac, and I thought, “Okay, obedience, yielding one’s will, subjugation, wealth,” and you know, all of those things were immediate. I asked him if he would pose, and he said sure, and that was it. That was the extent of our interaction. He just continued to sweep, and I set up the camera. My priority was to pay homage to Evans, so I kept the building rectilinear. I didn’t want any vanishing points in the architecture, so I did a few view-camera movements to reestablish the orthogonal relationship of the lines. Then I shifted the camera left and right to position the man sweeping and the Cadillac in order to balance them against the neoclassical Greek architecture.
What drew you to the subject of the young man in front of the tree in Untitled (ca. mid 1980s) and this kind of cross-like composition?
In the way that most of my pictures of people start, I saw him and my radar alerted me that he could be good. When I say that, what I mean is: I think this person has the possibility of sustaining interest in a photograph. In a way it is related to fiction writing—establishing a character who can be fleshed out, who could have a more significant role in that portion of a story. I just saw him and thought that his posture, his carriage, his physicality, his musculature, what he was wearing—it was interesting to me. I approached and asked if I could photograph him, and I recall we engaged in some chitchat. I explained who I was and what I was doing and he said, “Sure, fine,” and then he asked, “Well, what do you want me to do?” I just sort of glanced around and said, “Why don’t you lean up against that tree?” because I noticed that there were two cars that were more or less twins, and I thought I could do something.
Then I got underneath the dark cloth and I began to move the camera back and forth and to swing it left and right to figure out how close I wanted to be, how big he should be in the frame. I could have moved in quite close to really draw attention to his facial features, his eyes, the musculature around his shoulders—that would have been another picture. I decided to be at this intermediate distance, and then I got out from underneath the dark cloth and was standing next to the camera, ready to take the picture, and for some reason he reached up. I hadn’t noticed the ropes. And I thought, “Oh my God,” so I said, “Wait a minute!” I had to reinsert the dark slide, which protects the film, to take the film out of the back of the camera, get back under the dark cloth, reopen the lens, and look to make sure that I was including the rope. And I was, so I didn’t have to readjust the camera—it was just by dumb luck that the top edge of the photograph was exactly where it should be. I reinserted the film, closed the shutter, pulled the dark slide, came back out and I asked him to look at the glass of the lens. What that does in a photograph is it re-creates somebody, in conversation, looking you straight in the eye, not looking at your forehead, not looking at your ear, not looking at your nose—direct contact.
The Download: trapped by grief algorithms, and image AI privacy issues
—Tate Ryan-Mosley, senior tech policy reporter
I’ve always been a super-Googler, coping with uncertainty by trying to learn as much as I can about whatever might be coming. That included my father’s throat cancer.
I started Googling the stages of grief, and books and academic research about loss, from the app on my iPhone, intentionally and unintentionally consuming people’s experiences of grief and tragedy through Instagram videos, various newsfeeds, and Twitter testimonials.
Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself from what the algorithms were serving me. I got out—eventually. But why is it so hard to unsubscribe from and opt out of content that we don’t want, even when it’s harmful to us? Read the full story.
AI models spit out photos of real people and copyrighted images
The news: Image generation models can be prompted to produce identifiable photos of real people, medical images, and copyrighted work by artists, according to new research.
How they did it: Researchers repeatedly prompted Stable Diffusion and Google’s Imagen with captions attached to images in the training data, such as a person’s name. Then they analyzed whether any of the generated images matched the originals in the models’ training data. The group managed to extract over 100 replicas of images in the AI’s training set.
Why it matters: The finding could strengthen the case for artists who are currently suing AI companies for copyright violations, and could potentially threaten the human subjects’ privacy. It could also have implications for startups wanting to use generative AI models in health care, as it shows that these systems risk leaking sensitive private information. Read the full story.
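The matching step described above can be sketched in miniature. This is not the researchers’ pipeline (their method and similarity metrics are more sophisticated, and `average_hash`, `likely_memorized`, and the toy pixel lists here are hypothetical illustrations): the core idea is that if a model has memorized a training image, many generations for the same caption come out near-identical, which a crude perceptual hash can detect.

```python
# Toy sketch of memorization detection: generate many samples for one
# caption (mocked here as flat grayscale pixel lists), then flag the
# caption as likely memorized if many samples are near-duplicates.
# Helper names and thresholds are illustrative, not from the paper.
from itertools import combinations

def average_hash(pixels):
    """Tiny perceptual hash: one bit per pixel, set if above the mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(int(p > mean) for p in pixels)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def likely_memorized(generations, max_dist=2, min_pairs=3):
    """generations: list of flat grayscale pixel lists, one per sample.
    True if enough sample pairs are near-identical, the signal that the
    model is reproducing a single training image rather than varying."""
    hashes = [average_hash(g) for g in generations]
    close = sum(1 for h1, h2 in combinations(hashes, 2)
                if hamming(h1, h2) <= max_dist)
    return close >= min_pairs

# Five identical "generations" look memorized; four dissimilar ones don't.
print(likely_memorized([list(range(16))] * 5))
print(likely_memorized([[0]*8 + [255]*8, [255]*8 + [0]*8,
                        [0, 255]*8, [255, 0]*8]))
```

In the real setting, candidate matches would then be compared against the training set itself to confirm the reproduced image actually came from the training data.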
When my dad was sick, I started Googling grief. Then I couldn’t escape it.
I am a mostly visual thinker, and thoughts pose as scenes in the theater of my mind. When my many supportive family members, friends, and colleagues asked how I was doing, I’d see myself on a cliff, transfixed by an omniscient fog just past its edge. I’m there on the brink, with my parents and sisters, searching for a way down. In the scene, there is no sound or urgency and I am waiting for it to swallow me. I’m searching for shapes and navigational clues, but it’s so huge and gray and boundless.
I wanted to take that fog and put it under a microscope. I started Googling the stages of grief, and books and academic research about loss, from the app on my iPhone, perusing personal disaster while I waited for coffee or watched Netflix. How will it feel? How will I manage it?
I started, intentionally and unintentionally, consuming people’s experiences of grief and tragedy through Instagram videos, various newsfeeds, and Twitter testimonials. It was as if the internet secretly teamed up with my compulsions and started indulging my own worst fantasies; the algorithms were a sort of priest, offering confession and communion.
Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself. My mournful digital life was preserved in amber by the pernicious personalized algorithms that had deftly observed my mental preoccupations and offered me ever more cancer and loss.
I got out—eventually. But why is it so hard to unsubscribe from and opt out of content that we don’t want, even when it’s harmful to us?
I’m well aware of the power of algorithms—I’ve written about the mental-health impact of Instagram filters, the polarizing effect of Big Tech’s infatuation with engagement, and the strange ways that advertisers target specific audiences. But in my haze of panic and searching, I initially felt that my algorithms were a force for good. (Yes, I’m calling them “my” algorithms, because while I realize the code is uniform, the output is so intensely personal that they feel like mine.) They seemed to be working with me, helping me find stories of people managing tragedy, making me feel less alone and more capable.
In reality, I was intimately and intensely experiencing the effects of an advertising-driven internet, which Ethan Zuckerman, the renowned internet ethicist and professor of public policy, information, and communication at the University of Massachusetts at Amherst, famously called “the Internet’s Original Sin” in a 2014 Atlantic piece. In the story, he explained the advertising model that brings revenue to content sites that are most equipped to target the right audience at the right time and at scale. This, of course, requires “moving deeper into the world of surveillance,” he wrote. This incentive structure is now known as “surveillance capitalism.”
Understanding how exactly to maximize the engagement of each user on a platform is the formula for revenue, and it’s the foundation for the current economic model of the web.
AI models spit out photos of real people and copyrighted images
Stable Diffusion is open source, meaning anyone can analyze and investigate it. Imagen is closed, but Google granted the researchers access. Singh says the work is a great example of how important it is to give research access to these models for analysis, and he argues that companies should be similarly transparent with other AI models, such as OpenAI’s ChatGPT.
However, while the results are impressive, they come with some caveats. The images the researchers managed to extract appeared multiple times in the training data or were highly unusual relative to other images in the data set, says Florian Tramèr, an assistant professor of computer science at ETH Zürich, who was part of the group.
People who look unusual or have unusual names are at higher risk of being memorized, says Tramèr.
The researchers were able to extract relatively few exact copies of individuals’ photos from the AI model: just one in a million generated images was a copy, according to Webster.
But that’s still worrying, Tramèr says: “I really hope that no one’s going to look at these results and say ‘Oh, actually, these numbers aren’t that bad if it’s just one in a million.’”
“The fact that they’re bigger than zero is what matters,” he adds.