Podcast: Beating the AI hiring machines


When it comes to hiring, it’s increasingly an AI’s world; we’re just working in it. In this final episode of Season 2, which concludes our series on AI and hiring, we look at how AI-based systems are increasingly playing gatekeeper in the hiring process, screening out applicants by the millions based on little more than what they see in your resume. But we aren’t powerless against the machines. In fact, a growing number of people and services are designed to help you play by their rules, and in some cases bend them, to give you an edge.

We Meet: 

  • Jamaal Eggleston, Work Readiness Instructor, The HOPE Program
  • Ian Siegel, CEO, ZipRecruiter
  • Sami Mäkeläinen, Head of Strategic Foresight, Telstra
  • Salil Pande, CEO, VMock
  • Gracy Sarkissian, Interim Executive Director, Wasserman Center for Career Development, New York University

We Talked To: 

  • Jamaal Eggleston, Work Readiness Instructor, The HOPE Program
  • Students and Teachers from The HOPE Program in Brooklyn, NY
  • Jonathan Kestenbaum, Co-founder & Managing Director of Talent Tech Labs
  • Josh Bersin, Global Industry Analyst
  • Brian Kropp, Vice President Research, Gartner
  • Ian Siegel, CEO, ZipRecruiter
  • Sami Mäkeläinen, Head of Strategic Foresight, Telstra
  • Salil Pande, CEO, VMock
  • Kiran Pande, Co-Founder, VMock
  • Gracy Sarkissian, Interim Executive Director, Wasserman Center for Career Development, New York University


Credits

  • This miniseries on hiring was reported by Hilke Schellmann and produced by Jennifer Strong, Emma Cillekens, Anthony Green, and Karen Hao. We’re edited by Michael Reilly.

Transcript

Synthetic Jennifer: Hey everyone! This is NOT Jennifer Strong.  

It’s actually a deepfake version of her voice. 

To wrap up our hiring series, the two of us took turns doing the same job interview, because she was curious if the automated interviewer would notice. And, how it would grade each of us.

[beat / music]

So, human Jennifer beat me as a better match for the job posting, but just by a little bit.    

This deepfake? It got better personality scores. Because, according to this hiring software, this fake voice is more spontaneous.

It also got ranked as more innovative and strategic, while Jennifer is more passionate, and she’s better at working with others.

[Beat/ Music transition]

Jennifer: Artificial intelligence is increasingly used in the hiring process. 

(And this is the real Jennifer. Just, by the way.)

And these days algorithms decide whether a resume gets seen by a human, gauge personalities based on how people talk or play video games, and might even interview you. 

In a world where you no longer prepare for those interviews by putting your best foot forward—what does it mean to present your best ‘digital self’? 

Sot: YouTube clips montage: Vlogger 1: Want to know three easy hacks to significantly improve your performance on video interviews like HireVue, Spark Hire, or VidCruiter? Vlogger 2: Please do make sure you watch this from beginning to end, because I want to help you to pass your interview. Vlogger 3: And if you understand the key concepts, you can beat that algorithm and get the job. So let’s get started.

Jennifer: We look at just how far job seekers are willing to go to beat these tools.

Gracy Sarkissian: So there are all sorts of crazy stories about what students have done in the past to get their resume past the applicant tracking system. But what we do is we make sure that students know what to expect and are prepared to be successful. 

Jennifer: That success is measured by algorithms across a whole host of variables, from automated resume screeners attempting to predict an applicant’s job performance, to one-way video interviews,  where everything from a candidate’s word choice to their facial expressions might be analyzed. 

Ian Siegel: Literally this is one of those instances where conventional wisdom will kill you in your search for a job. And it’s such a shame because I think even many of the experts don’t realize how the industry is actually working today.

Jennifer: You can’t dress to impress an algorithm. So, what does it look like to game an automated system?  

Sami Makelainen: What if you  just had the AI interview an AI, could that be done? Could it be done now? Could it be done in the future? I mean—it’s fairly clear that in the not too distant future, you will have this kind of a much more common ability to develop artificial entities that look pretty much exactly like humans and act very much like humans. Or could we use one of these things to do the interviews for us? 

Jennifer: And in the absence of meaningful rules and regulation, where do we draw the line?

I’m Jennifer Strong, and in this final episode of a four-part series on AI and hiring we explore how we’re adapting to the automated process of finding a job.

[SHOW ID]

[TITLES]

Anonymous Jobseeker: These AIs or artificial intelligent robots are reading resumes through a parser. So if your resume is not up to par, it won’t go through to the next steps. 

Jennifer: That’s the job seeker we’ve followed throughout this series. She asked us to call her Sally but that’s not her real name. She’s critiquing the hiring practices of potential employers… and she fears it could impact her career. 

In a previous episode, she told us how she applied for close to 150 jobs before landing one and how she encountered AI at several points in the process.

Like Sally, the first place you might encounter AI during a job search is a resume parser, or screener. It sorts resumes and chooses which ones get passed along to the next stage of the hiring process.

She suspected her resume wasn’t getting through.

And she did some further research, after she got her hands on some of this technology.

Anonymous Jobseeker: So right now, when I put my resume through, it reads me as a software engineer, with a hint of data analysis, which is my field. So that’s fine. 

Jennifer: A friend of hers is also working on this problem. He’s testing a different tool that puts a percentage match on how qualified it judges each resume to be for a given job.

Anonymous Jobseeker: He has another parser where it gives you your percentage. So he’s been asking other people who are data scientists and already far in the field for their resume and theirs go through 80% to 90%.  

Jennifer: They’re even testing templates they find online, just to see what happens and if that formatting helps.

But so far, when they’ve filled out those templates, they’ve all received a low match score: under 40 percent qualified.

Anonymous Jobseeker: If you just Google resume templates, if you need help with your resume, we tested those whatever popped up. And we realized the templates aren’t good. So, when you put the templates inside the parser, no matter what job you are, you’re still at that 40 or under 40. So, there’s a problem with the machine reading it. 

Jennifer: Sally is a programmer. She knows how to go about finding and testing this type of software. But, most of us don’t. We’re unlikely to know if these algorithms are reading our resume in the way we intended, and extracting the ‘right’ skills.

Anonymous Jobseeker: If you fill out a job application online and it says convert resume. And if, once you convert your resume, if the boxes aren’t filled in to what your resume is stating, then you know, your percentage is low. And that makes a lot of sense because when I was applying to like Goldman Sachs or Capital One, like bank industries and stuff when I pick, take the, um, information from my resume, it was never correct. And I always had to fill in the rest of the stuff to match with my resume.

Jennifer: She says when she made this discovery, it finally clicked.

And she wishes she understood how this worked before she started applying for jobs, because it would have helped with her imposter syndrome.

Anonymous Jobseeker: So everybody that doesn’t know about this doesn’t have a chance, ‘cause they don’t even know.
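[Editor’s note: The parsers Sally tested are proprietary, but a crude keyword-overlap score captures the basic idea behind a percentage match. Here is a minimal, purely illustrative sketch in Python; the job keywords and resume text are made up.]

# Purely illustrative: real applicant tracking systems are proprietary
# and far more sophisticated than a raw keyword overlap.

def match_score(resume_text, job_keywords):
    """Percentage of the job's keywords that appear in the resume text."""
    words = set(resume_text.lower().split())
    hits = [kw for kw in job_keywords if kw.lower() in words]
    return 100.0 * len(hits) / len(job_keywords)

job_keywords = {"python", "sql", "etl", "statistics", "tableau"}  # hypothetical posting
resume = "Data analyst with Python and SQL experience building ETL pipelines"
print(f"{match_score(resume, job_keywords):.0f}% match")  # prints: 60% match

[On a screen like this, a resume that spells out its skills in plain words scores higher, which is consistent with the advice later in this episode to write “like a caveman.”]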

Jennifer: Over the course of this reporting we found a number of different groups trying to get under the hood of these systems, whether to help themselves or others adapt and engage with these tools.

And, we visited a workforce readiness program in New York City called The HOPE Program. Many of its participants have dealt with homelessness, substance abuse, and long-term unemployment.

Jamaal Eggleston: You see all the hoops, these students have to jump through just to land the job, where I hate to say another segment of the population might not have to go through as many hoops. So, I think it’s up to us to put on our armor and to combat it, because these are good people we’re talking about here. So it’s really become my life’s journey to help them. And we have to fight back. Too many good people were being left to the wayside.  

Jennifer: Jamaal Eggleston is known to his students as Mr. E. And, he says they’re struggling with the growing use of personality testing and other forms of automation in hiring.

Jamaal Eggleston: They come back frustrated. There’s a really big issue of not hearing back at all. It’s almost as if you do an application and your application goes into the matrix and it’s gone forever. Or you will get the automatic reply which is not very personable, and it gives no information. 

Jennifer: To him, it represents an uphill battle for students already at a disadvantage. 

Jamaal Eggleston: When it comes to their personality tests, they feel as if they’re being tricked, because it’ll be the same question, but phrased three different types of ways. It’s coming from creators, who do not share a cultural background at all with some of the applicants. 

Jennifer: So, he says he downloads examples of these personality tests, analyzes them, then uses what he finds to help train his students.

Jamaal Eggleston: So I’ll give them the three different phrasings of that question. So they’ll know what to look out for. If you’ve ever been in this situation, how would you handle it? And they know instantly, because I taught them, that once a question is phrased that way, it’s going to be a behavioral question. So it’s something that they should look out for in a personality test, and to take their time.

Jennifer: And they take these tests as part of their job training. Their results are projected onto a whiteboard during class and discussed as a group. 

Jamaal Eggleston: If these companies only knew, you know, all the great people that they excluded because of these practices. And they would have been a great breath of fresh air. They would have been hard capable workers, but because of these biases, whether it’s from the person who programmed the algorithms, or the algorithms themselves, that excluded these people, if they only knew, they would be kicking themselves, you know, wow, okay the person doesn’t have the same color skin as mine. They might talk with a different dialect or accent, but you know what, they came here and they worked their tail off.

[Musical transition]

Ian Siegel: If there are job seekers out there in the world who love searching for work—I have never met them. And if there are employers who feel like they are experts at recruiting—I have also never met them. Neither side is trained in the activity that they are engaging in.

Ian Siegel: My name is Ian Siegel. I am the CEO and co-founder of ZipRecruiter. 

Jennifer: It’s an AI powered marketplace where companies post jobs and people look for work.

Ian Siegel: Millions of businesses post jobs on our site every month. And tens of millions of job seekers look for work on our site every month. And we used AI to play the role of active matchmaker between them.

Jennifer: When we spoke to him at the start of this series, he told us the vast majority of resumes are now screened by a machine first, before a human enters the process.

And he believes anyone using traditional advice to create a resume is at risk of not making it through to the next round of the hiring process, because the audience for resumes is now algorithms.

Ian Siegel: All that advice you got about how to write a resume, is wrong. It’s no longer write something that stands out, use a beautiful design printed on vellum, use extraordinary prose to try to dress up your accomplishments, forget all that. You want to write like a caveman in the shortest, crispest words you can. You want to be declarative and quantitative, because software is trying to figure out who you are to decide whether you will be put in front of a human. And that’s the majority of jobs in America right now today.

Jennifer: Like others, he found problems with these tools that extract information from resumes.

So, the company built its own.

And he has some advice on getting a resume through.

Ian Siegel: Be explicit, and then if you have a skill, declare it. Ideally declare how you learned it. So I learned the skill by going through this certification process, here is my certification or my license number to validate that I have this skill. Because there are multiple industries, like if you’re a nurse, as long as you have a nursing license, you’re hired. There’s a desperate need for more nurses in America right now. If you’re a truck driver, if you have a truck driver’s license number you’re hired. So like your whole resume could be that one piece of information, ‘cause the rest really doesn’t matter to the employer. So, just make sure that you list all your skills as concretely and with as much evidence to support your expertise as possible.

Jennifer: And longer term, he sees a new way of recruiting becoming the norm.

Ian Siegel: There is a sensible way for this to all work, and that is the employer should go first. The employer should look at active job seekers in-market, and pick the ones that they would like to see apply. Invite them to apply or directly recruit them. That’s a great experience. Job seekers hate applying to jobs, but guess what? They love getting recruited, and who wouldn’t? It’s literally like getting picked up at a bar. It’s being told you’re desirable and special. It just makes sense and puts everybody in the right headspace. Then the employer is winning because by recruiting, they’re going first, they’re expressing interest, which means they’re increasing the odds that they are going to get a positive response, because that person’s going to be so flattered by the fact that the employer went first. So it’s just a better, more efficient way for this to work.

[Musical transition]

Jennifer: As part of this investigation we’ve been learning about a bunch of tools meant to help job seekers maximize their chances of success.

Hilke Schellmann is a reporting partner on this series. She’s also a professor of journalism who reports on this topic.

So, Hilke, what did you find about the tricks people are using to try and get an edge?

Hilke: So, one of the things I found is a whole niche industry of folks sharing ‘assessment secrets’ with one another online. 

Sot: YouTube clips montage 2: Speaker 1: In this video today, we’re going to be talking about how you can pass your psychometric test, first time round. Speaker 2: Look into the camera, not look at the screen. Speaker 3: Be expressive when you talk and change your voice tone when you speak, remember the AI will look for inconsistencies in what you say and how you behave. Speaker 2: And you then reveal the results of your actions and the results should always be positive. So whenever you get asked a question that says, tell me about a time when you. Or describe a situation you were in. See, it’s a behavioral type interview question and you have to give a specific situation.

Hilke: So, there are also the usual Quora discussions and subreddits talking about the questions job seekers have encountered in video interviews, or how to beat these games. And then, there are some hiring vendors which offer candidates a chance to do AI mock interviews before the big day.

Jennifer: Candidates can practice alone in a room, by talking into the camera and trying to convince someone, or a machine, that they’re the best candidate for the job?

Hilke: Yeah. Job seekers can also see their personality profiles. But there is a limit to how helpful this is, since most candidates won’t know what questions they will be asked. For example, I found one company that listed the seven-stage hiring process at Amazon and very clearly explained what candidates had to do. That company has also built AI games similar to what job seekers are being asked to play in the real world. So job seekers can train on those games ahead of time (for a fee, of course).

Jennifer: And you looked into a lot of companies that do this. Did you find anything interesting?

Hilke: So apparently some job candidates who don’t have all the skills the job description asks for put the skills they lack in white text on the resume. It’s invisible to a human, but a computer recognizes the skills. Job seekers hope this gets them onto the yes pile, and recruiters find it frustrating.

Jennifer: Alright, might this be a way of leveling the playing field for job applicants, who now have less power against AI? Or is it kind of cheating, giving some applicants an edge over others?

Hilke: Well, some people who practice these assessments do get an edge over others, because they now know what to expect. But it’s not because they have practiced and practiced to work out how to get the high score (like in a video game), because that’s not how these assessments work.

These games are trying to assess your personality, and ‘to win,’ essentially, the algorithm compares your traits to the traits of employees who already work at that firm. If you have similar personality traits, you advance to the next round in the hiring process. But the catch is, no one knows what those traits are. So I don’t know if you can call it cheating, when you don’t even really know the rules of the game you are playing.
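[Editor’s note: Vendors don’t publish how these matches are scored. One plausible shape for “compare your traits to current employees,” offered purely as an illustration with made-up trait names and numbers, is a similarity measure between two score vectors.]

# Purely illustrative: a guess at the general shape of trait matching,
# not any vendor's actual algorithm.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length score vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

# Hypothetical trait scores: [spontaneity, innovation, consistency, sociability]
benchmark = [0.7, 0.8, 0.5, 0.9]  # average of employees already at the firm
candidate = [0.6, 0.9, 0.4, 0.8]
print(f"Similarity to benchmark: {cosine_similarity(candidate, benchmark):.2f}")

[The catch Hilke describes holds here too: without knowing the benchmark, a candidate can’t deliberately practice toward it.]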

Jennifer: And we don’t know exactly how AI scores job seekers, so, the people giving this advice, they might not know either.

Hilke: Yeah, and if that advice is inaccurate, it might even backfire for job seekers. But, I understand the anxiety people have around these new tools and their desire to understand how this works. And obviously that bit of practice might calm them on the big day… 

Jennifer: But like any other cat and mouse game, it’s only a matter of time before people use automation to fight back against this automation.

Hilke: That’s exactly what I was thinking. 
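[Editor’s note: The white-text trick Hilke mentioned works because most parsers extract raw text and never look at font color. If you have the python-docx package installed, a few lines make the point; the file name resume.docx is an assumption for the example.]

# Requires: pip install python-docx
from docx import Document

doc = Document("resume.docx")
for paragraph in doc.paragraphs:
    if paragraph.text.strip():
        # Keywords typed in white print here just like black text:
        # to the parser, color is formatting, not content.
        print(paragraph.text)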

[Musical transition]

Jennifer: So you tested this out in a video interview, using just plain text-to-speech software to respond to the questions asked.

Hilke: Yeah, I used a deepfake computer generated audio file to see if I could trick the interview software into believing that the deepfake is a human. 

[SOT: Hilke speaking]: And so the first question is, please introduce yourself. Please introduce yourself, deepfake. 

Computer-generated audio: My name is Hilke Schellmann. I am an Emmy award-winning reporter and journalism professor at New York University. I have been a journalist for over a decade.

Jennifer: OK, and the deepfake voice doesn’t have a face, so there’s no video here, and the system still gives it a score.

Hilke: Yeah. The deepfake scored a 79% match score with the job. That’s actually pretty high. It also got a personality analysis, which told me that the deepfake is very innovative and not very consistent. It’s pretty social and not very reserved. 

Jennifer: Right.

Hilke: Yeah and the weirdest part was that I then tested it again, this time reading the same text with my actual voice.

Jennifer: And, what happened?

Hilke: Ahh, well. The computer-generated voice actually scored higher than me reading the same text!

Jennifer:  Wow. Sounds like you might want to consider taking your audio avatar on the road. 

Hilke: I guess so. 

[Musical transition] 

Jennifer: But we aren’t the only ones with this idea.

Sami Mäkeläinen: What if you just had the AI interview an AI?

Jennifer: Sami Mäkeläinen is an executive at Telstra, which is an Australian telecom company. 

Sami Mäkeläinen: Could that be done? Could it be done now? Could it be done in the future? I mean it’s fairly clear that in the not too distant future, you will have this kind of a much more common ability to develop artificial entities that look pretty much exactly like humans, and act very much like humans. I thought that, well, could we use one of these things to do the interviews for us?

Jennifer: He has a background in software engineering and his job is to study the implications of future tech trends.

Just out of curiosity, he and a few colleagues decided to test whether AI interviewers would recognize the difference between interviewing a human or another machine.

So they took a well-known AI interview system which uses video (he didn’t want to reveal which one), and paired it up with an avatar.

Sami Mäkeläinen: We just had an AI interview system. And we deployed an AI digital human, digital avatar, digital twin (if you want to call it that), to sort of act as the mouthpiece for the human being interviewed. So you know the words that the avatar spoke came from humans; it was not a language model, or AI, behind that part.

Jennifer: In other words, they wrote a script and it was performed by a deepfake. 

So, a fake voice on a fake video answered the questions posed by an AI interviewer.

And after about a dozen tests, how did this AI job candidate do?

Sami Mäkeläinen: Well, did it flunk the interview? No, it didn’t. It was fine from the AI interviewer perspective. It was as if it was interviewing anybody else.

Jennifer: They tested the same words, two ways. One spoken by a human, and one spoken by the avatar. And he says the outcome was similar for both. 

And, he has thoughts on what might happen next.

Sami Mäkeläinen: So say a few years from now, you’ll be able to have a very realistic looking digital twin of yourself, audio visual representation of you essentially. You can imagine a whole range of use cases for that. You could have it sit in, you know, a boring, large meeting for you that could uh and umm at the right intervals. You could use it in, you know, virtual gaming or gaming and virtual presence kind of an environment. Or you could use it for taking interviews for you. 

Jennifer: Though he’s not aware of others testing this technology with digital humans just yet.  And, if Hollywood movies can’t easily pull this off, he feels like there’s little danger the rest of us are going to be deploying avatars to do our bidding any time soon.  

But the fact the hiring tool couldn’t recognize it was interviewing a machine is a problem. And it means the software still has a way to go. 

Sami Mäkeläinen: So I suppose, ideally when you have a system that ostensibly is interviewing a human, you would kind of want to make sure that it’s the human that you think you’re interviewing at the other end. Otherwise you would just hire a friend to do the AI interview for you, and it’d probably be far more convincing than an AI would be currently. There’s a whole range of things that these systems could do to verify that, you know, they are talking to who they think they are talking to, but how exactly that will be developed is again, something that is to be determined.

Jennifer: He says they don’t have any plans to test further, but if they did, he has thoughts about what they might try.

Sami Mäkeläinen: We didn’t dig deeper into: can we possibly tweak the scores by optimizing facial expressions, or tone of voice, or, you know, emotion or things like that? That’s not something that we delved into. It was just a very simple kind of a proof of concept.

Jennifer:  And he thinks we also have to remember some of this isn’t new.

Sami Mäkeläinen: We’ve sort of been gaming the interviews forever. Like when you have a human interview, you have even courses on how to behave there, what to say, what to do, what to wear. We will increasingly be utilizing, quote unquote, ‘intelligent agents’ to do our bidding for us.

Jennifer:  But he says it’s important to realize hiring was never perfect to begin with.

Sami Mäkeläinen: It’s easy to sort of start blaming the AI and the use of AI for many of these situations. And in many cases it’s warranted, right? I don’t think anybody can say that it was a perfect process to begin with, and, you know, then we come to: how do we deploy these systems? How do we use them, how much responsibility do we give to them? The devil is always in the details. So on one level, I would want to completely agree that the cost of getting hiring wrong is too high. But on the other hand, we’ve essentially gotten it wrong as a society for decades.

Jennifer:  In a moment, we look at some of what’s being done at the university level, to help students get ready to engage with these systems, when we come back.

[Midroll]

Jennifer: This new era in hiring can feel a little overwhelming for people looking for a job, who don’t always know how and when they’re being tested, or what exactly they’re being tested for. 

People are looking for ways to better prepare to engage with these AI systems, and it’s moved beyond individual curiosity and grassroots organizing. AI companies are also in this space, providing tools and training for job seekers. 

One of them is a company called VMock, which has business deals with hundreds of colleges and universities. Its AI-based software corrects hundreds of resumes to be more easily read by machines, and gives feedback on video interviews. 

Salil Pande: And in that first glance, if you actually went to the no pile, then the story is over. You might be the smartest kid that is coming out of your undergraduate program. You’re gone, you’re not going to get the second chance. The world has moved on to a very fast cycle, and it’s blip and you either yes or no.

Jennifer: Salil Pande is one of the company’s founders.

He says even just a few years ago, every step in the hiring process was done by a human. That’s no longer the case, especially at companies that hire a lot of recent college graduates and people with less professional experience, where the sheer volume of similar applications makes it harder for hiring managers to know who is the best person for the job.

Salil Pande: Eventually when there is a high probability of success, that’s when human to human time interaction is happening, which means that early part, which was the rejection part has already been given to technology that, Hey, technology filter me the right resume, filter me the right, uh, LinkedIn profile, filter me the good pitches and also do some psychometric tests and everything put it all together for me. And then once all of this is done go schedule an interview for me, and that’s when I’m going to go, boom, one hour interview, I’m done.

Jennifer: VMock’s mission is to prepare students for a hiring field where their resumes and video interviews have to appeal to AI first.

Salil Pande: If you have not optimized your resume for that job description, the applicant tracking system that actually is kind of like working around that job description may not filter you into the yes pile. You may be in the no pile or a maybe pile. So, you have to think about how you’re going to just go through this early process where you’re going to deal with applicant tracking system. You’re going to deal with ah artificial intelligence system that is going to recognize your, your interviews, and everything else. What’s a good pitch? How do you highlight your top skills? What skills recruiters are looking for? What skills do you currently have? How do you present your skills when you don’t have the skill, but you have something else that could be taken as an example of that other skill, and you can actually present.

Jennifer: Pande says that career centers at universities are outmatched by the technology now employed by many large companies. That’s where he says VMock’s AI can help students beat the AI they’re encountering when they look for their first job. 

And one school using it is New York University.

Gracy Sarkissian: So students are encountering these systems early, earlier and earlier on. And I would say, you know, career centers are trying to keep up with these changes so that we can prepare our students more effectively when they don’t know what to expect. I think it’s this big unknown to students. And so our job is to demystify it a little bit. 

Jennifer: Gracy Sarkissian leads the Career Center at NYU. 

She says she brought in VMock to make the time career coaches have with students more efficient. 

Gracy Sarkissian: And once you integrate that feedback, you’ll see the score go up. So it just gives students some practice at not only getting feedback, but also seeing how a system might react or respond to their resume.

Jennifer: And she has some advice for job seekers trying to impress both AI and humans.

Gracy Sarkissian: Some students tell me, you know, I did what you guys told me to do. I made sure that my resume was filled with keywords. And now it sounds like, kind of like a cheesy marketing document. And so what I say, I understand, I hear you. Have two versions of your resume. Have the one that you’re going to apply to when you go through systems and have one that you are going to hand to someone, if you meet with someone and you want to impress them. And so that has helped students kind of say, okay, I get it. This is something that I have to do so that my resume gets picked up. 

Jennifer: Her team also prepares students for one way video interviews. 

Gracy Sarkissian: We don’t realize how much input we get when we’re having a one-on-one conversation with someone, or you’re, even if it’s a group or panel interview. You are looking at people in the eye, you are getting positive feedback. You might get negative feedback that might make you adjust your question. If you were nervous, there’s a good chance that you’ll feel a little empathy from someone in the room. Whereas when we’re interviewing with AI, it feels like a stranger, right? It feels like a stranger without a face. It’s a blank screen. And oftentimes you’re staring at yourself and so it can be a lonely process I think, um, for some of our students. 

Jennifer: It’s one of the reasons why she believes, in a tight labor market, employers might want to rethink some of these strategies, especially if they want to attract top talent.

Gracy Sarkissian: You know, we know Gen-Z students are a values-driven generation, right? They want to make sure that they can connect with the culture of the organization. That the mission and values of the organization are in line with theirs. And that’s something that’s difficult to assess when you’re interviewing in a virtual way. When you’re not meeting people, when you’re not speaking to people at an interview, when you’re not walking through an office and just kind of seeing work happen.

Jennifer: But in a world where millions of companies receive millions of applications, tailoring to individuals isn’t something that scales.

And that lands us back in a position we’ve been in before: black-box decision-making, applied to everyone, leading to unintended consequences.

As we wrap up the second season of this podcast—and our four-part investigation of how AI is being used to make hiring decisions—we see the promise of using algorithms. But the reporting makes clear this is an emerging industry with many moving parts, and at least a few tools that just aren’t there yet, and in some cases might actually do the opposite of what they intend.

We’ve seen systems with bias against women and people with disabilities, and even a tool that predicts people named Jared will be successful on the job. Other tools rated candidates highly on their English language skills, though the recordings didn’t contain one word of English. We also uploaded recordings that had nothing to do with the interview questions asked, but were rated as a match for the skills required to do the job.

With little oversight, there’s also little transparency about what goes on inside the black box, and why the software makes the decisions it makes. Companies that build these tools aren’t required to tell anyone how their systems work, or why they should be trusted.

The good news? In many ways, we’re still at the beginning. And there’s opportunity to build better systems, if we’re honest about what’s not working, where the machines are coming up short, and if we make a decision not to value scale, efficiency, or speed above all.

[CREDITS]

Jennifer:  This miniseries on hiring was reported by Hilke Schellmann and produced by me, Emma Cillekens, Anthony Green, and Karen Hao. We’re edited by Michael Reilly.

That’s it for Season Two, we’re going to take a break and see you back here in the Fall.

Thanks so much for listening. I’m Jennifer Strong.

Why I became a TechTrekker

[Photo: a group jumps into the air with snowy mountains in the background]


My senior spring in high school, I decided to defer my MIT enrollment by a year. I had always planned to take a gap year, but after receiving the silver tube in the mail and seeing all my college-bound friends plan out their classes and dorm decor, I got cold feet. Every time I mentioned my plans, I was met with questions like “But what about school?” and “MIT is cool with this?”

Yeah. MIT totally is. Postponing your MIT start date is as simple as clicking a checkbox. 

Sofia Pronina (right) was among those who hiked to the Katla Glacier during this year’s TechTrek to Iceland.

COURTESY PHOTO

Now, having finished my first year of classes, I’m really grateful that I stuck with my decision to delay MIT, as I realized that having a full year of unstructured time is a gift. I could let my creative juices run. Pick up hobbies for fun. Do cool things like work at an AI startup and teach myself how to create latte art. My favorite part of the year, however, was backpacking across Europe. I traveled through Austria, Slovakia, Russia, Spain, France, the UK, Greece, Italy, Germany, Poland, Romania, and Hungary. 

Moreover, despite my fear that I’d be losing a valuable year, traveling turned out to be the most productive thing I could have done with my time. I got to explore different cultures, meet new people from all over the world, and gain unique perspectives that I couldn’t have gotten otherwise. My travels throughout Europe allowed me to leave my comfort zone and expand my understanding of the greater human experience. 

“In Iceland there’s less focus on hustle culture, and this relaxed approach to work-life balance ends up fostering creativity. This was a wild revelation to a bunch of MIT students.”

When I became a full-time student last fall, I realized that StartLabs, the premier undergraduate entrepreneurship club on campus, gives MIT undergrads a similar opportunity to expand their horizons and experience new things. I immediately signed up. At StartLabs, we host fireside chats and ideathons throughout the year. But our flagship event is our annual TechTrek over spring break. In previous years, StartLabs has gone on TechTrek trips to Germany, Switzerland, and Israel. On these fully funded trips, StartLabs members have visited and collaborated with industry leaders, incubators, startups, and academic institutions. They take these treks both to connect with the global startup sphere and to build closer relationships within the club itself.

Most important, however, the process of organizing the TechTrek is itself an expedited introduction to entrepreneurship. The trip is entirely planned by StartLabs members; we figure out travel logistics, find sponsors, and then discover ways to optimize our funding. 

[Photo: two students soaking in a hot spring in Iceland]

COURTESY PHOTO

In organizing this year’s trip to Iceland, we had to learn how to delegate roles to all the planners and how to maintain morale when making this trip a reality seemed to be an impossible task. We woke up extra early to take 6 a.m. calls with Icelandic founders and sponsors. We came up with options for different levels of sponsorship, used pattern recognition to deduce the email addresses of hundreds of potential contacts at organizations we wanted to visit, and all got scrappy with utilizing our LinkedIn connections.

And as any good entrepreneur must, we had to learn how to be lean and maximize our resources. To stretch our food budget, we planned all our incubator and company visits around lunchtime in hopes of getting fed, played human Tetris as we fit 16 people into a six-person Airbnb, and emailed grocery stores to get their nearly expired foods for a discount. We even made a deal with the local bus company to give us free tickets in exchange for a story post on our Instagram account. 


The Download: spying keyboard software, and why boring AI is best


This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How ubiquitous keyboard software puts hundreds of millions of Chinese users at risk

For millions of Chinese people, the first software they download onto devices is always the same: a keyboard app. Yet few of them are aware that it may make everything they type vulnerable to spying eyes. 

Typing Chinese on a QWERTY keyboard is inefficient, because many Chinese characters share the same latinized spelling. As a result, many people switch to smart, localized keyboard apps to save time and frustration. Today, over 800 million Chinese people use third-party keyboard apps on their PCs, laptops, and mobile phones.

But a recent report by the Citizen Lab, a University of Toronto–affiliated research group, revealed that Sogou, one of the most popular Chinese keyboard apps, had a massive security loophole. Read the full story. 

—Zeyi Yang

Why we should all be rooting for boring AI

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. It hopes they could improve intelligence and operational planning. 

But those might not be the right use cases, writes our senior AI reporter Melissa Heikkila. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases. 

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. The DoD’s best bet is to apply generative AI to more mundane things like Excel, email, or word processing. Read the full story. 

This story is from The Algorithm, Melissa’s weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The ice cores that will let us look 1.5 million years into the past

To better understand the role atmospheric carbon dioxide plays in Earth’s climate cycles, scientists have long turned to ice cores drilled in Antarctica, where snow layers accumulate and compact over hundreds of thousands of years, trapping samples of ancient air in a lattice of bubbles that serve as tiny time capsules. 

By analyzing those cores, scientists can connect greenhouse-gas concentrations with temperatures going back 800,000 years. Now, a new European-led initiative hopes to eventually retrieve the oldest core yet, dating back 1.5 million years. But that impressive feat is still only the first step. Once they’ve done that, they’ll have to figure out how they’re going to extract the air from the ice. Read the full story.

—Christian Elliott

This story is from the latest edition of our print magazine, set to go live tomorrow. Subscribe today for as low as $8/month to ensure you receive full access to the new Ethics issue and in-depth stories on experimental drugs, AI-assisted warfare, microfinance, and more.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How AI got dragged into the culture wars
Fears about ‘woke’ AI fundamentally misunderstand how it works. Yet they’re gaining traction. (The Guardian)
+ Why it’s impossible to build an unbiased AI language model. (MIT Technology Review)
 
2 Researchers are racing to understand a new coronavirus variant 
It’s unlikely to be cause for concern, but it shows this virus still has plenty of tricks up its sleeve. (Nature)
+ Covid hasn’t entirely gone away—here’s where we stand. (MIT Technology Review)
+ Why we can’t afford to stop monitoring it. (Ars Technica)
 
3 How Hilary became such a monster storm
Much of it is down to unusually hot sea surface temperatures. (Wired $)
+ The era of simultaneous climate disasters is here to stay. (Axios)
+ People are donning cooling vests so they can work through the heat. (Wired $)
 
4 Brain privacy is set to become important 
Scientists are getting better at decoding our brain data. It’s surely only a matter of time before others want a peek. (The Atlantic $)
+ How your brain data could be used against you. (MIT Technology Review)
 
5 How Nvidia built such a big competitive advantage in AI chips
Today it accounts for 70% of all AI chip sales—and an even greater share for training generative models. (NYT $)
+ The chips it’s selling to China are less effective due to US export controls. (Ars Technica)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)
 
6 Inside the complex world of dissociative identity disorder on TikTok 
Reducing stigma is great, but doctors fear people are self-diagnosing or even imitating the disorder. (The Verge)
 
7 What TikTok might have to give up to keep operating in the US
This shows just how hollow the authorities’ purported data-collection concerns really are. (Forbes)
 
8 Soldiers in Ukraine are playing World of Tanks on their phones
It’s eerily similar to the war they are themselves fighting, but they say it helps them to dissociate from the horror. (NYT $)
 
9 Conspiracy theorists are sharing mad ideas on what causes wildfires
But it’s all just a convoluted way to try to avoid having to tackle climate change. (Slate $)
 
10 Christie’s accidentally leaked the location of tons of valuable art 🖼📍
Seemingly thanks to the metadata that often automatically attaches to smartphone photos. (WP $)

Quote of the day

“Is it going to take people dying for something to move forward?”

—An anonymous air traffic controller warns that staffing shortages in their industry, plus other factors, are starting to threaten passenger safety, the New York Times reports.

The big story

Inside effective altruism, where the far future counts a lot more than the present

" "

VICTOR KERLOW

October 2022

Since its birth in the late 2000s, effective altruism has aimed to answer the question “How can those with means have the most impact on the world in a quantifiable way?”—and supplied methods for calculating the answer.

It’s no surprise that effective altruism’s ideas have long faced criticism for reflecting white Western saviorism, alongside an avoidance of structural problems in favor of abstract math. And as believers pour even greater amounts of money into the movement’s increasingly sci-fi ideals, such charges are only intensifying. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Watch Andrew Scott’s electrifying reading of the 1965 commencement address ‘Choose One of Five’ by Edith Sampson.
+ Here’s how Metallica makes sure its live performances ROCK. ($)
+ Cannot deal with this utterly ludicrous wooden vehicle.
+ Learn about a weird and wonderful new instrument called a harpejji.




Why we should all be rooting for boring AI


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’m back from a wholesome week off picking blueberries in a forest. So this story we published last week about the messy ethics of AI in warfare is just the antidote, bringing my blood pressure right back up again. 

Arthur Holland Michel does a great job looking at the complicated and nuanced ethical questions around warfare and the military’s increasing use of artificial-intelligence tools. There are myriad ways AI could fail catastrophically or be abused in conflict situations, and there don’t seem to be any real rules constraining it yet. Holland Michel’s story illustrates how little there is to hold people accountable when things go wrong.  

Last year I wrote about how the war in Ukraine kick-started a new boom in business for defense AI startups. The latest hype cycle has only added to that, as companies—and now the military too—race to embed generative AI in products and services. 

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. 

The department sees tons of potential to “improve intelligence, operational planning, and administrative and business processes.” 

But Holland Michel’s story highlights why the first two use cases might be a bad idea. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases.  

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. Everyone agrees that humans should make the final call, but that is made harder by technology that acts unpredictably, especially in fast-moving conflict situations. 

Some worry that the people lowest on the hierarchy will pay the highest price when things go wrong: “In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the ‘decision’ will absorb the blame and protect everyone else along the chain of command from the full impact of accountability,” Holland Michel writes. 

The only ones who seem likely to face no consequences when AI fails in war are the companies supplying the technology.

It helps companies when the rules the US has set to govern AI in warfare are mere recommendations, not laws. That makes it really hard to hold anyone accountable. Even the AI Act, the EU’s sweeping upcoming regulation for high-risk AI systems, exempts military uses, which arguably are the highest-risk applications of them all. 

While everyone is looking for exciting new uses for generative AI, I personally can’t wait for it to become boring. 

Amid early signs that people are starting to lose interest in the technology, companies might find that these sorts of tools are better suited for mundane, low-risk applications than solving humanity’s biggest problems.

Applying AI in, for example, productivity software such as Excel, email, or word processing might not be the sexiest idea, but compared to warfare it’s a relatively low-stakes application, and simple enough to have the potential to actually work as advertised. It could help us do the tedious bits of our jobs faster and better.

Boring AI is unlikely to break as easily and, most important, won’t kill anyone. Hopefully, soon we’ll forget we’re interacting with AI at all. (It wasn’t that long ago when machine translation was an exciting new thing in AI. Now most people don’t even think about its role in powering Google Translate.) 

That’s why I’m more confident that organizations like the DoD will find success applying generative AI in administrative and business processes. 

Boring AI is not morally complex. It’s not magic. But it works. 

Deeper Learning

AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

Amid all the chatter about ChatGPT, artificial general intelligence, and the prospect of robots taking people’s jobs, regulators in the EU and the US have been ramping up warnings against AI and emotion recognition. Emotion recognition is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings. 

But why is this a top concern? Western regulators are particularly concerned about China’s use of the technology, and its potential to enable social control. And there’s also evidence that it simply does not work properly. Tate Ryan-Mosley dissected the thorny questions around the technology in last week’s edition of The Technocrat, our weekly newsletter on tech policy.

Bits and Bytes

Meta is preparing to launch free code-generating software
A version of its new LLaMA 2 language model that is able to generate programming code will pose a stiff challenge to similar proprietary code-generating programs from rivals such as OpenAI, Microsoft, and Google. The open-source program is called Code Llama, and its launch is imminent, according to The Information. (The Information)

OpenAI is testing GPT-4 for content moderation
Using the language model to moderate online content could really help alleviate the mental toll content moderation takes on humans. OpenAI says it’s seen some promising first results, although the tech does not outperform highly trained humans. A lot of big, open questions remain, such as whether the tool can be attuned to different cultures and pick up context and nuance. (OpenAI)

Google is working on an AI assistant that offers life advice
The generative AI tool could function as a life coach, offering up ideas, planning instructions, and tutoring tips. (The New York Times)

Two tech luminaries have quit their jobs to build AI systems inspired by bees
Sakana, a new AI research lab, draws inspiration from the animal kingdom. Founded by two prominent industry researchers and former Googlers, the company plans to make multiple smaller AI models that work together, the idea being that a “swarm” of programs could be as powerful as a single large AI model. (Bloomberg)

