

How Citizen is remaking itself by recruiting elderly Asians


Josephine Hui, 75, sitting in a gymnasium as an audience member at a community safety event in Oakland


Editor’s note: This is a translation of a story about how the crime-tracking app Citizen has been giving away free subscriptions to elderly Asians in the Bay Area. Find the English language version here.

This article was produced in partnership with the Pulitzer Center’s AI Accountability Network.

When it’s dark outside, Josephine Zhao sometimes summons an extra pair of “eyes,” quite literally, even for the few blocks’ walk back to her home in San Francisco.

Zhao opens the Citizen app on her phone and, through a feature called Live Monitoring, connects with one of the platform’s agents. The platform can track Zhao’s GPS location over the network, and with another tap the agent can be authorized to turn on her phone’s camera, so that the platform can “see what I see,” Zhao says. Usually she doesn’t even speak with the agent, but knowing that “someone is walking with me” puts her somewhat at ease.

It is the latest of several safety measures Zhao has recently adopted: she also avoids public transit, and when she walks around the city she carries a long, pointed device on her keychain. Made of pale pink plastic, it can become a weapon if needed.

But in her eyes, Citizen, a hyperlocal app that lets users report and track notifications of nearby crime, is one of her best means of protection, a kind of data-driven, DIY security for a community that has long felt overlooked.

“Our needs in education, public safety, housing, and transportation are not met or cared about. It’s as if we don’t matter,” says Zhao, who works as a substitute teacher and community liaison for several education nonprofits. “Our needs are not respected, our needs are not met, and people slight us everywhere.”

“I truly believe Citizen is a tool for social justice and racial justice.”

“We have to take action to protect our community,” she adds. “Citizen is the perfect tool.”

After a steady stream of racially motivated attacks in the area, as well as a series of mass shootings targeting Asian residents, many members of Asian American and Pacific Islander (AAPI) communities told MIT Technology Review that they welcome the app, seeing it as an answer to the anxiety that anti-Asian hate has brought them.

For these deeply traumatized people, Citizen has become a way to find peace of mind.

Citizen’s transformation

This warm reception may seem odd for the app, which has long drawn criticism for amplifying paranoia about crime and helping white residents police racial boundaries. Originally named Vigilante, Citizen has a checkered history: Apple’s App Store pulled the app within a week of its 2016 launch for violating the company’s developer guidelines, which bar apps from encouraging physical harm. In 2021, Citizen’s CEO made headlines for asking his staff to put a $30,000 bounty on a man he wrongly believed had started a fire in Los Angeles. And the app’s users have repeatedly been criticized for posting racist comments.

It is against this backdrop that the app is now actively courting users like Zhao. Since September 2022, Citizen has been recruiting Chinese and other Asian residents in the Bay Area, including many seniors, through events organized with community groups such as the Oakland Chinatown Chamber of Commerce and the Chinese American Association of Commerce in San Francisco; those who join receive a free year of the premium subscription, worth $240. (The app’s free version sends users alerts about noteworthy incidents, but the live monitoring service with a Citizen employee requires the premium tier.) Zhao now works directly with Citizen, helping translate the app’s interface into Chinese and promoting it within her networks.

The app’s ultimate goal is to recruit 20,000 new users from the region’s AAPI communities, which would amount to roughly $5 million worth of one-year paid subscriptions. Darrell Stone, Citizen’s head of product, says 700 people have signed up so far.

The Bay Area program is also a test of a broader makeover for the app, which has courted vulnerable groups that often cannot count on police protection, from Black trans communities in Atlanta to victims of gang violence in the Chicago area. “I truly believe Citizen is a tool for social justice and racial justice,” says Trevor Chandler, who led the app’s Bay Area pilot program last year as Citizen’s director of government affairs and public policy.

But some advocates who work with Asian communities in the Bay Area, along with experts who study misinformation among vulnerable populations, question whether this kind of rapid-alert technology addresses the core problem: does it actually make people safer, or does it just make them feel a little safer? They also wonder whether Citizen sometimes makes things worse by amplifying biases against these communities, especially at a time when the pandemic has inflicted so much trauma on Asian communities locally and across the country.

“Almost every day, on just about any social media platform, you can see information crowdsourced through the app spreading wildly and rapidly across the entire tech ecosystem, and to me that is simply not normal,” says Kendall Kosai, vice president of public affairs at OCA, a nonprofit that advocates for the social, political, and economic well-being of Asian communities.

He says he has installed Citizen on his own phone and was startled by some of the biased comments users submitted about certain incidents. “What kind of psychological impact does this have on members of our community?” he asks. “Clearly, all of this could spiral out of control very quickly.”

Getting “the right information”

“I’m glad to be using it,” says Alice Kim, 49, who runs an ice cream shop called Joe’s Ice Cream with her husband in the Richmond District, in the northern part of San Francisco, where roughly a third of residents are Asian. Kim says she has recently seen an increase in vandalism and car thefts.

Like many other Asian Americans, the Kims feel that concerns about their safety went unheard for a long time and were largely ignored by local politicians. “It feels like they live in a different world,” says Sean Kim, Alice’s husband.

Over the course of a few months in 2021, their shop was hit by three attempted break-ins, and Alice says that when she asked people not to use the restroom, some threw trash at her or picked fights.

“Every morning when I came to work, I would feel a little anxious: had my shop been burglarized, would I see another broken window?” Alice told me. “Especially during the pandemic, I felt very tense and unsafe.”

In the fall of 2022, Alice had Sean install the Citizen app on her phone; he had long been telling her about its benefits. Sean had been using Citizen since before the app began marketing itself to AAPI communities, and when his friend Zhao offered them a free trial of the premium version, he promptly upgraded.

Sean considers Citizen more reliable than other local information apps such as Nextdoor because the information it provides seems to be verified. (In addition to drawing emergency information from various public data sources, Citizen staffers say they review user-submitted crime reports before publishing them.)

“We try to ask people to double-check the information being forwarded in WeChat groups,” because “it sometimes causes others to panic.”

“I think more and more people are using Citizen because a lot of people verify the information,” Sean continues. “So at least I know, oh, that wasn’t a gunshot. Without this app, if I heard a gunshot I would have no idea what was going on. I think it’s an effective tool. I know the right information, and that makes me feel safe.”

For Alice, being able to connect with an agent through Citizen’s premium features is a way to handle situations that may not rise to the level of an actual crime but still make her feel unsafe. On the app’s map, red dots mark reports of serious incidents, such as someone being hit by a car or attacked with a weapon; yellow dots mark milder alerts, such as reports of an armed person or a detected gas odor; and gray dots mark noteworthy but nonthreatening issues, such as lost pets.

Like the Kims, many Asian residents of the Bay Area have embraced surveillance because they feel they have long been ignored. AAPI community members have organized volunteer patrols in the Chinatowns of San Francisco and Oakland (though the Kims have not taken part). The couple supports a controversial measure that lets police, with owners’ permission, access private surveillance footage for up to 24 hours. Sean and Alice have also talked with other small-business owners about installing private security cameras, a step that business owners in nearby Oakland’s Chinatown have already taken. To them, Citizen is simply one more tool for keeping a close watch on what happens around them.

Chandler argues that much of the negative commentary around Citizen misses this point, and that some of its core users, like the Kims, rely on the tool because crime is a reality right outside their front doors.

“Citizen and its paid version are not a panacea. It won’t solve all the world’s problems, and it won’t stop crime everywhere. That’s not what it’s for,” Chandler says. “But the app has become a very powerful way for marginalized communities to make their voices heard.”

“Unfortunately, none of their agents speak Chinese”

“The idea behind Citizen is great. But because of the uniqueness of our community, I do look at this with a healthy dose of skepticism,” says OCA’s Kosai. “One thing I keep thinking about is: how accessible is it, really, to our most vulnerable members?”

He points out that Asian communities in the US encompass “50 different ethnicities and 100 different languages,” and that “different communities have different interactions with local law enforcement around these public safety issues.”

For now, Citizen’s interface is available only in English. Jessica Chen, executive director of the Oakland Chinatown Chamber of Commerce, says that to be truly effective, it would have to offer services in Chinese or other Asian languages. (Citizen’s Stone said in an email that the company is “actively investing” in natural-language processing that “will allow us to translate the app into different languages in real time,” but he gave no details or timeline for those efforts.)

On a practical level, it is hard to help members of a community adopt the same technology when they have varying degrees of familiarity with tech and with finding information, and harder still when English is not their first language. For older adults who are not native English speakers in particular, everything from signing up for the platform to understanding the alerts it publishes can be a struggle.

“Do I have the time to teach them? And am I the right person to teach them?” Chen asks.

Josephine Hui, 75, has lived in Oakland for 40 years. A financial educator, she often commutes to Chinatown for work. She and several other seniors recently learned about the app at an event hosted by Citizen and organized jointly with the Asian Committee on Crime, a nonprofit focused on safety issues in Oakland, and the Oakland Chinatown Chamber of Commerce. In the app, she saw a public safety presentation from the Oakland Police Department.

Josephine Hui, 75, at a safety event in Oakland.
LAM THUY VO

“I think Citizen is a great app for anyone walking down the street,” she says. “Unfortunately, none of their agents speak Chinese.”

Still, she says she is eager to learn how to use the app. During the pandemic, she says, she felt isolated and stuck at home, and as attacks on Asian residents mounted, she worried about her own safety.

But before she could use the app, she hit a snag: when she tried to install it, she could no longer remember her Apple account password.

Muddled information

As president of the Oakland Chinatown Chamber of Commerce, Carl Chan has been pushing for more safety measures to protect Chinatown’s residents, and he appreciates the outreach to community members.

For many seniors, however, the app’s interface is not in their native language, so Chan often has to help them learn how to use it. He worries that some people may misread Citizen’s alerts if they are not translated into languages such as Chinese or Vietnamese. He also worries that without proper training, seniors may mistake alerts about other places for local intelligence and pass them along on other platforms, where the spread of such misinformation can create unnecessary fear.

“We try to ask people to double-check the information being forwarded in WeChat groups,” Chan says, because “it sometimes causes others to panic.”

Diani Citra, who works on misinformation issues affecting Asian communities at PEN America, also worries that this kind of dense stream of crime information can backfire, making already traumatized people even more anxious.

Citra says apps like Citizen can help fill a void for people living in “information deserts,” whether because mainstream media does not cover them or because they do not receive information in their native languages.

“For many marginalized communities, knowing about crime is essential. We do not get community information relevant to our safety. Since no one is providing any, we are in no position to ask people not to get it elsewhere,” she says. But using the app, she adds, can still produce an “amplified sense of danger.”

While Chandler says Citizen continually verifies the information it publishes, Asian residents pass along what they receive there through a fragmented ecosystem of news sites and social media platforms such as WhatsApp, WeChat, and Viber, channels that are often already awash in misleading and divisive messaging about anti-Asian hate.

“What was an isolated incident may come to be seen as a major trend.”

For example, according to an August 2022 misinformation report from the National Council of Asian Pacific Americans and the Disinfo Defense League, a growing number of news aggregation sites are collecting reports of crimes in which the perpetrator is Black and the victim is Asian.

These outlets, the report says, sometimes rewrite news articles under more provocative headlines or present old incidents as evidence that mainstream media is underreporting Black-on-Asian crime, often with the aim of pushing anti-Black narratives and weaponizing the identities of Asian victims.

“The lack of coverage of Asian Americans by mainstream media and news organizations leaves space for online sources and platforms that singularly emphasize their ‘pro-Asian’ nature,” the report reads. “… These sources feed problematic narratives that revolve around misogyny, anti-Black racism, and xenophobia.”

While there is no evidence yet that messaging like this has gained a foothold on Citizen, Citra says that Asian seniors, who are already more susceptible to misinformation and divisive narratives, can panic more easily when they see crime information without context. (Citizen did not answer a series of follow-up questions, including about possible misinformation on the app.) “What was an isolated incident may come to be seen as a major trend,” Citra warns.

Can Citizen change?

Citizen has been courting AAPI communities at a moment when policing in the US is already fraught. Many of the communities Citizen is pursuing distrust police departments or are reluctant to work with them. (Indeed, some organizers told me that many Asian community members avoid calling the police to report incidents.)

“We sometimes get so excited about creating an immediate solution that makes things a little better, but we don’t think enough about structural, long-term solutions.”

In theory, for people who often feel failed by official government institutions but still face real safety concerns, technology like Citizen could serve as a useful stepping stone.

Not long ago, though, Citizen was being criticized for creating a “culture of fear” and encouraging vigilantism. One former employee described the app’s typical users as people who would write “incredibly racist” comments.

Chandler argues that such characterizations overlook the app’s large base of users who may need its services to track crime near them because that is simply their reality: crime is frequent where they live. In his view, for users who do not have the “privilege” of living in safe neighborhoods, the app can be a powerful way to spread information.

As an example, Chandler cites his work in Chicago. Statistically, he says, the South Side is less safe than the North Side, and some people there have to live with the reality of crime every day. Residents told him they rely on the app to keep their families safe, for instance by learning whether there has been a shooting or a car crash, incidents that can escalate into larger conflicts.

These Chicago users “are not being told by Citizen that they should be afraid,” Chandler says. “They are already afraid.”

Trevor Chandler at a safety event for the AAPI community in Oakland.
LAM THUY VO

In the fall and winter of 2022, Chandler was working with Bay Area politicians and community organizers, and he was in talks with another local mayor and nearby organizations to bring free Citizen accounts to Hmong and Vietnamese communities in their areas. Before the end of the year, he pushed to expand Citizen into Sacramento County, which has a large Asian population.

Looking ahead, though, it is unclear how much money the company will keep putting into the program. In early January 2023, Chandler was laid off along with 33 other employees.

“I’m proud that through our partnerships with community groups, we not only raised awareness of hate crimes against the AAPI community but also offered practical solutions,” Chandler said in a recent text message. “I’m sad that, as a former Citizen employee, I can no longer be part of that work.”

Chandler says the company will stand by its pledge to provide 20,000 free premium subscriptions to Asian residents of the Bay Area, and Stone confirmed that the company “will continue to promote and support the program.” But Chandler also says he is not sure whether anyone else will carry the project forward.

For Kenji Jones, president of Soar Over Hate, an organization that regularly offers self-defense classes to Asian residents of New York City, a sustained commitment to the community is what matters. He is encouraged by Citizen’s Bay Area campaign, and he especially thinks the idea of giving the app’s users an on-call agent is “very good.” But he worries that the free subscriptions last only a year, and that many low-income Asian residents may not be able to renew them.

“What happens after that year? This is a for-profit company. So it’s about making more money. They are profiting from this community, especially at a time when it feels deeply unsafe. So to me, offering only a one-year trial is pretty unethical,” Jones says.

“We sometimes get so excited about creating an immediate solution that makes things a little better,” he adds, “but we don’t think enough about structural, long-term solutions.”

Jones also points out that some of the most important classes his organization offers help people build confidence, and he worries that using the app could undermine those feelings, potentially leaving people “more anxious and fearful about their safety.”

As Asians, “I think many of us are used to feeling small,” he says. “I think what a lot of people need is confidence, and that’s not something an app can give you.”

Lam Thuy Vo is a journalist who combines data analysis with on-the-ground reporting to examine how systems and policies affect individuals. She is currently an Information Futures fellow at Brown University, an AI Accountability fellow at the Pulitzer Center, and the data journalist in residence at the Craig Newmark Graduate School of Journalism.

Thanks to Zhang Zhi of MIT TR China for providing translation support for this article.


Why I became a TechTrekker




My senior spring in high school, I decided to defer my MIT enrollment by a year. I had always planned to take a gap year, but after receiving the silver tube in the mail and seeing all my college-bound friends plan out their classes and dorm decor, I got cold feet. Every time I mentioned my plans, I was met with questions like “But what about school?” and “MIT is cool with this?”

Yeah. MIT totally is. Postponing your MIT start date is as simple as clicking a checkbox. 

Sofia Pronina (right) was among those who hiked to the Katla Glacier during this year’s TechTrek to Iceland.

COURTESY PHOTO

Now, having finished my first year of classes, I’m really grateful that I stuck with my decision to delay MIT, as I realized that having a full year of unstructured time is a gift. I could let my creative juices run. Pick up hobbies for fun. Do cool things like work at an AI startup and teach myself how to create latte art. My favorite part of the year, however, was backpacking across Europe. I traveled through Austria, Slovakia, Russia, Spain, France, the UK, Greece, Italy, Germany, Poland, Romania, and Hungary. 

Moreover, despite my fear that I’d be losing a valuable year, traveling turned out to be the most productive thing I could have done with my time. I got to explore different cultures, meet new people from all over the world, and gain unique perspectives that I couldn’t have gotten otherwise. My travels throughout Europe allowed me to leave my comfort zone and expand my understanding of the greater human experience. 

“In Iceland there’s less focus on hustle culture, and this relaxed approach to work-life balance ends up fostering creativity. This was a wild revelation to a bunch of MIT students.”

When I became a full-time student last fall, I realized that StartLabs, the premier undergraduate entrepreneurship club on campus, gives MIT undergrads a similar opportunity to expand their horizons and experience new things. I immediately signed up. At StartLabs, we host fireside chats and ideathons throughout the year. But our flagship event is our annual TechTrek over spring break. In previous years, StartLabs has gone on TechTrek trips to Germany, Switzerland, and Israel. On these fully funded trips, StartLabs members have visited and collaborated with industry leaders, incubators, startups, and academic institutions. They take these treks both to connect with the global startup sphere and to build closer relationships within the club itself.

Most important, however, the process of organizing the TechTrek is itself an expedited introduction to entrepreneurship. The trip is entirely planned by StartLabs members; we figure out travel logistics, find sponsors, and then discover ways to optimize our funding. 

Two students soaking in a hot spring in Iceland.

COURTESY PHOTO

In organizing this year’s trip to Iceland, we had to learn how to delegate roles to all the planners and how to maintain morale when making this trip a reality seemed to be an impossible task. We woke up extra early to take 6 a.m. calls with Icelandic founders and sponsors. We came up with options for different levels of sponsorship, used pattern recognition to deduce the email addresses of hundreds of potential contacts at organizations we wanted to visit, and all got scrappy with utilizing our LinkedIn connections.

And as any good entrepreneur must, we had to learn how to be lean and maximize our resources. To stretch our food budget, we planned all our incubator and company visits around lunchtime in hopes of getting fed, played human Tetris as we fit 16 people into a six-person Airbnb, and emailed grocery stores to get their nearly expired foods for a discount. We even made a deal with the local bus company to give us free tickets in exchange for a story post on our Instagram account. 



The Download: spying keyboard software, and why boring AI is best




This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How ubiquitous keyboard software puts hundreds of millions of Chinese users at risk

For millions of Chinese people, the first software they download onto devices is always the same: a keyboard app. Yet few of them are aware that it may make everything they type vulnerable to spying eyes. 

Typing Chinese characters on QWERTY keyboards is inefficient, as many characters share the same latinized spelling. As a result, many users switch to smart, localized keyboard apps to save time and frustration. Today, over 800 million Chinese people use third-party keyboard apps on their PCs, laptops, and mobile phones. 

But a recent report by the Citizen Lab, a University of Toronto–affiliated research group, revealed that Sogou, one of the most popular Chinese keyboard apps, had a massive security loophole. Read the full story. 

—Zeyi Yang

Why we should all be rooting for boring AI

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. It hopes they could improve intelligence and operational planning. 

But those might not be the right use cases, writes our senior AI reporter Melissa Heikkila. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases. 

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. The DoD’s best bet is to apply generative AI to more mundane things like Excel, email, or word processing. Read the full story. 

This story is from The Algorithm, Melissa’s weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The ice cores that will let us look 1.5 million years into the past

To better understand the role atmospheric carbon dioxide plays in Earth’s climate cycles, scientists have long turned to ice cores drilled in Antarctica, where snow layers accumulate and compact over hundreds of thousands of years, trapping samples of ancient air in a lattice of bubbles that serve as tiny time capsules. 

By analyzing those cores, scientists can connect greenhouse-gas concentrations with temperatures going back 800,000 years. Now, a new European-led initiative hopes to eventually retrieve the oldest core yet, dating back 1.5 million years. But that impressive feat is still only the first step. Once they’ve done that, they’ll have to figure out how they’re going to extract the air from the ice. Read the full story.

—Christian Elliott

This story is from the latest edition of our print magazine, set to go live tomorrow. Subscribe today for as low as $8/month to ensure you receive full access to the new Ethics issue and in-depth stories on experimental drugs, AI-assisted warfare, microfinance, and more.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How AI got dragged into the culture wars
Fears about ‘woke’ AI fundamentally misunderstand how it works. Yet they’re gaining traction. (The Guardian)
+ Why it’s impossible to build an unbiased AI language model. (MIT Technology Review)
 
2 Researchers are racing to understand a new coronavirus variant 
It’s unlikely to be cause for concern, but it shows this virus still has plenty of tricks up its sleeve. (Nature)
+ Covid hasn’t entirely gone away—here’s where we stand. (MIT Technology Review)
+ Why we can’t afford to stop monitoring it. (Ars Technica)
 
3 How Hilary became such a monster storm
Much of it is down to unusually hot sea surface temperatures. (Wired $)
+ The era of simultaneous climate disasters is here to stay. (Axios)
+ People are donning cooling vests so they can work through the heat. (Wired $)
 
4 Brain privacy is set to become important 
Scientists are getting better at decoding our brain data. It’s surely only a matter of time before others want a peek. (The Atlantic $)
How your brain data could be used against you. (MIT Technology Review)
 
5 How Nvidia built such a big competitive advantage in AI chips
Today it accounts for 70% of all AI chip sales—and an even greater share for training generative models. (NYT $)
The chips it’s selling to China are less effective due to US export controls. (Ars Technica)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)
 
6 Inside the complex world of dissociative identity disorder on TikTok 
Reducing stigma is great, but doctors fear people are self-diagnosing or even imitating the disorder. (The Verge)
 
7 What TikTok might have to give up to keep operating in the US
This shows just how hollow the authorities’ purported data-collection concerns really are. (Forbes)
 
8 Soldiers in Ukraine are playing World of Tanks on their phones
It’s eerily similar to the war they are themselves fighting, but they say it helps them to dissociate from the horror. (NYT $)
 
9 Conspiracy theorists are sharing mad ideas on what causes wildfires
But it’s all just a convoluted way to try to avoid having to tackle climate change. (Slate $)
 
10 Christie’s accidentally leaked the location of tons of valuable art 🖼📍
Seemingly thanks to the metadata that often automatically attaches to smartphone photos. (WP $)

Quote of the day

“Is it going to take people dying for something to move forward?”

—An anonymous air traffic controller warns that staffing shortages in their industry, plus other factors, are starting to threaten passenger safety, the New York Times reports.

The big story

Inside effective altruism, where the far future counts a lot more than the present

VICTOR KERLOW

October 2022

Since its birth in the late 2000s, effective altruism has aimed to answer the question “How can those with means have the most impact on the world in a quantifiable way?”—and supplied methods for calculating the answer.

It’s no surprise that effective altruism’s ideas have long faced criticism for reflecting white Western saviorism, alongside an avoidance of structural problems in favor of abstract math. And as believers pour even greater amounts of money into the movement’s increasingly sci-fi ideals, such charges are only intensifying. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Watch Andrew Scott’s electrifying reading of the 1965 commencement address ‘Choose One of Five’ by Edith Sampson.
+ Here’s how Metallica makes sure its live performances ROCK. ($)
+ Cannot deal with this utterly ludicrous wooden vehicle.
+ Learn about a weird and wonderful new instrument called a harpejji.





Why we should all be rooting for boring AI




This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’m back from a wholesome week off picking blueberries in a forest. So this story we published last week about the messy ethics of AI in warfare is just the antidote, bringing my blood pressure right back up again. 

Arthur Holland Michel does a great job looking at the complicated and nuanced ethical questions around warfare and the military’s increasing use of artificial-intelligence tools. There are myriad ways AI could fail catastrophically or be abused in conflict situations, and there don’t seem to be any real rules constraining it yet. Holland Michel’s story illustrates how little there is to hold people accountable when things go wrong.  

Last year I wrote about how the war in Ukraine kick-started a new boom in business for defense AI startups. The latest hype cycle has only added to that, as companies—and now the military too—race to embed generative AI in products and services. 

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. 

The department sees tons of potential to “improve intelligence, operational planning, and administrative and business processes.” 

But Holland Michel’s story highlights why the first two use cases might be a bad idea. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases.  

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. Everyone agrees that humans should make the final call, but that is made harder by technology that acts unpredictably, especially in fast-moving conflict situations. 

Some worry that the people lowest on the hierarchy will pay the highest price when things go wrong: “In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the ‘decision’ will absorb the blame and protect everyone else along the chain of command from the full impact of accountability,” Holland Michel writes. 

The only ones who seem likely to face no consequences when AI fails in war are the companies supplying the technology.

It helps companies when the rules the US has set to govern AI in warfare are mere recommendations, not laws. That makes it really hard to hold anyone accountable. Even the AI Act, the EU’s sweeping upcoming regulation for high-risk AI systems, exempts military uses, which arguably are the highest-risk applications of them all. 

While everyone is looking for exciting new uses for generative AI, I personally can’t wait for it to become boring. 

Amid early signs that people are starting to lose interest in the technology, companies might find that these sorts of tools are better suited for mundane, low-risk applications than solving humanity’s biggest problems.

Applying AI in, for example, productivity software such as Excel, email, or word processing might not be the sexiest idea, but compared to warfare it’s a relatively low-stakes application, and simple enough to have the potential to actually work as advertised. It could help us do the tedious bits of our jobs faster and better.

Boring AI is unlikely to break as easily and, most important, won’t kill anyone. Hopefully, soon we’ll forget we’re interacting with AI at all. (It wasn’t that long ago when machine translation was an exciting new thing in AI. Now most people don’t even think about its role in powering Google Translate.) 

That’s why I’m more confident that organizations like the DoD will find success applying generative AI in administrative and business processes. 

Boring AI is not morally complex. It’s not magic. But it works. 

Deeper Learning

AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

Amid all the chatter about ChatGPT, artificial general intelligence, and the prospect of robots taking people’s jobs, regulators in the EU and the US have been ramping up warnings against AI and emotion recognition. Emotion recognition is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings. 

But why is this a top concern? Western regulators are particularly concerned about China’s use of the technology, and its potential to enable social control. And there’s also evidence that it simply does not work properly. Tate Ryan-Mosley dissected the thorny questions around the technology in last week’s edition of The Technocrat, our weekly newsletter on tech policy.

Bits and Bytes

Meta is preparing to launch free code-generating software
A version of its new LLaMA 2 language model that is able to generate programming code will pose a stiff challenge to similar proprietary code-generating programs from rivals such as OpenAI, Microsoft, and Google. The open-source program is called Code Llama, and its launch is imminent, according to The Information. (The Information)

OpenAI is testing GPT-4 for content moderation
Using the language model to moderate online content could really help alleviate the mental toll content moderation takes on humans. OpenAI says it’s seen some promising first results, although the tech does not outperform highly trained humans. A lot of big, open questions remain, such as whether the tool can be attuned to different cultures and pick up context and nuance. (OpenAI)

Google is working on an AI assistant that offers life advice
The generative AI tools could function as a life coach, offering up ideas, planning instructions, and tutoring tips. (The New York Times)

Two tech luminaries have quit their jobs to build AI systems inspired by bees
Sakana, a new AI research lab, draws inspiration from the animal kingdom. Founded by two prominent industry researchers and former Googlers, the company plans to make multiple smaller AI models that work together, the idea being that a “swarm” of programs could be as powerful as a single large AI model. (Bloomberg)

