
Meta has built a massive new language AI—and it’s giving it away for free

Pineau helped change how research is published at several of the largest AI conferences, introducing a checklist of things that researchers must submit alongside their results, including code and details about how experiments are run. Since she joined Meta (then Facebook) in 2017, she has championed that culture in its AI lab.

“That commitment to open science is why I’m here,” she says. “I wouldn’t be here on any other terms.”

Ultimately, Pineau wants to change how we judge AI. “What we call state-of-the-art nowadays can’t just be about performance,” she says. “It has to be state-of-the-art in terms of responsibility as well.”

Still, giving away a large language model is a bold move for Meta. “I can’t tell you that there’s no risk of this model producing language that we’re not proud of,” says Pineau. “It will.”

Weighing the risks

Margaret Mitchell, one of the AI ethics researchers Google forced out in 2020 and 2021, who is now at Hugging Face, sees the release of OPT as a positive move. But she thinks there are limits to transparency. Has the language model been tested with sufficient rigor? Do the foreseeable benefits outweigh the foreseeable harms, such as the generation of misinformation or racist and misogynistic language?

“Releasing a large language model to the world where a wide audience is likely to use it, or be affected by its output, comes with responsibilities,” she says. Mitchell notes that this model will be able to generate harmful content not only by itself, but through downstream applications that researchers build on top of it.

Meta AI audited OPT to remove some harmful behaviors, but the point is to release a model that researchers can learn from, warts and all, says Pineau.

“There were a lot of conversations about how to do that in a way that lets us sleep at night, knowing that there’s a non-zero risk in terms of reputation, a non-zero risk in terms of harm,” she says. She dismisses the idea that you should not release a model because it’s too dangerous—which is the reason OpenAI gave for not releasing GPT-3’s predecessor, GPT-2. “I understand the weaknesses of these models, but that’s not a research mindset,” she says.
