Meta is restructuring its "Responsible AI" team, reflecting a strategic shift in its approach to artificial intelligence development. The move, first reported by The Information, takes a distributed approach at a moment when AI's role in society faces increasing scrutiny.
Meta told the news outlet that it is integrating its Responsible AI team members into separate divisions across the company. A spokesperson from Meta said that this decision will embed AI safety considerations more directly into the development of core products and technologies.
The impact of generative AI has raised alarms worldwide, from people worried about losing their privacy to leaders warning of a potential AI apocalypse.
Meta offered assurances that dispersing its Responsible AI team does not mean the company is abandoning responsible AI development.
“We continue to prioritize and invest in safe and responsible AI development,” a spokesperson said. The move spreads those efforts across the company rather than concentrating them in a single entity overseeing ethics in AI development.
David Evan Harris, a former researcher at Meta, has raised alarms about the potential misuse of AI technologies.
“After working on the civic integrity team at Facebook, I went on to manage research teams working on responsible AI, chronicling the potential harms of AI and seeking ways to make it more safe and fair for society,” Harris wrote in an opinion piece published in The Guardian in June. “I saw how my employer’s own AI systems could facilitate housing discrimination, make racist associations, and exclude women from seeing job listings visible to men.”
He also warns about the dangers of deepfakes and the possibility of open-source LLMs being used by malicious actors to spread misinformation. Preventing such harms was the role of the now-dismantled team.
The restructuring coincides with the company's broader push to streamline operations, part of what CEO Mark Zuckerberg has termed Meta's “year of efficiency.” It is also seen as a response to the growing importance of generative AI tools in tech, a sector in which Meta has been investing heavily.
Meta recently rolled out generative AI tools for advertisers, showcasing its commitment to AI-driven solutions. Its portfolio includes the highly popular open-source large language model "Llama 2," a text-to-video generator, an image inpainting tool, and a brand-new AI assistant, with a Llama 3 rumored to arrive next year.
The broader industry has also seen significant upheaval recently, including Sam Altman's departure as OpenAI's CEO. The board, focused on keeping the company's mission aligned with AI safety, reportedly saw Chief Scientist Ilya Sutskever play a crucial role in the transition. Industry watchers suspected a conflict between people who want to slow AI development, known as “decels,” and those eager to move faster.
Although Emmett Shear, OpenAI’s new interim CEO, clarified that “the board did not remove Sam over any specific disagreement on safety,” speculation remains about how concerned those in charge are with alignment and safety.