Artificial intelligence (AI) doesn't directly threaten jobs, argued Meredith Whittaker, president of the privacy-preserving messaging app Signal, but it does provide a "powerful pretext" for degrading working conditions.

"You need to be on the lookout for claims that AI is a magic technology that actually replaces workers, because that is not really borne out." Whitaker argued at Web Summit 2023 in Lisbon.

Whittaker, who co-founded the AI Now Institute and researches the social implications of artificial intelligence, pointed to the recent Hollywood writers' strike and the fight over the introduction of generative artificial intelligence into the creative process. She argued that "AI often doesn't replace workers," since "it takes a huge number of laborers to develop and maintain and adjust the outputs of these systems so they actually make sense in our complex world."

However, she cautioned that "AI does provide a powerful pretext for bosses and institutions and boards of directors to degrade the working conditions of the workers who are managing these AI systems." She pointed to ride-sharing app Uber as a firm that talked about "changing the landscape of transportation," accusing it of engaging in "labor law arbitrage, and a degradation of employment status around the world."

"I think we can't talk about AI as the agents here, AI isn't doing this," Whitaker said, pointing the finger at "studio executives in Hollywood" and big tech firms like Microsoft, Google and Meta. Those executives, she said, will make artificial intelligence tools and APIs available to businesses that will use them to increase growth, profits and productivity. That, she argued, translates to "cutting benefits, wages and workers,  hiring people back as contractors, making them edit AI-generated output instead of writing themselves."

Signal and AI

Signal, for its part, has no plans to integrate artificial intelligence. Whittaker noted that AI draws on user data, and Signal "goes out of its way to have as little data as possible."

Said Whittaker: "We don't know who you are, who you're talking to, we don't have your contact list, we don't know the contents of your conversations, we don't know what media file you're sending."

"I agree with people in general that AI is not trustworthy," Whitaker said, pointing out that it's a tool developed by a handful of large corporations, used by employers, police services, and governments. "We are not the users of AI, we are the subjects of AI; we need to be very clear about that position," she added.

Edited by Stacy Elliott.