AI for communicators: What’s new and what matters
The latest on risks, regulation and uses.
AI continues to shape our world in ways big and small. From misleading imagery to new attempts at regulation and major changes in how newsrooms use AI, there’s no shortage of big stories.
Here’s what communicators need to know.
AI risks
One of the biggest concerns about generative AI is the possibility of building bias into machine learning systems that can influence output. It appears that Google may have overcorrected for this possibility with the image generation tools in its newly renamed AI tool Gemini.
The New York Times reported that Google temporarily suspended Gemini’s ability to generate images of people after the tool returned a number of AI-generated images that over-indexed on depicting women and people of color, even when this led to historical misrepresentations, or that simply refused to show white people.
Among the missteps, Gemini returned images of Asian women and Black men in Nazi uniforms when asked to show a German soldier in 1943, and refused to show images of white couples when asked.
In a statement posted to X, Google’s Comms team wrote, “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
This issue highlights the challenge Google faces in overcoming the biases present on the broader internet, which fuels its AI generation tool, without overcorrecting in the other direction.
Finally, a reminder that what comes from generative AI is often made of pure imagination.
Business Insider reports that families were enticed with beautiful, AI-generated fantasies of a candy-filled extravaganza that nodded to Willy Wonka. But attendees in Scotland forked over the equivalent of $44 for a barren warehouse with a few banners taped to the walls, photos revealed.
Police were called to an ‘immersive’ Willy Wonka Experience after families showed up to an ‘empty warehouse’
The event reportedly charged $40 for entry, advertised with AI art, and said it would be a ‘journey filled with wondrous creations and enchanting surprises at every… pic.twitter.com/udz8KeWVxQ
— Culture Crave 🍿 (@CultureCrave) February 27, 2024
It’s a sad reminder that unscrupulous people will continue using AI in ways big and small, eroding trust overall. Expect warier, more suspicious consumers moving forward as we all begin to question what’s real and what’s illusion.
Regulation
Microsoft’s AI partnerships are once more under scrutiny by regulators. This time, the tech giant’s collaboration with the French Mistral AI has drawn the attention of the EU, Reuters reported. Microsoft invested $16 million into the startup in hopes of incorporating Mistral’s models into its Azure platform. Some EU lawmakers are already demanding an investigation as Microsoft seems set to gain even more power in the AI space. Investigations are already underway due to Microsoft’s stake in OpenAI, maker of ChatGPT.
But the investigations reveal broader cracks in the EU’s views toward AI. As Reuters reports:
Alongside Germany and Italy, France also pushed for exemptions for companies making generative AI models, to protect European startups such as Mistral from over-regulation.
“That story seems to have been a front for American-influenced big tech lobby,” said Kim van Sparrentak, an MEP who worked closely on the AI Act. “The Act almost collapsed under the guise of no rules for ‘European champions’, and now look. European regulators have been played.”
A third MEP, Alexandra Geese, told Reuters the announcement raised legitimate questions over Mistral and the French government’s behaviour during the negotiations.
“There is a concentration of money and power here like the world has never seen, and I think this warrants an investigation.”
In the United States, Congress has created a bipartisan task force focused on AI and how to combat its negative implications, like deepfakes and job loss, even as the nation leads internationally in the field’s development, NBC News reported. Twelve members from each party will join the task force.
But don’t expect sweeping legislative priorities out of the task force. NBC News describes the task force’s mission as “writing a comprehensive report that will include guiding principles, recommendations and policy proposals developed with help from House committees of jurisdiction.”
Some think Congress isn’t moving fast enough to put recommendations and policies into effect, so they’re taking matters into their own hands. California, the nation’s most populous state and home to many tech companies, intends to roll out legislation in the near future to regulate AI within its borders.
“I would love to have one unified, federal law that effectively addresses AI safety. Congress has not passed such a law. Congress has not even come close to passing such a law,” California Democratic state Senator Scott Wiener, of San Francisco, told NPR.
The California measure, Senate Bill 1047, would require companies building the largest and most powerful AI models to test for safety before releasing those models to the public.
AI companies would have to tell the state about testing protocols and guardrails, and if the tech causes “critical harm,” California’s attorney general could sue.
Wiener says his legislation draws heavily on the Biden Administration’s 2023 executive order on AI.
This floats the very real possibility that America could see a patchwork of regulations in the AI space if Congress doesn’t get its act together – and soon.
AI use cases
Finally, we know what’s scary about AI, and we know what governments want to do with AI, but how are companies using AI today?
The news industry continues to be especially interested in AI. Politico published an interview with Oxford doctoral candidate Felix M. Simon about how AI has already descended on the industry, impacting everything from article recommendations in news apps to, yes, how the news gets made.
Simple, non-terrifying use cases include giving AI long-form content and having it digest the piece into bullet points for easy consumption, or having an AI-generated voice read an article aloud. But the more frightening possibilities include using AI to replace human reporters, to churn out mass quantities of stories instead of focusing on quality, and Big Tech fully taking control of media through its ownership of AI.
In related news, Google is paying small news publishers to use its AI tools to create content, Adweek reported. The independent publishers will receive sums in the five-figure range to post content over the course of a year. The tool, which is not currently available for public use, indexes recent reports, such as from government agencies, and summarizes them for easy publication.
“In partnership with news publishers, especially smaller publishers, we’re in the early stages of exploring ideas to potentially provide AI-enabled tools to help journalists with their work,” reads a statement from Google shared with Adweek. “These tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.”
Still, it seems naive to think that these tools won’t replace at least some journalists, no matter what everyone would like to believe.
Lending company Klarna says its use of AI has enabled it to replace 700 human employees – coincidentally, the company says, the same number of people it recently laid off. Fast Company reports that Klarna has gone all-in on AI for customer service, where it currently accounts for two-thirds of all customer conversations, with satisfaction ratings similar to those of human agents.
Whether you view this all as inevitable progress, nightmare fuel or a bit of both, there is likely no escaping the AI onslaught. That’s according to JPMorgan Chase CEO Jamie Dimon.
“This is not hype,” Dimon told CNBC. “This is real. When we had the internet bubble the first time around … that was hype. This is not hype. It’s real. People are deploying it at different speeds, but it will handle a tremendous amount of stuff.”
Guess we’ll find out.
What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!
Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.