AI news for communicators: What’s new and notable

What you need to know about the latest research and developments on AI risk and regulation.

Last week on “The Daily Show,” Mark Cuban suggested that the AI race is ultimately a matter of power, saying that “nothing will give you more power than military and AI.”

British historian Lord Acton would have offered a fitting response with his famous maxim, “Absolute power corrupts absolutely.” And as communicators continue to see the battle between private company lobbying efforts, state regulation and federal regulation play out in real time, it’s hard to argue with Cuban’s sentiment.

In notable news for communicators, a controversial California AI regulation bill moves toward a vote at the end of the month, and the Democratic National Convention takes over Chicago amid an influx of deepfakes attempting to sway voter sentiment ahead of the 2024 presidential election.

Here’s what communicators need to know about AI this week.

Risks 

With the DNC hitting Chicago this week, coverage is fixated on the surrogates, speeches and memorable moments leading up to Vice President Kamala Harris’ formal acceptance of the presidential nomination Thursday. 

While the November elections will bring about many historic firsts, the widespread application of deepfake technology to misrepresent candidates and their positions is also unprecedented.

On Monday, Microsoft hosted a luncheon at Chicago’s Drake Hotel to train people on detecting deceptive AI content and using tools that can help identify deepfakes as AI-manipulated media becomes more widespread.

The Chicago Sun-Times reports:

“This is a global challenge and opportunity,” says Ginny Badanes, general manager of Microsoft’s Democracy Forward Program. “While we’re, of course, thinking a lot about the U.S. election because it’s right in front of us, and it’s obviously hugely consequential, it’s important to look back at the big elections that have happened.”

Badanes says one of the most troubling political deepfake attacks worldwide happened in October in Slovakia just two days before the election for a seat in parliament in the central European country. AI technology was used to create a fake recording of a top political candidate bragging about rigging the election. It went viral. And the candidate lost by a slim margin.

In a report this month, Microsoft warned that figures in Russia were “targeting the U.S. election with distinctive video forgeries.”

These myriad examples highlight a troubling pattern of bad actors attempting to drive voter behavior. This plays out as an AI-assisted evolution of the microtargeting campaign that weaponized the psychographic profiles of Facebook users to flood their feeds with disinformation ahead of the 2016 election.

Once again, the bad actors are both foreign and domestic. This week, Trump falsely implied that Taylor Swift endorsed him by posting fake images of Swift and her fans in pro-Trump garb. Last week, Elon Musk released image generation capabilities on Grok, his AI chatbot on X, which lets users generate AI images with few filters or guidelines. As Rolling Stone reports, it didn’t go well.

This may get worse before it gets better, which could explain why The Verge reports that the San Francisco City Attorney’s office is suing 16 of the most popular “AI undressing” websites that do exactly what it sounds like they do.

It may also explain why the world of finance is starting to recognize how risky an investment AI is in its current, unregulated state.

Marketplace reports that the Eurekahedge AI Hedge Fund has lagged behind the S&P 500, “proving that the machines aren’t learning from their investing mistakes.”

Meanwhile, a new report from LLM evaluation platform Arize found that one in five Fortune 500 companies now mentions generative AI or LLMs in its annual report. Researchers also found a 473.5% increase since 2022 in the number of companies framing AI as a risk factor.

What could a benchmark for AI risk evaluation look like? Bo Li, an associate professor at the University of Chicago, has led a group of colleagues across several universities to develop a taxonomy of AI risks and a benchmark for evaluating which LLMs break the rules most.

Li and the team analyzed government AI regulations and guidelines in the U.S., China and the EU alongside the usage policies of 16 major AI companies. 

WIRED reports:

Understanding the risk landscape, as well as the pros and cons of specific models, may become increasingly important for companies looking to deploy AI in certain markets or for certain use cases. A company looking to use an LLM for customer service, for instance, might care more about a model’s propensity to produce offensive language when provoked than how capable it is of designing a nuclear device.

Bo says the analysis also reveals some interesting issues with how AI is being developed and regulated. For instance, the researchers found government rules to be less comprehensive than companies’ policies overall, suggesting that there is room for regulations to be tightened.

The analysis also suggests that some companies could do more to ensure their models are safe. “If you test some models against a company’s own policies, they are not necessarily compliant,” Bo says. “This means there is a lot of room for them to improve.”

This conclusion underscores the impact that corporate communicators can have on shaping internal AI policies and defining responsible use cases. You are the glue that can hold your organization’s AI efforts together as they scale.

Much like a crisis plan has stakeholders across business functions, your internal AI strategy should start with a task force that engages heads across departments and functions to ensure every leader is communicating guidelines, procedures and use cases from the same playbook, while serving as your eyes and ears to identify emerging risks.

Regulation

Last Thursday, the California State Assembly’s Appropriations Committee voted to endorse an amended version of a bill that would require companies to test the safety of their AI tech before releasing anything to the public. Bill S.B. 1047 would let the state’s attorney general sue companies if their AI caused harm, including deaths or mass property damage. A formal vote is expected by the end of the month.

Unsurprisingly, the tech industry is fiercely debating the details of the bill.

The New York Times reports:

Senator Scott Wiener, the author of the bill, made several concessions in an effort to appease tech industry critics like OpenAI, Meta and Google. The changes also reflect some suggestions made by another prominent start-up, Anthropic.

The bill would no longer create a new agency for A.I. safety, instead shifting regulatory duties to the existing California Government Operations Agency. And companies would be liable for violating the law only if their technologies caused real harm or imminent dangers to public safety. Previously, the bill allowed for companies to be punished for failing to adhere to safety regulations even if no harm had yet occurred.

“The new amendments reflect months of constructive dialogue with industry, start-up and academic stakeholders,” said Dan Hendrycks, a founder of the nonprofit Center for A.I. Safety in San Francisco, which helped write the bill.

A Google spokesperson said the company’s previous concerns “still stand.” Anthropic said it was still reviewing the changes. OpenAI and Meta declined to comment on the amended bill.

Mr. Wiener said in a statement on Thursday that “we can advance both innovation and safety; the two are not mutually exclusive.” He said he believed the amendments addressed many of the tech industry’s concerns.

Late last week, California Congresswoman Nancy Pelosi issued a statement sharing her concerns about the bill. Pelosi cited Biden’s AI efforts and warned against stifling innovation. 

“The view of many of us in Congress is that SB 1047 is well-intentioned but ill-informed,” Pelosi said.  

Pelosi cited the work of top AI researchers and thought leaders decrying the bill, but offered little in the way of next steps for advancing federal regulation.

In response, California state Sen. Scott Wiener, the bill’s author, disagreed with Pelosi.

“The bill requires only the largest AI developers to do what each and every one of them has repeatedly committed to do: Perform basic safety testing on massively powerful AI models,” Wiener added.

This disconnect highlights the frustrating push and pull between those who warn against an accelerationist mentality toward AI and those who publicly warn that regulation will stifle innovation, a key talking point of those doing AI policy and lobbying work on behalf of big tech.

It also speaks to the limits of thought leadership. Consider the op-ed published last month by Amazon SVP of Global Public Policy and General Counsel David Zapolsky that calls for alignment on global responsible AI policy. The piece emphasizes Amazon’s willingness to collaborate with the government on “voluntary commitments,” highlights the company’s research and deployment of responsible use safeguards in its products, and convincingly positions Amazon as a steward of responsible AI reform.

While the piece does a fantastic job of positioning Amazon as an industry leader, it doesn’t mention federal regulation once. Still, the idea that public-private collaboration is a sufficient substitute for formal regulation surfaces indirectly throughout, setting a precedent for the recent influx of AI lobbyists on the Hill.

“The number of lobbyists hired to lobby the White House on AI-related issues grew from 323 in the first quarter to 931 by the fourth quarter,” Public Citizen reminds us.

As more companies stand up their philosophies on responsible AI use at the expense of government oversight, it’s crucial to understand what daylight exists between your company’s external claims about the efficacy of its responsible AI efforts and how those efforts are playing out on the inside.

If you’re at an organization large enough to have public affairs or public policy colleagues in the fold, this is a reminder that aligning your public affairs and corp comms efforts with your internal efforts is a crucial step to mitigating risk. 

Those who are truly able to regulate their deployment and use cases internally will be able to explain how, and to source guidelines for ethical use, continued learning and more. True thought leadership will not take the form of product promotion, but of showing the work through actions and results.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.
