AI risk and regulation news: What communicators need to know

Newsom vetoes landmark AI bill, OpenAI restructures and more.

This week, an influential AI bill is vetoed by California Gov. Gavin Newsom, the federal government goes all in on nuclear energy to power AI, top OpenAI execs leave as the company restructures to make money, and more.

Read on to find out what communicators need to be aware of this week in the latest chaotic, captivating chapter of AI’s growing influence.

AI goes nuclear

Another week, another existential concern over AI’s impact on our national energy infrastructure. As AI use grows, so will the need to feed it energy.

Yesterday, the U.S. Department of Energy announced that it had finalized a $1.5 billion loan to restart Michigan’s Holtec Palisades nuclear plant.

Last month, Microsoft announced a roughly $1.6 billion deal with Constellation Energy to reopen Pennsylvania’s Three Mile Island nuclear plant in 2028, with the reactor producing energy exclusively for the tech giant, Reuters reported.

Given that Three Mile Island was the site of the worst nuclear meltdown in American history, according to the Nuclear Regulatory Commission, the scientific community has raised broad concerns about reintroducing nuclear power as a viable, sustainable energy source.

The science journal Nature captured these concerns, which include the need for a robust safety protocol framework and strict regulations around reactor operations, something that the NRC is primed to update and enforce. Of course, those will soon need to exist in relation to how big tech is scaling its AI product development, a matter in a regulatory holding pattern all its own.  

For what it’s worth, Biden did address the nuclear risks to the power grid alongside the scaling of AI infrastructure in an April progress update to his executive order on AI. 

While the DOE’s push for nuclear energy, and Biden’s executive order, demonstrate what public-private partnerships around AI can look like, questions of how these partnerships can coexist alongside strict regulation (both AI and nuclear) persist. 

Communicators working in the energy industry should pay attention to nuclear energy’s growing influence in the coming years.

OpenAI restructures as a for-profit company

Meanwhile, a restructuring at OpenAI raised questions over how the most popular genAI company can be a positive influence on responsible use.

ChatGPT maker OpenAI announced plans to restructure its business to become a for-profit benefit corporation and remove control from its non-profit board last Thursday. CEO Sam Altman will receive equity for the first time.

Reuters reports:

The OpenAI non-profit will continue to exist and own a minority stake in the for-profit company, the sources said. The move could also have implications for how the company manages AI risks in a new governance structure.

“We remain focused on building AI that benefits everyone, and we’re working with our board to ensure that we’re best positioned to succeed in our mission. The non-profit is core to our mission and will continue to exist,” an OpenAI spokesperson said.

Last Wednesday, OpenAI CTO Mira Murati, Chief Research Officer Bob McGrew and Vice President of Research Barret Zoph announced their plans to resign. Murati announced her resignation in a memo to employees that she later shared on X, The New York Times reported.

While her stated reasons for stepping away are vague, Murati’s focus on collaboration and “our quest to improve human well-being” are a notable divergence from the reported reasons for OpenAI restructuring to become more attractive to investors.

Three weeks ago, Altman left OpenAI’s safety board, which was created in May to oversee critical safety decisions around its products and operations. Last month, Altman joined the Washington lobbying group Business Software Alliance, reported Semafor. The global group pushes a focus on “responsible AI” for enterprise business, a buzzword evangelized in owned media white papers across the world.

The restructure, and Altman’s move, raise questions of what the governance structure for the company will look like, and how OpenAI will position its commitments to “responsible AI” as a means to mitigate risk amid its changes.

California Gov. Gavin Newsom vetoes AI safety bill

It’s unclear how much pressure there will be for voluntary risk reporting to occur, however. Over the weekend, California Gov. Gavin Newsom vetoed AI safety bill S.B. 1047 after it was approved last month in the State Assembly. The bill would have required that big tech test the safety of its systems or models before public release, and given the state the right to sue companies over serious harm caused by the tech. 

According to the New York Times:

Mr. Newsom said that the bill was flawed because it focused too much on regulating the biggest A.I. systems, known as frontier models, without considering potential risks and harms from the technology. He said that legislators should go back to rewrite it for the next session.

“I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Mr. Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.”

Newsom’s language echoes that of former House Speaker Nancy Pelosi of California, who previously said in a statement that the bill was “well-intentioned but ill-informed.”

The bill was backed by many AI safety groups, top experts in the field and Californians themselves. It was opposed by almost all of the tech industry players, many of which are headquartered in California.

Vox reports:

Now that Newsom has killed the bill, he may face some sticks of his own. A poll from the pro-SB1047 AI Policy Institute finds that 60 percent of voters are prepared to blame him for future AI-related incidents if he vetoes SB 1047. In fact, they’d punish him at the ballot box if he runs for higher office: 40 percent of California voters say they would be less likely to vote for Newsom in a future presidential primary election if he vetoes the bill.

California has been a bellwether in digital privacy legislation and consumer protections, and many hoped this bill would set a precedent for holding AI companies accountable, akin to the European Union’s landmark AI Act. Instead, Newsom’s office is downplaying the lack of stringent state regulation on AI safety at the enterprise level and focusing on other AI laws that he did sign.

The same day as the veto, a blog post on Newsom’s website noted that he signed 17 other bills into law over the past 30 days, including legislation to crack down on deepfakes to combat disinformation, mandate watermarks, and protect children. 

California lags behind other states in enacting laws to address these issues. “In the absence of federal legislation, Colorado, Maryland, Illinois and other states have enacted laws to require disclosures of A.I.-generated ‘deepfake’ videos in political ads, ban the use of facial recognition and other A.I. tools in hiring and protect consumers from discrimination in A.I. models,” reported The New York Times.

The post emphasizes Newsom’s push for risk assessment data, listing the many experts the state has partnered with over the past year who “will help lead California’s effort to develop responsible guardrails for the deployment of GenAI.” They include “godmother of AI” Dr. Fei-Fei Li; Tino Cuéllar, a member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research; and Jennifer Tour Chayes, dean of the College of Computing, Data Science, and Society at UC Berkeley.

However you feel about California’s role in AI regulation, this post is an artful example of how to center your positive strides in a message while downplaying the controversial decisions. 

The FTC cracks down on fraud with ‘Operation AI Comply’

Elsewhere, the government is taking new steps to combat bad actors. Last week, the Federal Trade Commission launched “Operation AI Comply” to target companies that produce AI tools used for deceptive purposes like leaving fake reviews.

“Using AI tools to trick, mislead, or defraud people is illegal,” outgoing FTC Chair Lina M. Khan said in a press release. “The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”

All the more reason for communicators to influence strategy inside and out: overseeing internal governance through guideline creation and government affairs while connecting that work back to the organization’s messaging, promises and positioning of AI use cases in its products.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.
