AI election interference wasn’t a thing, Trump vows to undo Biden’s AI plans

How Trump may affect federal AI regulation, the future of AI at work and more.

Our first roundup of AI news since Donald Trump won the presidency is full of theories and speculation as to what effect he will have on federal AI regulation, the future of AI at work and much more.

We’re also looking at new deals and developments for AI at work, including more public-private partnerships and other regulatory developments.

Let’s dive in.

Why AI didn’t interfere with the 2024 elections

We’ve seen several examples of AI-generated deepfakes and misinformation attempting to influence elections over the past year, including robocalls in President Biden’s voice during the New Hampshire primary urging Democratic voters to stay home, a deepfake of Taylor Swift falsely endorsing Trump and more.

When all was said and done, there were no credible reports of AI misinformation on voting day. Maybe the $6 million fine and over two dozen criminal charges that the political consultant behind those calls was hit with had something to do with it.

OpenAI may deserve some credit, too. According to a blog post on its website updated on Nov. 8, the company behind ChatGPT “implemented safeguards to direct people to reliable sources of information, prevent deepfakes, and counter efforts by malicious actors.”

Those included:

• Elevating authoritative sources of information. “Through our collaboration with the National Association of Secretaries of State (NASS), we directed people asking ChatGPT specific questions about voting in the U.S., like where or how to vote, to CanIVote.org,” explained OpenAI. “In the month leading up to the election, roughly 1 million ChatGPT responses directed people to CanIVote.org.” The tool also prompted people asking for election results on Election Day to check outlets like the Associated Press and Reuters.
• Manually weeding out bias. “In addition to our efforts to direct people to reliable sources of information, we also worked to ensure ChatGPT did not express political preferences or recommend candidates even when asked explicitly,” OpenAI wrote.
• Preventing deepfakes. OpenAI programmed ChatGPT to refuse requests for generated images of real people, including politicians. “In the month leading up to Election Day, we estimate that ChatGPT rejected over 250,000 requests to generate DALL·E images of President-elect Trump, Vice President Harris, Vice President-elect Vance, President Biden, and Governor Walz.”
• Disrupting threat actors. OpenAI has documented what it has learned about bad actors since the spring. “In May, we began publicly sharing information on our disruptions, and published additional reports in August and October,” it wrote.

There’s much to learn from how OpenAI kept the narrative of its election integrity efforts afloat. When working with products and solutions in a regulated space (or a space that’s soon to be regulated), documenting your progress chronologically lets stakeholders know that your commitments are proactive and not one-and-done PR efforts. Sharing the results of your findings externally also sends a signal that you’re willing to communicate with a level of transparency around your process.

Perhaps most importantly, listing your actions taken and publishing them at a strategic time in the most direct format—a bulleted list—is a clear way to show the impact of your work. It’s even more effective than having your CEO or general counsel pen a white paper waxing philosophical about responsible AI. The latter approach, taken by many other tech companies to prove they are capable of regulating their own products and practices, ultimately just serves as longform thought leadership PR for the protections built into your offerings.

The public wants results.

Trump’s pledge to undo Biden’s AI regulation progress

Though slow to start, President Biden eventually followed through on his October 2023 AI executive order with a fact sheet detailing its key accomplishments one year in. Those include actions to manage safety and security risks by requiring companies developing AI models to report on how they’re trained and secured, privacy and equity protections for American workers that address bias, and much more.

TechCrunch reports that many prominent conservatives equate these efforts with an attack on free speech, and speculate that the Trump administration will seek to undo much of the executive order’s progress:

They accuse the Biden administration of attempting to steer AI development with liberal notions about disinformation and bias; Senator Ted Cruz (R-TX) recently slammed [the National Institute of Standards and Technology’s] “woke AI ‘safety’ standards” as a “plan to control speech” based on “amorphous” social harms.

“When I’m re-elected,” Trump said at a rally in Cedar Rapids, Iowa, last December, “I will cancel Biden’s artificial intelligence executive order and ban the use of AI to censor the speech of American citizens on day one.”

While it’s unclear what would replace Biden’s AI order, Trump’s focus on free speech suggests that the order’s federal efforts to regulate bias would be abandoned. AI biases can impact the hiring process, clinical algorithms that inform medical treatment and much more.

This should concern communicators who serve not only diverse workforces, but multigenerational workforces, too. A recent study from Better Together found that younger respondents aged 18-29 were especially sensitive to discriminatory content generation, with more than 60% expressing concern over racist, sexist or classist results generated by AI.

As the Ragan community often explores how communicators who work across business lines and functions can secure influence, consider the huge opportunity that scaling AI adoption across your workplace presents. Training your workforce on these tools should also include training them to train the models by pointing out when responses are potentially problematic and telling the models why. This empowers employees to feel better about combatting bias by giving them some control over mitigating it.

While AI promises countless benefits to workplace productivity and strategy, training it to unlearn any biased and racist habits is a reputational safeguard that will protect your organization—and the communications function—in the long run.

The latest public-private AI partnership brings secure AI to US intelligence and defense agencies

Whether or not Elon Musk leads Trump’s AI policy as one AI advocacy group is pushing for, a Trump presidency will likely alter how the federal government works with private sector players to scale its security and power.

It’s unlikely that the man who rose to prominence with his bestseller “The Art of the Deal” will abandon his love of making deals, but who those deals are with may change.

Trump will have much to untangle. This past May, Biden’s White House made a big announcement to bolster America’s nuclear infrastructure, an effort later expanded to include plans to reopen the infamous Three Mile Island nuclear plant to power Microsoft’s AI data centers.

Two days after the election, AI company Anthropic and data analytics provider Palantir announced a partnership with Amazon Web Services (AWS) to give U.S. intelligence and defense agencies access to Anthropic’s Claude 3 and 3.5 generative AI models on AWS.

According to the press release:

The partnership facilitates the responsible application of AI, enabling the use of Claude within Palantir’s products to support government operations such as processing vast amounts of complex data rapidly, elevating data driven insights, identifying patterns and trends more effectively, streamlining document review and preparation, and helping U.S. officials to make more informed decisions in time-sensitive situations while preserving their decision-making authorities.

“Our partnership with Anthropic and AWS provides U.S. defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions,” said Shyam Sankar, Chief Technology Officer, Palantir. “Palantir is proud to be the first industry partner to bring Claude models to classified environments.”

“We’re proud to be at the forefront of bringing responsible AI solutions to U.S. classified environments, enhancing analytical capabilities and operational efficiencies in vital government operations,” said Kate Earle Jensen, Head of Sales and Partnerships, Anthropic.

Becoming the first AI tool approved for use in these highly classified environments is a big deal. Solidified in the absence of formal federal regulation, this news signals to those in both the public and private sectors that Claude’s safety protocols are secure enough to merit enterprise-wide adoption in heavily regulated and classified spaces. If you’re in one of those spaces and still struggling to earn budget approval or executive buy-in, sharing this partnership may help.

It’s also a reminder that these public-private partnerships are the fruit borne of record amounts of AI lobbying in Washington. “The number of lobbyists hired to lobby the White House on AI-related issues grew from 323 in the first quarter to 931 by the fourth quarter” of 2023, according to Public Citizen.

That influx is hard to ignore as federal regulation keeps stalling and California’s most thorough AI regulation, which would have required AI manufacturers to test the safety of their tools before taking them to market, was vetoed by Gov. Gavin Newsom months earlier.

Understanding the patchwork of state AI regulations in the markets where you operate is one of the best ways to stay above board as potential new use cases present themselves. The best way is sound internal governance, though, including guidelines and rules of play for internal and external use cases vetted with leaders across all business and product lines that automation touches, with you steering the collaboration from your communications seat.

Other AI developments that matter to communicators include:

• Salesforce is hiring 1,000 salespeople for an AI agent push, reports Yahoo Finance. Coupled with Cisco’s planned Webex AI agent, Nvidia’s partnerships to develop agents for Accenture and Deloitte clients and other developments, the future of AI will look like autonomous assistants working in the background to fulfill the tasks and goals you set for them.
• Lionsgate inked an agreement with AI firm Runway to train a new AI model on Lionsgate’s film and TV library for future project development, according to The Hollywood Reporter.
• Bloomberg Law reported that OpenAI defeated Raw Story’s lawsuit against the company, which claimed OpenAI trained ChatGPT on Raw Story’s reporting.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.
