AI for communicators: What’s new and what matters

A new OpenAI model was unveiled and California passed new AI regulations.

AI tools and regulations continue to advance at a startling rate. Let’s catch you up quick.

Tools and business cases

AI-generated video continues to be a shiny bauble on the horizon. Adobe has announced a limited release of the Adobe Firefly Video Model later this year. The tool will reportedly offer both text and image prompts and allow users to specify the camera angle, motion and other aspects to get the perfect shot. It also comes with the assurance that it is trained only on Adobe-approved images, and thus will come without the copyright complications some other tools pose.

The downside? Videos are limited to just 5 seconds. Another tool, dubbed Generative Extend, will allow users to extend existing clips with AI. It will be available only through Premiere Pro.

Depending on Firefly Video’s release date, this could be one of the first publicly available, reputable video AI tools. While OpenAI announced its own Sora model months ago, it remains in testing with no release date announced. 

And just as AI video is set to gain traction, Instagram and Facebook are set to make their labeling of AI-edited content less obvious to the casual scroller. Rather than appearing directly below the user’s name, the tag will now be tucked away in a menu. However, this only applies to AI-edited content, not AI-generated content. Still, it’s a slippery slope, and it can be difficult to tell where one ends and the other begins.

Meta has also publicly admitted to training its LLM on all publicly available Facebook and Instagram posts made by adults, dating all the way back to 2007. Yes, that means your cringey college musings after that one philosophy class were used to feed an AI model. While opt-outs are available in some areas, such as the EU and Brazil, Meta has by and large already devoured your content to feed the voracious appetite of its AI models.

OpenAI, creator of ChatGPT, has released a new model, OpenAI o1, that focuses on math and coding prompts. OpenAI says its new models spend “more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.”

While this high-end, scientifically focused tool may not be a fit for most communicators, other departments may adopt it, which means communicators will be in charge of explaining the how and why of the tech internally and externally.

In a quirkier use of AI, Google is testing a tool that allows you to create podcasts based on your notes. It’s an outgrowth of the note-taking app NotebookLM, creating two AI-generated “hosts” who can discuss your research and draw connections. According to The Verge, they’re fairly lifelike, with casual speech and enough smarts to discuss the topic in an interesting way. This could be a great tool for creating internal podcasts for those with small budgets and no recording equipment.

On a higher level, the Harvard Business Review examined the use of AI to help formulate business strategy. It found that the tool, while often lacking specifics on a business, is useful for identifying blind spots that human workers may miss. For instance, the AI was prompted to assist a small agricultural research firm in identifying what factors might impact its business in the future:

However, with clever prompting, gen AI tools can provide the team with food for thought. We framed the prompt as “What will impact the future demand for our services?” The tool highlighted seven factors, from “sustainability and climate change” to “changing consumer preferences” and “global population growth.” These drivers help Keith’s team think more broadly about demand.

In all cases, the AI required careful oversight from humans and sometimes produced laughable results. Still, it can help ensure a broad view of challenges rather than the sometimes myopic viewpoints of those who are entrenched in a particular field. 

OpenAI o1 will be a subscription tool, like many other high-end models today. But New York Magazine reports that despite the plethora of whizz-bang new tools on the market, tech companies are still trying to determine how to earn back the billions they’re investing, beyond a standard subscription model that’s currently “a race to the bottom.”

ChatGPT has a free version, as do Meta and Google’s AI models. While upsell versions are available, it’s hard to ask people to pay for something they’ve become accustomed to using for free – just ask the journalism industry. But AI investment is eye-wateringly expensive. Eventually, money will have to be made.

Nandan Nilekani, co-founder of Infosys, believes that these models will become “commoditized” and the value will shift from the model itself to the tech stack behind it.

This will be especially true for B2B AI, Nilekani said.

“Consumer AI you can get up a chatbot and start working,” he told CNBC. “Enterprise AI requires firms to reinvent themselves internally. So it’s a longer haul, but definitely it’s a huge thing happening right now.” 

Regulation and risk 

The onslaught of new LLMs, tools and business use cases makes mitigating risk a priority for communicators in both the government and private sector.

When omnipresent recording artist Taylor Swift made headlines last week by endorsing Vice President Kamala Harris for president, she explained that the Trump campaign’s use of her likeness in AI deepfakes informed her endorsement.

“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” Swift wrote on Instagram. “It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.” 

This isn’t the first time Swift has been subjected to the damage AI deepfakes can cause. Earlier this year, fake pornographic images of Swift were widely circulated on X.

Last week, the Biden-Harris administration announced a series of voluntary commitments from AI model developers to combat the creation of non-consensual intimate images of adults and sexually explicit material of children. 

According to the White House, these steps include:

    • Adobe, Anthropic, Cohere, Common Crawl, Microsoft, and OpenAI commit to responsibly sourcing their datasets and safeguarding them from image-based sexual abuse. 
    • Adobe, Anthropic, Cohere, Microsoft, and OpenAI commit to incorporating feedback loops and iterative stress-testing strategies in their development processes, to guard against AI models outputting image-based sexual abuse.  
    • Adobe, Anthropic, Cohere, Microsoft, and OpenAI, when appropriate and depending on the purpose of the model, commit to removing nude images from AI training datasets.

While these actions sound great on paper, the lack of specifics and use of phrases like “responsibly sourcing” and “when appropriate” raise the question of who will ultimately make these determinations, and how a voluntary process can hold these companies accountable to change.

Swift’s words, meanwhile, underscore how the rapid, unchecked acceleration of AI use cases has become an existential issue for voters in affected industries. California Gov. Gavin Newsom understands this, which is why he signed two bills aimed at giving performers and other artists more protection over how their digital likeness is used, even after their death.

According to Deadline:

A.B. 1836 expands the scope of the state’s postmortem right of publicity, including the use of digital replicas, meaning that an estate’s permission would be needed to use such technology to recreate the voice and likeness of a deceased person. There are exceptions for news, public affairs and sports broadcasts, as well as for other uses like satire, comment, criticism and parody, and for certain documentary, biographical or historical projects.

The other bill, A.B. 2602, bolsters protections for artists in contract agreements over the use of their digital likenesses. 

Newsom hasn’t yet moved on SB 1047, though, which includes rules that require AI companies to share their plans to protect against manipulation of infrastructure. He has until Sept. 30 to sign, veto or allow these other proposals to become law without his signature. The union SAG-AFTRA, the National Organization for Women and Fund Her all sent letters supporting the bill.

This whole dance is ultimately an audience-first exercise that will underscore just who Newsom’s audience is – is it his constituents, the big tech companies pumping billions into the state’s infrastructure, or a mix of both? The power of state governments to set a precedent that the federal government can model national regulation around cannot be overstated.

However Newsom responds, the pressure from California arrives at a time when Washington is proposing similar regulations. Last Monday, the U.S. Commerce Department said it was considering implementing detailed reporting requirements for advanced AI developers and cloud-computing providers to ensure their tech is safe and resilient against cyberattacks.

Reuters reports:

The proposal from the department’s Bureau of Industry and Security would set mandatory reporting to the federal government about development activities of “frontier” AI models and computing clusters.

It would also require reporting on cybersecurity measures as well as outcomes from so-called red-teaming efforts like testing for dangerous capabilities including the ability to assist in cyberattacks or lowering barriers to entry for non-experts to develop chemical, biological, radiological, or nuclear weapons.

That may explain why several tech executives met with the White House last week to discuss how AI data centers impact the country’s energy and infrastructure. The who’s-who list included Nvidia CEO Jensen Huang, OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei and Google President Ruth Porat along with leaders from Microsoft and several American utility companies.

Last month, Altman joined the Washington lobbying group Business Software Alliance, reported Semafor. The global group pushes a focus on “responsible AI” for enterprise business, a buzzword evangelized in owned media white papers across the world. 

Microsoft provides the most recent example of this, framing its partnership with G42, an AI-focused holding group based in Abu Dhabi, as a model for how responsible AI can be implemented in the region.

Last week, Altman left OpenAI’s safety board, which was created this past May to oversee critical safety decisions around its products and operations. The move is part of the board’s larger stated commitment to independence, transparency and external collaboration. The board will be chaired by Carnegie Mellon professor Zico Kolter and includes current OpenAI board members Quora CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone and ex-Sony EVP Nicole Seligman.

Understood through the lens of a push for independence, Altman’s leaving the board soon after joining a lobbying group accentuates the major push and pull between effective internal accountability and federal oversight. Voluntary actions like signing orders or publishing white papers are one way for companies to show ‘responsible AI use’ while still avoiding more stringent regulation.

Meanwhile, several pioneering AI scientists called for a coordinated global partnership to address risk, telling The New York Times that “loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity.” This response would empower watchdogs at the local and national levels to work in lockstep with one another.

We’re already seeing what a regulatory response looks like amid reports that Ireland’s Data Protection Commission is investigating Google’s Pathways Language Model 2 to determine if its policies pose a larger threat to individuals represented in the datasets. 

While a coordinated effort between the EU and the US may seem far-fetched for now, this idea is a reminder that you have the power to influence regulation and policy at your organization and weigh in on the risks and rewards of strategic AI investments before anything is decided at the federal level.

That doesn’t always mean influencing policies and guidelines, either. If a leader is going around like Oracle co-founder Larry Ellison and touting their vision for expansive AI as a surveillance tool, you can point to the inevitable blowback as a reason to vet their thought leadership takes first.

Positioning yourself as a guardian of reputation starts with mitigating risk. Starting conversations around statements like Ellison’s surveillance-state take or Altman’s resignation from OpenAI’s safety board builds a foundation for knowledge sharing that shapes sound best practices and empowers your company to move along the AI maturity curve responsibly.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

