AI for communicators: What’s new and what matters

A beloved social media tool skyrockets in price due to AI; California passes groundbreaking regulation bill.

The recent Labor Day holiday has many of us thinking about how AI will impact the future of work. There are arguments to be made about whether the rise of the tech will help or hurt jobs – it’s a sought-after skill for new hires, but one company is using AI as a pretext for cutting thousands of roles. And in the short term, the rapid expansion of the technology is making at least some of the tools workers rely on more expensive.

Here’s what communicators need to know about AI this week.

Tools

Many tech companies continue to go all-in on AI – and are charging for the shiny new features.

Canva, a beloved tool of social media managers, has ratcheted prices up by as much as 300% in some cases, The Verge reported. Some Canva Teams subscribers report prices leaping from $120 per year for a five-person team to $500. Some of those lower prices were legacy, grandfathered rates, but it’s still an eye-watering increase, one Canva attributes in part to new AI-driven design tools. But will users find those features worth such a massive price increase?

Canva’s price hikes could be a response to the need for companies to recoup some of their huge investments in AI. As CNN put it after Nvidia’s strong earnings report nonetheless earned shrugs: “As the thrill of the initial AI buzz starts to fade, Wall Street is (finally) getting a little more clear-eyed about the actual value of the technology and, more importantly, how it’s going to actually generate revenue for the companies promoting it.” 

While Canva seems to be answering that question through consumer-borne price hikes, OpenAI is trying to keep investment from companies flowing in. It’s a major pivot for a company founded as a nonprofit that now requires an estimated $7 billion per year to operate, compared to just $2 billion in revenue. Some worry that the pursuit of profits and investment is coming at the expense of user and data safety. 

Meanwhile, Google is launching or relaunching a number of new tools designed to establish its role as a major player in the AI space. Users can once again ask the Gemini model to create images of people – an ability that had been shut down for months after the image generator returned bizarre, ahistorical results and appeared to have difficulties creating images of white people when asked. While it’s great to have another tool available, Google’s AI woes have been mounting as multiple models have proven to be not ready for primetime upon launch. Will new troubles crop up? 

Google is also expanding the availability of its Gmail chatbot, which can help surface items in your inbox, from web only to its Android app – though the tool is only available to premium subscribers.

While using AI to search your inbox is a fairly understandable application, some new frontiers of AI are raising eyebrows. “Emotion AI” describes bots that learn to read human emotion, according to TechCrunch. This goes beyond the sentiment analysis that’s been a staple of social media and media monitoring for years, reading not just text but also human expressions, tone of voice and more.

While this has broad applications for customer service, media monitoring and more, it also raises deep questions about privacy and how well anyone, including robots, can actually read human emotion. 

Another double-edged sword of AI use is evidenced by the use of AI news anchors in Venezuela, Reuters reports.

As the nation launches a crackdown on journalists after a highly disputed election, a Colombian nonprofit is using AI avatars to share the news without endangering real people. The project’s leader says the goal is to “circumvent the persecution and increasing repression” against journalists. And while that usage is certainly noble, it isn’t hard to imagine a repressive regime doing the exact opposite: using AI puppets to spread misinformation without revealing their identity or their sources to the world.

Risks 

Many journalism organizations aren’t keen for their work to be used by AI models, at least not without proper pay. For years, several leading news sites have allowed bots to crawl their websites, usually to help with search engine rankings.

Now those same bots are being used to feed LLMs, and news sources, especially paywalled sites, are locking the door by restricting where on their sites these bots can crawl.

Apple specifically created an opt-out method that allows sites to continue to be crawled for existing purposes – think search – without allowing the content to be used in AI training. And major news sites are opting out in droves, holding out for specific agreements that will allow them to be paid for their work.
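In practice, that opt-out lives in a site’s robots.txt file: Apple’s AI-training crawler identifies itself as Applebot-Extended, separate from the Applebot that powers search, and OpenAI’s crawler as GPTBot. Here is a minimal sketch, using a hypothetical news site’s policy and Python’s standard library, of how a publisher can stay open to search while shutting out AI training:

```python
from urllib import robotparser

# Hypothetical robots.txt for a news publisher: allow traditional search
# crawling, but block the crawlers that feed AI training sets.
ROBOTS_TXT = """\
User-agent: Applebot-Extended
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

url = "https://example-news-site.com/politics/big-story.html"
for bot in ("Applebot", "Applebot-Extended", "GPTBot", "Googlebot"):
    verdict = "allowed" if parser.can_fetch(bot, url) else "blocked"
    print(f"{bot:>18}: {verdict}")
```

Search crawlers still get in; the AI-training crawlers find a closed door, which is exactly the split Apple’s opt-out lets publishers express.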

This creates a larger issue. AI models are insatiable, demanding a constant influx of content to continue to learn, grow and meet user needs. But as legitimate sources of human-created content are shut off and AI-created content spreads, AI models are increasingly trained on other AI output, creating an odd content ouroboros. If a model trains too heavily on AI content riddled with hallucinations, it can become detached from reality and experience “model collapse.”
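To make that feedback loop concrete, here is a toy sketch, with invented numbers and a “model” that is nothing more than a fitted normal distribution; each generation is trained only on synthetic samples from the previous one, so estimation errors compound and the fit drifts away from the original human data:

```python
import random
import statistics

random.seed(7)
SAMPLE_SIZE = 50  # small samples make the compounding drift easier to see

# Generation 0: "human-created" data, mean 0.0 and standard deviation 1.0.
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLE_SIZE)]

for generation in range(31):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # The next "model" never sees human data -- only this model's own output.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLE_SIZE)]
```

Each generation’s fit is built on the previous generation’s errors, so over enough rounds the parameters wander away from the original distribution. Real model collapse is far messier, but the compounding dynamic is the same.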

That’s bad. But it seems in some ways inevitable as more and more AI content takes over the internet and legitimate publishers (understandably) want to be paid for their work.

But even outside of model collapse, users must be vigilant about trusting today’s models. A recent case of weird AI behavior went viral this week when it was found that ChatGPT was unable to count how many times the letter “R” appears in “strawberry.” It’s three, for the record, yet ChatGPT insisted there were only two. Anecdotally, this reporter has had problems getting ChatGPT to accurately count words, even when confronted with a precise word count. 
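For contrast, the counting task that stumped the chatbot is trivial for a couple of lines of deterministic code:

```python
# A deterministic check of the question that tripped up ChatGPT.
word = "strawberry"
print(f'"{word}" contains {word.count("r")} instances of the letter "r"')  # prints 3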

It’s a reminder that while technology can seem intelligent and confident, it’s often confidently wrong. 

Kevin Roose, tech columnist for the New York Times, also discovered this week just how difficult it is to change AI’s mind about something. In this case, the subject was himself: Roose rocketed to fame last year when Microsoft’s AI bot fell in love with him and tried to convince him to leave his wife. 

As a result, many AI models don’t seem too keen on Roose, with one even declaring, “I hate Kevin Roose.”

But changing that viewpoint was difficult. Roose’s options were getting websites to publish friendly stories showing that he wasn’t antagonistic toward AI (in other words, public relations) or creating his own website with friendly transcripts between him and chatbots, which AI models would eventually crawl and learn from. A quicker and dirtier approach involved leaving “secret messages” for AI in white text on his website, as well as specific sequences designed to return more positive responses.

On the one hand, manipulating AI bots is likely to become the domain of PR professionals in the near future, which could be a boon for the profession. On the other hand, this shows just how easily manipulated AI bots can be – for good and for evil.

And even when used with positive intent, AI can still return problematic results. A study featured in Nature found that AI models exhibited strong dialect prejudice that penalizes people for their use of African American Vernacular English, a dialect frequently used by Black people in the United States. “Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death,” the study finds.

This is what happens when technology is trained on so much human writing: it’s going to pick up the flaws and prejudices of humans as well. Without strong oversight, it’s likely to cause major problems for marginalized people. 

Finally, there is debate over what role AI is playing in the U.S. presidential election. Former President Donald Trump himself appeared to be taken in by a deepfake in which Taylor Swift endorsed him (no such thing ever happened), sharing it on his Truth Social platform. AI is being used by both camps’ supporters, sometimes to generate obviously fake imagery, such as Trump as a bodybuilder, while other uses are more subtle.

But despite its undeniable presence in the election, it isn’t clear that AI is actually reshaping much in the race. State actors, such as Russia, are using the tools to try to manipulate the public, yes, but a report from Meta indicated that the gains were incremental and that this year’s election isn’t significantly different from any other with regard to disinformation.

But that’s only true for now. Vigilance is always required. 

Regulation

While some continue to question the influence of deepfakes on our democratic process, California took major steps last week to protect workers from being exploited by deepfakes.

California Assembly Bill 2602 passed the California Senate and Assembly last week to regulate the use of generative AI for performers, including those on screen and those who lend their voices or likenesses to audiobooks and video games.

While the bipartisan support the bill enjoyed is rare, rarer still is the lack of opposition from industry groups, including the Motion Picture Association, which represents Netflix, Paramount Pictures, Sony, Warner Bros. and Disney, according to NPR.

NPR reports:

The legislation was also supported by the union SAG-AFTRA, whose chief negotiator, Duncan Crabtree-Ireland, points out that the bill had bipartisan support and was not opposed by industry groups such as the Motion Picture Association, which represents studios such as Netflix, Paramount Pictures, Sony, Warner Bros., and Disney. A representative for the MPA says the organization is neutral on the bill.

A separate bill, S.B. 1047, also advanced. It would require AI companies to share safety proposals to protect infrastructure against manipulation, according to NPR.

The AP reports:

“It’s time that Big Tech plays by some kind of a rule, not a lot, but something,” Republican Assembly member Devon Mathis said in support of the bill Wednesday. “The last thing we need is for a power grid to go out, for water systems to go out.”

The proposal, authored by Democratic Sen. Scott Wiener, faced fierce opposition from venture capital firms and tech companies, including OpenAI, Google and Meta, the parent company of Facebook and Instagram. They say safety regulations should be established by the federal government and that the California legislation takes aim at developers instead of targeting those who use and exploit the AI systems for harm.

California Democratic Gov. Gavin Newsom has until Sept. 30 to sign or veto these proposals, or to allow them to become law without his signature. That puts all eyes on Newsom to either ratify or kill measures that stakeholders view very differently.

Given the opposition from major California employers like Google, there is a chance Newsom vetoes S.B. 1047, Vox reported.

And while tech giants oppose the bill, we have a hint at what they’d like to see happen at the federal level instead.

Last Thursday, the U.S. AI Safety Institute announced it had reached a testing and evaluation agreement with OpenAI and Anthropic, according to CNBC, that allows the institute to “receive access to major new models from each company prior to and following their initial public release.”

Established after the Biden-Harris administration’s executive order on AI was issued last fall, the Institute exists as part of the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST).

According to the NIST:

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

Additionally, the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute. 

If this public-private partnership agreement seems vague on details and methodology, that’s because it is. The lack of detail underscores a major criticism that Biden’s executive order was light on specifics and mechanisms for enforcement. 

The push from Big Tech to settle regulation at the federal level makes sense when one considers the outsized investments most major tech companies have made in lobbyists and public affairs specialists.

“The number of lobbyists hired to lobby the White House on AI-related issues grew from 323 in the first quarter to 931 by the fourth quarter,” reports Public Citizen.  

For communicators, this push and pull is a reminder that regulation and responsible use must start internally – and that, whatever happens in California by the end of the month, waiting for tangible direction from either federal or state governments may be a path to stalled progress.

Without some required reporting and oversight, regulators will continue to struggle with the pace of AI developments. But what would responsible safety measures look like in practice?

A recent report from the Financial Times looks at the EU’s AI Act, ratified this past spring, to answer this question. The report notes that the AI Act ties its definition of systemic risk to the computing power used to train a model, and argues that this won’t cut it.

According to FT:

The trouble is that this relates to the power used for training. That could rise, or even fall, once it is deployed. It is also a somewhat spurious number: there are many other determinants, including data quality and chain of thought reasoning, which can boost performance without requiring extra training compute power. It will also date quickly: today’s big number could be mainstream next year. 

When the efficacy and accuracy of a risk management strategy depend largely on how you measure potential risks, agreeing on standardized parameters for responsible reporting and data sharing remains an open opportunity.
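To see why a single compute number is a blunt instrument, consider a back-of-the-envelope sketch. The AI Act presumes systemic risk above 10^25 training FLOPs; the 6 × parameters × tokens formula below is a common rule-of-thumb approximation (an assumption for illustration, not language from the Act), and the model sizes are hypothetical.

```python
# Back-of-the-envelope look at the compute threshold the FT piece critiques.
EU_SYSTEMIC_RISK_FLOPS = 1e25  # the AI Act's presumption-of-systemic-risk line

def training_flops(parameters: float, tokens: float) -> float:
    """Rough rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * parameters * tokens

hypothetical_models = {
    "mid-size model (70B params, 2T tokens)": training_flops(70e9, 2e12),
    "frontier model (1T params, 15T tokens)": training_flops(1e12, 15e12),
}

for name, flops in hypothetical_models.items():
    side = "above" if flops > EU_SYSTEMIC_RISK_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs, {side} the 1e25 threshold")
```

Nothing in that arithmetic captures data quality, post-training or inference-time reasoning, which is the FT’s point: capability can move without the metric moving.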

While many consider the EU’s AI Act a model that the rest of the world will follow (similar to the General Data Protection Regulation, or GDPR), the recent push in California suggests that the state’s outsized investments in AI are propelling it to lead by example even faster.

AI at work

While thinking about how to deploy AI responsibly often comes back to secure internal use cases, a recent report from Slingshot found that nearly two-thirds of employees primarily use AI to double-check their work. That’s higher than the number of workers using AI for initial research, workflow management and data analysis.

“While employers have specific intentions for AI in the workplace, it’s clear that they’re not aligned with employees’ current use of AI. Much of this comes down to employees’ education and training around AI tools,” Slingshot Founder Dean Guida said in a press release. 

This may account for a slight dip in U.S.-based jobs that require AI skills, as measured by Stanford University’s annual AI Index Report.

The report also looked at which AI skills were most sought after, which industries will rely on them the most and which states are leading in AI-based jobs.

The Oregon Capital Chronicle sifted through the report and found:

Generative AI skills, or the ability to build algorithms that produce text, images or other data when prompted, were sought after most, with nearly 60% of AI-related jobs requiring those skills. Large language modeling, or building technology that can generate and translate text, was second in demand, with 18% of AI jobs citing the need for those skills.

The industries that require these skills run the gamut — the information industry ranked first with 4.63% of jobs while professional, scientific and technical services came in second with 3.33%. The financial and insurance industries followed with 2.94%, and manufacturing came in fourth with 2.48%.

California — home to Silicon Valley — had 15.3%, or 70,630 of the country’s AI-related jobs posted in 2023. It was followed by Texas at 7.9%, or 36,413 jobs. Virginia was third, with 5.3%, or 24,417 of AI jobs.

This outsized presence of generative AI skills emphasizes that many jobs that don’t require technical knowledge of building language models will still involve the tech in some fashion.

The BBC reports that Klarna plans to get rid of almost half of its employees by implementing AI in marketing and customer service. It reduced its workforce from 5,000 to 3,800 over the past year, and wants to slash that number to 2,000.

While CIO’s reporting frames this plan as Klarna “helping reduce payroll in a big way,” it also warns of the risks associated with such rapid cuts and acceleration:

Responding to the company’s AI plans, Terra Higginson, principal research director at Info-Tech Research Group, said Wednesday, “AI is here to enhance employee success, not render them obsolete. A key trend for 2025 will be AI serving as an assistant rather than a replacement. It can remove the drudgery of mundane, monotonous, and stressful tasks.”

“(Organizations) that are thinking of making such drastic cuts should look into the well-proven productivity paradox and tread carefully,” she said. “There is a lot of backlash against companies that are making cuts like this.”

Higginson’s words are a reminder that the reputational risk of layoffs surrounding AI is real. As AI sputters through the maturity curve at work, it also reaches an inflection point. How organizations do or don’t communicate their use cases and connections to the talent pipeline will inevitably shape their employer brand.

This is also a timely reminder that, whether or not your comms role sits in HR, now is the time to study up on how your state regulates the use of AI in employment practices. 

Beginning in January 2026, an amendment to the Illinois Human Rights Act will introduce strict guidelines on the use of AI in hiring and promotion decisions, framing discriminatory use of the technology as a civil rights violation.

This builds on the trend set by the Colorado AI Act, signed into law this past May, which takes a broader approach and specifically prohibits algorithmic discrimination in any “consequential decision.”

While you work with HR and IT partners to navigate bias in AI, remember that training employees on how to use these tools isn’t just a neat feature of your employer brand, but a vital step toward keeping your business competitive in the market.

BI reports:

Ravin Jesuthasan, a coauthor of “The Skills-Powered Organization” and the global leader for transformation services at the consulting firm Mercer, told BI that chief human-resources officers and other leaders would need to think of training — particularly around AI — as something that’s just as important as, for example, building a factory.

“Everyone needs to be really facile with AI,” he said. “It’s a nonnegotiable because every piece of work is going to be affected.”

He said experimenting with AI was a good start but not a viable long-term strategy. More organizations are becoming deliberate in how they invest, he added. That might look like identifying well-defined areas where they will deploy AI so that everyone involved uses the technology.

Jesuthasan’s words offer the latest reminder that comms is in a key position to coordinate experimentation efforts and tech investments with an allocated investment in training, one that includes not only a platform for instruction and education but also time itself: dedicated time for incoming talent to train on the tools and use cases during onboarding, and dedicated time for high performers to upskill.

Treating this as an investment with equal weight will ultimately enhance your employer brand, protect your reputation and future-proof your organization all at once.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.
