AI for communicators: What’s new and what’s next

New LLMs proliferate but content withers.


The tech goes fast and the regulation goes slow.

That could be the opening sentence for nearly any version of this story, but it seems especially apt this week as Apple rolls out a new LLM, Meta looks to take the crown for most popular model in the world and regulation continues to chug along without much oomph.

Here’s what communicators need to know about AI this week.

Tools and advancements

The past few weeks have been among the most consequential in America’s recent history, from the attempted assassination of Donald Trump to Joe Biden’s decision not to seek reelection.

But if you were trying to catch up on the news via AI chatbot, you might have been left in the cold. Some chatbots were hopelessly behind on the news, even claiming that the attempted assassination was “misinformation” and refusing to answer questions about who was running for president, according to the Washington Post.

Some bots fared better than others, namely Microsoft’s Copilot, which includes plentiful links to news sources. But the episode reveals the dangers of trusting AI as a search engine, especially for breaking news.

While this particular use case is lagging behind, others are zooming ahead with tons of new features and technological advancements. Adobe is more deeply integrating AI tools into its classic suite of Photoshop and Illustrator, allowing users to create images, textures and other assets using text prompts, TechCrunch reports. While this could help experienced designers save time, it also raises the fear of those same experienced designers being replaced by fast, low-cost AI solutions. Designers also have concerns over how their intellectual property could be used to feed AI models. 


Samsung also released a new sketch-to-image tool that turns a hand-drawn doodle into a finished illustration using generative AI. That’s fun when it’s just a sketch, but it can warp reality in worrying ways when an AI-generated element is added to an existing photo.

You’ll only hear more about these weighty issues in the coming weeks and months. 

LLM laggard Apple is finally working on its own tool, the rolls-off-the-tongue DCLM-Baseline-7B. That 7B stands for “7 billion,” the number of parameters in the model. ZDNet reports that it performs competitively against other models and is truly open source, allowing other organizations to build on Apple’s work.

We’ll have to see exactly how Apple integrates this model into other projects. 

Meanwhile, Meta has its sights set on the AI throne currently occupied by OpenAI’s ChatGPT. The company recently released Llama 3.1, the newest version of its open-source model, and claims that it outperforms major competitors like ChatGPT and Claude on several metrics. Meta, which has heavily incorporated the tool into its social platforms like Facebook and Instagram, predicts that it will become the most-used AI platform in the world. Given Meta’s reach, that would make sense. But the question is: what is the demand for generative AI and search as part of a social platform?

Risks 

In news that’s equal parts depressing and unsurprising, deepfakes of Vice President and presumptive Democratic presidential candidate Kamala Harris began to recirculate quickly after she stepped into her new role.

The video combines real footage of Harris giving a speech at Howard University with manipulated audio intended to make her sound as if she is slurring her words and speaking in nonsensical circles, Mashable reported.

TikTok pulled the video down, but not before it racked up more than 4 million views. X, which has no prohibition against misinformation and deepfakes, allowed the video to remain up, albeit with a Community Note that identifies it for what it is. It’s a reminder of the power of lies to spread around the world before the truth gets its pants on, as well as the brand dangers inherent on X. 

But the AI industry is exposing others to risk as well through unvetted use of data. Figma’s “Make Designs” tool had to be pulled from the market after users asked it to create a weather app and discovered it spit out an example eerily similar to Apple’s iconic Weather app.

If a user were to take that app to market, they could wind up in serious legal trouble.

Figma acknowledges that some designs the tool was trained on weren’t vetted carefully enough. That’s cold comfort to companies that might rely on generative AI to provide designs and data they can trust.

Relatedly, Condé Nast has accused AI chatbot Perplexity of plagiarism, claiming that the tool uses the magazine publisher’s reporting without permission or credit. While it’s only a cease-and-desist letter at this stage, it’s safe to guess that a lawsuit may soon follow.

In response to the industry’s growing deluge of lawsuits and legal threats, many generative AI companies are working carefully to provide the vetted, trustworthy, approved content that businesses demand. Some, like Getty, are paying real human photographers to take pictures to feed their AI models, ensuring that every bit of information in the model is on the up-and-up.

That, in turn, puts AI companies without those same resources in a bind when it comes time to train their models. According to researchers from the Data Provenance Initiative, 5% of all data and 25% of high-quality data has been restricted from use in AI models. As LLMs require a steady stream of data to stay up-to-date, this will pose new challenges for AI companies, forcing them to pay, adapt or die. 

But even paying can cause controversy. A group of academics was outraged to discover that their publisher had sold their content to Microsoft for use in AI without their permission. They were neither asked nor informed about the deal, according to reports. Communicating to all parties how their data will be used will only become more vital.

Investors are beginning to suspect we’re in an AI bubble as big tech companies pour more and more cash into AI investments that have yet to pay off, while startups proliferate and rake in funding.

Now, this doesn’t mean AI will disappear or cease to be a hot technology any more than the internet disappeared during the dot-com bubble. But it does mean that the easy days of slapping “AI” onto a product or company name and raking in the dough may be coming to an end, even as many of us still strive to figure out how to incorporate these tools in our day-to-day workflow. 

Regulation

Coverage of Kamala Harris’ campaign launch is awash with information on where she stands on the issues that matter to American voters. Of course, that includes AI regulation.

TechCrunch highlights Harris’ roots as San Francisco’s district attorney and California’s attorney general before she was elected to the Senate in 2016.

According to TechCrunch:

Some of the industry’s critics have complained that she didn’t do enough as attorney general to curb the power of tech giants as they grew.

At the same time, she has been willing to criticize tech CEOs and call for more regulation. As a senator, she pressed the big social networks over misinformation. During the 2020 presidential campaign, when rival Elizabeth Warren was calling for the breakup of big tech, Harris was asked whether companies like Amazon, Google and Facebook should be broken up. She instead said they should be “regulated in a way that we can ensure the American consumer can be certain that their privacy is not being compromised.”

As vice president, Harris has also spoken about the potential for regulating AI, saying that she and President Biden “reject the false choice that suggests we can either protect the public or advance innovation.”

Five senators sent a letter to OpenAI on Monday asking for context around its safety and employment practices, following a whistleblower complaint from a group of employees alleging that the company prevented staff from warning regulators about the risks its AI advancements posed.

The Hill reports:

Led by Sen. Brian Schatz (D-Hawaii), the group of mostly Democratic senators asked OpenAI CEO Sam Altman about the AI startup’s public commitments to safety, as well as its treatment of current and former employees who voice concerns. 

“Given OpenAI’s position as a leading AI company, it is important that the public can trust in the safety and security of its systems,” Schatz, alongside Sens. Ben Ray Lujan (D-N.M.), Peter Welch (D-Vt.), Mark Warner (D-Va.) and Angus King (I-Maine), wrote in Monday’s letter. 

“This includes the integrity of the company’s governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies,” they continued. 

Last week, OpenAI joined several tech companies, including Nvidia, Google, Microsoft, Amazon and Intel, to form the Coalition for Secure AI (CoSAI), which will aim to “address a ‘fragmented landscape of AI security’ by providing access to open-source methodologies, frameworks, and tools,” according to The Verge.

Functioning within the nonprofit Organization for the Advancement of Structured Information Standards (OASIS), CoSAI will focus on three goals: developing AI security best practices, addressing the challenges of AI and securing AI applications. The details still seem a little vague.

It’s worth noting that CoSAI offers a formalized working group in lieu of the federal regulation many have called for, and lawmakers will be watching to see what specific best practices the group comes up with.

That’s not to say that some CoSAI members aren’t also advocating for regulation. Earlier this week, Amazon SVP of Global Public Policy and General Counsel David Zapolsky posted an article on the company’s website advocating for global regulation, framing the need as a matter of economic prosperity and security.

“It’s now very clear we can have rules that protect against risks, while also ensuring we don’t hinder innovation,” Zapolsky wrote.  “But we still need to secure global alignment on responsible AI measures to protect U.S. economic prosperity and security.”

Zapolsky’s suggestions include:

  • Standardized commitments about responsible AI deployment, such as Amazon’s inclusion of invisible watermarks in its image generation tool to reduce the spread of disinformation
  • Uniform transparency from tech companies around how they are developing and deploying AI. Zapolsky notes that Amazon Web Services (AWS) created AI service cards to let customers know about the limitations of its tech, along with responsible AI best practices they can use to build applications safely.

While aspects of Zapolsky’s letter read as a promotional recap of Amazon’s progress in the space, showing the company’s work and using it as a catalyst for a larger conversation about regulation may be what bridges the current disconnect between big tech companies that think they can solve the problem themselves and a federal government that seems unable to move at the rapid pace of AI acceleration.

MIT Technology Review Senior Reporter Melissa Heikkilä reported that one year ago, on July 21, 2023, seven leading AI companies including Amazon, Google, Microsoft and OpenAI made eight voluntary commitments to develop safe and responsible AI.

On the anniversary of that commitment, Heikkilä asked the companies for details on their progress and asked experts to weigh in: 

Their replies show that the tech sector has made some welcome progress, with big caveats.

“One year on, we see some good practices towards their own products, but [they’re] nowhere near where we need them to be in terms of good governance or protection of rights at large,” says Merve Hickok, the president and research director of the Center for AI and Digital Policy, who reviewed the companies’ replies as requested by MIT Technology Review. Many of these companies continue to push unsubstantiated claims about their products, such as saying that they can supersede human intelligence and capabilities, adds Hickok. 

But it’s not clear what the commitments have changed and whether the companies would have implemented these measures anyway, says Rishi Bommasani, the society lead at the Stanford Center for Research on Foundation Models, who also reviewed the responses for MIT Technology Review.  

It’s no surprise that formal regulations continue to stall. As previously reported, AI lobbying has surged drastically year over year, and leading tech companies have demonstrated their vested interest in proposing their own safeguards for responsible AI over helping Uncle Sam standardize something that would hold them accountable.

Amazon’s letter is a notable exception, standing out as an example of how an organization’s thought leaders can highlight its work and advancements as a conversation starter.

The regulation conversation continues to prove fascinating, even as it moves slowly. With headlines continuing to focus on the November elections, it will be worth watching what progress the Biden administration makes on its way out and what tangible policy Harris is willing to shape.

AI at work

As federal AI regulation continues to move at a sluggish pace, most businesses are also still in the early stages of adopting AI.

Axios shared the results of AI platform ServiceNow’s inaugural AI Maturity Index, which surveyed nearly 4,500 respondents across 21 countries, and found that “many companies have struggled to go from experiments into full-scale use of the technology.” 

“The study assigned maturity scores between 1 and 100,” reported Axios. “The average score was 44 and the highest score was just 71. Only about 1 in 6 companies scored higher than 50.”

ServiceNow Chief Customer Officer Chris Bedi told Axios that the adoption of AI use cases is ultimately a leadership competency.

 “You have to be able to get up in front of your team and say, ‘Here’s how your roles are going to evolve in an AI-first world,'” he said. 

Bedi also broke the adoption maturity curve into two modes, defining the first around incremental improvements and the second as taking the leap to design new models and augmented ways of working.

“Mode two is harder,” continued Bedi. “It’s saying, ‘If we were to assume the models are good enough, and AI was pervasive, how would we redesign these departments, these jobs, the organization, the enterprise, from scratch?’ It’s a much harder intellectual exercise.”

Though much of the piece reads as promotional, it highlights how the wisdom and innovation of those on the front lines of AI advancements can drive forward the maturity Bedi advocates for. That will require a partnership between those doing the work and the leaders who ultimately sign off on the budget.

While many organizations are slow to move to this second mode, former Amazon AI engineer Ashish Nagar recently explained how he created the customer service intelligence platform Level AI to address productivity challenges in the automated customer service industry:

“Frontline workers, like customer service workers, are the biggest human capital in the world,” Nagar told TechCrunch. “So, my idea was to use ambient computing — AI that you can just talk to and it listens in the background — to augment human work.”

While the TechCrunch piece and the Axios report both hinge on the expertise of a service provider, Forbes contributor Bernard Marr asks which promised innovations will really prove transformative across industries and which are exercises in marketing:

While the picture being painted points to an imminent revolution across all industries, in reality, the impact is likely to be more iterative and nuanced.

Predictions of astronomic leaps in value that AI will add to industries may be achievable in theory. But challenges around regulation and data privacy, as well as technical challenges such as overcoming AI hallucination and bias, may not be simple to solve.

Overall, this means that while I believe AI will have truly profound implications for business, jobs and industry, the pace of this transformation may well be slower than some of the hype and hyperbole suggests – in the short term, at least.

Marr’s piece, which homes in on the gap between aspirational promises and execution of AI solutions across the retail and financial services industries, offers a sobering reminder to communicators: Being an advocate and early adopter of new tech requires cutting through the advertorial language, asking for metrics and examples from solutions providers, and setting aside time to experiment up front.

This remains one of the best ways to keep your skills as a communicator central to the strategic growth and scale of your operations, and to ensure that any tools you advocate for have been tested to prioritize safety, accuracy and tangible results.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.
