AI for communicators: What’s new and what’s next

China rises, the U.S. hesitates on regulation and more.


This week, a Shanghai AI conference raises new questions about the global AI race, a former FCC chair calls out sluggish federal regulation as California's own rules advance, and CIO salaries climb.

Read on to find out what communicators need to know this week in the latest chaotic, captivating chapter of AI's growing influence.

Tools and advancements

Last week’s World AI Conference in Shanghai featured Chinese tech companies showcasing over 150 AI-related products and innovations, with a handful of foreign companies like Tesla and Qualcomm participating, too.

Notable among the unveiled tech was an advanced LLM from SenseTime called "SenseNova 5.5," reports Reuters. The company claims the model rivals OpenAI's GPT-4 in its mathematical reasoning abilities.

The conference took place just before OpenAI's block on access from China took effect, according to The Guardian.

OpenAI has not elaborated on the reason for its sudden decision. ChatGPT is already blocked in China by the government's firewall, but until this week developers could use virtual private networks to access OpenAI's tools to fine-tune their own generative AI applications and benchmark their own research. Now the block comes from the U.S. side as well.

Rising tensions between Washington and Beijing have prompted the U.S. to restrict exports to China of certain advanced semiconductors that are vital for training the most cutting-edge AI, putting pressure on other parts of the industry.

But executives like Zhang Ping'an, who leads Huawei's cloud computing unit, seemed unfazed at the conference.

“Nobody will deny that we are facing limited computing power in China,” Zhang said, according to Reuters. “If we believe that not having the most advanced AI chips means we will be unable to lead in AI, then we need to abandon this viewpoint.”


Meanwhile, a new poll conducted by the AI Policy Institute and shared with TIME found that American voters would rather the U.S. worry less about keeping up with China’s innovations and focus more on responsible AI.

TIME reports:

According to the poll, 75% of Democrats and 75% of Republicans believe that “taking a careful controlled approach” to AI—by preventing the release of tools that terrorists and foreign adversaries could use against the U.S.—is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.” A majority of voters support more stringent security practices at AI companies, and are worried about the risk of China stealing their most powerful models, the poll shows. 

But new technology is coming, even with a cautious approach. MIT Technology Review shared a deep dive into an evolving category of AI assistants, dubbed “AI agents.” 

While the definition of AI agents is somewhat ambiguous, Jim Fan, who leads Nvidia's AI agents initiative, described them as tools that can make decisions autonomously on your behalf. In one of MIT Technology Review's examples, an AI agent functions as a more advanced customer service bot that's able to analyze, cross-reference and evaluate the legitimacy of complaints.

According to MIT Technology Review:

In a new paper, which has not yet been peer-reviewed, researchers at Princeton say that AI agents tend to have three different characteristics. AI systems are considered “agentic” if they can pursue difficult goals without being instructed in complex environments. They also qualify if they can be instructed in natural language and act autonomously without supervision. And finally, the term “agent” can also apply to systems that are able to use tools, such as web search or programming, or are capable of planning. 
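To make that concrete, here is a minimal sketch of the agent pattern described above: a bounded loop in which a model picks a tool, observes the result and decides when it has an answer. The llm() stub and the toy tools are hypothetical stand-ins for a real model call, not any vendor's actual API.

```python
import json

def web_search(query: str) -> str:
    # Toy stand-in for a real search tool.
    return f"(stub) top results for {query!r}"

def run_code(snippet: str) -> str:
    # Toy stand-in for a code-execution tool.
    return f"(stub) output of {snippet!r}"

TOOLS = {"web_search": web_search, "run_code": run_code}

def llm(prompt: str) -> str:
    # Hypothetical stand-in for a chat-model call. A real agent would send
    # the prompt to an LLM and parse back a tool request or a final answer.
    if "Observation:" not in prompt:
        return json.dumps({"tool": "web_search", "args": {"query": prompt}})
    return json.dumps({"final": "The complaint appears legitimate."})

def agent(goal: str, max_steps: int = 5) -> str:
    prompt = goal
    for _ in range(max_steps):                     # autonomy, but bounded
        decision = json.loads(llm(prompt))
        if "final" in decision:                    # model decided it is done
            return decision["final"]
        tool = TOOLS[decision["tool"]]             # model chose its own tool
        observation = tool(**decision["args"])
        prompt += f"\nObservation: {observation}"  # plan with new evidence
    return "Step limit reached without an answer."

print(agent("Is this customer complaint legitimate?"))
```

Note the step cap on the loop: even in real deployments, bounding how long an agent can act on its own is a basic safety choice.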

Other AI tools continue to show potential for making all sorts of creative assets the average person couldn't produce on their own. Consider Suno, an AI music app that generates complete songs from simple prompts and is, as of this writing, still free to download (the startup was sued in June by a handful of record companies, so it's hard to say how long that will last).

Washington Post tech reporter Chris Velazco recently detailed his experience with Suno, asking it to make a journalism-themed song for the storied paper that could serve as an update to John Philip Sousa's iconic "Washington Post March."

While a transformative tool like Suno has untold utility for communicators and marketers to create bespoke assets on the fly, looming legal threats raise the question of who will ultimately be left holding the bag when the models that harvest copyrighted material are sued: the software companies or their clients?

The popular collaborative design tool Figma recently disabled its "Make Design" feature after it was accused of being trained on pre-existing apps, reports TechCrunch. YouTube also rolled out a policy change last week that lets people request the removal of AI-generated deepfakes that simulate their face or voice, reports Engadget.

The Wall Street Journal reports that many AI companies are focused on growing their customer base with lower-cost, less powerful models.

There's even a specific use case for internal communications:

The key is focusing these smaller models on a set of data like internal communications, legal documents or sales numbers to perform specific tasks like writing emails—a process known as fine-tuning. That process allows small models to perform as effectively as a large model on those tasks at a fraction of the cost. 
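For readers who want to see the shape of that workflow, here is a minimal, hypothetical sketch of fine-tuning a small open model on a plain-text file of internal writing samples, using Hugging Face's transformers and datasets libraries. The model choice, file path and hyperparameters are illustrative placeholders, not a production recipe.

```python
# Minimal sketch: fine-tune a small open model on internal writing samples.
# Assumes the Hugging Face transformers and datasets libraries; the model
# name, file path and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "distilgpt2"  # any small causal language model works for this sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-family models lack a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# One internal email or memo per line in a plain-text file (hypothetical path).
dataset = load_dataset("text", data_files={"train": "internal_comms.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="comms-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False makes the collator set up next-token-prediction labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the tuned model now drafts text in the house style
```

The sketch also shows why the approach is cheap: the base model is small, and a narrow, well-curated dataset does most of the work of specializing it for one task.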

Risks and regulation

Former FCC Chairman Tom Wheeler appeared on Yahoo Finance's "Catalysts" show this week to point out that the U.S. has failed to lead on AI regulation.

Yahoo Finance reports:

He notes that the EU and individual states in the US have their own set of rules, “which will create new problems for those tech companies who rely on a uniform market.” He explains that the way Europe regulates tech companies is a “much more agile approach” than industrial-era micromanagement, and it is an approach that “continues to encourage innovation and investment.” However, he believes the US will have a difficult time getting to that point as it grapples with “a Congress that has a hard time making any decisions.”

Some states are moving forward with their own regulations in the absence of national guidance. The AP reports that last week, California advanced legislation requiring AI companies to test their systems and include safety features that protect the tools against weaponization. 

But not everyone supports the regulation. 

According to the AP:

A growing coalition of tech companies argue the requirements would discourage companies from developing large AI systems or keeping their technology open-source.

“The bill will make the AI ecosystem less safe, jeopardize open-source models relied on by startups and small businesses, rely on standards that do not exist, and introduce regulatory fragmentation,” Rob Sherman, Meta vice president and deputy chief privacy officer, wrote in a letter sent to lawmakers.

Opponents want to wait for more guidance from the federal government. Proponents of the bill said California cannot wait, citing hard lessons learned from not acting soon enough to rein in social media companies.

The proposal, supported by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices.

It's no surprise, then, that AI-related lobbying reached a record high over the past year, according to OpenSecrets.

Those hoping for some clarity or resolution would do best to work with their organization's leaders across functions and codify their own set of responsible AI guidelines instead of waiting for the government.

AI at work

Misuse of AI doesn't just affect external stakeholders; it affects the workforce, too. CIO reports on the rise of shadow AI, a term referring to unsanctioned AI use at work that puts sensitive company information and systems at risk.

The report cites numerous risks of unsanctioned AI use, including unskilled workers feeding sensitive data into tools, friction among workers with varying levels of AI competency and legal exposure.

Suffice it to say, training employees at all levels on responsible and acceptable use is a sound solution. 

The Wall Street Journal reports that chief information officers are seeing previously unheard-of compensation increases as they take on more AI-related responsibilities.

According to WSJ:

CIO compensation increased 7.48% on average among large enterprises and 9% among midsize enterprises over the past year—the biggest gains among information-technology job categories, according to salary data from consulting firm Janco Associates. 

Overall, CIO and chief technology officer compensation is up more than 20% since 2019, with boosts to base pay and, more often, equity packages, according to IT executive recruiting firm Heller Search Associates. 

The gains indicate growing investment by enterprises in AI strategies and corporate tech leaders becoming more visible as they are increasingly tasked with new AI-related responsibilities.

“ChatGPT caught a lot of incumbent technology companies by surprise, and no organization wants to be left behind,” said Shaun Hunt, CIO of Atlanta-based construction services firm McKenney’s. 

It's clear that AI is changing the game, for better and for worse.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.
