AI news for communicators: Risk, regulation and reckoning

New tools from Adobe and Microsoft, warnings from a Nobel Prize winner, and more.

This week, we’re looking at new tools from Adobe and Microsoft that promise licensed content creation and healthcare innovations, a Nobel Prize winner who warns about AI’s potential to outsmart us, a new email security initiative from Google to counter increasingly smart AI fraudsters and more.

Here are the AI advancements that communicators need to know about this week.

Transformative tools and the onslaught of AI agents 

As AI tools flood the market, some startups are cloning existing projects. PearAI’s founder recently admitted its coding editor was cloned from an open-source project. The incident prompted AI authority and Teaching Startup founder Joe Procopio to pen a “snarky, satiric, but painfully probable” list for Inc. of scenarios where AI isn’t just the provider, but also the user and the customer, and maybe even takes your vacations for you.

While cloned AI tools shouldn’t be making their way into your tech stack anytime soon, tools that promise to pull from licensed datasets are a safer bet.

Ars Technica reports that Adobe’s new Firefly Video Model, an AI-powered text-to-video generation tool, is trained only on licensed content. Adobe says it is “the first publicly available video model designed to be completely safe.” With no general release date announced, you can currently only join a waiting list.

Last week during Nvidia’s AI Summit, the chipmaker announced partnerships with Accenture, Deloitte and other firms to create conversational platforms, advanced learning management systems and AI agents for partner clients such as AT&T, the University of Florida and Lowe’s, built using the Nvidia NeMo agent creator tool.

AI agents are coming to the healthcare industry, too. Last week Microsoft announced new tools to aid medical professionals with clinical records and appointment scheduling. The company positioned these tools as part of its commitment to “developing responsible AI by design, ensuring that these technologies positively impact both the healthcare ecosystem and broader society.”

That last part is important—the implications of AI being wielded irresponsibly in a heavily regulated medical industry are too vast to list. It’s also crucial to consider how the scaling of AI-assisted agents may eliminate the need for some jobs, such as receptionists and administrators.

Position any application of these tools accordingly, and build context into your messaging about what they mean for the roles and functions that will transform or vanish. Getting that right minimizes reputational risk and positions AI as a transparent change catalyst instead of a secretive one.

Research and risk

While Microsoft and Nvidia’s developments are impressive, they underscore AI’s potential to disrupt organizational structures by eliminating various roles and functions. A fresh report from the Brookings Institution found that “more than 30% of all workers could see at least 50% of their occupation’s tasks disrupted by generative AI.”

The report explains that, with so much of the concern around AI’s scaling focused on national security, privacy and surveillance, what the technology means for the workforce is often the last thing considered:

“Despite high stakes for workers, we are not prepared for the potential risks and opportunities that generative AI is poised to bring. So far, the U.S. and other nations lack the urgency, mental models, worker power, policy solutions, and business practices needed for workers to benefit from AI and avoid its harms.”

Meanwhile, Forbes reports that Google introduced a sophisticated anti-scam alliance initiative after a convincing support scam targeted a Microsoft consultant with a fake Gmail account recovery attempt.

As scams evolve alongside the tech, staying up on the newest developments doesn’t mean diving in blindly. It means educating teams about these increasingly intricate threats and giving them a protocol to vet potentially fraudulent messages before engaging further.

Last week, Google DeepMind co-founder Demis Hassabis and his colleague John Jumper won the Nobel Prize in chemistry alongside U.S. biochemist David Baker for decoding microscopic protein structures. Former Google researcher Geoffrey Hinton also won the Nobel Prize in physics for his work on machine learning.

While these wins have sparked debates in the scientific community over whether AI innovations merit recognition in those scientific disciplines, Hinton’s concerns are more existential — he warns that these advancements will eventually leave AI smarter than we are.

“It will be comparable with the Industrial Revolution,” he said just after the announcement according to CNN. “But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us.”

Michael Wade, professor and director at the TONOMUS Global Center for Digital and AI Transformation in Lausanne, Switzerland, likely agrees. He created an “AI Safety Clock” to assess when this singularity might happen.

Wade’s clock currently reads 29 minutes to midnight, he wrote in TIME, “a measure of just how close we are to the critical tipping point where uncontrolled AGI (artificial general intelligence) could bring about existential risks.”

“This is not alarmism; it’s based on hard data,” Wade explained. “The AI Safety Clock tracks three essential factors: the growing sophistication of AI technologies, their increasing autonomy, and their integration with physical systems.”

While Wade believes tracking and sharing this information will help keep AI from becoming smarter than us, achieving transparency at scale is complicated when the infrastructure behind AI touches sensitive areas such as nuclear energy.

Earlier this week, Google announced plans to partner with Kairos Power and use its small nuclear reactors to power its AI data centers. Like Microsoft’s deal weeks earlier to buy power from a restarted Three Mile Island reactor and the Department of Energy’s efforts to restart a nuclear plant in Michigan, this move is positioned as a clean energy win that will benefit society.

Will existing nuclear power regulations inform the scale of energy use for AI data centers? Time will tell.

Meanwhile, a recent AI-powered system in Nevada suggests that the conclusions AI draws from our data still aren’t always in our best interests. When the state set out to be more equitable in how funds are distributed to low-income school districts, it worked with an outside contractor that used AI to determine how many students were actually at risk.

The New York Times reports:

The A.I. system calculated that the state’s previous estimate of the number of children who would struggle in school was far too high. Before, Nevada treated all low-income students as “at risk” of academic and social troubles. The A.I. algorithm was more complex — and set a much higher bar.

It weighed dozens of factors besides income to decide whether a student might fall behind in school, including how often they attended class and the language spoken at home. And when the calculations were done, the number of students classified as at-risk plummeted to less than 65,000, from over 270,000 in 2022.

This is the latest reminder that you know your stakeholders better than AI does, at least for now, and that your soft skills, human discretion and judgment are still paramount. Developing an AI EQ is the best way to ensure you’re considering the ramifications and consequences that automation will have on the real people who depend on you.

Regulation and reckoning

Last month, California’s AI Safety Bill was vetoed by Gov. Gavin Newsom. Now, a fresh Roll Call report looks behind the scenes at how members of Congress pushed back on the bill.

According to Roll Call:

“The methodologies for understanding and mitigating safety and security risks related to AI technologies are still in their infancy,” the lawmakers wrote in an Aug. 15 letter. The state bill was “skewed toward addressing extreme misuse scenarios and hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, disinformation, non-consensual deepfakes,” they wrote. 

After Newsom’s veto, Marc Andreessen, tech entrepreneur and the co-founder of the venture capital firm Andreessen Horowitz, thanked the governor on X for “siding with California Dynamism, economic growth, and freedom to compute, over safetyism, doomerism, and decline.” 

This dynamic may explain why Senate Intelligence Committee Chair Mark Warner told the Atlantic Council last week, “Don’t hold your breath,” on the prospect of AI regulation passing Congress, according to MeriTalk.

“They all say they want regulation until you put words on a page,” said Warner. “I am deeply engaged with a lot of these companies. I’m saying, ‘You guys have got to be for something.’ Now, do you do what the Europeans have overdone? We’ve done nothing. There is somewhere in the middle [for], I think, smart regulation.”

While federal regulation in the U.S. continues to stall, states are working out public-private partnerships that build on established regulation.

NY Gov. Kathy Hochul launched the first phase of the state’s “Empire AI” initiative last week, touting that the state is “the first in the nation to establish a consortium of public and private research institutions advancing AI research for the public good at this scale.”

Regulation is briefly mentioned at the bottom of the announcement, which notes that Hochul “signed legislation to prioritize safe, ethical uses of AI as the state continues to build its AI footprint” this past February.

The legislation requires disclosure of any politically manipulated media.

Hochul framed this law as part of her 2025 Executive Budget, illustrating a best practice of positioning AI regulation as an investment in mitigating risk.

Texas is also attempting to figure out acceptable AI use cases, reports The Texas Tribune. During a four-hour hearing, state agencies highlighted significant savings in time and money that AI has provided their teams, while Consumer Reports policy analyst Grace Gedye claimed that private companies are using biased AI models to make housing and hiring decisions against the interests of working people.

As states continue to enact their own AI regulations in the absence of federal progress, it can be hard for communicators covering multiple markets and regions to keep up. Add the EU AI Act on top of that and the global comms puzzle gets even more complex to solve.

This fragmented patchwork is creating a new cottage industry. TechCrunch reports on a new platform, Relyance AI, that promises to help companies align their AI data practices with global and regional governance laws.

Whether you outsource your governance and compliance to a platform, to legal counsel or both, keep reading to stay abreast of the evolving patchwork of state and local AI legislation. This will help you align internal governance and guidelines with the current rules of play in the markets you operate in.

It will also position you as a valuable, future-forward, reputation-first leader across legal, public affairs, IR and other business lines. That’s a reckoning you actually have some control over.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

