AI news for communicators: Tools, partnerships, risk and regulation
Innovations from Apple and Google, national security guidance from the White House, and more.
This week, we’re looking at Apple Intelligence and Google’s upcoming browser-based AI agent, fresh AI partnerships at Universal Music and Reuters, the White House’s National Security Memorandum on AI and more.
Here are the newest AI developments that communicators need to know about.
Transformative tools
The future of AI won’t require us to open a separate program and write prompts; instead, generative AI functionality will be woven into the tools we already use. Think of Microsoft Copilot’s integration into the software across the Office 365 suite.
Earlier this week, Apple introduced Apple Intelligence as part of its iOS 18.1 update. After updating the operating system, iPhone users can go into the Settings app and request beta access.
ZDNET reports that Apple Intelligence doesn’t sync with ChatGPT yet, but the functionality is coming.
So what can it do? It can correct your grammar, spelling and style in the Apple Mail and Notes apps, along with several third-party apps. Its notification summaries can parse long group texts to surface the most important information. It can also record phone calls, or record audio in Notes, and summarize the key points.
That integration into other apps is also at the heart of what makes AI agents so enticing. These programs interact with the environments you expose them to, parse data and make decisions in the background to fulfill the goals you set for them.
Earlier this month, Nvidia announced partnerships to create AI agents with several companies including Accenture and Deloitte, while Microsoft unveiled new AI agent tools for healthcare.
The introduction of AI agents continues with news that Google is reportedly building Project Jarvis, which The Information reports is a “computer-using agent” system that can integrate with a web browser to carry out tasks while you do other work.
The report explains that Jarvis is being designed to help users “automate everyday, web-based tasks” by screenshotting web pages, interpreting them and then clicking buttons or writing text on your behalf. Those tasks include “gathering research, purchasing a product, or booking a flight.”
Google is planning to release Jarvis to a small number of testers in December, but those plans may change.
Meanwhile, Cisco is launching a Webex AI agent to handle customer service calls. “We believe that in the next few years, a large majority of first-time calls will be handled by an AI agent that will be just as interactive, dynamic, engaging and personable as a human agent,” Jeetu Patel, executive vice president and chief product officer at Cisco, said in the announcement.
AI agents will no doubt transform workplace operations and automate some roles as functionality and use cases scale. Will you wait to be told that your business is using them, or begin to test and iterate now?
Clearly defining how and where the communications function can implement agents is a logical next step. Consider mapping out a workflow and figuring out where these tools plug in, as Microsoft CCO Frank Shaw showed us earlier this month.
Before that, start by defining your baseline for measuring productivity, then explain what you plan to do with time regained while the agents work—and how those tasks create value for your organization.
What do these advancements mean for startups and smaller organizations? The larger, seismic changes that come with AI agents and other integrated tools won’t disrupt startups so much as transform them, Hemant Taneja and Fareed Zakaria write in HBR:
For the first time since the advent of the internet, every CEO in every industry in every country is trying to figure out how to adopt a technology at the same time. This time, ambitions and hopes are even higher. Not only does AI offer to translate existing processes from one realm to another, as digitization did before it, but it also learns by itself.
That key difference means that the productivity gains we can expect are not a one-time “step change” but rather a continuously improving “slope change.” At its core, AI is a workforce-transforming technology, one that will unleash human productivity by creating a parallel labor pool that can shoulder the burden of much of the work humans would rather not do. AI will become a source of abundance, exerting deflationary pressure across the economy by increasing the supply of labor for caretaking, tutoring, maintenance, and more.
They further explain that startups are at a disadvantage in the AI race compared to bigger companies, which have “lots of data and lots of expensive computing power” to build models, funds to pay for the power to analyze those models, and customer relationships to monetize them.
That doesn’t mean startups can’t add value: attracting entrepreneurial talent, innovating and adapting will always be more challenging for larger orgs. Our recent AI survey with Ruder Finn found that internal communications functions at larger orgs are slower to adopt AI, too.
Being small and nimble can be an advantage when trying to move internal stakeholders along the AI maturity curve, as it may mean you have more inroads to gain leadership buy-in for new tools. Lean into whatever appetite for future-focused entrepreneurship your executives show to create touchpoints for collaboration, while positioning your comms team as the convener at each intersection.
Media deals and use cases
Disney, which recently scaled back its visual effects spending to save money, teased to TheWrap its plans to launch a major AI initiative that will augment its post-production and visual effects process.
While details of the plan remain vague, the effort highlights Disney’s intention to automate processes where it can save money. Thinking about where AI can trim the comms budget as you scale your tech stack is worth starting now, too.
Of course, using AI to augment creative output continues to raise a host of ethical questions. Those same questions were at the heart of the WGA and SAG-AFTRA strikes in Hollywood last year, and prompted high-profile lawsuits from artists like Sarah Silverman.
AI companies are navigating this delicate conversation not with regulation, but with the promise of legal protections built into their products. Adobe announced an update to its AI image-generation tool Firefly earlier this month that only uses licensed content in its outputs, and those protections may soon be built into AI-generated music, too.
Earlier this week, Universal Music announced it struck a deal with “ethical AI music company” Klay Vision, according to The Hollywood Reporter, touting “a pioneering commercial ethical foundational [LLM] for AI-generated music that works in collaboration with the music industry and its creators:”
The two companies said that they share “the conviction that state-of-the-art foundational AI models are best built and scaled responsibly through constructive dialogue and consensus with those responsible for the artistry that shapes global culture.” They added: “Building generative AI music models ethically and fully respectful of copyright, as well as name and likeness rights, will dramatically lessen the threat to human creators and stand the greatest opportunity to be transformational, creating significant new avenues for creativity and future monetization of copyrights.”
Big if true, but whether this amounts to PR or sets a copyright protection precedent for other creative industries to follow remains to be seen. For now, working with services and tools that bake legal protections into the user agreement remains the best way to do business above board.
More AI partnerships are being inked in the media publishing world, too. Reuters announced last week that it will license its news content to Meta for use in its AI chatbot. Details of the deal remain confidential.
Meta AI, the company’s chatbot, is available across its services, including Facebook, WhatsApp and Instagram. The social media giant did not disclose whether it plans to use Reuters content to train its large language model.
“We can confirm that Reuters has partnered with tech providers to license our trusted, fact-based news content to power their AI platforms. The terms of these deals remain confidential,” a spokesperson for Reuters said in a statement.
Through its partnership with Reuters, “Meta AI can respond to news-related questions with summaries and links to Reuters content,” a Meta spokesperson said in a statement sent by email.
This adds Meta to the list of other big companies, including OpenAI and Perplexity, that have struck AI partnerships with news organizations like Hearst. Whether you work in media or not, this will undoubtedly shape your earned media strategy sooner rather than later.
If you hold your relationships with journos close, ask those you know at these organizations how the partnerships are affecting their reporting process and workflows. Is there a separation of church and state between AI and editorial? If so, those examples may be worth applying to your own internal governance approach. If not, learning how their roles are changing in service of these partnerships can inform how you can tap their expertise more strategically in the future.
Remember, AI will most benefit those closest to the business. Learning how your colleagues are gaining influence, and encouraging them to get the training they need to do so, is a win-win.
Risk and regulation
The White House published a National Security Memorandum on AI last week. Its lengthy title, “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence,” was likely not drafted using ChatGPT.
The memorandum’s stated objectives are:
- To maintain US leadership in developing advanced AI systems through action on AI talent and immigration, energy and infrastructure (think of recent investments in nuclear energy to power AI data centers), and counterintelligence to compete with China’s AI advancements.
- To accelerate the adoption of frontier AI systems across US national security agencies by encouraging them to better attract and retain AI-savvy talent, making it easier for private sector AI companies to work on security matters, and having the Department of Defense (DOD) reexamine legal obligations and procedures around enabling responsible AI. The memorandum defines frontier AI systems as “a general-purpose AI system near the cutting-edge of performance, as measured by widely accepted publicly available benchmarks, or similar assessments of reasoning, science, and overall capabilities.”
- To develop governance frameworks with national security in mind, with each agency required to designate a Chief AI Officer and have them form a coordinated group that’s able to work with international partners and institutions.
While the White House seems to be making progress with its latest memorandum, it’s important to remember that a contentious election may derail the momentum this guidance creates.
However the November elections accelerate or stall this momentum, we now have clearer language to talk about federal regulation. The Open Source Initiative (OSI) released the first version of its Open Source AI Definition after several years of academic and industry collaboration.
You might be wondering — as this reporter was — why consensus matters for a definition of open source AI. Well, a big motivation is getting policymakers and AI developers on the same page, said OSI EVP Stefano Maffulli.
“Regulators are already watching the space,” Maffulli told TechCrunch, noting that bodies like the European Commission have sought to give special recognition to open source. “We did explicit outreach to a diverse set of stakeholders and communities — not only the usual suspects in tech. We even tried to reach out to the organizations that most often talk to regulators in order to get their early feedback.”
Maffulli added that an open source AI model, by OSI’s definition, allows the user to fully understand how it’s been built, including usage rights for developers. “That means that you have access to all the components, such as the complete code used for training and data filtering.”
This precedent could be a foundational first step for larger federal regulatory progress.
The White House understands that with each piece of new tech comes new risk, and you should, too. Not all tools have legal protections built in, and any help you can afford to manage the inherent risks of scaling your AI use will drive adoption and buy-in across the organization.
With this in mind, IBM announced new AI data security software last week.
IBM claims its new Guardium Data Security Center offers “unified, SaaS-first data security capabilities” by providing “a common view of organizations’ data assets, empowering security teams to integrate workflows and address data monitoring and governance, data detection and response, data and AI security posture management, and cryptography management together in a single dashboard.”
This streamlined approach to risk management can make it much easier for your IT and crisis teams to collaborate, especially with the tool’s generated risk summaries.
That would greatly benefit highly regulated, high-risk organizations such as hospitals. AP News reports that OpenAI’s AI audio transcription tool, Whisper, is hallucinating and generating nonsense when transcribing patient consultations with doctors. When hallucinations occur as we’re researching a story, it’s cute. When they happen in a medical transcript, it’s potentially catastrophic.
While OpenAI warned against using Whisper in high-risk environments, not all have listened. “Over 30,000 clinicians and 40 health systems, including the Mankato Clinic in Minnesota and Children’s Hospital Los Angeles, have started using a Whisper-based tool built by Nabla, which has offices in France and the U.S. Nabla said the tool has been used to transcribe an estimated 7 million medical visits,” AP reported.
This is a reminder of what’s at risk when organizations don’t have their own AI governance and guidelines in place. On the flip side, consider Penguin Random House, which now includes language on its copyright page explicitly forbidding use of its content for training AI models, a positive example to model your own content’s legal protections after.
With the future of American leadership on the ballot, we’re reminded that waiting for regulatory guidance from on high is a fool’s errand. Continue to learn from these examples what to do and what not to do, then bring them to your partners across the business and have a conversation about where to go next.
What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!
Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.