AI news for communicators: What’s new and notable
Another busy week of AI news brings fresh technological developments alongside new risks and advancing talks of federal regulation.
Read on to see what to be aware of in the ever-changing AI landscape.
Tools and advancements
ChatGPT users will soon be able to test new capabilities with the recent announcement of GPT-4o, the successor to the GPT-4 model, reports The New York Times. The new model is a step toward OpenAI’s goal of artificial general intelligence (AGI): machines that can analyze and generate ideas at a level comparable to the human brain. The new technology will allow communicators to develop products like chatbots, digital assistants, search engines and image generators on their own.
Meanwhile, OpenAI is also under fire for a controversial upgrade to its chatbot. The new voice feature, named “Sky,” which reads responses aloud, has been likened to actress Scarlett Johansson’s voice as the AI assistant in the 2013 film “Her.” OpenAI CEO Sam Altman released a statement saying the voice of Sky is not Scarlett Johansson’s and was never intended to resemble hers, though past social media posts suggest otherwise.
This deepfake saga underscores the growing risks AI poses to an organization’s intellectual property. Partner with your legal and IT teams to put a plan in place for how any issues will be handled and communicated, mitigating future risk.
Apple is finding new ways to integrate AI into its systems, reports The Verge, with reported features including transcription, auto-generated emoji and search improvements. The Voice Memos app is also rumored to get an AI upgrade that generates transcripts of your interview recordings or presentations. The company also reportedly plans to announce a “smart recap” feature summarizing missed texts, notifications, web pages, news and other media. This upgrade could be a helpful way for the busiest of people to stay informed while minimizing the “noise” of their notifications.
According to Bloomberg, Apple is also nearing a deal with OpenAI to integrate ChatGPT into iOS 18. However, ChatGPT may not be the only chatbot to arrive: Apple has also reportedly held talks with Google and Anthropic.
Risk and regulation
Google’s new “AI Overview” feature provided some misleading search results to users. An NBC investigation found the queries “How many feet does an elephant have” and “How many Muslim presidents in US” returned false and misleading answers.
“The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experience using Search,” a Google spokesperson said in a statement shared with NBC, but posts sharing these results have gone viral online.
Mishaps like these are the latest reminder that AI tools can still hallucinate. However comical the errors, AI summaries and content should always be fact-checked for accuracy.
The most recent call for federal AI regulation comes from former members of OpenAI’s board. In an op-ed for The Economist, Helen Toner and Tasha McCauley write:
“Certainly, there are numerous genuine efforts in the private sector to guide the development of this technology responsibly, and we applaud those efforts. But even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role.”
The pair advise that, unlike the laissez-faire approach to the internet in the 1990s, the high stakes of AI development require external oversight. Toner and McCauley envision regulation that ensures AI’s benefits are realized responsibly and broadly. Specifically, these policies may include transparency requirements, incident tracking and government visibility into progress.
Amid all of the lawsuits and criticism, OpenAI is creating a safety and security committee to explore how to handle risks posed by GPT-4o and future models. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company shared in a press release. Those curious about the committee’s recommendations will be able to read them after the board reviews them in 90 days.
Given the recent news that Washington is starting to push for new AI rules, it is worth watching how Toner and McCauley’s op-ed may inform future legislation. The NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe) is a bipartisan proposal that senators are looking to introduce as early as June. The law would stop individuals or companies from using AI to produce an unauthorized digital replica of a person’s likeness or voice.
A new report from the Bank for International Settlements (BIS) surveyed 32 of its 63 central bank members about their interest in adopting AI for cybersecurity.
Seventy-one percent of respondents are already using generative AI, while 26% plan to incorporate the tools into their operations within the next one to two years. Respondents’ top concerns include the risks related to social engineering, zero-day attacks, and unauthorized data disclosure.
The cybersecurity sector may be able to enhance traditional capabilities with AI: respondents say the largest benefits include the automation of routine tasks, improved response times, and deep learning insights. This data suggests experts believe AI may be capable of detecting threats sooner by analyzing patterns beyond human capabilities.
These capabilities may stand out, but for some companies, the costs of implementing these tools remain a concern. While it’s no surprise that BIS anticipates this move could replace staff and “free up resources” to be reallocated, it’s important for communicators to understand how this increase in AI-powered cybersecurity detection can augment or enhance your own crisis communications strategy in the event of a cyberattack.
Finally, a lesson from Meta about forming or restructuring your organization’s advisory group. A group recently created to advise on AI and technology product strategies at Meta has been criticized for its apparent lack of diversity. “If you do not have people with diverse perspectives in the process of building and developing and using these systems, they are going to have a severe risk of perpetuating bias,” Alyssa Lefaivre Škopac, head of global partnerships and growth with the Responsible AI Institute, told CIO.
The advisory group’s composition does not reflect diversity, equity and inclusion (DE&I) best practices. Recent Gartner research, “How to Advance AI Without Sacrificing Diversity, Equity and Inclusion,” found that rapid AI integration, along with the biases inherent in the models, is resulting in trade-offs with enterprise DE&I initiatives.
Meta’s situation and Gartner’s research show the need for a diverse and representative advisory group that truly reflects the expansive backgrounds, identities, perspectives and lived experiences of all stakeholders. This is a wake-up call for organizations to pull not only from a variety of backgrounds, such as race and gender, but also from business functions like compliance, legal, HR and technology procurement.
The workforce
Alibaba founder Jack Ma predicted in 2017 that within 30 years, a robot would likely be on the cover of Time magazine as the best CEO. New research suggests this prediction may come true sooner than 2047.
The latest class of employees to be threatened by AI: the CEO. EdX survey data from last summer reveals nearly half of executives believe most or all CEO roles should be completely automated or replaced by AI. Of course, this data should be taken with a grain of salt given that the survey is nearly a year old.
Remember, humans provide many assets that machines do not. Accountability, leadership and responsibility are three qualities technology does not yet possess.
On the contrary, AI will not replace your job but will help you do it, says Netflix co-CEO Ted Sarandos.
In a recent interview with The New York Times, Sarandos told reporter Lulu Garcia-Navarro that “A.I. is not going to take your job. The person who uses A.I. well might take your job,” echoing a line often delivered during Ragan AI training sessions.
Last year, during the Hollywood strikes, Netflix posted a machine learning job that paid up to $900,000, sending a signal to writers in the process. How Sarandos sees AI interacting with the protections writers won during negotiations remains a lingering question, and its ramifications will send a larger signal to content producers across industries.
What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!
Callie Krosin is a Reporting and Editorial Intern at Ragan and PRDaily. Follow her on Twitter and LinkedIn.