Adobe’s lack of transparency comes at a great cost

The tech company bungled its rollout of new terms of service.

Kennyatta Collins is a freelance brand strategist. Follow him on LinkedIn.

Adobe has joined the list of major tech companies that have found that even the idea of using AI in some products can cause fear and mistrust.

Adobe found itself in direct conflict with many of its users after an update to its terms of service went viral on X. The update included a provision giving Adobe a “non-exclusive, worldwide, royalty-free sublicensable license” to use or reproduce user-created content at the company’s discretion, along with the right to “access, view, or listen to your content through both automated and manual methods.” Many artists encouraged a boycott, fueled by speculation over whether their work would be used to feed AI models. It didn’t help that users were prevented from interacting with their Adobe products, including uninstalling them, until they opted into the new terms of service agreement.

Adobe’s initial attempt to rectify the situation, claiming the terms were only updated to improve content moderation, fell flat. The response failed to address privacy concerns and leaned heavily on legal jargon. Vague language about when and how Adobe would access user content, such as “operating or improving the services and software and to enforce our terms…” or “our automated systems may analyze your content and Creative Cloud Fonts using techniques such as machine learning…”, fueled paranoia as arguments for boycotting the service provider mounted. If the systems are automated, could they access content without the company knowing? What constitutes “improving the services and software” of Adobe, and what falls outside that boundary?

Scott Belsky, Adobe’s chief strategy officer and executive vice president of design and emerging products, addressed some of the harshest responses to the blog post on X, attempting to put things into perspective.

As users attempted to cancel their subscriptions to Adobe services, many discovered a 50% cancellation charge. The uproar over these cancellation fees grew so large that it caught the attention of the United States Justice Department and the FTC, which are now suing the company and two executives for allegedly violating the Restore Online Shoppers’ Confidence Act. The lawsuit alleges Adobe “imposed a hidden Early Termination Fee on millions of online subscribers and that Adobe forced subscribers to navigate a complex and challenging cancellation process designed to deter them from canceling subscriptions they no longer wanted.”

Mike Nellis, founder of digital fundraising and advertising agency Authentic and of Quiller, an AI fundraising tool, underscores the need for greater transparency and situational awareness from tech companies. “People are afraid of what they don’t understand, and you can’t rush unclear messages to market and not suffer the consequences that come with shocking the system,” says Nellis. “This is the issue inherently — it’s basically the Wild West out there.”

While the terms of service update didn’t mean Adobe was using user information to train its generative AI model, the company’s lack of clarity and its haste to address the scrutiny without understanding the severity of the moment provoked further distrust, resentment, and unintended consequences.

Adobe later clarified the terms of service and its commitment to being a creative partner to its users in a new blog post. The company acknowledged that the terms of use needed to better reflect its commitment to users, while making clear that it has “never trained generative AI on customer content, taken ownership of a customer’s work, or allowed access to customer content beyond legal requirements.”

But could it be too little too late?
