AI can learn emotions. Are you ready to train it?

How to get started developing an AI EQ.

During his May 2024 speech at the TCC Loyalty Forum, author and AI futurist Mo Gawdat delved into how AI can predict and even exhibit emotions by following algorithmic patterns.

Gawdat suggested that most emotions, like fear and anger, can be broken down into logical processes because they are algorithmic in nature — love is the only emotion AI can’t recognize (because it doesn’t conform to an algorithm). For example, he claimed that fear occurs when “a moment in the future is perceived to be less safe than now.”

“Fear for a puffer fish makes it puff,” Gawdat said. “For a cat, it hisses. And for a human, we fight or flight. For a machine, it might move its code to another place.” The reactions may differ, but the underlying logic behind the emotion is the same.

As such, Gawdat believes that AI can easily be programmed to recognize these patterns and respond accordingly. How can communicators train AI tools to learn the appropriate emotions for each stakeholder engagement, and maintain their influence in the process?

Ahead of her workshop during Ragan’s Internal Communications Conference this week, Microsoft Senior Storyteller and author of “Brand Storytelling” Miri Rodriguez shared her tips for strengthening your AI EQ through prompting.

  1. Harness your human edge.

While AI is always learning, Rodriguez emphasized that it will always be driven by human programming.

“There’s definitely an opportunity for us to understand deeply what AI can do, but we need to remember that the human piece and integration will always be a part of that,” she said.

AI can be programmed to communicate with empathy, but its power as a tool remains limited by the ethical boundaries and emotional context you set for it. This puts communicators in a position to ensure the emotional cues and empathetic behaviors AI is trained on align with human values.

“AI plus EQ, artificial intelligence driven by humans, will never not exist,” added Rodriguez. The trick is finding a balance.

  2. Research emotional patterns and AI’s responses.

Before launching a campaign, you can use AI itself to investigate how it interprets emotion, a marked shift from traditional research methods.

Emotions like joy, anger, frustration, and calm can be reflected in a message’s tone, word choice, sentence structure, and even punctuation; a toy sketch of these surface cues follows the examples below.

For example:

  • Joy: AI might recognize joy when a message includes exclamation marks, positive adjectives (think “amazing,” “great”), or upbeat language.
  • Frustration: It may recognize frustration from negative adjectives (like “terrible,” “upset”) or harsh language (“screw this!”).
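The sketch below is a deliberately simple, rule-based illustration of those cues. The keyword lists and the `detect_emotion_cues` function are assumptions made for this example only; production AI models learn such associations statistically rather than from fixed lists.

```python
# Toy illustration of the surface cues listed above; real models learn these
# associations statistically rather than from hand-written keyword lists.

JOY_WORDS = {"amazing", "great", "wonderful", "love"}
FRUSTRATION_WORDS = {"terrible", "upset", "awful", "screw"}

def detect_emotion_cues(text: str) -> dict:
    """Count simple lexical and punctuation cues that hint at joy or frustration."""
    words = {w.strip("!?.,") for w in text.lower().split()}
    return {
        "joy_signals": len(words & JOY_WORDS) + text.count("!"),
        "frustration_signals": len(words & FRUSTRATION_WORDS),
    }

print(detect_emotion_cues("This launch looks amazing! Great work, team!"))
# {'joy_signals': 4, 'frustration_signals': 0}
```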

Rodriguez recommends going further: ask the AI to introduce vulnerability into its responses and observe how it communicates that. Give the AI feedback along the way, and praise its accurate interpretations so those developments are committed to its memory.

While AI models pick up certain patterns on their own, directing the AI to map those patterns to specific audiences helps it internalize the required nuances.

  3. Prompt engineering with emotion in mind.

AI enables more proactive content strategies by delivering emotionally attuned messages tailored to specific audiences. After submitting a brief describing these personas, you can ask AI about the emotional triggers for each one and draft your message accordingly.

“For example, I can deliver content for CIOs, CTOs in government spaces,” Rodriguez said. “I’m not in their space, but I can use AI to ask, ‘What keeps a CTO up at night?’”

“When I consider those concerns, I take my piece of content and ask, ‘How do I deliver this?’ The prompting becomes more granular, like ‘What are your recommendations? How do I do this? What format? What words can I use?’”

“By doing this,” she continued, “I’m training it so that when I say ‘governor’ or ‘CTO,’ AI understands the kind of tone and voice that fits the urgency or importance needed for that audience.”

Rodriguez also advocates practicing these prompt engineering skills consistently: by repeatedly asking granular questions like “What format?” and “What words should I use?”, you train AI to recognize the specific tones and voices that fit different personas, improving the quality of its recommendations over time.
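As a rough sketch of what that back-and-forth might look like in practice, the snippet below uses the OpenAI chat API as one possible stand-in. The persona description, model name, and questions are illustrative assumptions, not Rodriguez’s actual workflow, and any chat-capable model or tool would serve the same purpose.

```python
# Illustrative sketch: persona-grounded prompting that moves from broad to granular.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical persona; in practice this would come from your brief.
persona = "a CTO at a government agency, accountable for security and uptime"

def ask(question: str) -> str:
    """Ask one question with the persona as standing context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {"role": "system", "content": f"Answer from the perspective of {persona}."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Start broad, then get more granular, as Rodriguez describes.
concerns = ask("What keeps this persona up at night?")
delivery = ask(
    f"Given these concerns:\n{concerns}\n"
    "How should I deliver an announcement about a new internal tool? "
    "What format and what words should I use?"
)
print(delivery)
```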

Your prompt for a message should include the following elements, which the sketch after this list combines into a single template:

  • A goal. “I want a list of 3-5 bullet points… ”
  • Context. “For an upcoming speech from Executive X focused on the current state of the company and what it’s trying to achieve…”
  • Emotional expectations. “Respond with a tone that is friendly but authoritative…”
  • Source. “… and focus on the personas and guidance shared in the X brief.”
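The function and field names in this sketch are illustrative assumptions, not a prescribed format; it simply joins the four elements above into one reusable prompt.

```python
# Minimal sketch: combining the four prompt elements above into one reusable string.
def build_prompt(goal: str, context: str, tone: str, source: str) -> str:
    """Join the goal, context, emotional expectations, and source into a prompt."""
    return "\n".join([
        f"Goal: {goal}",
        f"Context: {context}",
        f"Tone: {tone}",
        f"Source: {source}",
    ])

prompt = build_prompt(
    goal="I want a list of 3-5 bullet points.",
    context=("For an upcoming speech from Executive X focused on the current "
             "state of the company and what it's trying to achieve."),
    tone="Respond with a tone that is friendly but authoritative.",
    source="Focus on the personas and guidance shared in the X brief.",
)
print(prompt)
```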

Once you have an output, Rodriguez recommends ensuring emotion is infused throughout the larger story by prompting AI: “If I am hoping to evoke [insert emotion here], in what other ways could this story develop?”

  4. Own your power to create ethical parameters.

Ultimately, Rodriguez reminds communicators that we have an ethical responsibility when programming AI to generate empathetic content.

“We might look at AI and go, ‘Oh, it’s self-learning,’ but it will only go as far as we let it go, within the parameters we create,” she said.

This requires continually monitoring and refining the AI-generated emotional insights you produce to ensure they align with both audience needs and ethical standards.

By blending AI’s iterative potential with human emotional intelligence, you can train AI to understand stakeholder emotions and deliver content that resonates on a deeper level.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.

In-depth resources on AI prompt creation and internal use cases are available exclusively to members of Ragan’s Communications Leadership Council. Learn more about joining here.
