AI is now a routine part of workplace communication, but a recent study of more than 1,000 professionals found that while AI makes managers’ messages more polished, heavy reliance on it can damage trust.

Employees tend to accept low-level AI help, such as grammar fixes, but become skeptical when supervisors use AI extensively, especially for personal or motivational messages.

This “perception gap” can lead employees to question a manager’s sincerity, integrity and leadership ability, undermining trust between manager and employee.

“We see a tension between perceptions of message quality and perceptions of the sender,” said Anthony Coman, Ph.D., a researcher at the University of Florida’s Warrington College of Business and study co-author. “Despite positive impressions of professionalism in AI-assisted writing, managers who use AI for routine communication tasks put their trustworthiness at risk when using medium to high levels of AI assistance.”

In the study published in the International Journal of Business Communication, Coman and his co-author, Peter Cardon, Ph.D., of the University of Southern California, surveyed professionals about how they viewed emails written with low, medium and high AI assistance.

Survey participants evaluated versions of a congratulatory message written with varying levels of AI assistance, rating both the message content and the sender.

While AI-assisted writing was generally seen as efficient, effective and professional, Coman and Cardon discovered a “perception gap”: professionals judged AI use in managers’ messages more harshly than the same use in employees’ messages.

“When people evaluate their use of AI, they tend to rate their use similarly across low, medium and high levels of assistance,” Coman explained. “However, when rating others’ use, magnitude becomes important. Overall, professionals view their AI use leniently, yet they are more skeptical of the same levels of assistance when used by supervisors.”

While low levels of AI assistance, such as grammar fixes or light editing, were generally acceptable, higher levels triggered negative perceptions.

The gap widens when employees detect heavy AI involvement in a message, leading them to question their manager’s authorship, integrity, care and competence.

The impact on trust was substantial.

Only 40 to 52 percent of employees viewed supervisors as sincere when they used high levels of AI, compared to 83 percent for low-assistance messages. Similarly, while 95 percent found low-AI supervisor messages professional, this dropped to 69 to 73 percent when supervisors relied heavily on AI tools.

The findings reveal that employees can often detect AI-generated content and interpret its use as laziness or a lack of care.

When supervisors rely heavily on AI for messages like team congratulations or motivational communications, employees perceive them as less sincere and question their leadership abilities.

“In some cases, AI-assisted writing can undermine perceptions of traits linked to a supervisor’s trustworthiness,” Coman noted, specifically citing impacts on perceived ability and integrity, both key components of cognitive-based trust.

The study suggests managers should carefully consider the message type, the level of AI assistance, and the relational context before using AI in their writing.

While AI assistance may be well received for informational or routine communications, such as meeting reminders or factual announcements, relationship-oriented messages that call for empathy, praise, congratulations, motivation, or personal feedback are better handled with minimal technological intervention.

University of Florida. (2025, August 12). Why AI emails can quietly destroy trust at work. ScienceDaily. Retrieved August 12, 2025 from www.sciencedaily.com/releases/2025/08/250811104226.htm