New Research Finds Why Some Surveyed Workers Don’t Trust Agentic AI

Updated Jan 22, 2025, 11:47am EST

Reliability and trust can help determine the acceptance and success of new and rapidly improving technology. If people can’t trust the technology and rely on it to get the job done, they are less likely to use it or recommend it to others.

The latest example is agentic AI, a more autonomous form of AI designed to plan and carry out multi-step tasks with limited human direction.

“The way humans interact and collaborate with AI is taking a dramatic leap forward with agentic AI…With their supercharged reasoning and execution capabilities, agentic AI systems promise to transform many aspects of human-machine collaboration…” Harvard Business Review explained.

But new research underscores the reasons why some workers are reluctant to use AI or its latest iteration.

  • 33% of those surveyed were worried about the quality of work that is produced by AI
  • 32% said there is a lack of human intuition and emotional intelligence
  • 30% did not trust the accuracy of AI-generated responses that they received

That’s according to the results of a new study conducted by YouGov for Pegasystems (NASDAQ: PEGA), a software development, marketing, and licensing company. In November 2024, YouGov surveyed more than 2,100 working adults in the U.S. and UK who use digital devices for their jobs.

More than half of those surveyed (58%) were already using AI agents for various tasks.

“This research underscores that many still have reservations [about the technology], and it’s up to enterprise leaders to strategically and thoughtfully incorporate the technology to help ensure adoption,” said Don Schuerman, chief technology officer of Pegasystems, in a statement provided by the company’s public relations representative.

Diving deeper into the concerns of the surveyed workers:

  • 47% believed AI lacks human intuition and emotional intelligence
  • 40% were uncomfortable submitting AI-generated work
  • 34% worried that AI-produced work isn't as good as their own, which is tied to concerns about the technology’s accuracy and reliability

There are other risks for companies that rely on AI, including creating more work and stress for employees and overestimating how ready their organizations are to embrace the technology.

AI experts and CEOs shared their concerns and reservations about using programs and tools powered by agentic AI.

Adopt AI Carefully

“Until AI can guarantee ethical decision-making and 100% compliance, every business should adopt it carefully, audit it rigorously, and override it as needed. We can’t just let AI run our businesses for us, but we can certainly use it—with strict monitoring—to augment our efforts,” attorney Jonathan Feniak counseled via email.

Approach AI With Caution

“Executives should approach AI solutions with caution, especially free or mass-market tools. [The] age old saying applies: ‘If the product is free, you are the product,’” Elle Ferrell-Kingsley, an AI ethicist and specialist, warned via email.

Some AI tools can come with built-in risks.

“Public AI tools often come with greater risks, especially regarding sensitive company data. It’s generally advised not to input any confidential information (or client info) into these systems. However, it can be easy for employees to think of it as a quick fix or great new tool to enhance their workflow,” she cautioned.

Understand AI’s Limitations

“Trust in AI isn’t just about the technology itself—it's also about understanding its limitations. For example, while AI can streamline data analysis and enhance compliance efforts, it’s still critical to have human oversight to interpret insights and ensure alignment with broader business goals,” Ryan Niddel, CEO of MIT45, observed via email.

Prioritize

“One of the biggest concerns I see with AI tools is the potential for over-reliance on algorithms without fully understanding the data sets driving them. In my opinion, companies must prioritize tools that demonstrate ethical use of data and deliver actionable insights rather than generic outputs,” Niddel concluded.

Have AI Policies

“Whether [or not] an executive trusts AI and AI-powered tools, they should create processes and policies. And they should expect that even if they don’t trust the technology or the tools … others on their team – both executives and underlings – will trust AI at differing levels,” David Radin, CEO of Confirmed, who conducts workshops about time management in the age of AI, advised via email.

Policies and procedures that govern the use of AI at companies and organizations should include some basic provisions.

“A properly built AI policy and process should include organizational expectations as well as people who have been empowered to carry out the policy, including enforcement. It should address far-reaching issues from copyright infringement (in both directions) to potential losses or harms to others, and a methodology for addressing each. It should also include an ongoing communications and training process to ensure that all members of the executive team and staff are aware of the policies and how to enforce it,” Radin advised.

A Balancing Act

All technologies come with a certain level of risk and potential reward. Business leaders who are too quick to embrace AI and agentic AI before they and their workers are fully ready could create more problems than they solve.
