Trustworthy AI agents are ones people willingly use as tools and, eventually, as teammates. Responsible AI agents are a statement about the development process, starting with which datasets were used and how experts were involved.

Most organizations’ first experiments with AI agents will come through their technology partners. For example, Workday customers will test AI agents in HR and finance, while SAP has 40 AI agents that extend into supply chain and sales. Cisco Webex is delivering its future-of-work AI agents across employee and customer experiences, while Appian and Quickbase are transforming low-code application development with AI agents.
But I believe more enterprises will take the next step and experiment with developing AI agents. Some will focus on proprietary workflows, and others will develop industry-specific AI agents before they become commercially available. The more innovative organizations seek to develop customer experience AI agents and transform mobile field applications.
I’ve gotten into the weeds on developing AI agents, covering development platforms, non-functional requirements, a development blueprint, and the risks of rapid deployment. We also covered AI agents at two recent Coffee With Digital Trailblazers episodes: one on smarter AI and data governance that drives innovation, and a follow-up on going from data overload to biased AI.
Establish AI agent development principles
For this article, I aim to guide CIOs, CTOs, architects, data engineers, data scientists, and change leaders. What principles should they craft for their teams to create responsible and trustworthy AI agents? Here are seven identified by experts.
1. Use datasets to match the AI agent’s responsibilities
Organizations with tagged data catalogs, classified datasets, and data quality metrics have a leg up on developing AI agents. That said, delivery leaders working on AI agents should enlist data owners and subject matter experts to review datasets for completeness, biases, and other quality issues before developing AI models.
“The organization must have validated datasets for the problem at hand, and it is critical that domain experts manually curate this data to collect both the good and bad user feedback from operational agents,” says Debo Dutta, chief AI officer for Nutanix.
Recognize that the AI agents SaaS vendors are developing focus on frequently performed workflows that rely on recent, tightly scoped datasets. Recruiters prioritize recent applicants, while closing the books requires only the last several months of data. These are higher-value agents focused on repetitive work, and they are also easier to implement because of their narrow data scope and duration.
“Most organizations think they’re AI-ready because they have lots of data, but the truth is most knowledge bases are outdated or inconsistent, so AI ends up confidently delivering the wrong answers,” says Ryan Peterson, EVP and chief product officer at Concentrix. “Data readiness isn’t a one-time check; it’s continuous audits for freshness and accuracy, bias testing, and alignment to brand voice. Metrics like knowledge base coverage, update frequency, and error rates are the real tests of AI-ready data.”
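Peterson’s metrics can be operationalized with simple scripts. Here is a minimal sketch in Python that computes knowledge base freshness and error rates; the record structure and the six-month freshness window are illustrative assumptions, not Concentrix’s methodology.

```python
from datetime import datetime, timedelta

# Hypothetical knowledge base records; in practice, pull these from your
# content management system or data catalog.
articles = [
    {"id": "kb-101", "last_updated": datetime(2025, 8, 1), "errors_reported": 0},
    {"id": "kb-102", "last_updated": datetime(2023, 1, 15), "errors_reported": 3},
    {"id": "kb-103", "last_updated": datetime(2025, 9, 20), "errors_reported": 1},
]

FRESHNESS_WINDOW = timedelta(days=180)  # assumption: content is stale after six months
as_of = datetime(2025, 10, 1)

fresh = [a for a in articles if as_of - a["last_updated"] <= FRESHNESS_WINDOW]
freshness_rate = len(fresh) / len(articles)
error_rate = sum(a["errors_reported"] > 0 for a in articles) / len(articles)

print(f"Fresh articles: {freshness_rate:.0%}; articles with reported errors: {error_rate:.0%}")
```

The point of a script like this is that it can run on a schedule against the content repository, so staleness is caught continuously rather than in a one-time audit.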
Recommendation: Target AI agents at work that is hard for people to do without AI, is performed frequently, and needs only easy-to-scope datasets.
2. Formalize data quality validation and metrics
“Only 12% of organizations say their data is AI-ready, and without accurate, consistent, and contextual data, even the most advanced models risk unreliable outcomes,” says Tendü Yogurtçu, PhD, CTO of Precisely. “To ensure trust, companies must align governance, contextual quality, and continuous monitoring of data health.”
Yogurtçu recommends:
- Adopting key metrics, including observability, lineage, inclusion of all relevant data, and fairness.
- Verifying data quality through profiling, anomaly detection, and automated quality checks to strengthen visibility, quality, compliance, security, and privacy. (A minimal sketch of such checks follows this list.)
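As a concrete illustration of profiling and automated quality checks, here is a minimal sketch using pandas; the column names, sample data, and the 3x-median anomaly rule are assumptions for demonstration, not Precisely’s approach.

```python
import pandas as pd

# Illustrative dataset; in practice, load from your governed source of record.
df = pd.DataFrame({
    "applicant_id": [1, 2, 3, 4, 5],
    "salary": [52000, 61000, None, 58000, 910000],  # one missing value, one outlier
    "region": ["east", "west", "west", None, "east"],
})

# Profiling: completeness as the share of non-null values per column.
completeness = df.notna().mean()
print(completeness)

# Simple anomaly check: flag salaries above 3x the median (illustrative rule).
median_salary = df["salary"].median()
anomalies = df.loc[df["salary"] > 3 * median_salary, "applicant_id"].tolist()
print("Possible salary anomalies:", anomalies)
```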
Recommendation: Evaluate data quality and health before embarking on any AI modeling or AI agent development.
3. Seek help on compliance and regulations
I’m calling a spade a spade: innovators are generally not compliance experts. Internal risk and compliance teams have day jobs and may not be versed in evolving AI and data requirements. So, how should Digital Trailblazers get expert advice on compliance and regulations before embarking on an AI agent development effort?
“CIOs implementing AI agents must have trusted regulatory and compliance partners in place from day one,” says Ravi de Silva, CEO of de Risk Partners. “It’s about having smart talent that truly knows the intricacies of compliance and regulatory challenges – not relying entirely on AI as a replacement for trusted advisors. The organizations that will thrive are those viewing compliance not as a constraint on innovation, but as a strategic enabler of smarter, more resilient operations.”
Recommendation: With global regulations and customer expectations changing rapidly, organizations should get assistance from compliance partners early in the development process.
4. Establish non-negotiable safeguards
Some companies don’t have standards defined, while others have wikis and large documents full of them. I have the same recommendation for both groups: create a one-page list of non-negotiable standards as a starting reference point.
I’ve published a list of DevSecOps non-negotiables and another for data governance non-negotiables. It’s too early for comprehensive AI agent development non-negotiables, but there are several starting points.
Ravindra Patil, VP and practice leader of data science at Tredence, says, “Enterprises developing agentic AI must balance innovation with responsibility by embedding safeguards, transparency, and ethics from the start.”
Patil’s best practices include human-in-the-loop checkpoints, explainable design, continuous monitoring, and governance frameworks to ensure accountability. “Dynamic guardrails, privacy-preserving technologies, and adaptive feedback loops help AI agents remain trustworthy and safe as they scale in real-world environments,” says Patil.
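To make human-in-the-loop checkpoints concrete, here is a minimal sketch of an approval gate in Python; the `risk_score` heuristic, action names, and threshold are hypothetical placeholders, not Tredence’s framework.

```python
RISK_THRESHOLD = 0.7  # assumption: tune per use case and policy

def risk_score(action: dict) -> float:
    """Hypothetical scorer; real systems weigh cost, reversibility, and scope."""
    if action["type"] == "refund" and action["amount"] > 500:
        return 0.9
    return 0.2

def execute_with_checkpoint(action: dict) -> str:
    """Route high-risk actions to a human reviewer instead of auto-executing."""
    if risk_score(action) >= RISK_THRESHOLD:
        return f"QUEUED for human approval: {action}"
    return f"EXECUTED: {action}"

print(execute_with_checkpoint({"type": "refund", "amount": 1200}))
print(execute_with_checkpoint({"type": "status_update", "amount": 0}))
```

The design choice worth noting is that the gate sits in the execution path, so no high-risk action can bypass review even when the agent’s reasoning goes wrong.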
Recommendation: Start with an expression of business value and end-user personas to define a vision for the AI agent. Then reference existing standards around data, automation, and creating feedback loops.
5. Review third-party AI agents and avoid building generalists
One lesson to draw here is that AI agents from SaaS companies tend to be small in scope. They’re often designed to address a single problematic workflow, such as Workday’s financial-close agent or Cisco Webex’s meeting-scheduling agent. Why? One reason is that single-purpose AI agents require narrowly focused datasets, making them easier to develop and test.
“Avoid building generalist agents, as these often lose context, fall short on accuracy and precision, and can lead to more confusion than help,” says Sydnee Mayers, AI at Cribl. “Try not to fall into the ‘everything must be custom’ trap, as pre-built agents can provide functionality that shortens the time to value for using agents within your workforce.”
Recommendation: Mayers also says to beware of rogue AI agents that require broad permissions or lack guardrails.
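A lightweight defense against over-permissioned agents is an explicit tool allowlist enforced by the runtime rather than by the prompt. The following sketch is a generic pattern with invented tool names, not any specific vendor’s API.

```python
# Deny by default: the agent may only call tools it was explicitly scoped for.
ALLOWED_TOOLS = {"search_hr_policies", "draft_email"}

def call_tool(tool_name: str, **kwargs) -> None:
    if tool_name not in ALLOWED_TOOLS:
        # Block and surface the attempt for review rather than silently allowing it.
        raise PermissionError(f"Agent attempted out-of-scope tool: {tool_name}")
    print(f"Calling {tool_name} with {kwargs}")

call_tool("search_hr_policies", query="parental leave")

try:
    call_tool("delete_records", table="employees")
except PermissionError as err:
    print(err)  # logged and escalated in a real deployment
```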
6. Create test data sets to validate models and AI agents
I have an upcoming article on how to test AI agents. Until then, here are two recommendations:
- “When building AI models, it is important to keep some data aside for testing so you can validate the model and its predictions,” says Steve Simpson, VP of global enablement and learning at Copado. “If you train with all of your data, then you do not have a way to independently validate its predictions, because your data is in the model. Long before machine learning, it has been common practice to save portions of the data for independent validation.” (See the sketch after this list.)
- “Trusting an AI agent’s decisions requires a quantifiable level of data quality and security, not just volume,” says Christopher Hendrich, associate CTO of AppMod at SADA, An Insight company. “Adopt rigorous metrics and tests such as completeness and consistency scores, alongside bias audits, to continuously validate the data pipeline. A robust framework built on data governance, observability, and DataOps ensures the right data gets to the right place, securely and reliably.”
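Simpson’s holdout practice maps directly to a standard train/test split. Here is a minimal sketch using scikit-learn, with a synthetic dataset standing in for a curated, validated one:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a curated, validated dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# Hold 20% of the data aside; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Independent validation on the held-out set.
print(f"Holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```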
Recommendation: AI agents are non-deterministic, so expect to invest significantly more in testing them than in building them.
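One practical consequence of non-determinism is that a single passing run proves little. A common pattern is to execute each test case several times and score the consistency of the answers; in this sketch, `run_agent` is a hypothetical stand-in for your agent runtime.

```python
import random
from collections import Counter

def run_agent(prompt: str) -> str:
    """Hypothetical stand-in for your agent runtime."""
    return random.choice(["approve", "approve", "escalate"])  # simulated non-determinism

def consistency(prompt: str, runs: int = 10) -> float:
    """Fraction of runs that agree with the most common answer."""
    answers = Counter(run_agent(prompt) for _ in range(runs))
    return answers.most_common(1)[0][1] / runs

score = consistency("Should this $1,200 refund be auto-approved?")
print(f"Consistency over 10 runs: {score:.0%}")
```

Teams can then set a minimum consistency threshold per use case and treat regressions below it as test failures.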
7. Bring experts and end-users into the development process
The single biggest mistake in digital transformation initiatives is forgetting that bottom-up change management drives success. I wrote about that in 2018 and followed up last year with another article on digital transformation’s fundamental change management mistake.
I would make the same argument about deploying AI agents: define a change plan before building them.
“One of the biggest mistakes organizations make is treating AI adoption primarily as a technology problem, ignoring the people problem,” says Pam Njissang, senior consultant at Spring Catalyst. “Don’t deploy agents without first giving employees express permission and guardrails to experiment because competitive advantages come from culture change, not just new tools.”
“When teams stop asking ‘How do I protect my role from AI?’ and start asking ‘How can AI help me do work I’ve never been able to do before?’, that’s when evolution happens,” adds Njissang. “Success should be framed not as human vs. AI performance, but as what’s possible when humans are safely supported and amplified by intelligent systems.”
Recommendation: Although this is the last on my list of essential principles, I recommend starting with a people-first approach to developing AI agents. Leading with change management is a best practice for driving end-user adoption, quelling fears, and getting faster feedback.
Are you developing an AI agent? I’d love to hear about it.