Speak to CIOs and CHROs, and they’ll say they are working on developing AI literacy in their organizations. But what exactly does that mean?

I recently hosted a Coffee With Digital Trailblazers episode titled “Learning 2026: What is an AI-Literate Organization? To kick off the discussion, I presented Perplexity’s response to my prompt, “Provide a definition for AI literacy.”
“AI literacy is the knowledge and skills needed to understand, use, and question artificial intelligence systems in a practical1, informed2, and responsible way3. It means people grasp what AI can and cannot do4, can work effectively with AI tools5, and can critically evaluate their outputs6, risks7, and ethical implications8 rather than accepting them at face value.” The AI used the following sources in formulating its response: NJIT, GDPR Local, Denison Edge, and IBM.
Not bad, eh? Based on that definition, I numbered eight leadership recommendations for developing AI-literate organizations. Based on input from Coffee With Digital Trailblazer speakers and several AI experts, the following is a leadership guide for CIOs, CHROs, and other C-level leaders.
1. Target practical visions, not moonshots

AI literacy requires a dual focus on AI. What KPIs should teams experimenting with AI focus on that deliver ROI, and what’s the longer-term AI vision? AI innovators should draft two AI vision statements that convey these two objectives and steer clear of impractical moonshots.
“AI capability is as much about the potential as it is about what is happening now,” says Jason Williamson, CEO of MythWorx. “You must understand the organizational behavior impact, the levers to profitability, and what it takes to get to the next step. While AI is new and exciting, it differs from other innovations in terms of adoption. The impact and results may be greater, but integrating the capabilities is the same as anything else that is new and worth doing.”
Vision statements should define success criteria, and founders should identify a change management process when piloting AI agents and other AI capabilities.
2. Being informed starts with the Board
Sunil Senan, SVP and global head of data, analytics & AI at Infosys, says AI literacy starts at the top. “Boards that treat AI as a standing strategic priority, rather than an occasional technology update, are better positioned to manage risk and unlock value,” Senan said.
Senan shared research from Infosys showing that while 86% of boards now discuss AI regularly, fewer than one-third are deeply engaged in evaluating AI risks, explainability, and outcomes. “AI-literate organizations ensure leadership understands how AI systems make decisions, where human oversight is required, and how AI performance ties directly to business value,” adds Senan.
When discussing AI with the Board, spend one-third of your allotted time in each of these three areas: (1) AI Strategy and the organization’s objectives, (2) AI governance and protecting the business, and (3) AI literacy, to debunk AI hype and share information on relevant innovations, competitive intelligence, and R&D opportunities.
3. Use AI responsibly: Establish and explain data security
Nico Dupont, founder and CEO of Cyborg, suggests these practical steps for improving AI literacy from a security and governance perspective:
- Classify AI data based on sensitivity, regulatory requirements, and business impact, including embeddings and retrieved context, not just source systems.
- Extend zero-trust principles into AI workflows, ensuring no component implicitly trusts access to sensitive data.
- Adopt encryption-in-use architectures for vector search and inference so that data remains protected even as it is queried and processed.
- Establish cross-functional AI governance, aligning security, compliance, and technology teams around shared visibility into AI data flows and risks.
“Without strong data protection, AI can’t safely access the information that makes it valuable in the first place,” says Dupont.
Don’t expect employees to learn their AI responsibilities just from training materials or courses. Guidelines must be clearly spelled out in policies and implemented in AI initiatives from their onsets. StarCIO recommends defining data governance non-negotiables.
4. Accelerate learning on what AI can and cannot do

During the episode on AI-literate organizations, speakers shared key insights on leadership, collaboration, and defining business value. Amid all the AI hype, the speakers shared recommendations for aligning leaders and teams around what’s feasible today.
Joe Puglisi, growth strategist and fractional CIO at 10xnewco, says, “Identify one or more members of the staff whose sole responsibility, and I do mean sole responsibility, would be to track, document, and share developments in the AI realm. They would be responsible for fostering a clear and concise understanding of the opportunities that AI presents for the company.”
Elena Putilina, PhD and VP at AutoRemind, said that the business community needs to understand that AI is not a miracle but is much more than a productivity tool. She said, “It’s going to change the way people work within the organization. AI literacy is about grounding the commercial functions, not only sales and marketing, but especially for CFOs. They’re the decision makers in what AI could do for your organization with its development.”
Joanne Friedman, CEO and co-founder of ReilAI, shared how leaders can simplify communications around what AI can do and where to focus organizational efforts.
- AI can deliver better, safer, and faster decisions.
- Prioritize areas where decisions are expensive, frequent, and consequential.
- Fund literacy where it converts into outcomes such as money (margin/cash), risk (quality/safety/compliance), and throughput (cycle time/velocity).
5. Upskill so that people can excel using AI tools
Two weeks ago, Anthropic and OpenAI said AI now writes 100% of their code, causing a small crash in SaaS stocks. Investors suddenly believe that SaaS will be disrupted as enterprise CIOs redevelop their operations using vibe coding and other AI code-generation capabilities.
I’ve seen some of the new capabilities, and they’re very impressive. But I doubt they will disrupt the average enterprise’s software development process overnight. What CIOs must do is prioritize developer training while also evaluating the AI agents offered by enterprise SaaS companies.
“AI literacy isn’t just about generating code faster; it’s about giving engineers the tools and trust to safely govern AI output and focus on high-impact, creative architecture,” says Priya Sawant, GM and VP of platform and infrastructure at ASAPP. “When leaders invest in upskilling and involve DevOps teams in AI adoption, those teams evolve from operators into the driving force behind AIOps and organization-wide innovation.”
Liz Martinez, SVP value realization at Wauna Credit Union, recommends a practical approach to getting employees to try out AI as part of their jobs. “I think about how I define AI literacy for myself when I consider some tasks I need to do. I need to do X and define a strategy and a plan for the steps. Then I ask, how can I get AI to do some of the steps for me? What are the ramifications of using AI for them, and is it safe to ask AI do them? And if AI does them for me, what are the ramifications? AI literacy means thinking about using AI as a tool and a partner, just as you would a teammate or a vendor.”

6. Validate AI’s outputs, responses, and recommendations
A recurring theme at the Coffee With Digital Trailblazers on AI is the development of critical thinking skills. How should employees validate the response of an LLM’s response or an AI agent’s recommended course of action?
Many companies established a data-driven culture while introducing citizen data science. AI reaches a much broader audience, requiring leaders to educate more employees around data and AI literacy.
Martin Davis, managing partner at DUNELM Associates, shared his perspectives on manufacturing companies. “You have a lot of extremely good people, but most of it is gut feel. They know that they can improve that process by tweaking this or changing that. It’s extremely interesting when you put the data in front of them to trust, but verify by looking at what the data says, and then see how you can make improvements,” said Davis.
7. Understand AI risks and tradeoffs
When there’s emerging and rapidly changing technology, there are two risks in every C-level leader’s mind.
- Driving too slow to evaluate and adopt, risking disruption from new competitors/
- Pushing teams to deploy to production without release readiness criteria, creating security and other operational risks.
Part of AI literacy is ensuring employees understand the risks and the tradeoffs between speed to market and operational resiliency.
During the Coffee With Digital Trailblazers, John Patrick Luethe, owner of Comfort Keepers Seattle, was most afraid of being disrupted. “You don’t want to get left behind like Kodak when digital cameras came out. It’s also really important for companies to determine where they want AI to help and where they need humans. Leaders must consider where the company’s differentiators will be and what’s really important to customers and employees. Based on that, make a conscious decision about what things are so important that they need the human touch and where AI can provide improvements,” says Luethe.
Derrick A. Butts, enterprise CXO AI cyber resiliency advisor, CISO, and founder of Continuums Strategy, is most concerned about ensuring sufficient security practices are in place. “What kind of security by design practice do we need to implement if you have developers working with different code generators and vibe coding tools. What are the guardrails, and how are you securing new code to prevent intellectual property from leaking into the ether? Make sure you have red teams simulating different problems, such as prompt injection, data leaks, or other types of infiltration,” says Butts.
StarCIO recommends formulating agile AI teams that include teammates specializing in security and operations. See my recent article on the top seven leadership mistakes when assigning people to agile teams.
8. Evaluate ethical implications and challenge assumptions
Research from Infosys reports that Boards fear the impact of AI-powered information, listing the most threatening risks as misinformation proliferation (39%), privacy violations (31%), deepfakes (25%), security breaches (24%), hallucinations or harmful predictions (18%), and lack of explainability (16%).
Biased or discriminatory output came in next at 15%, followed by ethical violations at 10%.
That’s too low from my perspective, especially in large enterprises, where overly scripted, policy-heavy environments easily overwhelm employees. Leaders should define ethical principles and ensure teams understand data biases.

Joanne Friedman shared this insight during the Coffee With Digital Trailblazers episode, Data Overload to Biased AI: Why Leaders Struggle to Find Decision-Worthy Information.
“Agents that are built out of generative AI are a round peg, square hole to a certain extent, because then you’re using a model that’s been trained not for a purposeful function, but to be the ocean,” says Friedman. “The LLM will give you an answer to whatever you ask it, and will do it in an affirmative way, which is an inherent bias. It’s not probabilistic or deterministic; the data is stochastic, meaning it’s constantly changing. The data can be interpreted in many ways, and nuance counts. You have to layer in semantics to get the right context to the right question in the right way.”
Heather May, founder and president of May Executive Search, encourages Digital Trailblazers to recognize biased orders and challenge them. “CEOs may try to set an objective that aligns with their biases and wants the data to answer their question. If you don’t know what the objective is, you can have a bunch of yes-sayers in the C-suite. You have to be brave enough and confident enough to respond, “No, this is not what we’re trying to achieve.” Yes, the CEO should ultimately make the decision, but they need good input and data,” says May.
Define your plan for AI literacy
While this is a guide, every business should develop an AI literacy program tailored to its objectives, risks, and culture. Today, AI is only reshaping businesses; it’s the Digital Trailblazers who will drive transformation.

























Leave a Reply