The rapid infusion of AI into organizations is not simply a technology shift. It is a leadership shift.
Managers don’t need to become data scientists, machine learning engineers, or AI product specialists. Some may, but that will not be the primary expectation for most. The more important shift is for managers to help their teams use AI wisely, responsibly, and productively in the actual flow of work.
This shift is becoming urgent because managers are already using AI, and not always in appropriate ways.
For example, a manager might use AI to summarize performance notes from the last 10 months in order to draft a performance appraisal. On the surface, that sounds efficient. It may save time, organize scattered observations, and help the manager write more clearly.
But it also creates risk.
Those notes may include confidential employee information, coaching conversations, performance concerns, interpersonal conflict, disciplinary documentation, or sensitive context that should not be entered into an AI tool. Even when the intent is innocent, the manager may have created a privacy, compliance, or trust issue without realizing it.
That is why the next critical management skill will not simply be AI literacy. It will be AI discernment: the ability to know when AI can support the work, when it should be limited, and when it should not be used at all.
The most valuable managers will be those who can translate between business problems, human judgment, team workflows, and responsible AI use. They will help their teams adopt AI without overtrusting it, misusing it, or allowing it to quietly reshape work in ways no one has intentionally designed.
The first skill managers need is not prompt writing. It is judgment.
AI can draft, summarize, analyze, suggest, compare, and automate. But it can also hallucinate, flatten nuance, reflect bias, and mishandle sensitive information. AI often produces confident but incomplete answers. Managers must be able to ask, “Is this output good enough to use?” and “What still requires human review?”
This is especially important because employees may begin using AI before they fully understand its limits. The risk is not just that people will avoid AI. The risk is that they will use it casually, copy the output, and move on.
Managers should develop the habit of reviewing AI-generated work through a simple lens:
What is accurate?
What is missing?
What assumptions need to be tested?
What decision requires human accountability?
A good development activity is to have managers review AI-generated work together. Use examples such as project updates, customer responses, meeting summaries, job descriptions, or business analysis. Ask the group to identify what is useful, what is risky, and what still requires human judgment.
This builds practical discernment. It also reinforces an important standard: AI output is not the final answer. It is a starting point for better thinking.
Some uses of AI are clearly low risk. Others are clearly inappropriate. But many situations fall into a gray area.
Performance management is one of the clearest examples.
Using AI to improve the clarity of a general feedback statement may be appropriate, depending on the tool and the organization’s policies. But uploading months of employee-specific performance notes into an AI tool is very different.
The issue is not just confidentiality. It is also judgment.
AI may summarize what appears most frequently in the notes rather than what matters most. It may overemphasize negative incidents, miss improvement over time, strip away context, or produce a polished appraisal that sounds fair but reflects incomplete or poorly weighted information.
That’s unfair to the employee and creates risk for both the manager and the organization.
A safer approach would be:
Do not enter identifiable employee performance notes into an unapproved AI tool.
Use approved internal tools only when organizational policy allows it.
Remove names, personal details, and sensitive context when appropriate.
Use AI to improve wording or structure, not to make the evaluation.
Keep the manager accountable for the final assessment.
Require human review for anything related to performance, discipline, promotion, compensation, or termination.
This distinction matters. The goal is not to ban AI from every aspect of performance management. The goal is to prevent managers from unintentionally outsourcing sensitive judgment or exposing confidential information in the name of efficiency.
Managers need to learn how to ask:
What information am I providing?
Is it confidential, sensitive, or employee-specific?
Is this an approved tool?
Am I using AI to support my thinking, or replace my judgment?
Could this output influence someone’s pay, role, reputation, or employment?
Would I be comfortable explaining this use of AI to HR, Legal, my employee, or my executive team?
That last question is especially useful. If the answer is no, the manager should pause.
Another mistake organizations will make is layering AI on top of poorly designed processes. When that happens, the organization may move faster, but not necessarily better. AI can accelerate confusion just as easily as it can accelerate clarity.
Managers will need to examine how work actually gets done. They will need to identify where AI can help and where it creates risk. They will need to ask:
Which tasks are repetitive or time-consuming?
Where do people need better information?
Where are handoffs breaking down?
Where is human judgment essential?
Where do we need review checkpoints?
What work should disappear entirely?
This is where managers become essential. Executive leaders may set the AI strategy, IT may provide tools, and Legal or HR may define guardrails. But managers are closest to the day-to-day work. They can see where AI actually fits.
A practical development exercise is to have each manager map one recurring workflow. For example, preparing client updates, responding to service issues, conducting performance reviews, onboarding employees, creating reports, or planning meetings. Then identify which parts of the workflow can be assisted by AI, which should remain human-led, and which require approval before use.
The goal is not “use AI more.” The goal is better work.
AI rewards clear thinking. A vague request produces a vague result. A poorly framed problem produces an answer that may sound useful but miss the point.
That means managers need to become much better at defining the problem before asking AI, or anyone else, to help solve it.
This includes clarifying the desired outcome, audience, constraints, risks, context, available data, and definition of success. Interestingly, this is not only an AI skill. It is a core leadership skill. AI simply makes the weakness more visible.
Managers can develop this by using the same structure for AI prompts that they should already use when delegating work (a filled-in example follows the list):
Here is the goal.
Here is the audience.
Here is the context.
Here are the constraints.
Here is what good looks like.
Here is what to avoid.
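To make the structure concrete, here is one way a manager might fill it in. The project and details are invented for illustration:

Here is the goal: a one-page status update on the website redesign for next week’s steering meeting.
Here is the audience: senior leaders who have not followed the day-to-day work.
Here is the context: we are two weeks behind because a vendor deliverable slipped.
Here are the constraints: under 300 words, plain language, no confidential client details.
Here is what good looks like: honest about the delay and clear about the recovery plan.
Here is what to avoid: jargon, blame, and commitments we have not confirmed.

A request framed this way tends to produce usable output on the first pass, whether it goes to an AI tool or to a team member.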
When managers improve problem framing, they improve delegation, decision-making, communication, and coaching. AI becomes another reason to develop a skill leaders already needed.
AI creates practical questions, but it also creates emotional ones.
Employees may wonder whether their skills are becoming obsolete. They may feel embarrassed that others are further ahead. They may worry about using AI incorrectly. They may resist it because they see it as a threat to quality, identity, or job security.
Managers need to be able to normalize learning, acknowledge uncertainty, and create a safe environment for experimentation. They should not dismiss concerns with shallow reassurance. Instead, they need to help people separate realistic risks from vague fears.
Useful manager language might sound like this:
“Some parts of our work will change. Some tasks may become easier, faster, or less valuable. But your judgment, relationships, accountability, and ability to solve real problems still matter. Let’s learn how to use these tools wisely.”
Managers should practice conversations around questions such as:
Am I going to be replaced?
What am I allowed to use AI for?
How do I know whether the output is accurate?
Will expectations keep increasing because AI makes things faster?
What skills do I need to build now?
These are leadership conversations, not technical support tickets.
Executives have a different responsibility. They need to create the conditions in which managers can lead AI adoption well.
First, executive leaders must avoid treating AI as a pure efficiency play. If the only message employees hear is “do more with less,” they will naturally become defensive. AI strategy should connect to better decisions, better customer value, better employee experience, and better use of human capability.
Second, executives need to clarify governance without creating paralysis. Employees need usable guidance, not vague warnings. Managers need to know what is approved, what is prohibited, and where they have room to experiment.
Third, executives need to distinguish between categories of AI use. For example:
Low-risk use: “Help me rewrite this general project update for clarity.”
Higher-risk use: “Analyze these anonymized engagement themes.”
Potentially inappropriate use: “Summarize these 10 months of employee performance notes and write the appraisal for me.”
That kind of distinction helps managers make better decisions in real time.
Fourth, executives need to invest in manager capability, not just tool deployment. Training should not stop at how to use a platform. Managers need practice applying AI to real workplace scenarios, especially where confidentiality, judgment, fairness, and accountability are involved.
Finally, executives need to measure more than adoption. Usage alone does not prove value. Better questions include:
Are decisions improving?
Are workflows becoming simpler?
Are employees clear on responsible use?
Are customers experiencing better outcomes?
Are managers helping people adapt?
Are productivity gains being reinvested wisely?
The managers who thrive in the AI-heavy workplace will not be the ones who simply use the newest tools. They will be the ones who help people think better, work better, decide better, and adapt faster.
The most important manager skills will be judgment, discernment, workflow design, problem framing, coaching through uncertainty, responsible AI use, and continuous learning.
AI may change the work. But managers will still shape the climate, expectations, decisions, and behaviors that determine whether the change helps or harms the organization.
The central leadership question is not, “How do we get people to use AI?”
The better question is:
“How do we use AI to improve the quality, speed, and humanity of our work without losing judgment, trust, or accountability?”
