Sunday, March 1, 2026

AI Can Drive, But Humans Hold the Map

For years, the reassuring phrase in AI conversations has been “human in the loop.” It suggests oversight, control, and safety. The machine works, the human checks, and the organization remains protected. In earlier stages of automation, that model made sense. Systems were narrow, tasks were defined, and errors were easier to detect and reverse.

But AI is no longer confined to narrow tasks. It now generates content, recommends decisions, evaluates risk, orchestrates workflows, and increasingly operates with a degree of autonomy that feels less like a tool and more like a participant. In that world, simply placing a human at the end of the process to review output is not enough. The real shift required today is from “human in the loop” to “human in the lead.”

Being in the loop implies reaction. Being in the lead implies direction.

When humans are merely in the loop, they validate decisions already shaped by algorithmic logic. The system frames the problem, processes the data, and proposes the outcome. The human approves or overrides. In theory, that preserves control. In practice, however, humans often defer to confident systems. High accuracy rates, persuasive outputs, and performance dashboards subtly influence behavior. Over time, review becomes routine. Overrides decline. Accountability blurs.

By contrast, when humans are in the lead, the posture changes fundamentally. Leadership defines the objective before optimization begins. Leaders set the risk appetite, determine acceptable trade-offs, and establish guardrails within which AI operates. They decide what success means and what constraints matter. The system supports those decisions, but it does not define them.

This distinction becomes clearer when examining real-world deployments.

Consider a large property and casualty insurer that introduced AI into its claims processing workflow. The goal was straightforward: accelerate claims triage, estimate damage costs from submitted images, and flag potential fraud. The implementation was technically successful. Processing times dropped. Operational efficiency improved. Straight-through claims increased.

On paper, it was a model transformation initiative.

Yet within months, issues began to surface. Legitimate customers were flagged as suspicious because the fraud model over-indexed on certain claim patterns. Cost estimates skewed low because the training data reflected pre-inflation repair averages. Claims adjusters, faced with highly confident AI recommendations, rarely overrode the system, even when contextual cues suggested they should. The organization had technically preserved “human in the loop” oversight. Adjusters could intervene. But culturally, the AI had begun to lead.

The system framed the judgment. Humans validated it.

Recognizing the drift, leadership reframed the model around a human-in-the-lead philosophy. Instead of asking adjusters to confirm AI outputs, they clarified that AI recommendations were analytical inputs, not decisions. Senior leaders explicitly redefined risk tolerance thresholds and required contextual reasoning for acceptance of AI estimates in complex cases. Explainability tools were introduced so adjusters could see which variables influenced cost projections and fraud flags. Monthly review forums were established to assess model drift, inflation impact, and anomaly clusters. Incentives were redesigned to balance speed with fairness and accuracy rather than throughput alone.

The difference was subtle in workflow but profound in accountability. The AI continued to process data at scale. But strategic direction, risk calibration, and ethical judgment returned visibly to human leadership.

This is the deeper reason the shift matters. AI systems optimize based on historical patterns. Humans interpret shifting realities. Markets change. Regulations evolve. Social expectations move. Ethical lines sharpen. Context expands faster than training data can adapt. If humans are only reviewing outputs, they are reacting to yesterday’s assumptions. If they are leading, they are actively redefining tomorrow’s boundaries.

As AI systems grow more capable and more embedded in everyday operations, the psychological dynamic becomes even more important. Humans tend to trust systems that demonstrate consistency and confidence. The danger is not that AI makes decisions; it is that humans unconsciously relinquish strategic ownership.

Human in the lead does not mean slowing innovation or second-guessing every output. It means clarity of responsibility. It means explicit ownership of outcomes. It means designing governance structures where escalation is normal, recalibration is routine, and objectives are human-defined before they are machine-optimized.

The organizations that will navigate AI most successfully will not be the ones that automate the fastest. They will be the ones that remain unmistakably accountable. They will treat AI as a powerful instrument, capable, efficient, and transformative, but still an instrument.

Because in the end, accountability cannot be outsourced. Leadership cannot be automated.

And progress without stewardship is simply acceleration without direction.

#AI #Leadership #ResponsibleAI #DigitalTransformation #Governance #FutureOfWork

How Services Firms Should Pivot in the Age of AI: The Case for Customer Truth

Every IT Services, Engineering Services, and BPO firm in the world right now is asking the same question: how do we respond to AI?

The honest answer, from the most credible voices in the industry (Ethan Mollick, Sir Demis Hassabis, BCG, Gartner, ISG, HFS), is that we don't know yet. They disagree on the pace of disruption, the role of offshore delivery in an AI-automated world, and whether competitive advantage will ultimately belong to the firm with the best AI tooling, the best relationships, the best domain expertise, or the best commercial model.

That uncertainty is not a failure of analysis. It reflects the genuine complexity of a transition that has no historical precedent at this speed or scale. Analyst frameworks, academic research, and competitive intelligence are all essential inputs. None of them, individually or collectively, is sufficient.

So if the world’s best thinkers are working from incomplete information, how can you as a Services executive pivot with greater confidence?

The Strategy Room Has a Blind Spot

The pressure to act is real. But there is a dangerous assumption embedded in most AI strategy discussions: that leaders already know what their most important customers need them to become.

They often don’t. They hear from customers regularly through executive briefings, account reviews, Voice of the Customer programs, and satisfaction surveys. The signals look strong on paper. But there’s a meaningful difference between customer feedback and Customer Truth.

Customer feedback is what customers say in formal settings where they are conscious of the relationship, the audience, and the consequences of candor. Customer Truth is what they actually believe, what they will say when no one from your organization is in the room, and what strategic decisions they are already making that your firm doesn’t know about yet. The gap between those two things is where AI strategy goes wrong.

One Critical Input That Most Firms Are Missing

Direct, unfiltered customer dialogue belongs in the AI strategy toolkit alongside analyst research, competitive intelligence, and internal capability assessments. Not as the only answer, but as the input that grounds everything else in reality.

Customer Truth doesn’t travel through formal channels. Hierarchy filters it, incentives soften it, and risk aversion removes the sharpest edges before it reaches anyone with the authority to act. It only surfaces when senior customers are in the right setting: structured for candor, genuinely peer-to-peer, with the commercial relationship temporarily set aside.

There is also a dimension that analyst reports cannot capture: AI disruption is not landing uniformly. A CIO managing enterprise platforms is navigating an entirely different reality than a Chief Product Officer sourcing engineering capabilities, or the EVP of Shared Services in a manufacturing organization. The implications for what they need from a partner in terms of talent, tooling, risk posture, commercial arrangement, and co-investment are materially different. An AI pivot strategy built from a single industry lens is almost certainly incomplete for a significant portion of your most strategic accounts.

The only way to surface that diversity is to deliberately convene it, bringing together a cross-section of your most influential customers to speak openly, challenge each other’s assumptions, and pressure-test your direction alongside everything else you’re learning.

The Firms That Will Get This Right

The firms that navigate this transition most successfully will not be the ones with the most confident internal strategy, or those who leaned hardest on external research. They will be the ones who triangulated market intelligence with direct, unfiltered dialogue from their most strategic customers.

In a moment of genuine uncertainty, the competitive advantage belongs to the firm that asks better questions, of the right people, in the right setting, and treats those answers as a critical input to a strategy that is still, by necessity, being built.

Your most strategic customers know more about where they are headed than any analyst report. The question is whether your organization has built the structures to truly hear them.

#CustomerTruths #CustomerAdvisoryBoards #CABs #ITServices #BPO #EngineeringServices 

Kanji: The Traditional Postbiotic Miracle

This summer, don’t just hydrate. Most of us talk about probiotics, but few of us understand the magic of postbiotics. I want many of you to rediscover something our grandmothers already knew: the science behind traditional kanji.

We know:
• Prebiotics feed good bacteria.
• Probiotics are good bacteria.

But postbiotics are the powerful compounds created by good bacteria, including:
→ Short-chain fatty acids (SCFAs)
→ Certain B vitamins
→ Lactic acid
→ Anti-microbial peptides
→ Anti-inflammatory compounds

And the best part is that these compounds can be created naturally at home. Here’s a simple 3-day summer kanji recipe:

- Take a 2-litre glass jar.
- Add 1 small beetroot (washed, peeled, chopped).
- Add 1 amla (washed, chopped).
- Add 2 tsp crushed mustard seeds.
- Add 1½ tsp sea salt.
- Fill with water.
- Cover with a white cloth and secure with a rubber band.
- Keep in sunlight for 3 days.
- Stir every morning with a wooden spoon.
Your kanji is ready.

If you’re concerned about sugar spikes, try it with carrots, preferably black carrots rather than orange ones. Try creating a batch with your family and friends; there’s something beautiful about preparing health together. Drink it for 3–4 days on an empty stomach in the morning, notice how your gut feels, and tell me about your experience in the comments.

Let’s bring back tradition backed by science.
