Tuesday, March 31, 2026

5 Steps To Enterprise AI Value

Today is the last day of the financial year, and traditionally a day of stock-taking for many businesses. So what do we make of AI adoption so far?

90% of enterprises have adopted AI. Enterprise AI spend has grown from $1.7B to $37B in two years. Worker access to AI rose 50% last year. Productivity gains are real and measurable. All of that is quite impressive.

But…

Two-thirds of organizations are still stuck in pilot purgatory, unable to scale what they started. Only 8.6% have AI agents running in production (Recon 2026). And just 34% of leaders say they're truly reimagining how their business works (Deloitte, 2026).

What we know: The technology is rarely the problem. The blockers are almost always human: underdeveloped decision architecture, governance delegated to tech teams, and leaders who champion AI in town halls but don't model it in their own workflows.

So how do we drive depth and business outcomes with AI?

Having worked with a few dozen clients on their AI strategies and plans, and seen what is being implemented and scaled, here are five steps that I believe drive real outcomes.

(1) Find data-rich environments: not every area or problem has the underlying data ready. But some do. For example, your technology stack, or your HR or Finance functions, are usually flush with structured, ready-to-process data. Most processes in these environments are also semi-automated, so there is a level of process maturity. Conversely, you might think about CRM or loyalty data, but these are often managed through third parties, or held in multiple systems, and not easily converted into AI-ready data. A high percentage of the AI initiatives we see moving forward to value are in these three areas.

(2) Find areas with effort headroom: AI adoption may be curtailed by the lack of participation of colleagues who fear for their jobs. Picking areas where, even with automation, there is a significant backlog of work means that AI will simply enable more work to get done rather than create redundancies. One of our AI programs recently saved a government department thousands of hours, but this is just enabling them to catch up on a significant backlog of work, so there is no question of job loss.

(3) Solve a specific problem or address a specific opportunity: a lot of AI projects start by applying AI to a broad area, but the specific problem is not well defined, and nor is the specific outcome being sought. One of our most viable projects involved bringing down mean time to respond for critical incidents to a very clear target. You could look to speed up your recruitment or onboarding, or enable more straight-through processing in your finance function. Whatever it is, it should be (a) measurable and (b) have a clear line to your topline/bottom-line goals, i.e. why it matters. And you need to ensure that the focus always stays on delivering the outcome. It’s likely to require more than just the AI component.

(4) Deliver to sharp 3-month timelines for velocity: any project that takes more than three months in today’s AI environment risks obsolescence. No matter how complex the problem or how significant your goal, it’s critical that delivery is chunked into 3-month deliverables, with each period delivering tangible value by way of features, benefits, and outcomes. And here’s a tip: the actual AI component may take only a month or less of that time. The rest is systems access, testing, implementation, and change management.

(5) Work to a roadmap that is revisited regularly: your 3-month deliverable needs to be backed by a product roadmap. Every development needs to be seen as a product, which means accepting that the solution will constantly evolve, add new features and ideas, and respond to changing needs and available AI capabilities. The project mindset is a dangerous one in this world. It’s too rigid, too complex, and far too dependent on predictability. At any point, you should have a roadmap that is sharply defined for the next quarter, and increasingly indicative for future periods.

Ensuring AI adoption across the organization is a good foundation, but delivering outcomes is where the value lies. Over the next 12 months, the differentiator for enterprise AI will be the ability to move quickly, deliver specific value, and make this a repeatable, institutionalized approach.

Managing Diabetes: Details

Whatever your doctor has prescribed, don’t just take it; understand where it works. Every diabetes medication acts on a different organ and comes with a different trade-off.

Here’s a quick, practical breakdown I often share with patients:

1. Brain + gut + pancreas →
GLP-1 agonists like semaglutide, liraglutide. They reduce hunger, slow gastric emptying, and improve insulin response.
Yes, weight loss and sugar control improve but nausea and long-term tolerability can be challenging for many.

2. Liver → Metformin
The old, reliable one.
Reduces glucose production from the liver.
Mild GI side effects, but decades of safety and effectiveness.

3. Pancreas → Insulin secretagogues
They push your pancreas to release more insulin:
Sulfonylureas (glimepiride, gliclazide)
DPP-4 inhibitors (more glucose-dependent, safer)
Meglitinides (short-acting)

They work but over time, they can exhaust the pancreas... Use wisely.

4. Kidneys → SGLT2 inhibitors like empagliflozin, dapagliflozin.
They remove excess glucose through urine.
Bonus: strong heart and kidney protection.
Watch out for urinary infections.

5. Intestine → Alpha-glucosidase inhibitors
They slow carbohydrate absorption.
Expect bloating and gas in some cases.

6. Muscle → Insulin sensitizers (pioglitazone)
Improve how your body uses insulin.
But can cause fluid retention in some individuals.

Now here’s the part most people miss:
↳ Most medications reduce HbA1c by ~1–1.5%
↳ A committed lifestyle shift can reduce it by 2–3% in 3 months

Medication is support. Lifestyle is the leverage. 
Use both but don’t outsource your health entirely to a prescription.

#DiabetesReversal #MetabolicHealth #LifestyleMedicine #InsulinResistance #ChronicDiseaseManagement

Data Engineer: The Role vs. the Myth

Being a Data Engineer feels glamorous on paper, until reality hits. The job description said "build data pipelines." It didn't mention the rest.

Think of it like being the unseen plumber in a luxury skyscraper. Everyone loves the view from the penthouse, but if the pipes burst at 3 AM, guess who’s crawling in the basement?

Here's what a data engineer's week actually looks like.

What your job description skips:

  • Pipelines fail at 2 AM. You fix what nobody saw break. 
  • "Active user" means 6 different things. You pick one and defend it. 
  • Bad data comes in fast. Saying no is half the job. 
  • You're building for 10x the load. Not for today. 
  • The boring stuff (lineage, backfills, docs) is actually the job.

What hits you later:

  • Most time goes into fixing, not building 
  • “Clean data” is rare, assumptions fail often 
  • You’ll block more requests than you approve 
  • Context beats perfect code 
  • Fancy tools won’t fix a messy base

The invisible part:

Late fixes. Quiet improvements. Small changes that stop bigger problems. No one notices. Things just break less. The best data engineers don't just move data. They reduce ambiguity. Protect trust. Make confident decisions possible.

What I wish someone told me sooner:

Master the boring stuff first; it pays dividends longer than shiny new tools. Document like your future self is hungover and angry.

Saying “no” politely and early prevents heroic 80-hour weeks later.

Science of 7 Chakras

Ancient Wisdom meets Modern Anatomy: The Science of Chakras?

Ever wondered why we feel "butterflies" in our stomach (Solar Plexus) when nervous, or a "lump" in our throat when we’re holding back words?

While the 7 Chakras are often viewed as purely spiritual, they correlate strikingly with our Endocrine System, the network of glands that produce the hormones regulating everything from our mood to our metabolism.

The Heart Chakra aligns with the Thymus, the heart of our immune system.

The Throat Chakra aligns with the Thyroid, regulating how we speak and grow.

The Root Chakra aligns with the Adrenals, our "survival" center.

The Takeaway: When we talk about "balancing energy," we are often talking about regulating our nervous system and hormonal health. Whether you prefer the term "alignment" or "homeostasis," the goal is the same: a body and mind in sync.

How do you keep your "systems" in check? Meditation, movement, or maybe a bit of both?

#MindBodyConnection #HolisticHealth #EndocrineSystem #WellnessAtWork #ScienceAndSpirituality

Humans Hit Pause While AI Hits ‘Run’

In recent months, a new kind of protest has begun to take shape, not against governments, not against corporations in the traditional sense, but against something far more abstract and powerful: the rapid, unchecked acceleration of artificial intelligence. What began as scattered concerns among researchers and ethicists has evolved into visible demonstrations outside the offices of major AI companies like OpenAI and xAI. Protesters are rallying around a simple but provocative demand: slow down.

At the heart of these protests is a growing unease. AI is no longer confined to narrow tasks or experimental labs; it is writing code, generating media, making decisions, and increasingly acting autonomously. For many, the pace of this transformation feels less like progress and more like a runaway train. The concern is not just about job displacement or misinformation, though those remain significant, but about something deeper: the loss of human oversight.

What makes this movement particularly compelling is its similarity to earlier global concerns, such as climate change. In both cases, the warning signs are visible, the potential consequences are massive, and yet the systems driving acceleration (economic competition, geopolitical rivalry, and technological ambition) make it difficult to slow down. Protesters argue that AI development has entered a phase where incentives favor speed over safety, and innovation over accountability.

Critics of the protests often point out that slowing AI development could hinder progress, especially in areas like healthcare, education, and climate modeling. But the protesters are not necessarily anti-AI. Rather, they are calling for governance frameworks that match the scale of the technology. They want transparency in how models are trained, clarity on how decisions are made, and safeguards against misuse.

One of the central fears fueling the protests is the emergence of “agentic AI”: systems that can act independently, execute tasks, and make decisions with minimal human input. While this capability opens doors to efficiency and automation, it also introduces new risks. What happens when an AI system makes a flawed decision at scale? Who is responsible? And how do you intervene in a system that is designed to operate autonomously?

A real-world example that highlights these concerns can be found in the financial services industry. A large fintech firm deployed an AI-driven loan approval system designed to streamline credit decisions. Initially, the system improved efficiency dramatically, reducing approval times from days to minutes. However, over time, discrepancies began to emerge. Certain demographic groups were being disproportionately rejected, not due to explicit bias, but because the model had learned patterns from historical data that reflected systemic inequalities.

The issue escalated when regulatory scrutiny exposed the lack of transparency in the model’s decision-making process. The company faced reputational damage, legal challenges, and a loss of customer trust. The solution required a complete overhaul: introducing explainable AI frameworks, conducting bias audits, and implementing human-in-the-loop systems to review critical decisions. What started as a push for efficiency became a lesson in accountability.

This example mirrors the broader concerns raised by protesters. It is not that AI should not be used, but that its deployment must be thoughtful, transparent, and aligned with societal values.

Another dimension of the protests is geopolitical. Nations are racing to dominate AI, viewing it as a strategic asset akin to nuclear technology or space exploration. This competitive pressure makes it unlikely that any single country or company will voluntarily slow down. Protesters, therefore, are increasingly calling for international agreements, something akin to digital arms control, to ensure that AI development remains safe and cooperative rather than adversarial.

Despite the urgency of these concerns, the protests face an uphill battle. AI development is deeply embedded in economic growth and innovation pipelines. Companies are investing billions, and the momentum is difficult to reverse. Yet, the very existence of these protests signals something important: society is beginning to engage with AI not just as a tool, but as a force that needs governance.

In many ways, this moment represents a turning point. The question is no longer whether AI will shape the future; it already is. The real question is whether humanity can shape AI in return.

#AI #ArtificialIntelligence #TechEthics #FutureOfWork #AIGovernance #Innovation #DigitalTransformation

AI’s New Biology Problem

There was a time when biology moved at the pace of observation. You looked through a microscope, ran experiments, and slowly built an understanding of life’s machinery. Then artificial intelligence arrived and quietly flipped the script.

Today, systems like AlphaFold 2 and its successors don’t just analyze biology; they generate it. They predict protein structures from raw sequences with astonishing accuracy, solving what was once considered one of biology’s grand challenges: the protein-folding problem.

But prediction was only the beginning. The real shift came when scientists realized: if AI can understand proteins, it can invent them.

And it has.

We are now in an era where algorithms design proteins that have never existed in nature: novel enzymes, synthetic antibodies, and entirely new biological structures. These are not incremental tweaks to evolution’s work. These are leaps into a space evolution never explored. Researchers estimate that natural proteins represent less than 1% of all possible protein configurations, leaving a vast, uncharted design space now being explored by machines.

This is both exhilarating and deeply unsettling.

Because increasingly, scientists can use these AI-generated designs… but cannot fully explain them.

The core tension lies in a simple paradox: AI accelerates biological discovery faster than human understanding can keep up.

Modern generative models can output protein sequences that fold correctly and perform useful functions. Yet, the reasoning behind why a particular sequence works often remains opaque. These systems operate as black boxes, learning statistical patterns from massive datasets rather than explicit biological rules.

In traditional biology, explanation comes first, application second. With AI, that order is reversing.

We are entering a phase where function precedes understanding.

Even more striking is the gap between digital biology and real-world biology. AI models typically predict static protein structures: clean, stable, idealized forms. But real proteins are dynamic, constantly shifting and interacting within complex cellular environments.

A protein that looks perfect in silico may fail in reality, misfolding, degrading, or behaving unpredictably. The result is what researchers call the “design–experiment gap”: a widening disconnect between what AI suggests and what biology accepts.

And yet, despite these limitations, progress continues at breakneck speed.

Consider the broader implications. AI systems are now being extended beyond proteins to model interactions across entire biological systems: DNA, RNA, small molecules, and more. The ambition is staggering: to simulate life itself, perhaps even entire cells, inside a computer.

But each layer of complexity adds another layer of opacity.

We are not just struggling to interpret individual proteins; we are approaching systems whose behavior may be fundamentally beyond intuitive human reasoning.


A compelling example comes from the pharmaceutical industry, where companies are using AI to design drugs based on predicted protein structures.

In theory, the workflow is elegant:

  • Use AI to predict a disease-related protein structure
  • Generate molecules that bind precisely to it
  • Fast-track drug discovery

In practice, the challenges have been sobering.

AI-designed drug candidates often fail during laboratory validation. Some bind poorly in real biological environments; others exhibit unexpected side effects or instability. The root issue is the same: models capture structural possibilities but struggle with dynamic biological realities and context-specific interactions.

The issue:

  • Over-reliance on static predictions
  • Lack of interpretability
  • Poor translation from simulation to experiment

The solution emerging in the industry:

  • Hybrid pipelines combining AI with experimental feedback loops
  • Iterative “design–test–learn” cycles
  • Integration of molecular dynamics simulations to model real-world behavior
  • Development of explainable AI tools to uncover why a molecule works

In short, the industry is learning that AI is not replacing biology; it is becoming a collaborator that still needs human-guided validation.

 

And this brings us back to the central question: Are we inventing biology faster than we can comprehend it?

The answer, increasingly, appears to be yes. But that may not be a failure; it may be a transition.

Just as early engineers built machines before fully understanding thermodynamics, we may be entering a phase of operational biology: using systems effectively before we fully understand them. Over time, new tools (explainable AI, better simulations, richer datasets) may close the gap.

Or they may not.

Because there is another possibility: that biology, at sufficient complexity, resists full human comprehension, and that AI becomes not just a tool for discovery, but an intermediary between us and life itself.

A translator for a language we can no longer fully speak.

#ArtificialIntelligence #Biotech #DrugDiscovery #DeepTech #Innovation #FutureOfWork #LifeSciences #AI #Healthcare #Research

Strength. Stamina. Flexibility: Mix Up Your Exercise Routine

Walking 45 minutes daily won't reverse diabetes. Before you commit to that daily walk routine, ask your doctor this: how many patients have completely reversed diabetes by walking 45 minutes daily for years?

The answer might surprise you. Many doctors still believe diabetes can't be reversed, which is a dangerous myth.

Here's what I've observed in practice:
→ Patients walking 45 minutes daily often damage their knees
→ Many develop accelerated neuropathy
→ They miss the three critical components: strength, stamina, and flexibility

Your thigh and calf muscle size is the #1 determinant of longevity and diabetes reversal.

Why? Muscles consume glucose and fat continuously for 24 hours.

My 3-2-1 formula:
3x weekly → Strength training (leg press, squats, calf raises)
2x weekly → 45-minute cardio sessions
1x weekly → Yoga for flexibility, especially hamstrings

Work toward leg pressing your body weight over time.

"But I'm over 60, it's too late..." Wrong!

There are 85-year-olds completing Ironman triathlons: a 3.8 km swim, 180 km of cycling, and a 42.2 km run, all within 17 hours. One started at age 60.

Stop accepting limitations. Your body is capable of extraordinary things at any age.

Don't fall for the "daily walk solves everything" myth. True health requires strength, stamina, and flexibility in the right proportions.

#DiabetesReversal #HealthMyths #Longevity #StrengthTraining #SuperHealth

Body's Insulin Curve

Most people never hear about AUC (the area under the curve). And that silence is exactly why your fasting, low-carb, or high-protein diet has stopped working.

AUC = the total insulin your body secretes throughout the day. Not just the spike... The entire curve.
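Since AUC is just the integral of insulin level over time, a tiny trapezoidal-rule sketch makes the idea concrete. The sample times and levels below are invented purely for illustration; real insulin curves come from lab measurements:

```python
# Trapezoidal-rule approximation of AUC from sampled insulin levels.
# All numbers below are made up for illustration, not clinical data.

def auc(times, values):
    """Sum the trapezoid areas between consecutive samples."""
    total = 0.0
    for i in range(len(times) - 1):
        total += 0.5 * (values[i] + values[i + 1]) * (times[i + 1] - times[i])
    return total

hours  = [0, 1, 2, 4, 8, 12]      # time since waking
levels = [5, 40, 20, 10, 35, 8]   # hypothetical insulin level

print(auc(hours, levels))  # -> 258.5
```

The point of the calculation: two eating patterns can produce the same peak spike yet very different totals, because the area also depends on how long levels stay elevated between meals.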

Here's what's quietly happening in each case:

1. Secular fasters → Glucose spikes stay low. Insulin stays controlled. Good foundation.
2. Intermittent fasters & 2-meal plans → Meals get heavier to compensate. Bigger spikes. More insulin across the day. AUC rises... results stall.
3. Low carb + high protein (eggs, chicken, fish) → Here's the twist most don't know: protein also triggers insulin, roughly 40–50% of what carbs do.
Lower spikes, yes... but not zero.

The fix isn't another diet trend.

It's matching your diet to your metabolic phase.
→ Still carrying fat? You need a catabolic protocol first.
→ Ready to build muscle? Then anabolic support kicks in.
→ Not sure? Especially if you're a "thin-fat Indian" (lean outside, fat in the liver and blood as triglycerides), this distinction is critical.

#MetabolicHealth #DiabetesReversal #IntermittentFasting #FatLossScience #InsulinResistance

Tuesday, March 24, 2026

LLM Prompt Routing Architecture

Prompt routing architecture determines response quality, latency, and reasoning depth in modern AI systems.


Four execution modes exist, each optimized for a different workload profile.

  • Instant mode routes directly to a fast inference model. Best for quick factual queries, autocomplete, and lightweight tasks where latency matters more than deep reasoning.
  • Auto mode sends prompts through a router that selects the optimal model path. Routing decisions depend on prompt complexity, token length, and reasoning signals detected in the input.
  • Thinking mode activates structured reasoning chains. Intermediate reasoning steps are generated internally before producing the final response. This improves accuracy for logic, math, debugging, and multi-step analysis.
  • Pro mode runs multiple parallel reasoning paths. A reward model scores candidate outputs and selects the highest quality answer. This approach resembles ensemble inference and significantly boosts reliability for complex problem solving.
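Pro mode's selection step is essentially best-of-n sampling. A toy sketch of the idea follows; `generate` and `reward` here are deterministic stand-ins I invented for illustration, whereas real systems sample diverse completions and score them with a trained reward model:

```python
# Toy best-of-n selection, as in "Pro mode": produce several candidate
# answers, score each with a reward model, keep the highest scorer.
# generate() and reward() are hypothetical stand-ins.

def generate(prompt: str, seed: int) -> str:
    # Deterministic stand-in: real systems sample n diverse completions.
    return f"candidate-{(len(prompt) * (seed + 1)) % 7}"

def reward(prompt: str, candidate: str) -> float:
    # Stand-in scorer keyed off the candidate suffix; real reward
    # models are trained networks scoring helpfulness/correctness.
    return float(candidate.split("-")[1])

def best_of_n(prompt: str, n: int = 4) -> str:
    """Generate n candidates, return the one the reward model prefers."""
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=lambda c: reward(prompt, c))

print(best_of_n("hi", 4))  # -> candidate-6
```

The reliability gain comes from the scorer: even if most samples are mediocre, only the best-scoring one is returned.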

Safety layers then evaluate the selected response using topic classifiers and reasoning monitors before delivery to the interface.

Example:
- A simple question like “capital of Japan” is handled by Instant mode.
- A request like “optimize a distributed training pipeline with cost constraints” is routed to Thinking or Pro mode because it requires planning, trade-off analysis, and multi-step reasoning.
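A toy router along these lines might look like the sketch below. The mode names come from the description above, but the keyword heuristics and token thresholds are purely illustrative assumptions; production routers typically use learned classifiers, not regexes:

```python
# Toy prompt router. Mode names follow the post; the keyword
# heuristics and thresholds are illustrative assumptions only.
import re

REASONING_HINTS = re.compile(
    r"\b(optimi[sz]e|prove|debug|trade.?off|plan|multi.?step|derive)\b",
    re.IGNORECASE,
)

def route(prompt: str) -> str:
    """Pick an execution mode from rough complexity signals."""
    n_tokens = len(prompt.split())
    n_hints = len(REASONING_HINTS.findall(prompt))
    if n_hints >= 2 or n_tokens > 200:
        return "pro"       # parallel reasoning paths + reward-model pick
    if n_hints == 1 or n_tokens > 60:
        return "thinking"  # structured intermediate reasoning
    if n_tokens <= 12:
        return "instant"   # fast model, latency first
    return "auto"          # defer to a learned router

print(route("capital of Japan"))  # -> instant
```

The same prompt about optimizing a pipeline under cost constraints would trip multiple reasoning hints and land in "pro", matching the example above.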

Understanding routing systems is essential for building production grade AI platforms. Performance does not depend only on the model. It depends on orchestration, evaluation, and safety layers working together.

#AI #GenAI #LLM #SystemDesign #MachineLearning #AIArchitecture #DeepLearning #TechExplained

Macrohard: When Your Entire Company Becomes an AI Intern

There’s a quiet but radical shift underway in how we think about organizations. For decades, companies have been structured around people, processes, and software tools. But what if the company itself became the software?

That’s the premise behind Macrohard, also called Digital Optimus, a joint artificial intelligence initiative between Tesla and xAI, led by Elon Musk. It’s not just another AI assistant or automation tool. It’s an attempt to replicate the functions of entire companies using coordinated AI systems.

At a glance, this might sound like an evolution of existing AI copilots. But it’s more ambitious, and more unsettling. Instead of helping employees do their jobs, Macrohard aims to be the employee, the manager, and in some cases, the entire department.

At the core is the architecture of a “digital company.” What makes Digital Optimus different is how it blends reasoning with action.

At the top sits Grok, the large language model developed by xAI, acting as a strategic “navigator” that understands goals, context, and decision-making. Beneath it operates a Tesla-built AI agent that doesn’t rely on APIs or integrations. Instead, it watches screens, interprets workflows, and interacts with software the way a human would: through keyboard inputs and mouse actions.

This dual-system approach mirrors how humans work:

  • One part thinks, plans, and reasons
  • The other executes in real time

The result is an AI that can theoretically log into tools, manage spreadsheets, respond to emails, write code, analyze dashboards, and coordinate workflows, without needing custom integrations. In principle, Musk claims, such a system could “emulate the function of entire companies.” That’s the leap: from automation of tasks → to simulation of organizations.

Historically, software has been layered inside companies: CRMs, ERPs, collaboration tools. Macrohard flips that model. The company itself becomes a programmable entity.

This has three profound implications:

  1. The boundary between human labor and digital labor begins to blur. If an AI can operate tools exactly like a human, the distinction between “automation” and “replacement” becomes thinner.
  2. Organizational design could compress dramatically. Entire middle layers (operations, coordination, reporting) are precisely the kinds of structured workflows AI excels at.
  3. Speed becomes a competitive weapon. A Digital Optimus system doesn’t sleep, doesn’t wait for meetings, and doesn’t suffer from communication lag. Decisions and execution collapse into a single continuous loop.

Consider the global customer support industry, particularly large SaaS companies.

The problem: Customer support at scale is messy. Companies face:

  • High operational costs with large support teams
  • Fragmented tools (ticketing systems, CRMs, knowledge bases)
  • Slow response times due to handoffs and escalation layers
  • Inconsistent quality depending on agent experience

Even with chatbots, most systems fail at complexity. They can answer FAQs but break down when workflows span multiple tools or require judgment.

The Macrohard-style solution: A Digital Optimus system wouldn’t just respond to queries; it would operate the entire support workflow.

Imagine this:

  • It reads a support ticket
  • Logs into the CRM
  • Checks user history
  • Identifies billing or technical issues
  • Executes fixes directly in backend systems
  • Sends a personalized response

All of this happens without APIs, just by interacting with software like a human would.

Because it observes and mimics real workflows, it can adapt across tools without needing custom engineering for each platform.
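As a caricature, the observe-reason-act loop described above could be sketched like this. Every name here is hypothetical: a real screen-driven agent would use vision models and OS-level keyboard/mouse control, not string matching on screen text:

```python
# Illustrative perceive -> reason -> act loop for a screen-driven agent.
# planner() stands in for the reasoning model ("navigator"); the
# executor here just records actions instead of driving real software.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "type", "click", "done"
    payload: str = ""  # text to type, element to click, etc.

def planner(goal: str, screen_text: str) -> Action:
    """Toy stand-in for the reasoning model deciding the next step."""
    if "ticket" in screen_text and "resolved" not in screen_text:
        return Action("type", "Issue reproduced; applying billing fix.")
    return Action("done")

def run_agent(goal: str, screens: list[str]) -> list[Action]:
    """One perceive-reason-act step per observed screen state."""
    trace = []
    for screen in screens:
        action = planner(goal, screen)
        trace.append(action)
        if action.kind == "done":
            break
    return trace
```

For example, feeding `run_agent` two successive screen states (an open ticket, then a resolved one) yields a "type" action followed by "done"; the interesting engineering lives in making `planner` robust across messy, changing interfaces.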

The outcome:

  • Reduced dependency on large support teams
  • Faster resolution times (minutes instead of hours)
  • Consistent quality across interactions
  • Continuous learning from real workflows

This isn’t theoretical. Early agentic AI systems are already moving in this direction, and Macrohard pushes that idea to its logical extreme.

The tension: power vs. practicality. As with most of Musk’s ideas, Macrohard sits at the edge of possibility and skepticism.

On one hand, the concept aligns with broader industry trends toward agentic AI: systems that don’t just generate outputs but take actions. Even competitors are exploring similar directions, signaling that this isn’t an isolated idea but part of a larger shift.

On the other hand, there are real challenges:

  • Reliability in complex, unpredictable workflows
  • Security risks when AI controls sensitive systems
  • Ethical concerns around workforce displacement
  • The difficulty of scaling from demos to full enterprise operations

Even within early discussions online, reactions range from excitement to skepticism, with some users calling it transformative and others dismissing it as overhyped.

That tension is important. Because if Macrohard succeeds even partially, it won’t just disrupt tools; it will redefine what a company is.

In conclusion, Macrohard, or Digital Optimus, is not just an AI product. It’s a provocative idea: that organizations themselves can be abstracted, replicated, and run as intelligent systems.

If the last decade was about software eating the world, this next phase might be about AI becoming the company that eats it.

#ArtificialIntelligence #FutureOfWork #ElonMusk #Automation #DigitalTransformation #AIRevolution #Leadership #BusinessStrategy

Monday, March 23, 2026

How the Body Reacts to Food Intake

There’s a big gap in how we understand food.

This post might stir things up a little… because today we’re not talking about sugar; we’re talking about insulin. Most conversations today revolve around calories and, sometimes, the glycemic index.

While both are important, they don’t give the complete picture of how the body responds to food.

One key aspect that often gets overlooked is the insulin index, which looks at how much insulin a food triggers, regardless of its effect on blood sugar.

Here are some foods that spike insulin the most (and may surprise you):

→ Whey isolate (yes, the “healthy” protein)

→ Low-fat curd / skimmed milk

→ Full-fat curd / full-fat milk

→ Wheat bread

→ Boiled potatoes

→ Refined cereals (like cornflakes)

→ White rice

→ Eggs and fish

Whey protein contains amino acids like leucine, lysine, isoleucine, and valine, all of which strongly stimulate insulin. Pair it with something like a banana… and the spike goes even higher.

Even low-fat dairy behaves differently than expected. With less fat, absorption is faster, leading to a sharper insulin response compared to full-fat versions.

Now here’s where it matters. Repeated high insulin spikes can:

↳ Make fat loss harder

↳ Increase fat storage (especially in the liver)

↳ Stress the pancreas over time

Does this mean these foods are “bad”? Not exactly.

If you’re already lean, active, and metabolically healthy, they can work in your favor.

Understanding this becomes particularly relevant for individuals working on weight loss, managing diabetes, or trying to control cravings. It is not only about how many calories are consumed, but also about how the body hormonally reacts to those foods over time.

A common pattern seen is that people reduce fat intake in an effort to eat healthier, but unintentionally increase foods that lead to higher insulin spikes. Over time, this can slow progress rather than support it.

If your goal is sustainable fat loss and better metabolic health, being mindful of insulin matters more than most people realize.

Weight Loss & Protein Intake

Don’t chase protein blindly. There’s a lot of noise on social media right now around protein.

“How much?” “How more?” “Am I getting enough?”

And somewhere in all this noise, the basics quietly get ignored. When someone walks in trying to lose weight, the assumption is often the same: more protein will speed things up.

But the body doesn’t work like that. In most cases, what actually moves the needle is much simpler - a consistent calorie deficit, maintained over time.

Protein plays a role, yes, but not in excess. For the majority of people, around 0.6–0.8 grams per kilogram of body weight is enough during a fat loss phase.

We’ve seen this play out not once or twice, but across lakhs of journeys over the last 13 years. Steady, sustainable weight loss, not because protein was pushed high, but because the overall system was balanced.

Now, there are phases where the approach evolves. If progress slows down after a couple of months, protein can be nudged up, maybe closer to 1.2 g/kg, while still keeping calories in check.

Because different phases demand different inputs. Fat loss, maintenance, muscle building: each has its own requirements. But the mistake happens when we apply one phase’s logic to everything.

And then there’s something most people don’t even notice: collateral nutrition.

A vegetarian increases protein… and unknowingly increases carbs along with it.

A non-vegetarian increases protein… and fats creep up in the process.

So while the focus stays on protein, the overall intake quietly shifts and results get affected.

Which brings up a more important question:

Not just “Am I getting enough protein?” but “What is my protein coming with?”

For many vegetarians, especially in their 50s or 60s, there’s also hesitation around protein supplements. Understandable. New things often come with doubt.

But sometimes, starting small changes everything.

A simple 20–25 grams of protein at breakfast. Trying different options. Observing how the body feels.

Because often, the resistance is in the mind. The benefit, however, is in the physiology.

And zooming out even further, health isn’t just about protein or calories in isolation. It’s about how the body responds to what you eat.

→ Insulin response.

→ Glycemic load.

→ The total exposure your body handles through the day.

These are quieter factors but powerful ones. Don’t chase protein blindly. Build your approach with balance and clarity.

Why I Believe AI Will Become the Operating System of the Future

For the longest time, I have seen AI being treated as just another layer inside software. We plug it into IDEs, dashboards, search engines, enterprise workflows, almost like a smarter feature bolted onto existing systems.

But the more I think about it, the more I feel this framing is fundamentally limiting.

What if AI is not meant to live inside applications?

What if AI itself is the system?

Not an assistant. Not a plugin. Not another API.

I am starting to envision the future where AI becomes the operating system, the layer that everything else runs on.

It sounds futuristic at first. But when I look at how operating systems evolved, this direction starts to feel less like science fiction and more like an inevitability.

At its core, an operating system has always been about one thing: managing complexity.

It sits between us and the machine, abstracting away the messiness of hardware, CPUs, memory, storage, networks, and giving us clean, usable primitives.

We don’t think in terms of registers or disk sectors. We simply say:

  • open a file
  • run a program
  • allocate memory

And the OS takes care of everything underneath.

In a way, an OS is just a very sophisticated orchestrator of resources.

Now here is where my thinking started to shift.

What if the next layer of complexity we need to manage is not hardware… but intelligence itself?

I am beginning to see the possibility of an entirely new kind of operating system, one that does not just run programs, but orchestrates intelligence.

In the future, this system would not manage CPU cycles or memory blocks.

It would manage:

  • reasoning
  • knowledge
  • tools
  • agents
  • code generation
  • decision-making

And most importantly, the interface would not be commands or applications anymore.

It would be intent.

Instead of opening tools and stitching workflows together, we would simply describe what we want:

“Analyze this dataset and build a real-time dashboard.”

And the system would figure out everything else.

Not just partially, but end to end.

It would generate services, design interfaces, create pipelines, provision infrastructure, and deploy, all as a coordinated outcome of a single goal.

No context switching. No tool fragmentation. No manual glue code.

Just intent → execution.

When I try to break this down from a systems perspective, I don’t see magic. I see architecture.

If I were to design this, I imagine it having a few foundational layers.

An intent kernel that understands what we mean, not just what we say, breaking goals into executable tasks.

An agent scheduler that coordinates multiple intelligent agents, deciding what runs, when, and with which resources.

A capability registry, almost like system calls, where the AI dynamically discovers and invokes tools: APIs, databases, code execution engines, simulations.

A memory layer that actually remembers, not just context in a session, but knowledge, history, and experience over time.

And critically, a self-verification loop, because a system that can act autonomously must also be able to question itself, validate outputs, and correct failures.
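To test whether these layers hang together, here is a minimal sketch of how they might fit. Everything is hypothetical: the class names, the static planner, and the naive retry loop are my own illustrations of the intent kernel, agent scheduler, capability registry, memory layer, and self-verification loop, not any existing framework.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class CapabilityRegistry:
    """Like system calls: tools the system can discover and invoke."""
    tools: Dict[str, Callable[..., str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self.tools[name] = fn

@dataclass
class MemoryLayer:
    """Remembers across tasks, not just within one session."""
    history: List[str] = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.history.append(event)

class IntentKernel:
    """Breaks a high-level goal into executable tasks."""
    def plan(self, goal: str) -> List[str]:
        # A real kernel would use an LLM; here we fake a static plan.
        return [f"analyze: {goal}", f"build: {goal}", f"verify: {goal}"]

class AgentScheduler:
    """Decides what runs, when, and with which capability."""
    def __init__(self, registry: CapabilityRegistry, memory: MemoryLayer):
        self.registry = registry
        self.memory = memory

    def run(self, tasks: List[str]) -> List[str]:
        results = []
        for task in tasks:
            action = task.split(":")[0]
            tool = self.registry.tools.get(action, lambda t: f"no tool for {t}")
            output = tool(task)
            if "error" in output:   # self-verification loop: naive retry
                output = tool(task)
            self.memory.remember(output)  # persist outcome to memory layer
            results.append(output)
        return results

registry = CapabilityRegistry()
for name in ("analyze", "build", "verify"):
    registry.register(name, lambda t, n=name: f"{n} done: {t}")

kernel, memory = IntentKernel(), MemoryLayer()
scheduler = AgentScheduler(registry, memory)
print(scheduler.run(kernel.plan("real-time dashboard")))
```

The interesting part is not the toy logic but the shape: the kernel never calls a tool directly, and the scheduler never interprets intent. Each layer only talks to the one beneath it, which is exactly how classical operating systems manage complexity.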

The more I think about it, the more it feels like we are not building smarter apps.

We are slowly assembling the components of an entirely new operating model.

If this vision becomes real, software as we know it may fundamentally change.

Today, we build applications that are:

  • written manually
  • versioned carefully
  • deployed deliberately
  • maintained continuously

But in an AI OS world, I don’t think software will be static anymore.

I see it becoming dynamic and goal-driven.

You don’t install applications.

You invoke outcomes.

A request like:

“Build me a marketplace with authentication and payments”

…would not result in a backlog.

It would result in a system being generated: backend, frontend, database, infrastructure, all assembled dynamically.

And when the need changes, the system evolves instantly.

I don’t believe developers will go away.

But I do believe our center of gravity shifts.

Less time writing implementation code. More time shaping intelligence.

We start focusing on:

  • defining constraints
  • designing capabilities
  • building reusable intelligence modules
  • creating evaluation and safety frameworks
  • guiding how systems think and act

We move from being software engineers to something closer to intelligence architects.

When I zoom out, this starts to look like a platform shift.

We have seen this before:

Mainframes → PCs, PCs → Mobile, Mobile → Cloud

Each shift changed not just technology, but how we think about building.

What I am beginning to see now is something different.

A shift from tool-driven computing to goal-driven computing.

A world where we don’t interact with applications anymore.

We interact with systems that create applications for us.

If this direction holds true, then the most important question is no longer:

“What software should we build?”

It becomes:

“What system can build whatever we need?”

And honestly, that changes everything.


Thursday, March 19, 2026

AI Capabilities Are Far Higher!

AI can theoretically do 88% of legal tasks. In practice? It's doing 15%.

That gap isn't because the technology isn't good enough. The gap exists because nobody wants to be the first person to sign off on the AI's advice when something goes wrong.

We don't adopt things because they're revolutionary. We adopt things when we trust them. And trust comes from watching someone else use it first.


Every profession has a moment where the new thing stops being experimental and starts being the standard. Surgeons resisted anaesthesia. Accountants resisted spreadsheets. Both eventually became unimaginable without them.

We are living inside that transition right now. The 73-point gap between what AI can do and what it actually does will close only when people are willing to go first.

#AI #Innovation #Strategy

Data Lake vs Data Warehouse

A data lake can hold everything. A data warehouse makes it usable. Most data programs stall because they optimize for storage when the business needs decisions.


Not more ingestion.
Not another dashboard.
Clear separation of purpose.

A simple way to think about it:

→ Data lakes are built to capture: raw, semi-structured, high-volume, “we might need this later.”
→ Data warehouses are built to serve: curated, modelled, governed, “this KPI must be trusted.”
→ Lakes are great for exploration, ML, long-term retention and low-cost scale.
→ Warehouses are great for consistent reporting, performance, access control and business definitions.

Problems show up when teams pick one and expect it to do both jobs:

• Lake-only becomes “dump now, figure out later” (and later never comes).
• Warehouse-only becomes “model everything up front” (and delivery slows to a crawl).

The strongest architectures treat them as complementary layers:
land data fast → apply quality + governance → publish reusable datasets → measure outcomes.
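The layered flow above can be sketched in a few lines. This is illustrative, not any particular platform's API: the records, the quality rule, and the "revenue" KPI are all invented for the example.

```python
# Illustrative sketch of land fast -> apply quality -> publish curated.
# Records, quality rules, and the KPI are invented for the example.

raw_lake = [  # landed as-is: raw, semi-structured, "might need later"
    {"order_id": "1001", "amount": "49.90", "country": "IN"},
    {"order_id": "1002", "amount": "", "country": "IN"},  # broken record
    {"order_id": "1003", "amount": "120.00", "country": "DE"},
]

def apply_quality(records):
    """Quality + governance gate: drop rows that break the contract."""
    return [r for r in records if r["amount"]]

def publish_to_warehouse(records):
    """Curated layer: typed columns and a KPI the business can trust."""
    curated = [{"order_id": int(r["order_id"]), "amount": float(r["amount"])}
               for r in records]
    revenue = sum(r["amount"] for r in curated)  # governed KPI
    return curated, revenue

curated, revenue = publish_to_warehouse(apply_quality(raw_lake))
print(len(curated), revenue)  # 2 trusted rows survive out of 3 landed
```

Notice that the lake keeps all three records, including the broken one; only the warehouse layer enforces the contract. That separation is the point: capture stays cheap and fast, trust is applied where decisions are made.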

If you want adoption, focus less on where data lives and more on who needs what, when, and with what level of trust.

Where is your current bottleneck: capturing data, curating it, or getting people to actually use it?

Welcome to AI Token Pay

Compensation has always been about more than money. At its best, it signals what an organization values, shapes behavior, and enables people to do their best work. In today’s AI-driven workplace, a subtle but powerful shift is emerging: companies are beginning to treat access to AI tokens, the units that power usage of AI systems, as a form of compensation, or at least as a structured benefit tied to performance and productivity.

Unlike traditional bonuses that reward outcomes after the fact, AI tokens influence how work gets done in the moment. They are not just rewards; they are enablers. When employees are given dedicated access to AI tools, whether for coding, research, design, or analysis, they gain leverage. Tasks that once took hours compress into minutes. The bottleneck shifts from effort to imagination.

In this sense, AI tokens function less like cash and more like fuel. The more thoughtfully they are used, the more output they can generate. A product manager can prototype faster, a marketer can test multiple campaign variants in a day, and a developer can debug complex issues with far greater speed. Productivity is no longer just about time spent, but about how effectively one can collaborate with intelligent systems.

Yet, introducing AI tokens as part of compensation requires a different mindset. Organizations must move beyond the idea of equal access and begin to think about intentional allocation. Not every role uses AI in the same way, and not every employee extracts the same value. Some will treat tokens as a scarce resource, using them strategically. Others may underutilize them due to unfamiliarity or hesitation.

This creates an interesting dynamic: productivity gains are no longer evenly distributed. They depend on both access and capability. As a result, companies that offer AI tokens as a benefit must also invest in building AI fluency. Without it, the tokens risk becoming an underused perk rather than a transformative tool.

There is also a behavioral dimension to consider. When employees know that their access to AI resources is tied to performance or outcomes, it can create a sense of ownership and experimentation. They are more likely to explore new workflows, automate repetitive tasks, and rethink how they approach problems. Over time, this fosters a culture where efficiency is not mandated from the top but discovered from within.

However, this model is not without its tensions. If poorly designed, it can introduce unintended consequences. Employees might hoard tokens, fearing scarcity. Others might overuse them without clear productivity gains, leading to cost inefficiencies. And in some cases, it may even create disparities, where high-performing teams get more access to AI tools, further widening the gap with others.

The key lies in framing AI tokens not as a reward to be competed for, but as a capability to be cultivated.

A global consulting firm recently experimented with providing AI tokens to its analysts and consultants, enabling them to use advanced AI tools for research, report generation, and client deliverables. The objective was straightforward: reduce turnaround time and improve output quality.

In the early phase, the firm allocated a fixed number of tokens per employee each month. The expectation was that consultants would integrate AI into their daily workflows and naturally become more productive.

The results, however, were uneven. Some consultants quickly embraced the tools, using tokens to automate data analysis, draft presentations, and generate insights. Their productivity increased significantly, and they were able to handle more complex client engagements.

Others, however, barely used their allocation. They were either unsure how to incorporate AI into their work or skeptical about its reliability. As a result, a gap emerged within teams, one driven not by skill alone, but by comfort with AI.

There was also a third group: heavy users who consumed tokens rapidly without proportional gains in output quality. In some cases, over-reliance on AI led to generic insights that required additional rework.

Recognizing these challenges, the firm recalibrated its approach.

Instead of treating AI tokens as a flat allocation, they introduced a guided usage model. Employees received baseline access, but additional tokens were unlocked through demonstrated use cases and impact. More importantly, the firm invested in structured training programs, showing employees how to effectively integrate AI into specific consulting workflows.
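The guided usage model can be sketched as a simple allocation rule. The numbers and thresholds below are invented for illustration; the firm's actual formula is not described in the case:

```python
# Hypothetical sketch of the "guided usage" allocation: everyone gets a
# baseline, and extra tokens unlock with demonstrated use cases and impact.
# All numbers and thresholds are invented for illustration.

BASELINE_TOKENS = 100_000

def monthly_allocation(demonstrated_use_cases: int, impact_score: float) -> int:
    """Baseline access plus bonuses earned through documented use and impact."""
    bonus = demonstrated_use_cases * 10_000           # reward shared use cases
    multiplier = 1.5 if impact_score >= 0.8 else 1.0  # reward measured outcomes
    return int((BASELINE_TOKENS + bonus) * multiplier)

print(monthly_allocation(0, 0.2))  # new user: baseline only -> 100000
print(monthly_allocation(3, 0.9))  # proven user: (100000 + 30000) * 1.5 -> 195000
```

The design choice worth noting: the baseline guarantees that hesitant users are never locked out, while the unlock mechanism rewards outcomes rather than raw consumption, which addresses both the underuse and the overuse patterns seen in the first phase.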

They also created shared prompt libraries and best practices, allowing employees to learn from high performers. This reduced the learning curve and standardized quality.

Finally, the firm shifted its messaging. AI tokens were no longer framed as a limited resource to be managed, but as a productivity multiplier to be mastered. The focus moved from consumption to outcomes.

Over time, adoption became more consistent, productivity gains stabilized, and the firm saw measurable improvements in turnaround time and client satisfaction.

In conclusion, access to AI tokens as compensation represents a subtle but important evolution in how organizations think about performance and productivity. It acknowledges that in an AI-powered world, the tools people use are just as important as the effort they put in.

But the real value of this model lies not in the tokens themselves, but in what they enable. When combined with the right training, culture, and incentives, they can transform how work happens, making it faster, smarter, and more creative.

As with any new approach to compensation, the challenge is not in adopting it, but in designing it thoughtfully. Because in the end, giving employees access to AI is easy. Helping them use it well is where the real work begins.

#FutureOfWork #AI #Productivity #DigitalTransformation #HRInnovation #GenAI #WorkplaceStrategy
