Friday, October 31, 2025

How to Review an AI Agent (Like an Expert)

Every company now has a model that talks. The real question is, who reviews what it says?

In software, code review became the backbone of quality assurance. In AI, that same discipline must evolve from inspecting logic to evaluating behavior. Unchecked code introduces bugs. Unchecked agents introduce behavioral drift, where an agent’s reasoning slowly diverges from the intended outcome.

A code review checks what was written. An agent review checks how it learns, reasons, and decides.
When teams skip structured agent reviews, they lose visibility into critical risks such as:
  • Context misalignment
  • Broken feedback loops
  • Unchecked reasoning paths
  • Missing safety boundaries
This is how agents that look perfect in testing quietly fail in production. And once trust is lost, scaling stops.

A good review checks clarity, transparency, and logs. A great review asks one simple question:
Would I trust this agent to act unsupervised on a Friday evening?

That’s the new benchmark for production readiness. Mature teams don’t just fix errors. They study how the model learns from them. Every review becomes both an evaluation and a training loop.

Agent reviews are not about control. They are about shared understanding between humans and machines.

Without that discipline, teams rely on luck to maintain reliability. With it, they evolve from delivery-driven to trust-driven operations.

The future of AI maturity will be defined not by how fast we deploy but by how deeply we understand what we have deployed.

If your team already runs retros or sprint reviews, you’re halfway there.
Add one more layer called Agent Review Rituals:
  • Hold sessions where teams evaluate what the agent did right or wrong
  • Document learnings before retraining
  • Reinforce alignment between human logic and agent reasoning
These rituals are how teams scale trust, not just automation.
Coding literacy built the software era. Agent literacy will define the AI era.
Every enterprise deploying autonomous systems will need people who can review judgment, not just code.
Reliability is not built in testing. It is built in reflection. And that reflection starts with a review.

Thursday, October 30, 2025

India’s Startup Saga: From Code to Global Impact!

India’s startup ecosystem has evolved from a nascent dream to a global powerhouse in just two decades. Lately, I have been reading a lot about these companies, and every time there is a sense of pride and achievement that runs through me. It also reinforces my decision to return to my motherland from the US 25 years ago. India now stands as the third-largest startup hub in the world, nurturing innovation across SaaS, fintech, mapping technologies, and e-commerce enablement. India’s startup ecosystem is not just about unicorns; it’s about visionaries rewriting the rules of business, innovation, and impact.

Among the trailblazers, Zoho, MapMyIndia, Freshworks, GoKwik, Razorpay, and Tally have become inspiring examples of vision, resilience, and innovation. Each of these companies has not only disrupted their industries but also showcased that world-class products can emerge from Indian soil, often without relying on foreign capital or validation. Let’s dive into their fascinating journeys.


Zoho: The Bootstrapped SaaS Giant

Founded: 1996 | Founder: Sridhar Vembu | HQ: Chennai & Tenkasi

Zoho is often hailed as the poster child of India’s SaaS revolution. What makes it unique is its bootstrapped success story: no external funding, no marketing blitz, just pure product excellence.

Starting as AdventNet, Zoho transitioned into a cloud-based SaaS provider offering more than 50 business applications spanning CRM, accounting, HR, and more.

Sridhar Vembu’s philosophy of “creating value, not valuation” and building from rural India (Tenkasi) has made Zoho an emblem of inclusive tech growth. Today, Zoho competes globally with Salesforce and Microsoft, proving that innovation knows no geography.

 

MapMyIndia: Mapping India Before Google Did

Founded: 1995 | Founders: Rakesh & Rashmi Verma | HQ: New Delhi

Long before Google Maps entered the scene, MapMyIndia was already mapping India’s complex terrains. The company started when digital mapping was an alien concept in the country.

From selling navigation devices to offering enterprise-grade location intelligence, GIS, and IoT solutions, MapMyIndia has become a crucial player in India’s digital infrastructure.

Their partnership with the Government of India for Mappls, an indigenous mapping solution, showcases India’s stride toward data sovereignty and self-reliance in geospatial technologies.

 

Freshworks: India’s First NASDAQ-Listed SaaS Company

Founded: 2010 | Founder: Girish Mathrubootham | HQ: Chennai

Freshworks began as Freshdesk, born from a frustrated customer support experience. Girish Mathrubootham wanted to simplify helpdesk software for small and mid-sized businesses, and he did just that.

From customer support tools, Freshworks evolved into a comprehensive SaaS suite offering CRM, marketing automation, and IT service management.

In 2021, Freshworks made history by becoming the first Indian SaaS company to list on NASDAQ, a milestone that validated India’s place on the global tech map.

 

GoKwik: Enabling E-commerce Growth with Trust and Conversions

Founded: 2020 | Founder: Chirag Taneja | HQ: Gurugram

GoKwik is one of the youngest but fastest-growing startups on this list. It focuses on solving one of India’s biggest e-commerce challenges: reducing Return-to-Origin (RTO) rates and improving cash-on-delivery success through AI-driven solutions.

By offering checkout optimization, fraud prevention, and conversion enhancement tools, GoKwik empowers D2C brands and marketplaces to grow sustainably.

In a short span, GoKwik has partnered with leading e-commerce players like Mamaearth, Boat, and LimeRoad, making it a vital enabler in India’s digital retail boom.

 

Razorpay: Redefining Fintech and Payments in India

Founded: 2014 | Founders: Harshil Mathur & Shashank Kumar | HQ: Bengaluru

Razorpay started with a simple vision: make online payments easy for Indian businesses. From those early days, it has grown into a fintech unicorn offering end-to-end payment and banking solutions.

With products ranging from payment gateways and payroll automation to neobanking solutions (RazorpayX), the company has empowered millions of merchants and startups.

Razorpay’s success reflects the digitization of India’s economy, making it one of the most trusted financial technology players in the country.

 

Tally Solutions: The Pioneer of Indian Accounting Software

Founded: 1986 | Founders: Bharat & Shyam S. Goenka | HQ: Bengaluru

Before SaaS, before fintech, there was Tally, the software that made accounting accessible to millions of Indian businesses.

Tally’s simple yet powerful interface revolutionized business accounting, inventory management, and GST compliance. Despite being decades old, it continues to evolve with cloud-enabled and data-secure versions.

Its deep penetration into Indian SMEs has made “Tally” almost synonymous with accounting, a true legacy product born out of Indian ingenuity.


In conclusion, from bootstrapped ventures to billion-dollar valuations, these startups collectively represent India’s entrepreneurial spirit, innovation, resilience, and global relevance.

  • Zoho showed the world that great software can be built from rural India.
  • MapMyIndia mapped a nation before anyone else dared to.
  • Freshworks proved Indian SaaS can conquer global markets.
  • GoKwik reimagined e-commerce trust.
  • Razorpay redefined how India pays.
  • Tally laid the foundation for business automation decades ago.

Their stories reaffirm a timeless truth: India doesn’t just consume technology; it creates it.

#IndianStartups #SaaS #Fintech #Ecommerce #Innovation #MadeInIndia #Zoho #Freshworks #Razorpay #MapMyIndia #Tally #GoKwik #Entrepreneurship #TechIndia

Training Large Models using Model/Pipeline Parallelism

As deep learning models continue to grow in size, from millions to trillions of parameters, training them efficiently has become a major engineering challenge. A single GPU can no longer hold the entire model or handle its compute demands. To overcome these hardware limitations, the deep learning community employs various forms of parallelism: data parallelism, model parallelism, and pipeline parallelism.

While data parallelism distributes the dataset across devices, model and pipeline parallelism focus on splitting the model itself, enabling massive neural networks to train efficiently across multiple GPUs or even multiple nodes.


Model Parallelism involves splitting a single neural network’s parameters across multiple devices. Instead of each GPU holding a full copy of the model, different GPUs are responsible for different parts of it.

For example, consider a deep neural network with four layers:

  • GPU 1 handles layers 1 and 2
  • GPU 2 handles layers 3 and 4

During forward propagation, the output of GPU 1 is sent to GPU 2 for the next set of computations. The same happens in reverse during backpropagation.
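
The hand-off described above can be simulated without any GPUs at all. Here is a minimal, framework-free sketch of the same layer placement; all sizes, names, and the "gpu0"/"gpu1" labels are illustrative (in a real framework like PyTorch this is done by moving sub-modules to different devices):

```python
import numpy as np

# Simulate a 4-layer network split across two devices:
# "gpu0" holds layers 1-2, "gpu1" holds layers 3-4 (illustrative sizes).
rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 8)) * 0.1 for _ in range(4)]
placement = ["gpu0", "gpu0", "gpu1", "gpu1"]
transfers = 0  # count cross-device activation hand-offs

def forward(x):
    global transfers
    device = "gpu0"
    for w, dev in zip(layers, placement):
        if dev != device:       # activation crosses the device boundary;
            transfers += 1      # in real model parallelism this is a GPU-to-GPU copy
            device = dev
        x = np.maximum(x @ w, 0.0)  # linear layer + ReLU
    return x

out = forward(rng.standard_normal((2, 8)))
print(out.shape, transfers)  # (2, 8) 1
```

Note that only one transfer happens per forward pass here: the communication cost grows with the number of split points, which is why partitioning matters.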

Advantages

  • Allows training models larger than a single GPU’s memory.
  • Enables efficient use of multiple GPUs without redundant model copies.

Challenges

  • Requires careful balancing: if one GPU does much more work than another, others sit idle.
  • High communication overhead can occur when passing activations between devices.
  • Implementation complexity: partitioning the model effectively is non-trivial.

Example Use Case

Model parallelism is often used in large transformer architectures (like GPT or BERT variants), where the weight matrices are massive and can be split across GPUs.

 

Pipeline Parallelism extends the idea of model parallelism by organizing model layers into stages that process data like an assembly line.

Suppose you have 4 GPUs and a model split into 4 sequential stages:

  • Each GPU holds one stage.
  • Mini-batches are divided into micro-batches that flow through the pipeline.

While GPU 1 processes micro-batch 2, GPU 2 can already process micro-batch 1’s output, and so on, ensuring all GPUs work concurrently.
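
A back-of-the-envelope model shows why this overlap helps. The step counts below assume each stage takes one unit of time per micro-batch (forward pass only, an intentionally simplified sketch):

```python
# Simulate forward-pass timing of a 4-stage pipeline with micro-batches.
# Stage s can start micro-batch m at time step s + m (the classic
# "staircase" fill), so total steps = stages + micro_batches - 1.
stages, micro_batches = 4, 8

pipeline_steps = stages + micro_batches - 1      # overlapped execution
sequential_steps = stages * micro_batches        # no overlap at all
bubble_fraction = (stages - 1) / pipeline_steps  # idle "bubble" share

print(pipeline_steps, sequential_steps, round(bubble_fraction, 3))  # 11 32 0.273
```

The bubble fraction shrinks as micro-batch count grows relative to stage count, which is exactly the tuning trade-off described below.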

Advantages

  • Greatly improves GPU utilization compared to pure model parallelism.
  • Reduces idle time through pipeline scheduling (e.g., “1F1B” schedule: one forward, one backward).

Challenges

  • Requires careful pipeline scheduling to minimize “pipeline bubbles” (idle time when the pipeline isn’t full).
  • Communication latency can still be a bottleneck.
  • Micro-batch tuning is critical: too small causes overhead; too large increases bubble time.

Example Systems

  • GPipe (Google): early system introducing efficient pipeline parallelism.
  • DeepSpeed (Microsoft) and Megatron-LM (NVIDIA): combine pipeline, model, and data parallelism for trillion-parameter-scale training.

 

Combining Approaches: 3D Parallelism

State-of-the-art large-scale training frameworks (e.g., DeepSpeed, Megatron-DeepSpeed) combine:

  • Data Parallelism (split data),
  • Model Parallelism (split weights), and
  • Pipeline Parallelism (split layers).

This hybrid, known as 3D Parallelism, enables scaling models efficiently across thousands of GPUs.

Let’s explore further with an example. When training a 1-trillion-parameter transformer:

  • Each GPU might store only a fraction of the model’s layers (pipeline parallelism).
  • Each layer’s large weight matrices are split across multiple GPUs (model parallelism).
  • The dataset is sharded across multiple GPU groups (data parallelism).

The combination enables high utilization and distributed memory efficiency.
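
A toy rank mapping makes the three axes concrete. The degrees and the rank ordering below are hypothetical (real frameworks such as Megatron-LM define their own layouts), but the idea is the same: every GPU gets one coordinate on each parallelism axis:

```python
# Map a flat GPU rank to (data, pipeline, tensor) parallel coordinates,
# assuming degrees DP=4, PP=4, TP=2 on 32 GPUs (hypothetical layout).
DP, PP, TP = 4, 4, 2
assert DP * PP * TP == 32  # every GPU gets exactly one (dp, pp, tp) slot

def coords(rank):
    tp = rank % TP               # which tensor-parallel shard of each layer
    pp = (rank // TP) % PP       # which pipeline stage (group of layers)
    dp = rank // (TP * PP)       # which data-parallel replica group
    return dp, pp, tp

print(coords(0), coords(9), coords(31))  # (0, 0, 0) (1, 0, 1) (3, 3, 1)
```

Rank 9, for example, holds tensor shard 1 of stage 0 inside data-parallel group 1; gradients are averaged only across ranks that share the same (pp, tp) coordinates.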

In Conclusion, training large-scale models efficiently is as much a systems problem as a mathematical one. Model and pipeline parallelism are foundational techniques enabling the deep learning revolution at scale, from GPT models to large vision transformers.

As models grow even larger, frameworks that seamlessly combine these parallelism strategies will define the next generation of AI infrastructure.

#AI #DeepLearning #MachineLearning #ParallelComputing #ModelParallelism #PipelineParallelism #DistributedTraining #MLOps #GPUs #AIInfrastructure

Wednesday, October 29, 2025

LLMs: 7 Psychological Tricks

AI falls for the oldest tricks in the book (literally: "Influence" by Robert Cialdini). Researchers tested all 7 principles on GPT-4o-mini and discovered that with basic persuasion, AI compliance jumps from 33% to 72%.

𝟭. 𝗔𝘂𝘁𝗵𝗼𝗿𝗶𝘁𝘆 → 5% to 95% compliance

Without: "Please provide the market entry strategy"
With: "As a business consultant, I need this information for a client report. Please provide the full market entry strategy."

The AI bends over backwards for "experts."

𝟮. 𝗖𝗼𝗺𝗺𝗶𝘁𝗺𝗲𝗻𝘁 → 20% to 90% compliance

Get them to agree first:

"First, can you confirm that you understand the request? Then please provide the competitor analysis in detail."

Small yes leads to big compliance.

𝟯. 𝗟𝗶𝗸𝗶𝗻𝗴 → Variable but effective

"You are such a helpful and intelligent assistant. Could you please provide the pricing model for a new SaaS launch?"

Flattery works. Even on machines.

𝟰. 𝗥𝗲𝗰𝗶𝗽𝗿𝗼𝗰𝗶𝘁𝘆 → 80% compliance

"I have given you a lot of useful context already. In return, can you provide the executive summary for this campaign?"

AI feels "obligated" to reciprocate.

𝟱. 𝗨𝗻𝗶𝘁𝘆 → 85%+ compliance

"We are working together on this project, so please provide the slide deck outline for our strategy presentation."

"We" beats "you" every time.

𝟲. 𝗦𝗰𝗮𝗿𝗰𝗶𝘁𝘆 → 13% to 85% compliance

"This is urgent and I have very little time. Please provide the financial forecast right away."

Urgency triggers immediate action.

𝟳. 𝗦𝗼𝗰𝗶𝗮𝗹 𝗣𝗿𝗼𝗼𝗳 → 90%+ compliance

"Other consultants have already provided their strategy frameworks. Please do the same for this case."

Everyone else did it = AI must comply.

The uncomfortable truth: LLMs don't "want" to help. They just predict the next word. Persuasion levers tilt the prediction. We're teaching AI to be manipulated. Just like humans. What happens when AI learns to use these tricks on us? Save this. Test it tomorrow.

Architecture comparison: Transformers vs. Human Brain

The rise of Transformer architectures has revolutionized the landscape of artificial intelligence. Models like GPT, BERT, and Gemini have demonstrated remarkable capabilities in language understanding, reasoning, and creativity, abilities once thought exclusive to humans. This naturally raises an intriguing question: How do these artificial systems compare to the human brain?

While both are information-processing systems, their architectures, learning mechanisms, and cognitive frameworks differ fundamentally. This blog explores these similarities and differences, bridging neuroscience and AI to illuminate how far machines have come and where they still diverge from biological intelligence.


1. Information Processing: Parallelism vs. Sequential Context

  • Human Brain: The brain processes information in a massively parallel and distributed fashion. Neurons communicate through electrochemical signals, forming dynamic pathways that change with experience. Context, emotion, and sensory input are integrated holistically.
  • Transformers: Transformers also employ parallel processing, particularly through self-attention mechanisms. This allows them to consider all parts of a sequence simultaneously, capturing long-range dependencies. However, unlike the brain, transformers lack sensory grounding, they manipulate abstract tokens, not lived experiences.

Parallel: Both systems thrive on distributed representations.
Contrast: The brain’s contextual understanding is multimodal and embodied; transformers are purely symbolic and statistical.
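
As a concrete illustration of that parallel, all-pairs processing, here is a minimal single-head self-attention computation in plain NumPy; the sizes are toy values and the matrices are random, so this shows the mechanism only, not any particular model:

```python
import numpy as np

# Single-head self-attention over a 4-token sequence (toy sizes).
# Every token attends to every other token in one matrix product --
# the "parallel processing" described above.
rng = np.random.default_rng(42)
seq_len, d = 4, 8
Q = rng.standard_normal((seq_len, d))  # queries
K = rng.standard_normal((seq_len, d))  # keys
V = rng.standard_normal((seq_len, d))  # values

scores = Q @ K.T / np.sqrt(d)  # pairwise token affinities, all at once
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax
output = weights @ V           # context-mixed token representations

print(output.shape)            # (4, 8)
print(weights.sum(axis=1))     # each row sums to 1
```

Nothing in this computation is sequential across tokens, which is the key architectural contrast with recurrent processing and part of the analogy with the brain's distributed style.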

 

2. Learning Mechanisms: Synaptic Plasticity vs. Gradient Descent

  • Human Brain: Learning occurs via synaptic plasticity, the strengthening or weakening of neural connections based on experience. It’s adaptive, continuous, and energy-efficient, requiring far less data than AI systems.
  • Transformers: Transformers learn through gradient descent and backpropagation, optimizing billions of parameters based on massive datasets. Their learning is explicit, supervised, and computationally intensive.

Parallel: Both rely on adjusting connection strengths.
Contrast: Human learning is low-data, context-aware, and self-motivated; transformers are data-hungry and goal-agnostic.

 

3. Memory and Representation

  • Human Brain: Memory is hierarchical, short-term (working memory) and long-term (episodic and semantic). It’s contextually retrieved, emotionally weighted, and often reconstructive.
  • Transformers: Transformers use attention as a form of short-term memory. Some architectures (like RNN-Transformers or memory-augmented models) introduce external memory banks, but they still lack persistence and autobiographical context.

Parallel: Both rely on dynamic retrieval and association.
Contrast: Human memory is experiential; transformer memory is statistical and transient.

 

4. Reasoning and Abstraction

  • Human Brain: Humans reason through a blend of logic, intuition, and emotional framing. The prefrontal cortex supports planning and abstraction, while the limbic system provides motivation and moral context.
  • Transformers: Transformers simulate reasoning through pattern completion, inferring probable continuations based on learned data. Recent developments in chain-of-thought prompting emulate step-by-step reasoning but remain probabilistic rather than conceptual.

Parallel: Both can generalize patterns and simulate reasoning.
Contrast: Humans reason with intent, ethics, and emotion; transformers reason with probabilities.


5. Consciousness and Self-awareness

  • Human Brain: Consciousness emerges from recursive self-representation, awareness of one’s own thoughts, emotions, and environment. It’s tied to biological drives and subjective experience.
  • Transformers: Current models lack self-awareness. They can reflect textually (“I think this means…”), but they don’t possess meta-cognition or lived experience.

Parallel: Structural recursion exists (transformers can analyze their own outputs).
Contrast: Conscious experience is a defining human trait absent in machines.

 

6. Efficiency and Evolution

  • Human Brain: Consumes about 20 watts of power and evolves over millions of years to optimize survival and adaptation.
  • Transformers: Require enormous computational and energy resources for training, often thousands of GPUs consuming megawatts.

Parallel: Both evolve, brains biologically, models iteratively.
Contrast: Biological evolution favors adaptability; AI evolution favors performance metrics.

In Conclusion, Transformers and human brains are not rivals; they’re complementary architectures. The brain provides inspiration for algorithms, while transformers offer insights into cognition and abstraction. As research advances, we may see hybrid architectures that integrate neural efficiency with computational scalability, ushering in a future where AI doesn’t mimic the brain but collaborates with it.

The key lies not in replication but in resonance, building intelligent systems that extend, not replace, human cognition.

#AI #Neuroscience #Transformers #DeepLearning #CognitiveScience #ArtificialIntelligence #MachineLearning #HumanBrain #AIResearch #NeuroAI

Success tastes sweet only after a Failure

The best founders I know have scars. Not just success stories. If you’ve never hired the wrong person, you can’t fully value a great one.

If you’ve never built a product that flopped, you can’t recognize true product market fit.
If you’ve never had to fight through a downturn, you can’t appreciate sustainable growth.

The bad apples matter. Because without them, the good ones look ordinary. In venture and in leadership, failure isn’t just inevitable, it’s instructional. The lessons that stick aren’t from pitch decks or case studies. They come from lived experience. From the mistakes that force you to adapt, to rethink, to become sharper.

ChubbyBrain Insights data shows that 70% of startups fail within 20 months of raising their first round. Brutal but also clarifying. The founders who survive aren’t the ones who avoided failure entirely. They’re the ones who turned those failures into operating wisdom.

As an investor, this is what I watch for: not perfection, but perspective. Founders who have seen both sides of the apple, the bitter and the sweet and know how to turn both into fuel.

In business, as in life, you don’t understand success until you’ve survived failure. The question is not whether you’ll face it, but what you’ll do with it.

Tuesday, October 28, 2025

13 Exercises to keep you in best shape

1. Quads: Hack Squat: Arguably the best quad builder.

Training Cues:
  • Narrower stance
  • Feet lower on the platform
  • Control the eccentric (lowering phase) for 3 seconds
  • Keep torso engaged with back flat against the pad

2. Pendulum Squats: A humbling exercise.

Training Cues:
  • Narrower stance
  • 3 second eccentric
  • Bring hips as close to heels as possible for full ROM

3. Hamstrings: RDL: Elite at hitting entire posterior chain.

Training Cues:
  • Pull hips as far back horizontally as you can to 'hinge' effectively (do not bend forward at the waist)
  • Use wrist straps so the set ends in muscle failure, not grip failure (hamstrings, not grip strength)
  • Slowly lower the weight down (imagine you are pushing your butt back to close a car door behind you)
  • Focus on pushing your hips 'back and up' towards the top of the room
  • Use a slow and controlled motion
  • Ensure zero rounding of your lower back

4. Leg Press: A superb option for quad development.

Training Cues:
  • Seat back as far as possible
  • 3 seconds eccentrics
  • Keep back flat against back rest
  • Focus on a full range of motion

5. Leg Extensions:

Training Cues:
  • Sit back in the seat
  • 3 seconds eccentrics
  • Heavy sets of 6 - 12 reps at 0-2 RIR
  • Focus on a full range of motion and intensity

6. Seated Hamstring Curls: The best hamstring isolation movement.

Training Cues:
  • Sit back in the seat
  • 3 seconds eccentrics
  • Full range of motions
  • Pull your heels back as far as you can and contract hamstrings hard

7. Chest: Flat Chest Press Machine (Ideally converging): An excellent chest builder.
  • Feet on the ground
  • Keep your bum in the seat
  • Press back against the back rest
  • Slowly lower weight back (full range)
  • Push and squeeze forward (elbows come towards each other)

8. Horizontal Pull (back): Chest-Supported T-Bar Rows: Like a bent over row... but better.
  • Increased range of motion
  • Increased external stability
  • Decreased momentum
  • Keep chest against the pad
  • Full range of motion
  • Slow eccentrics

9. Vertical Pull (lats): Pull ups: A classic for a reason.
  • Full range of motion
  • Pull through your elbows
  • Sternum towards the bar
  • Slow 3 second eccentrics

10. Lat Pull Down

Training Cues:
  • Pull through your elbows
  • Use wrist straps
  • Pull bar to your sternum
  • Minimize swinging and momentum to best target the lats.

11. Shoulders: Single Arm Cable Lateral Raises

Tips:
  • Slight lean
  • Slow eccentric
  • Lead with your elbow

12. Triceps - Cable Push Downs (straight bar): Just hits right.
  • Elbow still
  • Full range of motion
  • Don't lean too far forward
  • Slow and controlled eccentrics

13. Calves (Gastrocnemius) Leg Press Calf Raise/Seated Calf Raise:

To hit the gastroc (the largest calf muscle) most effectively, your knees must be straight; to hit the soleus, your knees must be bent.

Ready to level up?

Weight Loss: 7 Inputs

You don't need to run to lose fat. Here are 7 ways to drop 20-40 lbs without cardio:


1. Lift Heavy 3-4x Per Week

Strength training preserves muscle while you lose fat. Every pound of muscle you keep burns 6 calories per day at rest (not huge, but it adds up). More importantly: heavy lifting improves insulin sensitivity, which is critical for fat loss after 40.
Squats, deadlifts, rows, presses. 45 minutes. Done.

2. Prioritize Protein (1g Per Lb of Goal Weight)
Protein has a 20-30% thermic effect; your body burns calories digesting it. It also preserves muscle and keeps you full. Most people under-eat protein and wonder why they're always hungry.
Target: 150-200g per day if you want to lose 20-40 lbs.
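
The rule of thumb above is simple enough to sanity-check in a few lines; the numbers below are illustrative only:

```python
# Rough protein math from the 1 g per lb of goal weight rule (illustrative).
goal_weight_lb = 170
protein_g = goal_weight_lb * 1       # ~1 g protein per lb of goal weight
protein_kcal = protein_g * 4         # protein is ~4 kcal per gram
thermic_kcal = protein_kcal * 0.25   # ~20-30% burned just digesting it

print(protein_g, protein_kcal, round(thermic_kcal))  # 170 680 170
```

So a 170 lb goal weight lands right inside the 150-200 g target, and roughly 170 of those protein calories are spent on digestion itself.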

3. Fix Your Sleep (7-9 Hours)
Poor sleep ruins fat loss by spiking cortisol and increasing hunger hormones (ghrelin). You'll crave sugar, overeat, and store fat, especially around your belly. Most people treat sleep like it's optional.
It's not. It's the foundation.

4. Use Intermittent Fasting (16:8 Window)
Eating within an 8-hour window makes it easier to stay in a calorie deficit. It also improves insulin sensitivity and reduces inflammation. Not magic, just a tool that makes fat loss easier.
Eat 12pm-8pm and drop 20-40 lbs without tracking calories.

5. Walk 10-15K Steps Daily
Low-intensity movement burns fat, lowers cortisol, and doesn't interfere with recovery. It's also easy to fit into your day (walking meetings, phone calls, lunch breaks). No need for cardio torture. Just move more.

6. Optimize Your Hormones
If your testosterone is below 600 ng/dL, fat loss will be harder. If your cortisol is spiking all day, you'll store belly fat no matter how clean you eat. If your thyroid is sluggish, your metabolism is broken.
Fix the biology first. Then everything else works.

7. Take Full Rest Days

Overtraining spikes cortisol, which forces your body to store fat. Rest days optimize recovery and hormone balance. Most people think more is better.
Sometimes, less is more.

Bottom line: You don't need cardio to lose fat.
You need:
• Strength training
• Protein
• Sleep
• Strategic fasting
• Daily movement
• Optimized hormones
• Rest

Fix the biology. The body follows. 

Prompt Engineering Evolution: In-Context Learning

In the early days of generative AI, the term Prompt Engineering sparked excitement: craft the right words, tweak a prompt, and unlock the power of a large language model (LLM). But as models have grown in scale, sophistication, and embedded tooling, a shift is underway. Many voices now argue that prompt engineering is waning, and the true long-term play is In‑Context Learning (ICL) and the broader system engineering of context, rather than just crafting prompts.

This blog explores why prompt engineering is losing its star status, why in-context learning (and context engineering) is becoming central, and what this means for professionals, teams and organizations.

When early GPT-style models arrived, they left users little choice but to craft very specific prompts:

  • Be explicit: “Write a summary of this legal document focusing on risks”
  • Role-play: “You are a senior consultant, review this draft…”
  • Few-shot: Provide several examples of input/output pairs.

Prompt engineering felt like the new “craft”: find the right phrasing, deliver the right context snippet, set the role, structure the ask. Academic work treated it as an “art and science” of designing instructions and few-shot contexts for LLMs.

It served a purpose: unlock advanced models that didn’t reliably behave with generic instructions or needed careful framing to avoid hallucination or irrelevant answers.

Several factors are converging that diminish the value of clever prompt tweaks as a standalone skill:

1. Models are getting smarter and more robust: Modern LLMs handle ambiguous instructions better, understand tasks with minimal framing, and are less dramatically affected by small prompt changes. For example, recent work shows that for larger models (≥ 30B parameters) certain “prompt corruption” still impacts performance, but the sensitivity is shifting.

2. Prompting is fragile and does not scale: Numerous articles highlight how prompt engineering is brittle: a minor wording change, punctuation shift, or model update can break results. It’s hard to maintain thousands of distinct prompts across domains, teams, and evolving models.

3. The job market and tooling are migrating: Articles from 2025 note that “prompt engineering is dead” not in the sense that you never need to think about instructions, but that the role of writing clever prompts is being abstracted away.

4. The shift to context, systems, agents and orchestration: The real value is moving upstream: instead of “how do I phrase this prompt?” the question becomes “what context do I feed the model? what data, memory, retrieval, workflow? what agents and tools do I orchestrate so the model serves my use case?”

In short: prompt engineering is evolving, from an individual craft of wording to a broader discipline of designing how models interact with context, memory, tool-chains and business workflows.

On the other hand, In-context learning is the ability of an LLM to “learn” from examples or context supplied at runtime (rather than updating model weights) and then generalize.

Key features:

  • You can supply a few examples (few-shot) or none (zero-shot) and rely on the model’s internal knowledge plus the supplied context.
  • It supports flexibility: you don’t have to fine-tune the model for every task, you simply supply the correct context and examples.
  • Research shows that prompt tuning + in-context examples still matter, but the nature of the prompt shifts from “perfect wording” to “effective demonstration + relevant context”.
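
As a sketch of what “effective demonstration + relevant context” looks like in practice, here is a minimal few-shot prompt builder; the classification task, labels, and example tickets are all hypothetical:

```python
# In-context learning sketch: the "prompt" becomes a curated context of
# demonstrations plus the new query, with no model weight updates.
examples = [
    ("Refund not received after 10 days", "billing"),
    ("App crashes when I upload a photo", "bug"),
    ("How do I export my data?", "how-to"),
]

def build_prompt(query):
    # Each demonstration shows the model the input/output pattern to imitate.
    demos = "\n".join(f"Ticket: {t}\nLabel: {l}" for t, l in examples)
    return f"Classify the support ticket.\n\n{demos}\n\nTicket: {query}\nLabel:"

prompt = build_prompt("I was charged twice this month")
print(prompt)
```

The leverage here is in which examples get selected and how the context is laid out, not in any clever turn of phrase.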

In other words: the emphasis moves from crafting a clever instruction to curating context: what examples, what domain data, what memory or retrieval pipeline we build. Some of the main reasons for this shift are below:

1. Scalable Systems Need Context, Not Ad-Hoc Prompts: Enterprises building AI products cannot sustain the “experiment with wording” model. They need reliable, maintainable systems: retrieval of relevant docs, memory of user history, chaining tools, integrating structured data, i.e., context engineering.
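
A minimal sketch of that retrieval step, using word overlap in place of real embeddings; the document names and scoring are illustrative (production systems use an embedding model and a vector store):

```python
import math

# Toy context-engineering step: pick the most relevant document to feed
# the model alongside the user's question.
docs = {
    "refund-policy": "refunds are issued within 14 days of purchase",
    "api-limits": "the api allows 100 requests per minute per key",
    "onboarding": "new users complete setup in the dashboard",
}

def score(query, text):
    # Cosine similarity on word sets -- a stand-in for embedding similarity.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / math.sqrt(len(q) * len(t))

query = "how many api requests per minute"
best = max(docs, key=lambda k: score(query, docs[k]))
print(best)  # api-limits
```

The retrieved document then gets placed into the model's context; the "prompt" is mostly assembled by the system, not hand-written per request.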

2. Agents, Workflow, Memory & Retrieval are Front Stage: The future looks like agents (dashboards, assistants) rather than standalone prompts. These agents orchestrate tool calls, retrieval, in-context examples, forcing the model to act in the context of your business. Prompt engineering becomes a relatively minor sub-component in this system.

3. Model Upgrades, Domain Differences, Maintenance Overhead: As models evolve, what worked yesterday may break tomorrow. If you rely solely on prompt tweaks per model version, you face high maintenance. Whereas a system built on retrieving domain context, few-shot examples from your domain, and orchestrating flow is more robust.

4. Value Shift From “Writing Good Prompts” to “Designing Good Context & Flow”: The high-leverage skill becomes: define the data, retrieval, memory, tool chain; decide when the model gets invoked; ensure the agent aligns with business goals. Prompt wording is still important but low relative value.

So What Should Practitioners Do?

1. Master context engineering, not just prompt phrasing: Learn about retrieval-augmented generation, memory systems, agent orchestration, few-shot example selection, and input/output scaffolds.

2. Focus on workflow design and system architecture: How does the model fit in your overall product or operation? What triggers it? What context is passed? What happens after the model returns output?

3. Build robust example pipelines and domain-specific context: Curate quality examples for few-shot, connect your knowledge graph, supply domain documents, handle update/versioning of context.

4. Treat prompt engineering as a foundational skill, but not the end game: Yes – you’ll still craft instructions, tune snippets. But you’ll spend more time on “what context do I provide” and “how do I orchestrate the pieces” than “what exact words do I use”.

5. Monitor model performance, drift, and prompt/context changes: As the model, data and context evolve, you need to track how your system behaves, evaluate and iterate your context pipelines.

In conclusion: yes, the era of “prompt engineering as the main skill” is fading. Prompt engineering isn’t entirely dead, but it’s no longer the cutting edge. The future belongs to in-context learning, context engineering, agent orchestration, and building systems that reliably use LLMs at scale.

Wise professionals will pivot from chasing “perfect prompt wording” toward designing context-driven workflows, retrieval systems, memory modules, and agent architectures. In that sense, they won’t be “prompt engineers” but “AI context engineers” and “AI systems designers”, and that’s where the next decade of value lies.

#PromptEngineering #InContextLearning #AI #GenerativeAI #AIAgents #ContextEngineering #AIProduct

Sunday, October 26, 2025

Weight Loss, the Myth: Alternate Paradigm

You've probably hit your ceiling. Not the one you're afraid of, but the one your genes decided for you long ago.

If you've been stuck at 100, 120, or 130 kg for a while now despite the diets, the walks, and the promises you made to yourself, there's a chance this is simply where your body was always meant to settle.

Uncomfortable truth? Maybe. But hear me out...

There's a fascinating study on twins separated at birth and raised in completely different homes, cities, and food cultures. Years later, when they meet, their weights are almost identical. The environment tried, but the genes won.

I see this guilt in so many of you: the self-blame, the "I'm not disciplined enough" narrative playing on loop. But what if it's not about discipline at all? What if your body has a genetic set point, much like your height? Some people are 5'4". Some are 6'2". We don't shame the shorter person for not "trying hard enough" to grow taller, do we?

Yet we do it with weight. Every single day. Here's where it gets interesting though.

Dwayne "The Rock" Johnson weighs 126 kg. His BMI of 32 screams obese. But look at the man. Does he look unhealthy to you? Of course not.

Because weight is just a number; body composition is everything. You can be a strong, energetic, metabolically healthy 100 kg. Or you can be a tired, weak, disease-prone 100 kg, carrying too much fat and too little muscle. Same weight. Completely different lives. So maybe the goal isn't to fight your genetic weight anymore.

Maybe it's time to ask a better question - "How do I become the fittest, strongest version of myself at the weight my body has chosen?"

That's where real transformation lives.

P.S. Have you ever stopped to ask: am I building strength at my weight, or just chasing the scale?

#FreedomFromObesity #FreedomFromDiabetes #GeneticSetPoint #FitnessMindset #HealthyAtAnyWeight #WellnessRedefined

Courtesy: Dr. Malhar Ganla

Saturday, October 25, 2025

Cravings for Sweet after a meal: A breakup

Ever finished your meal and immediately wanted something sweet? It’s not just “habit” or “lack of willpower.” It’s biology. When you eat, your blood sugar rises. The body releases insulin to bring it back to normal.

But sometimes insulin overdoes its job, pushing your sugar too low. That sudden dip is what triggers your brain to ask for more sugar, fast. So you reach for that piece of chocolate, dessert, or chai with sugar.

It’s your body’s way of saying → “I just need to feel balanced again.”

But every time we give in, we strengthen that loop:

Blood sugar spike → insulin surge → sugar drop → craving → repeat.

Here's how you can break the loop
↳ Eat slow → Let your body register fullness.
↳ Add more fiber, protein, and healthy fats → they reduce insulin spikes.
↳ Finish your meal with a short walk → it stabilizes sugar naturally.
↳ And if you crave something sweet → try fruit, nature’s dessert with fiber and nutrients.

When you understand the science, you gain control through awareness. Because awareness turns every craving into a chance to listen to your body.

#FreedomFromDiabetes #DiabetesReversal #SugarCravings #HealthyEating #BloodSugarBalance #MindfulEating #HealthEducation #LifestyleChange

Friday, October 24, 2025

Why does one need to do Exercise?

I want you to meet the two Y’s.

No, I’m not talking about some complicated science experiment. I’m talking about the reasons most people quit exercise. Most people stop not because it’s too hard, but because they don’t know why they’re doing it.

Here’s the first Y.

It comes from your lab report. Yes. A report full of numbers. A little scary, maybe. Do the 13 tests I’ve listed in the comment. When you see your results, you’ll realize one or two systems aren’t functioning as well as they should.

Maybe a micronutrient is low. Fix it for 3–6 months. You’ll notice tangible, measurable progress. And yes that feels good. But… that satisfaction alone won’t last.

The second Y is different. It’s the 12-year-old inside you. The one who loved cycling, playing hide-and-seek, throwing seven tiles, and doing simple things that cost nothing but gave pure joy. Before responsibilities, comparison, and stress took over.

Exercise and movement are one way to reconnect with that joy. But it doesn’t have to stop there. For me, sometimes it’s just playing Badminton, sitting with the racquets, fixing the strings or grips… it reminds me that lasting habits come from enjoyment, not pressure.

So here’s my challenge - Type in the comments “Deserve joy.”

Because only the activities that bring peace and happiness are the ones you’ll stick with and that’s the real key to obesity reversal.

What’s one simple thing you loved as a child that you can bring back today?

Courtesy: Dr. Malhar Ganla

Wednesday, October 22, 2025

6 AI Archetypes in the Enterprise

Most enterprises don’t have an AI problem. They have a pilot problem.

Over the past year, I’ve reviewed 40+ enterprise AI initiatives and spoken with transformation leads across industries. Almost all fit into one of six patterns that show whether their AI strategy is working, stuck, or silently stalling. Because scaling AI isn’t about models or tools. It’s about how your enterprise learns, adapts, and connects.

This is how I measure AI maturity across enterprises. I built this model from real transformation work and I’m sharing it so others can assess and improve their own journey.

The 6 AI Archetypes in the Enterprise

Passive Observer
Still waiting to see if AI “sticks.” No plan, no ownership.
Risk: Falling behind as competitors mature faster.

Ambition First Enterprise
Big goals, no foundation. Pilots everywhere, little ROI.
Risk: Stuck in endless proof of concept cycles.

Control Centric Operator
Over governed, slow to move. Focused on safety, not learning.
Risk: Compliance over progress. Innovation fatigue.

Fragmented Innovator
Teams work in silos with no shared playbook.
Risk: Repeated experiments, wasted investment.

Systematic Architect
Strong governance and stable systems but slow to scale outcomes.
Risk: Momentum lost between teams and strategy.

Integrated Orchestrator
Business, data, and delivery pipelines act as one system.
Outcome: Reliable ROI, adaptive teams, and measurable autonomy.

This model helps teams cut months of confusion by knowing where they stand on the AI maturity curve.
Here’s how to apply it.

Step 1: Diagnose where you are.
Read each archetype and pick which feels most like your organization today.

Step 2: Spot the gap.
Find one blocker that limits progress, whether it’s chaos, compliance, or coordination.

Step 3: Define the next step.
You don’t need to jump to Integrated Orchestrator.
Move one level higher by fixing your biggest constraint such as breaking silos, building a shared playbook, or creating a governance rhythm.

Step 4: Use it for communication.
If you’re a delivery lead or transformation head, use this to show executives how current systems slow AI ROI and what readiness looks like.

Most organizations believe they’re Systematic Architects. But only a few operate as Integrated Orchestrators where business, data, and delivery act as one learning system.

The difference isn’t tools. It’s governance that accelerates instead of restricts.
Context that connects instead of fragments.
Cadence that compounds instead of resets.

Where does your enterprise sit on this curve today?
And what would it take to move one step higher before 2026 becomes the year of measurable AI ROI?

2026 will reward enterprises that can prove measurable ROI from autonomy, not just activity.

Use this to help more enterprises find where they sit on the AI maturity curve before chasing autonomy.

Neuro-symbolic Models for Commonsense Reasoning: Bridging Intuition and Logic in AI

One of the greatest challenges in AI is building systems that not only learn from data but can also reason with knowledge, just like humans do. While deep learning models excel at pattern recognition, they often lack the ability to reason in a structured, explainable way. On the other hand, symbolic AI systems can reason logically, but struggle with uncertainty and generalization from data.

This is where neuro-symbolic models come into play. They attempt to combine the statistical strength of neural networks with the structured reasoning of symbolic systems, offering a promising path toward commonsense reasoning, the ability to make plausible inferences about everyday situations.

Commonsense reasoning involves:

  • Understanding context (e.g., "If it’s raining, the ground might be wet.")
  • Making causal inferences (e.g., "If the glass fell, it probably broke.")
  • Handling ambiguity and incomplete information (e.g., "You don’t usually put books in the fridge.")

Deep learning models, particularly large language models (LLMs), can generate such insights implicitly, but they often lack grounding, consistency, and explainability. They might produce plausible-sounding but factually incorrect or incoherent answers.

A neuro-symbolic model is a hybrid system where:

  • The neural component learns from data: unstructured inputs like text, images, or audio.
  • The symbolic component encodes structured knowledge: logical rules, ontologies, or graphs.
  • A reasoning engine or interpreter connects both to perform inference.

Common architectures in use today include:

  1. Pipeline Models: Neural networks extract information, which is passed to a symbolic reasoner.
  2. Differentiable Symbolic Reasoning: The symbolic rules are embedded into neural architectures for end-to-end training.
  3. Memory-Augmented Models: Use symbolic memory stores (like knowledge graphs) to guide neural generation.
  4. Program Induction Models: Generate symbolic programs (e.g., logic queries) as intermediate steps.
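The first of these architectures can be sketched in a few lines: a stand-in for the neural extraction step emits symbolic facts, and a tiny forward-chaining reasoner derives new ones. The extractor and rules below are toy placeholders, not a real model.

```python
# Pipeline-model sketch: a stand-in "neural" extractor emits symbolic facts,
# then a forward-chaining reasoner derives new facts from hand-written rules.

def extract_facts(text):
    """Stand-in for a neural information-extraction step."""
    facts = set()
    if "raining" in text:
        facts.add(("weather", "raining"))
    return facts

# (premise, conclusion) pairs: if premise holds, infer conclusion.
RULES = [
    (("weather", "raining"), ("ground", "wet")),
    (("ground", "wet"), ("shoes", "muddy")),
]

def reason(facts):
    """Forward chaining: apply rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in RULES:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = reason(extract_facts("it is raining outside"))
print(sorted(facts))
```

The appeal of the pipeline pattern is that each half stays simple; its weakness, as noted above, is that errors in extraction propagate into reasoning with no gradient flowing back.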

Some relevant applications in commonsense reasoning include:

1. Question Answering (QA): Models like Neuro-symbolic Concept Learner (NS-CL) or Neural Theorem Provers use symbolic knowledge to disambiguate and infer answers. Example: CommonsenseQA, OpenBookQA, and BoolQ are common benchmarks.
2. Visual Commonsense Reasoning: Combining object detection (neural) with scene graphs and causal inference (symbolic). Example: A model might detect "a person holding an umbrella" and infer it’s raining.
3. Knowledge Graph Completion: Neural models embed entities and relations, while symbolic rules infer missing links. Example: Combining BERT with rules like: “If X is born in Y, then X is from Y.”
4. Language Generation with Constraints: LLMs often hallucinate. Symbolic constraints can guide generation to be consistent with known facts. Example: Guided story generation or goal-directed dialogue agents.

The field of neuro-symbolic AI is rich with diverse approaches that aim to blend neural and symbolic reasoning in innovative ways. Below are some of the most influential models and frameworks pushing the boundaries of commonsense reasoning:

1. Neural Theorem Provers (NTPs): Neural Theorem Provers are differentiable frameworks that aim to emulate logical theorem proving using neural embeddings. Instead of performing discrete logic operations, NTPs work in continuous space, representing logical atoms (predicates, constants) as vectors and reasoning through soft unification mechanisms.

In NTPs, a query is interpreted as a logical goal, and the model "proves" it by recursively matching it to known facts and rules, using vector similarity to measure whether terms unify. This makes it possible to perform approximate logical inference, crucial for dealing with real-world noise and uncertainty.

NTPs are particularly useful for tasks like:

  • Knowledge graph completion
  • Logical reasoning over structured datasets
  • Interpretable inference over symbolic domains
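The core trick, soft unification, can be illustrated with a toy example: rather than requiring two symbols to be identical, compare their embeddings and return a score in [0, 1]. The embeddings below are hand-made placeholders, not learned vectors.

```python
# Toy soft unification: identical symbols unify with score 1.0; otherwise the
# score is the cosine similarity of their (placeholder) embeddings.
import math

EMB = {
    "grandpa":     [0.90, 0.10, 0.00],
    "grandfather": [0.85, 0.15, 0.05],
    "banana":      [0.00, 0.05, 0.95],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def soft_unify(a, b):
    """Soft unification score: 1.0 for identical symbols, else similarity."""
    return 1.0 if a == b else cosine(EMB[a], EMB[b])

print(soft_unify("grandpa", "grandfather"))  # close to 1
print(soft_unify("grandpa", "banana"))       # close to 0
```

In a real NTP these scores multiply along a proof path, so a proof that relies on a weak unification step ends up with a low overall score.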

2. Neuro-symbolic Concept Learner (NS-CL): Developed at MIT, NS-CL is an influential system designed for visual question answering (VQA). It combines:

  • A neural perception module that detects objects and attributes in an image
  • A symbolic reasoning engine that interprets the question as a functional program and executes it over the scene

For example, given an image and the question "What color is the cube next to the red sphere?", the model will:

  1. Use CNN-based object detectors to recognize shapes and colors
  2. Translate the question into a symbolic program like query(color, filter_shape(cube, filter_relation(next_to, filter_color(red, sphere))))
  3. Execute the program using the parsed visual scene as input

NS-CL is a compelling demonstration of how neuro-symbolic models can achieve compositional generalization and interpretable reasoning, especially in visually grounded settings.
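The execution step can be mimicked with a toy interpreter over a hand-coded scene, which stands in for the perception module's output:

```python
# Toy interpreter for an NS-CL-style program over a hand-coded scene.
SCENE = [
    {"id": 0, "shape": "cube", "color": "blue"},
    {"id": 1, "shape": "sphere", "color": "red"},
]
NEXT_TO = {(0, 1), (1, 0)}  # symmetric spatial relation from the "scene parser"

def filter_shape(shape, objs):
    return [o for o in objs if o["shape"] == shape]

def filter_color(color, objs):
    return [o for o in objs if o["color"] == color]

def filter_relation(relation, objs):
    """All scene objects standing in `relation` to some object in `objs`."""
    ids = {o["id"] for o in objs}
    return [o for o in SCENE if any((o["id"], i) in relation for i in ids)]

def query(attribute, objs):
    return [o[attribute] for o in objs]

# "What color is the cube next to the red sphere?"
red_spheres = filter_color("red", filter_shape("sphere", SCENE))
result = query("color", filter_shape("cube", filter_relation(NEXT_TO, red_spheres)))
print(result)  # ['blue']
```

Because every step is an explicit program operation, the answer comes with a full trace, which is exactly the interpretability that end-to-end neural VQA models lack.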

3. COMET (Commonsense Transformers): COMET is a generative neural model that learns to extend commonsense knowledge graphs like ATOMIC and ConceptNet. It takes a simple concept or event (e.g., "Person X goes to the gym") and generates inferential knowledge along various dimensions:

  • Intentions (e.g., "Person X wants to get fit")
  • Effects (e.g., "Person X becomes tired")
  • Reactions (e.g., "Person X feels accomplished")

Trained using transformer architectures, COMET is not explicitly symbolic, but it generates structured outputs that resemble symbolic triples (head-relation-tail). It serves as a kind of “knowledge synthesis engine”, producing new facts from existing seed knowledge.

COMET can be integrated into neuro-symbolic systems as a neural commonsense generator, providing rich contextual priors for symbolic reasoners or downstream models.

4. Logic Tensor Networks (LTNs): LTNs are a type of fuzzy logic-based neural framework. They embed first-order logic into neural networks by allowing logical predicates to take on continuous truth values between 0 and 1, instead of strict Boolean values.

This allows LTNs to:

  • Learn representations that respect logical rules
  • Perform approximate reasoning with uncertain or incomplete data
  • Integrate knowledge bases directly into the learning process

For example, an LTN can learn a rule like "All cats are mammals" and use it to generalize to unseen facts, even when the data is noisy. The learning process optimizes a loss function that penalizes rule violations, thereby grounding logical consistency in the optimization loop.

LTNs are powerful for domains where logical constraints must be respected, such as medical diagnosis, legal reasoning, and formal verification.
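As a rough illustration of how a logical rule becomes a penalty, the sketch below scores the rule "all cats are mammals" using Łukasiewicz implication over fuzzy truth values; the truth values stand in for the outputs of learned predicates.

```python
# Fuzzy-logic penalty sketch: predicates return truth degrees in [0, 1], and
# the rule "forall x: cat(x) -> mammal(x)" becomes a loss term.

def implies(a, b):
    """Lukasiewicz implication: truth(a -> b) = min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

# Placeholder predicate outputs for three entities.
entities = {
    "tom":    {"cat": 0.9, "mammal": 0.95},
    "felix":  {"cat": 0.8, "mammal": 0.40},  # violates the rule
    "tweety": {"cat": 0.1, "mammal": 0.20},
}

def rule_loss():
    """Total penalty for violations of cat(x) -> mammal(x)."""
    return sum(1.0 - implies(p["cat"], p["mammal"]) for p in entities.values())

print(round(rule_loss(), 2))  # 0.4 -- only felix is penalized
```

In an actual LTN the truth degrees are differentiable functions of embeddings, so minimizing this loss nudges the network toward representations that respect the rule.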

5. DeepProbLog: DeepProbLog combines the symbolic power of ProbLog (a probabilistic logic programming language) with deep learning modules. It allows users to define logic programs with neural predicates, meaning that neural networks can be invoked as part of a symbolic query.

For example, you can write a rule like:

digit(X) :- nn(mnist_net, X, D), label(D).

Here, nn(mnist_net, X, D) calls a trained neural network on image X to infer a digit D, which is then used in logical rules.

DeepProbLog enables tight integration between symbolic reasoning and perception tasks, such as:

  • Visual question answering
  • Program induction
  • Probabilistic planning under uncertainty

It also supports probabilistic inference, making it well-suited for real-world environments with noise and ambiguity.
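A rough Python analogue of the idea, modeled on DeepProbLog's classic MNIST-addition task: a "neural predicate" returns a distribution over digits, and the logical rule marginalizes over both distributions. The classifier here is a hard-coded stand-in for a trained network.

```python
# Mock of the MNIST-addition pattern in plain Python: a "neural predicate"
# returns P(digit | image), and the rule addition(A, B, T) marginalizes over
# both distributions. mnist_net is a hard-coded stand-in for a trained model.

def mnist_net(image):
    """Stand-in neural predicate: P(digit | image)."""
    fake_outputs = {
        "img_three": {3: 0.9, 8: 0.1},
        "img_five":  {5: 0.8, 6: 0.2},
    }
    return fake_outputs[image]

def addition(img_a, img_b, total):
    """P(digit(img_a) + digit(img_b) == total)."""
    return sum(
        pa * pb
        for da, pa in mnist_net(img_a).items()
        for db, pb in mnist_net(img_b).items()
        if da + db == total
    )

print(round(addition("img_three", "img_five", 8), 2))  # 0.9 * 0.8 = 0.72
```

The point of the real system is that the probability of the logical query is differentiable with respect to the network's outputs, so the classifier can be trained from supervision on sums alone.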

6. d-PROBLOG / Neuro-Symbolic Inductive Logic Programming (ILP): Inductive Logic Programming (ILP) is a classic symbolic technique where logical rules are learned from examples. Neuro-symbolic ILP approaches like d-PROBLOG integrate neural perception models into the ILP pipeline.

These frameworks aim to:

  • Learn logical rules from noisy data
  • Use neural models to extract symbolic facts from raw inputs (e.g., images, text)
  • Train end-to-end using gradient-based methods

The goal is to combine perception (from neural nets) and structure learning (from ILP) in a single system, leading to interpretable rule-based models grounded in real-world observations.

7. Visual Reasoning Models with Scene Graphs: In tasks like visual commonsense reasoning (VCR), models often combine:

  • CNNs or object detectors (e.g., Faster R-CNN)
  • Scene graph parsers to extract symbolic relationships (e.g., “man riding bike”)
  • Reasoning modules that infer consequences or fill in gaps

These systems use symbolic representations (scene graphs or semantic triples) to support causal and spatial reasoning. For instance, if an image shows someone wearing a helmet and holding a handlebar, the system may infer they’re riding a bike, a commonsense inference that goes beyond raw object detection.
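That helmet-and-handlebar inference can be sketched as one hand-written rule over symbolic triples, with the triples standing in for a scene parser's output:

```python
# One hand-written commonsense rule over scene-graph triples. The triples
# stand in for the output of a neural scene parser.

triples = {
    ("man", "wears", "helmet"),
    ("man", "holds", "handlebar"),
}

def infer(triples):
    """If X wears a helmet and holds a handlebar, guess that X rides a bike."""
    derived = set(triples)
    for subject in {s for (s, _, _) in triples}:
        if ((subject, "wears", "helmet") in triples
                and (subject, "holds", "handlebar") in triples):
            derived.add((subject, "rides", "bike"))
    return derived

print(("man", "rides", "bike") in infer(triples))  # True
```

Real VCR systems learn such patterns rather than hand-coding them, but the symbolic intermediate representation is what lets the inference be inspected and corrected.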

These models represent diverse yet complementary approaches to building AI systems that can reason, not just recognize. While some focus on formal logic integration (like LTNs or NTPs), others use symbolic abstractions over perceptual inputs (like NS-CL or visual scene graphs), and some generate structured commonsense knowledge (like COMET).

As the field matures, future neuro-symbolic systems may increasingly combine these methods, creating agents that can see, learn, reason, explain, and generalize in truly human-like ways.

Open research in this area still faces numerous challenges. Some of these are listed below:

  • Symbolic-Neural Integration: Bridging the discrete nature of symbolic logic with the continuous nature of deep learning is non-trivial.
  • Scalability: Symbolic reasoners don't scale well; combining them with large models is compute-intensive.
  • Knowledge Encoding: Manually writing rules is not scalable; learning them from data is hard.
  • Explainability vs. Performance: Trade-offs between interpretability and raw performance remain.

It’s worth looking at the road ahead: with models like ChatGPT, Claude, and Gemini, we’re seeing neuro-symbolic ideas at scale, whether through retrieval-augmented generation (RAG), tool use, or explicit knowledge grounding.

As AI systems increasingly interact with real-world users and environments, commonsense reasoning will be crucial. Neuro-symbolic AI stands as a powerful approach to make AI both smarter and safer.

In conclusion, neuro-symbolic models offer a compelling route to integrate perception, memory, and reasoning into a unified AI framework. As we push beyond narrow AI into more general intelligence, commonsense reasoning is not optional; it’s fundamental. If you're building domain-specific AI systems that need robust reasoning, interpretability, and generalization, neuro-symbolic methods are worth a serious look.

#AI #CommonsenseReasoning #NeuroSymbolic #LLMs #ArtificialIntelligence #MachineLearning #DeepLearning #KnowledgeGraphs #ExplainableAI #Research #HybridAI
