Ragav Yarasi

If AL, then AGI

Chennai, Tamil Nadu, India · 26th October 2025 · thoughts · 7 min read

I'm struck by how discussions of AGI rarely address other facets of our own apparatus that our intelligence is part of. Looking at some of the foremost experts in this debate about the race to AGI and its timeline, I have yet to find even one person who has referenced what I consider the fundamental barrier to AGI.

This is my rather casual attempt at bringing this topic into the broader conversation. I must begin with a disclaimer that I may introduce more questions than answers. But these things always start with knowing what the right questions are, and surfacing them is what this attempt is meant to do.

First, we must acknowledge that the general (and even scientific) understanding of ourselves is rather alarmingly lacking. We do not have definitive answers to some of the most fundamental questions, such as "what makes us distinct as living beings?"

The pursuit of AGI aims to replicate the general nature of human intelligence in artificially created systems. The correct way to frame this attempt is to see it as an attempt to circumvent life in creating general intelligence. Creating general intelligence systems is nothing new: human reproduction itself creates a new, distinct entity capable of general intelligence. The problem is that this new entity comes with its own agency and life.

What is AGI?

Before delving deeper into how I believe we can solve the problem of achieving AGI, we must first define what AGI is.

Intelligence is the ability to achieve predetermined objectives in novel contexts. The word "general" indicates that the nature of the predetermined objective is unrestricted. And "artificial" simply means that we as human beings are the principal architects of it.

Can AGI Even Exist Without Agency?

With this framework of AGI, there is a question of whether or not AGI requires agency. Can a true AGI system even operate without the ability to create its own agenda?

And is that all it takes for a system to be considered capable of agency? Or does it require the ability to override any existing agenda that may have been imposed upon it? What is agency if one cannot override one's programming? Could we make that the formal definition of agency in this context?

What is Agency?

Agency is the potential to willfully alter one's programming. The keyword here being "willfully." Volition and agency in this context are one and the same.

What is Life?

If we are to truly achieve AGI, we must pay attention to what it is that makes us alive. I believe that you cannot have volition without desire. Desire is the root of everything we do. Our intelligence is like the flower on a plant, and desire is the root that gives life to the flower.

My Assessment of Present-Day AI

Artificial intelligence of today is just like a plastic replica of this flower. LLMs are akin to a static, 3D-printed replica crafted by large computational systems trained to find the patterns in human use of language. There is so much more to human intelligence than our ability to use language.

All of today's LLMs can only replicate our ability to string together words in combinations that make sense to us, because they were crafted from a deep study of how we use language.
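To make the "pattern-finding" claim concrete, here is a deliberately minimal sketch of my own; it is not anyone's actual architecture. The simplest possible next-word model just counts which word tends to follow which, a crude miniature of the statistical objective that LLMs scale up enormously.

```python
from collections import Counter, defaultdict

# Illustrative assumption: reduce "learning patterns in language" to its
# simplest form, a bigram counter that records which word follows which.
corpus = "the flame of life carries the flame of desire".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # "flame" — the most common continuation
print(predict_next("flame"))  # "of"
```

The model strings words together in combinations that look sensible only because it studied how the corpus uses them; nothing in it resembles desire or volition, which is precisely the point of the plastic-flower analogy.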

Can we arrive at agency from this approach? I think not. So arriving at true AGI from this approach seems unlikely to me. But that doesn't mean we cannot do useful things with it. Considering how integral language use is to all facets of civilized life, we can do a great many things of value using refined LLMs.

Current-day LLMs have certain limitations that become visible upon extended use. For instance, they have limited memory and cannot incrementally learn new things through use, retain that learning in meaningful ways, and build upon it. I'm sure we can build adaptations on top of these systems to eliminate these issues over time. But will that lead to AGI? I'm unsure that there is a direct line between optimizing LLMs and the emergence of volition.
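To illustrate the memory limitation specifically, here is a hypothetical sketch. The window size and the truncation policy are illustrative assumptions of mine, not any specific model's implementation.

```python
# Hypothetical sketch: why a fixed context window causes "forgetting".
CONTEXT_LIMIT = 8  # tokens the model can attend to at once (illustrative)

def visible_context(conversation_tokens):
    """Only the most recent CONTEXT_LIMIT tokens reach the model;
    anything earlier is silently dropped, not learned."""
    return conversation_tokens[-CONTEXT_LIMIT:]

history = ["my", "name", "is", "ragav", "and", "i",
           "live", "in", "chennai", "today"]
print(visible_context(history))
# "my" and "name" have fallen outside the window: the model no longer
# sees them, and nothing about them was written into its weights.
```

This is why adaptations like external memory stores exist: they re-inject dropped information into the window, but the underlying system still never incrementally learns from the interaction itself.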

Artificial Life

I believe that we have yet to codify the root mechanism within us that gives rise to our ability to generate a new desire and act upon it. Once we do that, that will be a fundamental breakthrough in creating artificial systems with agency.

But is that really so straightforward to achieve? When you acknowledge that every human being who has ever existed was born of two parents, one male and one female, and that each of them has a direct ancestry tracing back to the original source of all life, you must wonder: could life actually be something akin to a flame?

You can't keep a flame alive forever if the thing that it is burning runs out. You need a new substrate for that flame to exist and thrive. And if you think about the process by which humans of today have persisted, it is through direct transference of that flame from one generation to the next, with direct ancestry that leads back to the very first origins of life.

Perhaps there was more than one initial spark that led to the flame that has been passed on to become what life is today. Perhaps it is possible again for lightning to strike from above and spontaneously create a new flame. Or maybe we strike rocks together to create a new flame of life.

But given the facts of biological evolution and how time matures intelligence, you could perhaps say that there is significance to the age of the flame. The substrate may change, but the flame itself has direct continuous existence in the dimension of time leading back all the way to the initial spark that led to its existence.

And so perhaps even if there was to be a spontaneous flame of some sort that leads to a new kind of life with its own intelligence that takes its own course of biological evolution, there is something distinct about the nature of human intelligence, given the amount of time it has had to mature through the millennia.

The One Thing That I Am Certain Of

In all that I've stated above, some of it is speculation and some of it consists of disparate observations I've made. I am not sure of the validity of any of it.

The only thing that I am certain of is that there is functional value to our pursuit of artificial intelligence in that it may lead us to learning the truths of our own nature. As we pursue this journey of creating artificial intelligence, we may just discover the nature of who we are.

I do not know if the fears we have of superintelligence will be vindicated, but it speaks volumes about the nature of our own intelligence that we fear the prospect of something greater than us that we may not be able to control.

Connecting Artificial Life (AL) to AGI

This brings us back to the central thesis of this post: AGI possibly requires AL to exist first. The path to true general intelligence may not be through incremental optimization of pattern-matching systems, but through understanding and replicating the fundamental mechanisms that give rise to desire, volition, and life itself.

The flame of life carries with it not just the capacity for intelligence, but the drive to use it. Without that drive—that fundamental spark of desire—we may create impressive tools, but never truly achieve AGI.

A Note to AI Researchers

If you wish to mimic the outcomes of human volitional capabilities in artificial systems, consider this approach: find a way to tokenize both the internal states and the full sensory input of human individuals, at a large enough scale, for long enough. If you could then apply the same principles used to build LLMs to create Large Behavioral Models, we could mimic human behavior in artificial systems.
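To make this proposal slightly more concrete, here is an entirely hypothetical sketch of the first step: discretizing continuous internal-state and sensory readings into a shared token vocabulary, so that LLM-style sequence modeling could in principle be applied to them. Every channel name and binning choice below is an assumption of mine, not an existing system.

```python
# Hypothetical: map continuous bodily/sensory readings to discrete tokens,
# the way text is mapped to tokens before LLM training.

def tokenize_reading(channel, value, bins=4, lo=0.0, hi=1.0):
    """Map a normalized reading on a named channel to a discrete token."""
    value = min(max(value, lo), hi)               # clamp into [lo, hi]
    bucket = min(int((value - lo) / (hi - lo) * bins), bins - 1)
    return f"{channel}:{bucket}"

# A single moment of experience becomes a token sequence, analogous to words:
moment = {"heart_rate": 0.8, "gaze_x": 0.3, "hunger": 0.1}
tokens = [tokenize_reading(ch, v) for ch, v in moment.items()]
print(tokens)  # ['heart_rate:3', 'gaze_x:1', 'hunger:0']
```

Long streams of such tokens, collected across many people, would be the training corpus of the hypothetical Large Behavioral Model; the open question raised in the next paragraph is whether modeling those sequences captures the flame, or only its shadow.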

We'd be closer to achieving more human-like AI systems, but perhaps still without true volitional capabilities. The question remains: can we capture the flame itself, or only its shadow?