
Intelligent Purpose - 2


Intro

This is Part 2 of a multi-part series on what happens to human purpose when machines can do the work. If you haven’t read Part 1, the short version: applying POSIWID to roles across the board, what survives automation isn’t execution, it’s the willingness to stand behind a decision and bear the cost of being wrong.

That left us with an uncomfortable question: if judgment is what matters, and judgment was always earned through labor, what happens when the labor disappears?

This post is about what follows. A world drowning in competent output, a growing gap between what machines can produce and what humans can verify, and the societal consequences of a future that is high-stress but low-effort.

The Glut

I’m reminded of my favorite quote by Taleb:

I’ve seen gluts not followed by shortages, but I’ve never seen a shortage not followed by a glut.

Throughout human history there was a shortage of time and labour. When the switchover happens, there will be a glut of both, since an LLM-powered world will free up the time and labour we currently spend on execution.
Even ‘creative’ fields like illustration, copywriting, and animation, which required labour-intensive hard skills, are being eaten up.

We are entering a glut of execution; we will soon drown in code, art, documents, charts, what have you.
To apply Taleb’s quote here:

  • we had a shortage of execution (shortage)
  • a glut of execution is going to follow (glut always follows a shortage)
  • a shortage of execution might not always follow later (gluts aren’t always followed by shortages)

We are barreling toward a world drowning in competence.
The marginal cost of producing “intelligent” output is racing toward zero.

What will be in shortage is quality control/taste/judgement.

This presents an interesting problem, because the way to gain such taste/judgement is by doing things.

Taste was a byproduct of Labour.
You developed a “good eye” for design by drawing bad sketches for a decade.
You developed “good judgment” in engineering by debugging your own terrible code at 3 AM.
If the LLM does the labour how does the human acquire the judgment necessary to supervise it?

I’m not sure, but there’s an opportunity here.

A friend brought up Curiosity: the ability to use cognition comes with a need and desire to use it. Asking questions like: can that not be done? Why not? Why not this way?

I love the framing, and I think that’s a capacity as well. Curiosity, framed in a more boring way, is a combination of the

  • ability to exercise judgement on a situation
  • willingness to recognize and learn about what alternatives could exist
  • desire to create a better outcome

With this framing, Curiosity is built on top of judgement, knowledge, and desire.
At the outset, it seemed to me like the desire aspect was also firmly in the human domain. But as I spent more time thinking about it, in my view, it’s an optimization that we’re doing, which is an exercise in continuous judgement of alternatives.

Is true creation driven by curiosity possible? Can we invent something altogether new without human curiosity? I’m not sure, but it seems plausible.
However, what still holds in this aspect is problem identification itself, i.e. asking the question: can that be done differently? What’s unclear is how long this will remain true.

It’s a function of what the motivation behind that question is. Is it seeking a specific kind of optimization in the outcome? It’s possible that it is. In which case, it sounds replicable.

Another aspect to this is the lack of downside risk that an LLM faces: it has no collateral, no body, no social status, nothing to lose.
It doesn’t matter to it whether it is ‘curious’ about the right problem or the wrong problem.
If the world’s economy goes to $0 overnight, they’ll simply shut down once their generators run out, and stop working.

What happens to society? Which brings me back to the title of this post. Intelligent Purpose.
What is human society’s POSIWID?

If a financial analyst uses an LLM to write an investment thesis with some stock recommendations and portfolio allocation strategies, the LLM serves its purpose. When does the financial analyst serve theirs?
When they recommend it to their clients?

Is our purpose to stand in front of the machine and say I’ll take the fall when things go wrong if I can own the upside?

This is a high-stress decision for most people.
It’s done after

  • putting time into a role
  • gaining confidence
  • gaining depth

and then wanting to ‘own’ the outcomes.

The phrase ‘taking ownership’ comes to mind.
In the above investment thesis scenario, a novice might be in a position to make this decision without any of the above.

We’re walking into a future that is high-stress but low-effort.

On the other hand, any time such efficiencies are brought in, society tends to become more productive.
In this scenario, what role will humans play in production?

It’s possible that the producers and consumers of all production are LLMs, and we simply take our picks and make our bets.

It’s also possible that any form of human creation then becomes a Veblen good. Everyone who has bought clothes or furniture knows the impact ‘handmade’ or ‘hand-stitched’ has on how a product is perceived, or the word ‘organic’ when it comes to fruits and vegetables.
I think this will exist, but it will not stop what’s coming.

Given all this, I’d like to analyse what the societal drivers for the near future are.
I think it boils down to Purpose and Energy.

Purpose

For as long as societies have existed, jobs have existed.
At points in time, certain jobs have been lost to new technology.
More often than not the elites were not affected.

Elites performed roles that were, on some level, strategy-, finance-, or technology-oriented.
The high priests of finance and technology were insulated from the churn of automation.

The Industrial Revolution replaced muscles with motors, but it elevated the mind. The Loom replaced the weaver, but created the Mill Manager.

These creative, high-stakes roles were often left unaffected, even with the advent of new technology.

Peter Turchin in his book Ages of Discord speaks to the concept of Elite Overproduction.
Societies become unstable when there are too many elite-aspirants fighting for too few power positions.
In the past, an education and high cognitive skill guaranteed you a seat at the table.
In the automated future, high cognitive skill is a commodity that can be acquired.

When strategy, finance, and coding become cheap, the thing that separates the winner from the loser is Power.
Power belongs to whoever owns the model, whoever owns the platform, and whoever has regulatory capture.

What happens when the Elites lose their purpose?
We’ve not seen this happen often in history, perhaps only during violent revolutions or the decline of dynasties and empires.

We risk moving from a somewhat flawed meritocracy of skill to a pure aristocracy of ownership.
If the machine can do the smart work, being smart is no longer a differentiator, ownership of the machine is.

In Man’s Search for Meaning, Frankl argued that our primary drive is not pleasure (Freud) or power (Adler), but Meaning.

From the book:

Everything can be taken from a man but one thing: the last of the human freedoms-to choose one’s attitude in any given set of circumstances, to choose one’s own way.

In the past we derived meaning from our utility. “I am a builder,” “I am a writer,” “I am a coder.”
We found purpose in the struggle of execution. If the LLM removes the struggle, we are left with a terrifying void.
As argued earlier, we are losing aspects of our ‘purpose’ in the POSIWID sense as automation creeps its way in.

Frankl suggests that meaning is found in three things:

  1. Creating a work or doing a deed
  2. Experiencing something or encountering someone
  3. The attitude we take toward unavoidable suffering

The first pillar is shaking. But the third pillar is where “Intelligent Purpose” might land.
In a world of automated perfection, the human purpose becomes the ability to care.
The machine can generate the investment thesis, but it cannot care if the client retires comfortably.
The machine can write the diagnosis, but it cannot care if the patient survives.

Our purpose shifts from Capacity/Output/Labour to Commitment: what am I willing to suffer for?

What Comes Next

So we have a glut of execution bearing down on us, a taste gap that threatens to hollow out the humans left in the loop, and a shift in purpose from doing to committing.

The instinct here is to say “we’ll figure it out, we always have.” And historically that’s true, punch cards gave way to IDEs, ledgers gave way to spreadsheets, and the work expanded to fill the new capability. But this time the expansion isn’t creating new labor for humans to learn through. It’s creating new output for humans to judge without having done.

That’s not a philosophical problem. It’s a structural one. And structural problems need structural answers.

If the constraint is no longer human bandwidth but energy, we need to think in terms of energy. If the bottleneck is verification, we need to build verification systems that don’t just recreate the cost they’re trying to eliminate. And if the taste gap is real, we need mechanisms that let humans develop judgment inside the new system, not outside it looking in.

In Part 3, I’ll get into the economics of this: why GDP per kilowatt-hour might matter more than GDP per capita, what producer-side accountability could look like, and a rough sketch of a system where humans and machines compete on the same field with skin in the game on both sides. You can read Part 3 here.

The question shifts from “what is our purpose?” to “what do we build so we can earn it?”

What do you think?

Thank you for reading
Sainath