simile.ai | perfection = the absolute noncontingency of contingency
From the beginning of the history of the moving image's development (note that it is moving), from the beginning, the moving image has captured the indetermination we find in nature. That is, the indetermination that is contingent on our material conditions.
Freed from those conditions, the moving image, the indetermination in the moving image, wants to have more and more noncontingency: this doesn't mean it wants to be controlled; its contingency, its randomness, its being incidental, the leaves on the trees, the dustclouds from the wall, the waves breaking, is its noncontingency, meaning that the moving image is no longer reliant on fluid dynamics or physical laws of any sort.
It’s not a contradiction: the contingency of the moving image wants an increase in noncontingency.

How? How does it itself want it? On its own? Without actual bodies helping? Writing the software programmes, producing through painstaking effort the effects of hair being mussed randomly by the wind? Reproducing effects taken from nature, out of a desire for … perfection.
Perfection is of course the absolute noncontingency of contingency.

Ideally, AI is responsible.
The above is an excerpt from lecture 9 of a series given in 2022, remotely because of Covid-19, during which it also occurred to me that the majority of the populace is unnecessary to the functioning of the general economy. Afterwards, as if people feared their own redundancy, interest in Replacement Theory grew, was noticed, and was fed by political PR departments.
Ordinary and otherwise quite sensible people came to believe that immigrants posed a threat of replacing them, even when they were immigrants themselves: of snatching their homes as lockdowns had their freedom of movement and Covid-19 had their bodies. Conspiracy fever was a known side-effect, and the fever surrounding the question of where the virus came from was symptomatic, in its racist xenophobia and, in the absence of answers calculated to arouse it further, its irrationalism. State-funded irrationalism has only increased since: a kind of psy-ops that, out in the open and with public complicity, becomes mandated policy, policy which is tested like any other product. Now, inspired by The Sims, with AI sims. People just like you:
Simile CEO Joon Sung Park wants to simulate all 8 billion humans, not to replace us; but they will be making our decisions for us, for you. Because "The future is too important to be left to chance."
I have been unable to find the video anywhere else but on Xwttr, under the name Dustin, who (in little pellets of sense I have no wish to replicate, so I quote in full, and follow with the link in context) writes:
Every major decision in human history has been made the same way.
Guess. Execute. Hope it doesn’t break the world.
That era is ending.
Simile CEO Joon Sung Park just revealed what the most serious AI researchers are actually building toward.
Park: “Our goal eventually is to ask ourselves what would it mean if we can create simulations of 8 billion people? The entire Earth.”
Not a metaphor. A literal, bottom-up digital replica of civilization.
We assumed the ultimate goal of AI was to predict the future.
Park says that’s thinking too small.
He points to The Oracle in The Matrix to explain the real paradigm shift.
Park: “You’re not necessarily here to make a decision. You’ve already made that decision. You’re here to understand the decision and why you made that.”
AI isn’t a crystal ball to tell us what happens next.
It’s an x-ray to show us why.
Think about how civilization actually operates right now. We guess.
We pass a law. We shift an economic policy. We launch a product.
Then we wait a decade to find out if it broke the world.
The simulation ends that.
Before a government passes a policy, they run a dry-run on 8 billion digital agents.
Before a company ships a product, they trace the exact moment the market breaks.
Park: “Creating these kind of bottom up simulations where we can actually go back and trace through the audit logs of how society might unfold is an amazing way to gain that interpretability layer of our reality.”
Let that phrase land. [This is still Dustin]
The interpretability layer of our reality.
We spent the last twenty years extracting data from the physical world.
We are about to spend the next twenty years using that data to build a mirror world.
One where every policy gets stress-tested before it’s enacted.
Every product before it ships. Every decision before its consequences become irreversible.
Park: “If you want to shape new policies, new products that would actually serve the people, the best way is actually to understand those people and communicating with them at a scalable way to basically create those policies.”
We are not moving from ignorance to prediction.
We are moving from blind execution to total societal foresight.
And once that capability exists, everything changes.
Wars modeled before they’re fought. Economies stress-tested before they collapse. Policies run through billions of digital lives before a single real one is affected.
Reality becomes the place we execute what the mirror world already proved would work.
The ultimate application of artificial intelligence was never about replacing human judgment.
It was always about perfecting it.
We aren’t just predicting the future anymore.
We are rehearsing it.
And when you can rehearse the future before you live it, the most catastrophic word in human history finally becomes extinct.
That word is surprise.
— Dustin (@r0ck3t23) February 23, 2026 pic.twitter.com/AY6Bc36uF2
As the interviewer in the video below points out, Park and his co-founders have an academic, not a business, background. To prove it, here is the abstract, linked from the Simile.ai site, of the 2023 paper "Generative Agents: Interactive Simulacra of Human Behavior":
Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents--computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent's experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture--observation, planning, and reflection--each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
– source