progress is a bad optic and the failure of political imagination

Diplomatic progress has been abandoned in favour of war.

War is an academic exercise, with academics weighing the blood.

Blood weighs more where there are geopolitical stakes. Gaza is weightless. War weighs more if it's weighed in ordnance which can be converted to orders and sales.

This is knowable. The questions put to academics beg the question of political epistemology. That is, they assume it, begging the question of its limits. Questions take the form of [You cannot know the answer, but] What's next?

Accept frustration. The full story is unlikely ever to come out, but AI brings the spoils of information, the spoils of knowledge. In an earlier post I found little to add to the bare picture of this extraction which extracts human agents from actual matters of political economy and replaces them with AI agents, apart from to say, Doesn't this just show you, we were not necessary in the first place.

A great sadness broke out with the surprise attack on Iran. My sadness is that it occurred in the second week of Ramadan when everywhere around where I am, earlier in Muscat, Oman, and earlier in Manama, Bahrain, now back in Riyadh, families are celebrating together, doors are open to strangers and there is a feeling of thanksgiving, in Arabic ٱلْحَمْدُ لِلَّٰهِ. At an altogether different level is the sadness of Saudi and other Arab mediators, negotiating a peaceful and diplomatic agreement between the US and Iran, an agreement preemptively to stop war.

All this effort, over how many years, I heard a Saudi negotiator say, wasted: now comes the nightmare. The nightmare of nightmares.

It didn't look good for war, so war-stopping was stopped, diplomacy was over. What then is the entry point for the extraction of the spoils promised by the new future technology?

It's not the statistical extraction of all possible human actions, by simulating individual sims, to simulate likely outcomes. This political knowledge is pushed away beyond the epistemological horizon. Access is prohibited.

A guardrail discussion on the implementation of AI preceded the unprovoked attack on Iran. As well as refusing their use by the state for surveillance, tech companies refused US DOD demands to remove Asimov's first law, that autonomous units may not harm humans or allow humans to come to harm. Weapons using AI follow the inverse of these laws, and the rails they run on are the guardrails.

They are used for surveillance, to identify, tag and target humans. Between accuracy in targeting and what that means for the targets the scale goes from 0 to 1. The range in possible outcomes is ideally eliminated. It belongs to the same informational black hole as the answer to the question put to academics about this war, What's next?

Technological progress always equalled improving weapons to target prey. In the nuclear age targeting and what's next for the targets, death, injury or vaporisation, part company.

What happens is still important. It just can't be seen. It has no optical valency.

It could be shown. The girls killed in Minab, Iran, will be shown, as one commentator has said, every day for months on TV. They belong to a viable political optic, the targeted, and not in any measure to the futurity of those who will be next.

What has gone missing is a part of the imagination, particularly the political imagination, which is necessary for diplomacy. It is the capacity of a thing to take a virtual form that is no less real than the actual. In this case, the thing is an object of political perception, which has ceased to be perceptible to it.

A number of young girls. They may be counted, but as the ones who were targeted, after the fact, the countless victims.