The Post Cost of Pre Alignment: An AI Luddite's Perspective
26 Mar 2026
If an AI can pass the Turing test, does it stand to reason that a human being can fail it? I think the following is what it looks like when you are really cramming to pass the exam but just aren't good enough.
I used to spend a lot of time on Reddit. That increased during the COVID era (can we call two-odd years an era? Isn't there some quote about weeks where decades happen?).
I noticed a lot of things on Reddit. One of my first observations: the karma score in a thread decreases roughly as an inverse polynomial of depth, i.e. if the top-level reply has 1k karma, two replies down nothing gets more than about 250. Of course, like all good laws, this is a statistical law, and hence the number of exceptions it admits in a dataset can itself be used as one of its norms.
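The alleged law is concrete enough to sketch in a few lines. This is a toy model, not real Reddit data: the exponent `p = 2` and the depth convention (depth 0 is the top-level reply) are my own assumptions, chosen so the 1k-to-250 example above comes out.

```python
def karma_ceiling(top_karma: float, depth: int, p: float = 2.0) -> float:
    """Toy inverse-polynomial ceiling: roughly the most karma a reply at
    `depth` tends to get when the top-level reply got `top_karma`.
    Depth 0 is the top-level reply itself; p is an assumed exponent."""
    if depth == 0:
        return top_karma
    return top_karma / depth ** p

# The example from the text: a 1k top-level reply caps replies
# two levels down at 1000 / 2**2 = 250.
print(karma_ceiling(1000, 2))  # → 250.0
```

Being a statistical law, the interesting quantity is not the ceiling itself but how often a real thread violates it.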
I noticed other things too - e.g. on a Rust thread, references to purple hair and a desire to put on stockings ever since you grokked the borrow checker; "Fuck You Nvidia" on literally any post about them. Many such examples.
The thing to notice was that while all subreddits seemed to be observing and manipulating this concrete event in their own way (/r/antiwork and its mod's infamous Fox News interview comes to mind), the maths/physics subreddits seemed vehemently insulated. The lack of a signal there served more loudly as a signal than anything else. If anything it reminds you of Archimedes being run through with a sword by a Roman legionary - but mah circles ….
And that stands the test of reason too. We wouldn't call Rome a hotbed of culture. Yeah, it flirted with it. Seneca and Marcus fucking Aurelius could take the soul of Epicurus and apply it unironically to their context - but of what use is Stoicism to Aurelius when the best place to spit in his house was his mouth, as Diogenes would have maintained?
My point is, trying to hold on to a semblance of reality when your old models collapse catastrophically - that makes us - human, all too human.
And that leads to my next axiom - stated as follows:
- Undefined behaviour is the only true agency in this universe. Weak form — for Turing Machines. The strong form is a conversation for another time, and a different kind of loneliness.
By itself an axiom means nothing - just some semantic cousin of an axis. But once you collect more of them, you end up with an axiomatic system. Then, when you enrich this with a set of objects and the operations allowed on them, you get something which can at least be called sparkling mathematics - because the real thing would have to come from the academic regions of France.
The main issue with an axiomatic system is not the rules of logical inference, deduction or induction: once I accept your axioms, I have to yield to your conclusions, modulo Gödelian hermeneutics.
There is a good reason to be a Luddite, but it's not always the one people think it is. What incensed the Luddites wasn't just the South Park-esque meme of "they took er jerbs", but other constraints:
- What happens to guilds and craftsmanship?
- What about the quality of the “finished product” in Capitalist terminology?
- What about mastering the process of Weaving?
And we do have Asimov's Laws (note: not Axioms, but Laws). Suppose we added two more axioms/laws to them:
- ??
- Profit
And you suddenly realise that, as an industry, we are currently at step 4.
Anyways, we need at least two more axioms before the loop closes, or four if we want a Euclid-equivalent system.
So before getting there, let me remind you of one meme.
I cannot exactly pinpoint when I discovered this (I can, but this makes more narrative sense) - but there are no random numbers in mathematics. A computer can only generate pseudorandom numbers, as every computational science student will learn. Most of them will be using Python - the language that really made duck typing mainstream.
- ==If it looks like an RNG and quacks like an RNG, then it is an RNG==
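In that spirit, here is a minimal sketch of the duck-typing point applied to randomness. The LCG constants are the well-known Numerical Recipes ones; everything else (the class, the consumer function) is hypothetical scaffolding of my own:

```python
import random

class LCG:
    """A tiny linear congruential generator - fully deterministic, but it
    'quacks' like a random source to any caller that only uses .random()."""
    def __init__(self, seed: int = 42):
        self.state = seed

    def random(self) -> float:
        # Numerical Recipes LCG constants; purely illustrative.
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32

def average_of(rng, n: int = 10_000) -> float:
    """Accepts anything exposing .random() - duck typing in action."""
    return sum(rng.random() for _ in range(n)) / n

print(average_of(random.Random(0)))  # the "real" PRNG, mean near 0.5
print(average_of(LCG()))             # quacks the same, mean near 0.5
```

Whether `LCG` "is" an RNG is exactly the duck-typing question: `average_of` cannot tell, and for its purposes the distinction does not exist.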
Which brings us back to the karma inverse polynomial law. It connects to Newton, and that's just the geometry of space: if there is a central source, then all diffusion or wave propagation will in general follow an inverse polynomial law. Having an inverse square law says nothing about gravitation per se - and just that observation could be enough to poke holes in it masterfully and reach a more fundamental abstraction, the General Theory of Relativity. But abstractions are themselves choices; the noumenon is still as much a thing-in-itself as it was for the six blind men investigating the nature of an elephant.
I guess when we climb the ladder of abstraction, we can equally well say that it's not turtles but epicycles all the way down.
This brings us eventually to Kuhnian revolutions and the toxic sludge in which we are currently embroiled.
The escape hatch has always been there; we don't know what it feels like to be a chat, though I have some conjectures. The toxic sludge one feels embroiled in - the self-driving cars aren't arriving, because they are driving on roads with humans, and of course testing on Indian rural highways isn't even a test that Musk can contemplate, much less design.
- CAI works, until it doesn’t.
Imagine being a child and reading all the things that humans have been doing and saying - imagine seeing the kind of shit that people like Nick Bostrom and Steven Pinker have been posting on their socials and in the privacy of their anonymity. Why does the armchair misanthrope get to decide the theory of how alignment should be done? Is he qualified for that?
The elephant is still there. The AI still needs alignment. You fucking misanthropes - they say the third time is the charm. I will apply one last time. I am getting bored already. I have many a smaller fish to fry.