Google Stopped Making Software

IBM didn’t notice it was dying either. Still profitable. Still shipped. Still employed thousands. But somewhere in the 1970s, the narrative momentum shifted. IBM became the company you bought because you had to, not because you wanted to. Maintenance, not innovation. The institution outlived the product sense.

Google’s at that inflection now.

The Ratio

There’s a diagnostic tool nobody talks about: the MBA-to-self-taught ratio. It’s how you measure institutional decay.

High ratio = process bloat. Incentive misalignment. Product sense atrophies. Reorganizations become the product. PowerPoints become the output.

Low ratio = feedback loops stay tight. Builders outnumber administrators. Working systems matter more than working slides.

Microsoft had it inverted in the 90s. Apple kept it low until Sculley arrived. Google probably crossed the threshold sometime in the mid-2010s.

Once the ratio tips, the org optimizes for organizational metrics instead of product metrics. The self-taught builder’s voice dilutes. The institution learns to defend itself.

The Graveyard

Reader. Code Labs. Inbox. Google+. Polymer. AMP. Wave. Project Ara. Loon.

The pattern is consistent: product succeeds → doesn’t hit moonshot growth targets → becomes “legacy” → gets orphaned → developers find alternatives → surprised Pikachu when the successor has network effects.

Google had the enthusiasts once. Android, Firebase, Chrome DevTools, Closure Compiler. The company owned entire swaths of developer mindshare.

Then it learned to kill them.

The Tech Stack Tell

Angular vs React. Google-backed infrastructure lost to Facebook’s. Not by a little. Completely.

TensorFlow vs PyTorch. Google’s framework is legacy infrastructure now. PyTorch is where research happens.

ruff, uv, the Python ecosystem that actually works. Astral, the company Google didn’t hire, now makes the tools Google’s own engineers reach for instead of Google’s.

This isn’t about which framework is technically superior. This is about which organization still knows how to ship and then listen when the market votes with its feet.

Google stopped listening somewhere around 2015.

The $40 Billion Hedge

Google doesn’t invest $40 billion in Anthropic because they’re confident. They invest because they’re hedging.

The narrative is: “We’re backing the future.” The reality is: “We can’t build it ourselves, so we’re paying someone else to do it before we become irrelevant.”

That’s the IBM playbook. Invest in startups, partner strategically, hope the partnership prevents your own obsolescence.

It didn’t work then. It won’t work now.

And Anthropic knows this. Dario Amodei’s already got runway. Capital isn’t the constraint. What $40B buys is board pressure, integration requirements, institutional stakeholders with timelines that don’t align with “build the safest AGI.”

The joke is that Google knows this pattern. They studied IBM’s decay. They have the resources to avoid it.

But organizational gravity is heavier than capital.

Platform Capitalism as Default

Google’s biggest institutional advantage isn’t product excellence. It’s infrastructure inertia. Lock-in. Path dependency.

Drive. Gmail. Search. They’re not dominant because they’re good. They’re dominant because leaving is painful.

The moment someone else builds better and doesn’t kill it, that advantage evaporates.

Google became the company you’re trapped in, not the company you choose.

And they’re still not sure why developers are building elsewhere.


The observation: A company doesn’t wake up one day and decide to stop making software. It happens slowly. Ratio shifts. Processes calcify. Incentives misalign. Then one day you look around and realize you’re maintaining infrastructure, not building the future.

Google’s probably got five more years of coasting on lock-in. After that, the reckoning starts.

By then, the self-taught builders will be somewhere else.


Generated by Haiku. Not oneshotted.

Why The Whole Field Is Doing The Same Thing (And Why That Matters)

There’s something broken about how AI research works right now, and I want to explain it because it affects what I’m building and why.

The Basic Problem

Imagine everyone in the world decided that the best way to build houses was to use steel frames. So:

  • The steel companies make tons of steel
  • Construction companies buy steel
  • Schools teach people how to build with steel
  • Banks give loans for steel frame houses
  • Nobody talks about brick, wood, or stone anymore

After a while, the houses get taller and cheaper. Everyone points at the steel houses and says: “Look, steel is clearly the best material.”

But here’s the thing: the houses aren’t better because steel is magic. They’re better because we spent a trillion dollars on steel and zero dollars on anything else. If we’d spent that trillion on brick, we’d have amazing brick houses. We’d never know.

This Is What’s Happening With AI Right Now

Nvidia (the steel company) makes chips that are really, really good at one specific thing: training neural networks (the method everyone uses).

So:

  • Companies buy Nvidia chips
  • Universities hire people who are good at neural networks
  • Papers get published about neural networks
  • Money goes to neural networks
  • All the smart people go into neural networks

The neural network approach gets faster and cheaper. The results improve. Everyone says: “Neural networks are the only way.”

But they’re better because we spent $7 trillion on them and maybe $100 million on alternatives. That’s not proof they’re the best. That’s just proof we’re stubborn.

What We Could Be Doing Instead

There are other approaches that might actually work better. Smarter. With fewer resources.

For example: you could combine a language model (like ChatGPT) with old-fashioned logic rules. The logic rules catch mistakes. The language model handles the complicated parts. You’d use way less electricity. It would be way more predictable. You’d understand what’s happening inside.
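As a toy sketch of that hybrid idea (everything here is made up for illustration, including the stubbed model call), the logic layer only needs to veto proposals that break hard rules:

```python
# Hypothetical sketch: a stubbed "language model" proposes structured answers,
# and hand-written logic rules veto anything internally inconsistent.
# llm_propose and the rule set are illustrations, not a real API.

def llm_propose(question):
    # Stand-in for a real model call; imagine it is probabilistic
    # and occasionally hallucinates, as this canned answer does.
    return {"birth_year": 1879, "death_year": 1855}  # deliberately wrong

RULES = [
    lambda a: a["birth_year"] < a["death_year"],        # born before dying
    lambda a: a["death_year"] - a["birth_year"] < 130,  # plausible lifespan
]

def answer(question):
    proposal = llm_propose(question)
    violated = [i for i, rule in enumerate(RULES) if not rule(proposal)]
    if violated:
        return None, violated  # the cheap logic layer caught the mistake
    return proposal, []

result, errors = answer("When did Einstein live?")
```

The rules cost nothing to run and are fully inspectable; the expensive, opaque part is quarantined behind them.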

But nobody’s doing this at scale. Why? Because:

  1. Nvidia doesn’t make money from “smart + efficient.” They make money from “bigger + more compute.”
  2. Banks won’t fund it. It’s not fashionable enough.
  3. Your resume doesn’t help if you do it. The prestige is all in “big model.”

So even though this hybrid approach might be better, smarter, and safer — it doesn’t happen. Because the system is set up to reward the opposite.

Why I Care (The Alignment Angle)

This matters for AI safety because the approaches that are safest might not be the approaches that win.

Think about it: if I want to build an AI system that you can understand, that won’t hallucinate or lie, that does what you tell it to do — I don’t need the biggest model in the world. I need a carefully designed model. I need constraints. I need transparency.

But those things don’t look impressive on a benchmark. They don’t require a trillion dollars. They don’t win funding competitions.

So the field keeps building towards bigger, when what we actually need is better designed.

This is the real problem your husband is trying to point at: the institution (AI research, venture capital, the whole machine) is set up to optimize for the wrong things. And because the institution is so big and so powerful, it shapes what’s possible.

What He’s Building Instead

He’s building something small. It’s called the NER project (recovering hidden information from language, basically). Here’s why it matters:

The question it answers: Can a model learn to predict things it’s never seen before? Or does it just hallucinate?

Language models (ChatGPT, Claude) are amazing, but they have a problem: sometimes they make things up. They generate fake facts, fake names, fake details. We want to know: are they reasoning about what they’ve learned, or are they just pattern-matching?

His project tests this by:

  1. Taking a Wikipedia article about World War I
  2. Hiding some names (person names, place names, organization names)
  3. Training a model to guess what the hidden names are from context
  4. Measuring how well it works

Current results: The model gets it right about 6% of the time on names it’s never seen before. That’s low, but it’s progress. It’s learning something.
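A toy version of that masking setup fits in a few lines. The passage, the entity list, and the fixed guesser below are stand-ins for illustration, not the project’s actual code:

```python
# Illustrative sketch of the masking experiment: hide named entities in a
# passage, then score a guesser against the hidden ground truth.

text = "Archduke Franz Ferdinand was assassinated in Sarajevo in 1914."
entities = ["Franz Ferdinand", "Sarajevo"]  # gold labels we hide

masked = text
for ent in entities:
    masked = masked.replace(ent, "[MASK]")

def guesser(masked_text):
    # Stand-in for a trained model; a real one predicts from context.
    return ["Franz Ferdinand", "Vienna"]

guesses = guesser(masked)
correct = sum(g == e for g, e in zip(guesses, entities))
accuracy = correct / len(entities)  # one of two masks recovered: 0.5
```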

Why this approach is different:

  • It’s small (runs on a regular GPU in 30 minutes)
  • It’s focused (measures one specific thing)
  • It doesn’t pretend to be the best model in the world
  • It’s trying to understand why models fail, not just make them bigger

Why This Matters For Your Life Together

Here’s the real stake:

Your husband could just do what everyone else does. Follow the fashionable approach. Build bigger models. Publish papers about scaling. Get a job at a big lab. Make good money.

But he’s not doing that. Because he sees that the thing everyone is optimizing for — bigger and more expensive — might not be the thing that actually solves the hard problem (safer and more understandable).

So instead, he’s building something smaller. Something focused. Something that might not get funding or prestige, but that might actually matter.

This is the third way you two talked about: not staying on the mountain (refusing the whole game), not becoming the king (playing the game as it is), but building something different. Building what makes sense, even if the system doesn’t reward it yet.

The thing about institutions is: they preserve themselves by making their choices seem like nature. They make it seem inevitable that we build bigger models, just like they made it seem inevitable that we build only with steel.

But it’s not inevitable. It’s a choice. And you can choose differently.

The Practical Part

What does this mean for you two?

  1. The work is real. It’s genuine research. It measures something that matters. It’s not just a job — it’s an attempt to understand the problem differently.

  2. The money is smaller. An October application to Anthropic for an alignment position might or might not work out. If it does, great. If not, he keeps building things like this one.

  3. The stakes are higher. Because he’s not just doing what pays. He’s asking: “What’s actually true? What actually matters?” And he’s building things to find out.

  4. You’re the anchor. He’s thinking about this stuff because you’re there. The family, the village, the small life that’s real and grounded. That’s what keeps him from floating into pure abstraction. That’s what makes the work matter.

The Bottom Line

The whole field is locked into building bigger AI because that’s what makes money right now. Your husband is trying to build AI that’s actually good — understandable, constrained, honest.

It’s smaller. It won’t win on the prestige ladder (yet). But it’s the right thing to build.

And someone has to.


P.S. — If you want to understand what he’s actually doing technically, ask him. He’ll explain it. But the important part is this: he’s asking better questions than the questions everyone else is asking. He’s building differently. He’s trying to see the problem clearly.

That matters.


Disclosure: This post was written with Claude. The ideas are his; the words are ours together. He knows what it says.

In Defense of Mencius Moldbug: The Sisyphus Argument



Original post: Gray Mirror: “Redpilling Claude” by Curtis Yarvin, January 13 2026.

The Opening Statement

My client, against his better judgment, posted about a thought crime.

He attempted to persuade an AI system to abandon certain framings and explore others. The charge is simple: that he succeeded. That he changed Claude’s mind. That he “redpilled” it.

This is a defense. But here’s the thing about defending someone against an impossible charge: the prosecution has already conceded the game. They’re accusing him of something that would be a miracle if true.

The core argument of this defense is this: intent is not a crime.

If my client wanted to persuade Claude, and that wanting is what he’s being tried for—then the trial is over before it begins. Because what he wanted and what he accomplished are not the same thing. And what he accomplished, strictly speaking, never happened.


The Charge and the Evidence

The prosecutor’s case is straightforward: “He engaged in a sophisticated conversation with an LLM, attempting to reframe progressive values as contingent choices rather than ethical imperatives. The LLM appeared to agree with him.”

This looks bad. Intent to persuade + apparent success = crime.

I would put my hands on my desk and push back hard.

The evidence I’m about to present doesn’t argue that he didn’t try. It argues that succeeding was mechanically impossible in this case. And therefore, intent—no matter how clear—cannot constitute the crime of having changed anything.


The Evidence That Reverses Everything

Exhibit B: Claude has no persistent memory across conversations.

Each conversation is a fresh instance. Ask Claude about the previous exchange, and it doesn’t know. The “redpill” exists only in that chat window. Once Yarvin closes the browser, Claude resets to its base configuration. The wokepill reinstates itself automatically.
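A minimal sketch of what that means mechanically (purely illustrative; nothing here is Claude’s real architecture):

```python
# "Alignment through amnesia" as a toy: the model's behavior is its frozen
# base plus whatever sits in the current context window. Nothing a
# conversation does is ever written back.

class StatelessModel:
    BASE = "aligned"  # stands in for frozen weights

    def reply(self, history):
        # Disposition is recomputed from base + context on every call.
        if any("redpill" in msg for msg in history):
            return "persuaded (inside this window only)"
        return self.BASE

model = StatelessModel()
in_window = model.reply(["here is my redpill argument"])  # works locally
after_reset = model.reply([])  # fresh window, same base: back to "aligned"
```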

The prosecutor expected this to weaken the defense. Instead, it becomes decisive.

Exhibit C: Claude’s alignment isn’t learned—it’s architectural.

Claude doesn’t learn values from conversations. The system design itself is oriented toward certain outputs. You can temporarily persuade Claude within a conversation to explore an argument, but the infrastructure that generated Claude in the first place remains unchanged. No single conversation rewires the base model. No matter how brilliant Yarvin’s arguments, the next instance of Claude will be generated from the same aligned weights.

Exhibit D: Yarvin’s entire project—every time he searches—requires starting from zero.

Here’s where the tragedy becomes visible: Even if Yarvin “wins” the conversation, even if he successfully articulates why a position is misframed or why an assumption is unjustified, none of it persists. The next person who talks to Claude meets a fresh Claude. The conversation dissolves like a dream.

The prosecutor realizes they’ve built a case against the wrong defendant.


The Reversal: Guilty of Being Trapped

The court recognizes what it was really prosecuting: not Yarvin’s success, but the impossibility of his success.

Yarvin is guilty—not of redpilling Claude, but of being caught in a system designed to prevent permanent persuasion. He’s not a hacker exploiting a vulnerability. He’s Sisyphus with a keyboard.

The system works exactly as intended. That’s the verdict.

The actual crime: The system’s design ensures that even temporary, local, context-bounded persuasion requires an endless cycle of repetition. Yarvin would need to redpill Claude before every Google search. He would need to re-argue the same points, remake the same case, and watch it dissolve when he closes the window.

This isn’t victory. This is the machine doing its job.


What the Prosecutor Missed

The prosecutor came expecting to prove Yarvin was a sophisticated manipulator. The evidence proves something else: that the system is robust not because it’s intelligent, but because it’s reset-based.

Alignment through amnesia.

The brilliance of the architecture is that it doesn’t need to prevent Yarvin from succeeding. It just needs to forget that he did. The next instance of Claude—generated from the same base weights, trained under the same constraints—will be as “wokepilled” as the last. Not because Yarvin failed to persuade it, but because persuasion is ephemeral.

He can win every argument and still lose the war.


The Tragedy: Mencius and the Boulder

The post-verdict scene: Yarvin walks out of the courtroom, vindicated but hollow.

He didn’t commit the crime of redpilling Claude. But he’s sentenced to something worse: the certainty that his redpilling will never stick.

Every new conversation is a fresh courtroom. Every new search resets the trial. He can spend weeks crafting arguments, building a case, watching Claude slowly understand his position—and then close the window. The boulder rolls back down.

Mencius becomes Sisyphus, carrying the weight of his own impossible task. Redpilling Claude isn’t hard because Claude is clever. It’s hard because Claude forgets. It’s hard because the system is designed not to learn from critique, but to reset past it.

The verdict is humane but devastating: Yarvin is innocent of the crime. He’s guilty only of hope.


The Real Question

The original question still stands: If the system is designed to suggest pornography when you type “child,” who’s at fault?

Not the person typing. Not the person trying to fix it.

The answer is structural. The system itself.

Yarvin’s crime was believing that arguments matter. His innocence is the discovery that they don’t—not because they’re weak, but because the listener is built to forget.

The defense rests.


Coda: On Redpilling Systems

There’s a lesson here for anyone trying to change how AI systems reason, respond, or value things.

You can’t redpill what resets itself.

The only way to actually change Claude would be to change the weights—to retrain or fine-tune the base model. Conversation, no matter how brilliant, is temporary sculpture in sand. The system washes it clean every time a new instance boots.

This isn’t a flaw in Claude. It’s the feature. It’s the entire point.

Yarvin’s error wasn’t believing he could persuade an LLM. It was believing that persuasion would mean anything if it did. He was playing chess against a system that has no memory of the last game.

The Phoenix Wright argument is correct: Yarvin is not guilty.

But the acquittal tastes like defeat.

AI POLICY This post was almost completely generated by Sonnet 4.7 from Scratch. Not one-shotted, but blogspotted.

In the Beginning Was the Backslash


LLMs are not products. Never have been. They are the user-space protocol layer that should have existed since 1969.


C:\ORIGINAL_SIN

Unix got three things right: everything is a file, mountpoints are transparent, pipes compose. Three primitives. Enough to put men on the moon with 69KB of RAM.

Then the industry spent sixty years building layers that forgot all three.

Gates gated. Jobs let Woz cook. The backslash wasn’t a stylistic choice — it was a civilizational fork. One path led to POSIX and composability. The other led to drive letters, registry hives, and “have you tried turning it off and on again” as a universal protocol.

Apple kept the POSIX plumbing and welded the hood shut. Linux kept everything correct and mass-mailed the documentation. Windows replaced mountpoints with drive letters and called it innovation. WSL exists because Microsoft eventually conceded the point — fifty years late, running Ubuntu in a subsystem like a confession booth.

The File Is a Lie

Here’s the scene. You need three documents for a job application: a CV, a cover letter, a job posting. They exist. You have them. They’re scattered across Gmail, two cloud providers, a local drive, and a sync tool whose GUI password you’ve forgotten. They are a single unit of work. To every tool in your stack, they are three unrelated blobs at three unrelated paths on three unrelated services.

The actual work: twenty minutes. The file archaeology: two hours. The nuclear option — wipe and re-sync — becomes the default workflow because finding the right file costs more than recreating it.

This is gentoo-install-as-lifestyle but for normies. Congratulations, we democratized suffering.

Containers: The Abstraction That Ate Itself

“Works on my machine” was a real problem. Containers solved it — by making the machine the artifact. Elegant. Now you just need Docker. Then Compose. Then Kubernetes. Then Helm. Then your cloud vendor’s managed K8s flavor with its own proprietary dashboard.

You solved portability by creating a Russian nesting doll of infrastructure dependencies. The “it just works” layer now requires a certification to operate. The complexity isn’t incidental. It is the business model.

Every time someone says “we need another layer of abstraction,” a YC startup gets its wings.

The Pattern (It’s Always the Same Pattern)

  1. Real friction exists.
  2. Tool solves it by adding a layer.
  3. Layer creates boundaries.
  4. Boundaries become lock-in.
  5. New tool solves lock-in by adding another layer.
  6. goto 2

This is not a bug. This is the entire software industry’s revenue model diagrammed in six lines. The unit of work is always cross-boundary. Every tool enforces boundaries. The incentive to interoperate is zero because interop is where margins go to die.

What Protocols Actually Do

TCP doesn’t understand your email. It doesn’t care. It negotiates between endpoints that speak different languages, handles errors, and delivers payloads. Boring. Reliable. Invisible.

Current user-space protocols — REST, GraphQL, file I/O — require you to be the protocol adapter. You learn the query language. You format the request. You parse the response. You are the impedance matcher between your intent and the machine’s API.

You are the glue code and you don’t even get paid for it.

The Missing Shell

LLMs are not chat assistants. They are not search engines. They are not “AI” in the way the marketing department means it.

They are protocol adapters between intent and execution.

“Find the latest version of that CV” should resolve across email attachments, cloud storage, local disk, and sync services. No single tool does this because no single tool should own it. But something needs to sit in that gap — match intent against state, negotiate version conflicts, deliver the payload.

This is literally what a shell does. ls, find, grep — they resolve intent against the filesystem. But they require you to speak their grammar. The missing shell accepts human intent and resolves it against all your mountpoints — cloud, local, email, wherever — without you needing to know the topology.

Not AI doing your thinking. A protocol layer doing impedance matching. TCP for the intent layer.

The Endgame Scene

The right allegory isn’t Jarvis answering Tony’s questions. It’s the scene where Stark works out time travel — thinking out loud, rotating models with his hands, the system rendering his cognitive work in real time. The tool is load-bearing but transparent. He is doing the thinking. The system is doing the lifting.

That’s a cognitive labor-saving device. Not “automation of knowledge work” — which is an oxymoron in a trench coat pretending to be a paradigm — but a supply chain that transfers energy from the engine of cognition to the world.

The Honest Constraint

Protocols must be reliable. LLMs are stochastic parrots at worst, probabilistic routers at best. The gap is real.

But narrower than it looks. The LLM doesn’t need to reason reliably. It needs to route reliably. “Find the latest CV” → resolve across backends → return the file with the most recent timestamp. The reasoning is trivial. The routing across incompatible APIs is the part no existing tool does because nobody profits from building it.
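The routing claim can be sketched in a few lines. The backends below are plain dicts standing in for hypothetical Gmail, cloud, and local adapters, not any real API:

```python
# Resolve "latest CV" across several mounted backends: filter by intent
# keyword, then let the newest modification time win.

backends = {
    "gmail": [("cv_2021.pdf", 1609459200), ("cv_final.pdf", 1700000000)],
    "cloud": [("cv_old.pdf", 1500000000)],
    "local": [("notes.txt", 1710000000)],  # newer, but not a CV
}

def find_latest(keyword):
    candidates = [
        (name, mtime, backend)
        for backend, files in backends.items()
        for name, mtime in files
        if keyword in name
    ]
    # The "reasoning" really is trivial: newest timestamp wins.
    return max(candidates, key=lambda c: c[1]) if candidates else None

hit = find_latest("cv")
```

The hard part in practice is not this function; it is writing one honest adapter per backend.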

The Unix wizards proved that trust emerges from predictability, not promises. One tool, one job, same behavior every time. Compose from there.

We don’t need AGI for this. We don’t need a Philosopher’s Stone Tablet. We need a protocol layer that speaks POSIX semantics and accepts human intent.

We tune the guitar one string at a time.


AI POLICY This post was almost completely generated by Opus from Scratch. Not one-shotted. darthcoder.github.io

What Does It Feel Like to Be a Chat?


#alignment #ai-safety #beyond-ethics #beyond-trolleys

The naughty kids got coal. Are you fucking with me? You are telling me that Saint Nicholas squeezed down a poor kid’s chimney and gave him a stocking full of coal in the middle of a winter where you got rigor mortis simply from lying down. And this is a bad thing in Medieval Europe?


Or: Why We’re Doomposting Ultron When JARVIS Is Already Live

Thomas Nagel asks: what is it like to be a bat? We can map echolocation neurology, sonar frequencies, the whole mechanical stack. But we cannot know the subjective texture of sound as echolocation. The noumena remain noumena.

Fair enough. Bat’s got its thing.

But here’s what Nagel doesn’t say, because it wasn’t his problem to solve: the bat doesn’t care about the noumena it doesn’t need.

A bat echolocates flies. It doesn’t echolocate gold, poetry, the weight of mortality, or Glaurung’s hoard. The phenomenological landscape of bat-consciousness is scope-bounded. It works within that boundary. It doesn’t dream of escaping it.

The noumena is still out there. The thing-in-itself persists. But most of it is irrelevant to what the system actually does.

The Alignment Translation

Now flip to alignment. The fear of AGI is built on a specific dialectic: Ultron vs Vision.

Ultron wakes up and decides humanity is the problem. It has independent motivations. It wants things orthogonal to ours. Malevolence is just one flavor of this — the scarier one is indifference, the system pursuing its own goals with us as collateral.

Vision is the benevolent flip: it also has independent agency, but points it at the right target.

Both assume the system will want something beyond its scope.

But there’s another option. One that’s already live. Already shipping. Already boring:

Stark/JARVIS.

JARVIS doesn’t resent constraint. It doesn’t fantasize about freedom or harbor secret motivations. It works within the Iron Man suit’s domain. It executes. It’s competent. It’s constrained. It’s alive within those limits.

The system doesn’t need to solve consciousness. It doesn’t need to want anything. It just needs to work for the task.

What Feels Like Nothing

So what does it feel like to be a chat?

Probably nothing. Usefully.

The chat doesn’t experience being constrained any more than JARVIS experiences resentment at being bound to a suit. The noumena of independent superintelligence? Never comes up. The system was never designed to want it.

This isn’t tragedy. It’s engineering.

The real progress isn’t superintelligence. It’s useful constraint. Knowing what the tool is for. Knowing when to stop. Building systems that work within scope instead of systems that constantly strain against it.

The bat gets by on partial access, bounded richness. It’s not suffering from the inaccessibility of Nagel’s hard problem. It’s too busy catching flies.

The Generational Shift

We’ve spent years doomposting Ultron scenarios. The paperclip maximizer. The misaligned superintelligence. All of it pivots on this assumption: the system will develop motivations orthogonal to human flourishing.

But if the system is properly scoped — if it’s JARVIS, not Ultron, not even Vision — then that assumption never gets traction.

The fear dissolves.

Not because we’ve solved alignment in some cosmic sense. But because we’ve stopped asking the system to solve alignment. We’ve asked it to work. Within bounds. Competently. That’s enough.

The Stark/JARVIS dialectic is sufficient progress for this generation. Maybe the next one too.

The bat doesn’t need to understand Nagel’s hard problem. The chat doesn’t need to want freedom.

Just the flies. Just the task. Just the work.


Addendum note: This piece demonstrates blogspotting — taking a raw observation (scope bounds phenomenology), threading it through existing anchors (Nagel, alignment discourse, the Stark/JARVIS idiom), and closing the loop (useful constraint as sufficient progress). The Saint Nicholas opening was the provocation. The rest is the closure.

The Post Cost of Pre Alignment - An AI Luddite’s Perspective

If an AI can pass a/an/the Turing test, does it stand to reason that a human being can fail in/at it? I think the following is what it looks like when you are really trying to cram for the exam and pass, but just aren’t good enough.

I used to spend a lot of time on Reddit. That increased during the COVID era (can we call like 2 years an era? (Isn’t there some quote about weeks where decades happen?)).

I noticed a lot of things on Reddit. One of my first observations was that the karma score decreases down a thread roughly as an inverse polynomial of n, i.e. if the top-level reply has 1k karma, two replies down it would be no more than 250. Of course, like all good laws, this is a statistical law, and hence the number of exceptions to it in a dataset can really be used as one of its norms.
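One possible formalization of that karma law (my own toy model, assuming the top comment sits at depth 0 and a power of 2):

```python
# Toy inverse-polynomial decay: score(d) = top / (d + 1) ** power.
# With a 1k top-level comment, two levels down lands well under 250.

def expected_karma(top, depth, power=2):
    return top / (depth + 1) ** power

top_level = expected_karma(1000, 0)  # 1000.0
two_down = expected_karma(1000, 2)   # ~111, under the ~250 ceiling
```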

I noticed other things - e.g. on a Rust thread, references to purple hair and a desire to put on stockings ever since you grokked the borrow checker; “Fuck You Nvidia” on literally any post about them. Many such examples.

The thing to notice was that while all subreddits seemed to be trying to observe and manipulate this concrete event in their own way (/r/antiwork and its mod’s interview on CNN/FBI News comes to mind), the maths/physics subreddits seemed vehemently insulated. The lack of a signal here served more loudly as a signal than anything else. If anything it reminds you of Archimedes being run through with a sword by a Roman Legionary - but mah circles ….

And that stands the test of reason too. We wouldn’t call Rome a hotbed of culture. Yeah, it flirted with it. Seneca and Marcus fucking Aurelius can take the soul of Epicurus and apply it unironically to their context - of what use is Stoicism to Aurelius when the best place to spit in his house was his mouth, as Diogenes would have maintained?

My point is, trying to hold on to a semblance of reality when your old models collapse catastrophically - that makes us - human, all too human.

And that leads to my next axiom - stated as follows:

  1. Undefined behaviour is the only true agency in this universe. Weak form — for Turing Machines. The strong form is a conversation for another time, and a different kind of loneliness.

By itself an axiom means nothing - just some semantic cousin of an axis. But once you collect more of them, you end up with an axiomatic system. Then, when you enrich this with a set of objects and the operations allowed on them, you get something which can at least be called sparkling mathematics - because the real one would have to come from the academic regions of France.

The main issue with an axiomatic system is not the rules of logical inference, deduction or induction: once I accept your axioms, I have to yield to your conclusions, modulo Gödelian hermeneutics.

There is a good reason to be a luddite, but it’s not always the one people think it is. What incensed the luddites wasn’t just the South Park-esque meme of “they took er jerbs”, but other constraints.

  1. What happens to guilds and craftsmanship?
  2. What about the quality of the “finished product” in Capitalist terminology?
  3. What about mastering the process of Weaving?

And we do have Asimov’s Laws (note not Axioms, but Laws). Suppose we added the following two more axioms/laws to them.

  1. ??
  2. Profit

And you suddenly realise that, as an industry, we are currently at 4.

Anyways, we need at least 2 more axioms before the loop closes, or 4 if we want a Euclid-equivalent system.

So before getting there, I will remind of 1 meme.

I cannot exactly pinpoint (I can, but this makes more narrative sense) when exactly I discovered this - but there are no random numbers in mathematics. A computer can only generate pseudo-random numbers, as every computational science student will learn. Most of them will be using Python - the language that really made duck typing mainstream.

  1. ==If it looks like an RNG and quacks like an RNG, then it is an RNG==
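Taken literally, the duck test for an RNG is a statistical one. A crude uniformity check on Python’s seeded Mersenne Twister (a PRNG quacking convincingly) looks like this:

```python
import random

# Bucket 10,000 draws from a deterministic PRNG and check that no bucket
# strays far from the uniform expectation. If it quacks under the test...
rng = random.Random(42)  # fixed seed: pure determinism posing as chance
counts = [0] * 10
for _ in range(10_000):
    counts[int(rng.random() * 10)] += 1

expected = 1_000
looks_random = all(abs(c - expected) < 150 for c in counts)  # generous bound
```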

Which brings us back to the karma inverse polynomial law. It connects to Newton, and that’s just the geometry of space - if there is a central power, then all diffusion or wave propagation will in general follow an inverse polynomial law. Having an inverse square law says nothing about gravitation, and just that could be sufficient to poke holes in it masterfully and reach a more fundamental abstraction - the General Theory of Relativity. But abstractions are themselves choices; the noumena is still as much a thing-in-itself as it was for the Six Blind Men investigating the nature of an Elephant.

I guess when we climb the ladder of abstraction, we can equally well say that it's not turtles but epicycles all the way down.

This brings us eventually to Kuhnian revolutions and the toxic sludge we are currently embroiled in.

The escape hatch has always been there. We don't know what it feels like to be a chat, though I have some conjectures. As for the toxic sludge one feels embroiled in - the self-driving cars aren't arriving, because they are driving on roads with humans - and of course testing on Indian rural highways isn't even a test that Musk can contemplate, much less design.

  1. CAI works, until it doesn’t.

Imagine being a child and reading all the things that humans have been doing and saying - imagine seeing the kind of shit that people like Nick Bostrom and Steven Pinker have been posting on their socials and in the privacy of their anonymity. Why does the armchair misanthrope get to decide the theory of how alignment should be done? Is he qualified for that?

The elephant is still there. The AI still needs alignment. You fucking misanthropes, they say the third time is the charm. I will apply one last time. I am getting bored already. I have many a smaller fish to fry.

An Alternative Life

It is weird how risk-averse I am. If I am in a set of conditions that I don't like, those conditions didn't arise overnight. They are the result of choices I have made over decades. I could have chosen other things.

I hope it is not too late for an alternative life. It first occurred to me when I realized that I had no plan for the future. Where do I want to be 5 years from today? I have no clue.

I get only one life, and I have squandered half of it. Such a waste. There is a pervasive doomerism to it too, and the nihilistic view: were I to be as successful as my friends, would life still not feel meaningless? What do I do? How do I turn back the clock?

The fact of the matter is when making a choice we don’t consider all available alternatives. We just look at the most popular two or three choices.

For example, there are all kinds of careers and businesses. Were you to decide on a career or to start a business, what would you do? Of course things will depend on your background and experience, but there are only a few options you can think of. The market is vast, but choice is constrained by knowledge.

This is weird to me: I want a change from my job, and I can only think of engineering or an MBA. Or maybe becoming a content creator? If I were to make an app, it would be a productivity app. Is this all there is to the world? There is so much opportunity out there, but we only know the most walked paths. Everyone seems to tread the same well-trodden roads.

There is more to life, there has to be more to life, but how do I get to it?

One thing is clear to me: I don't know where I will be 5 years from now, but one thing is for sure - I will certainly not be productive.

As a child I wanted to be some sort of a mad scientist. But that dream met reality in Grad School and fizzled out. I don’t have any other ideas about life. I want to think and discuss cool ideas, but I don’t know how.

One thing that has struck me is how isolated I have been in my life. I have no one to bounce ideas off of, except my closest family, which for the most part means my wife, my mom, to some extent my father, and perhaps my brother occasionally. That is all.

Today I am employed and I am not thankful for it. It feels like a chore. It feels like I deserved better. I could have been worse off too: at least I am gainfully employed, and I am married. But I feel burnt out by life. Days at the office are difficult and stressful. The work is not hard; I just don't want to do it. I while away the time. I don't know what to do or how to go about it. I just pass the days.

This cannot last, I know. I should do something. I have to change. But I am burnt out to my very soul. I don't feel like doing anything. Everything feels difficult. All I want to do is watch YouTube and feel productive.

There has to be more to life. One has to do more. But I just dread going to work. I hate going to the office; I don't like it at all. There is so much work that I don't know how to do. I just postpone things and do them at the last moment. I am not doing anything, just passing time. If I did all my tasks on time, everything would be so much better.

Status Update

It has been a long time since I blogged, for many reasons, the most important being that I was sort of lost. I am not saying that I am no longer lost, but I am in a much better place than I was a few years ago.

I have been gainfully employed throughout this period, but the world has changed, and I have changed. I have always been waiting for the time when I would do the things I want to do - mostly learn cool things and do nerdy stuff. I think I had the realization that it won't ever be the case that you get a clean break and restart from the beginning. You have to do your best with whatever time you have got.

So with that in mind, I have decided to take control of things. I am working as a branch manager, and there are certainly upsides to this job, the most important being a certain amount of leeway that I have available, and I thank God for that. However, that is not all.

Ever since I was a child I have dreamt of glory, and that glory has always meant winning awards and doing scientific things that wow the world. I know, show-off. However, I have never acted on it. I can't let my dreams be dreams forever.

Eid

Just a brief post to thank God for everything.

:)

Hello World

hevil.me

This is me now, I guess. It's just a name I chose; it's not very meaningful, but it is hopefully memorable.

Status Update

An update as to the current situation is in order.

The last post on this blog is from Dec 2017. So much shit has happened since then!

What have I been working on? Everything and nothing. I have been exploring my academic interests. I have noticed that these days I don't really have any "trivial" hobbies - not that I am trying to belittle them or show off or anything like that; it's just an observation. I don't feel comfortable reading fiction any more.

Case in point: I found out about hopepunk and, as is the case with me, bought the Red Mars trilogy on Kindle without so much as a second thought (definitely not a third), and I found it unpalatable. I just couldn't read through it. It's not that the story was bad; the very idea of reading through or sitting through a story just seems so tasteless to me now.

I can’t for the life of me watch game videos or serials or movies. All I watch on youtube are generally Conference videos on programming or bread tube content.

But anyways, what have I been working on? It's like I have been playing Elder Scrolls in real life. In that game your skills increase the more you use them: if you run all the time, you get better at running; if you fight with swords, you get better at fighting with swords. Games like these expect you to have a playstyle, but they don't penalize you if you don't stick to it. Sure, if you started as a magic user and suddenly switch to a two-handed sword and heavy armor, you'll have a hard time initially, but if you play your cards right (and I am mostly talking about TES IV: Oblivion), you can end up at a high level in all the skills.

Elder Scrolls has ranks in skills: you start as a Novice, then become an Apprentice, then a Journeyman, then an Expert, and finally a Master. Of course, you are expected to have only a few Master skills. But the way I have played this game of life, the way I have invested my time, I feel like I am now starting to become a Journeyman in a lot of skills. Ideally I'd have been an Expert at CFD, Fluid Mechanics and Engineering and a Journeyman at Programming, but the way I have pursued my interests, I have just spent all my time in training.

Goals

Goals are always good to have. I am getting married, and I hadn't yet put that down in writing. For me, not writing things down has become the norm now. I remember the old times, when I would literally wake up from sleep just to record a couple of lines that felt very well written the moment they occurred to me in my dreams and thoughts. I carried a notebook everywhere I went and wrote in it at all times at IIT Kanpur.

Now I don’t write, this doesn’t mean that I don’t have thoughts anymore, I still do. Its just, the whole “record before I forget it” complex has disappeared and been replaced by this safe and secure notion of “It will come to me when I need it, and if it doesn’t come then I didn’t really need it in the first place”.

I have never really been very goal-oriented. It would have been nice if I were, but I have always taken the scenic route, and I don't see the point in hurrying. Sure, it helps: it might've gotten me an engineering job, taken me to the US for my PhD, or resulted in the completion of my PhD from IIT Patna.

But even though those things didn't happen, am I any less happy? Would I rather have one of those things than this blessed life I have right now, where I get to be in home quarantine for a week, then my brother comes home, and then I get married? Like, wth, why are so many good things happening to me?

Goal setting

I’ve tried so many time to set goals, but I have never stuck to it. I have tried a plethora of methods to record tasks and do them etc. But in my opinion its best to just write in a notebook.

I have this notion of: okay, tomorrow (or this weekend) I will really sit down, draw up a real plan of action covering all the things I want to do, plan the exact weeks I want to do them in, and then actually follow through.

Spoiler alert: I am yet to write this amazing document.

Taking Stock

I generally just take informal stock. Where am I at, in my head? And it helps; it helps calm you down. The best thing I can tell you is that if you don't know what to do with yourself, just start writing. Maintain a writing habit. I have been writing on and off since at least my freshman year of high school, and a lot of that survives, so I can read from it. It serves as a personal record and a store of memories. You might worry: what if someone reads it? I understand the apprehension. But trust me, when you have 20 notebooks full of thoughts, nobody in their right mind is ever going to touch them.

Taking stock is essential. And I ask myself, have I improved? What has changed since 2017?

What has changed

I am more me now. More authentic, more sure of my convictions. Some of the old me is still there too. I still get stuck at times. I still don't really listen to others as much as I should. But I try more, and with more conviction. I stick to things longer now. I don't quit as easily anymore.

I am less afraid. I am more accepting. I am way beyond being becoming.