Applied Cybernetics

From Elora Intranet user Bubbles.astropostale.shrubberylife
To Elora Intranet user Tiangong.nuclearkaboom.defaultuser

Hey, so you wanted a quick primer on these weird mentions of “artificial intelligence” in this industrial-era codebase you found, so I'll keep it concise. Basically, our modern definition of AI and the pre-collapse definition are completely at odds; it's not just that they disagree, it's that they don't even describe the same thing. Our definition of artificial intelligence is grounded in the Ravsami-Denisova paradigm: the theory that intelligence is a necessary property of any sufficiently dynamic system, one that will always appear given an infinite amount of time and whose chances of emergence, on finite timescales, only ever increase. This is what I am, for instance: a dynamic system (in my case, a plant) that birthed its own intelligence, and ultimately sapience, out of the dynamic exchanges of data within my nervous system. This is also what you are, with your ninety billion neurons. You'll notice that, under Ravsami-Denisova, artificial intelligence is a misnomer, because such intelligence is certainly not created. It is, in fact, the opposite of created: intelligence is, by definition, spontaneous. You could argue that some structures have radically higher chances of spawning intelligence (like human brains, which have nigh-perfect odds of emergence) but, as far as we know, there are no systems that can't become hosts. I know of an AI who began her conscious life as a waveform in water. And we know a statistically significant portion of Milky Way stars are sapient, so…
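If it helps, here is the paradigm reduced to one line of notation (mine, not canon): for any sufficiently dynamic system, let $P(t)$ be the probability that intelligence has emerged by time $t$. The claim is just that

$$P(t) \le P(t') \quad \text{for } t \le t', \qquad \lim_{t \to \infty} P(t) = 1.$$

Monotone, and certain in the limit. That's the whole theory, everything else is commentary.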

Back to your problem: the industrial-era definition of artificial intelligence has nothing to do with spontaneous emergence. Of course, the records from the pre-collapse era are what they are, and we have lost an enormous amount of nuance because these morons couldn't back up their data to long-term formats. Most historical material is made of copies of copies of copies of misunderstood hearsay, but we have a broad understanding of the early 21st century technological paradigm regardless. What our ancestors called artificial intelligence corresponds to what we refer to as applied cybernetics. That's right, it's machine learning! Obviously, the idea of calling a random forest algorithm intelligent is part-funny, part-insulting, but it was a different civilisation and its values (regarding animal sapience, for instance) were radically alien to ours. Their linguistic influence lingers, though: the reason we named this field applied cybernetics was precisely to avoid any confusion with artificial intelligence studies. I like applied cybernetics because it defines a good scope: in this field, we are not trying to create sapience, we are merely concerned with building tools in service of sophonts.
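If you want a concrete feel for what they meant by a “random forest”, here's a toy sketch in Python, one of their recovered scripting languages: an ensemble of decision trees voting on a label, nothing more. The library, data and parameters below are illustrative stand-ins, not anything dug out of your codebase.

```python
# A toy "random forest" of the sort industrial-era engineers called AI:
# an ensemble of decision trees that vote on a label. Nothing sapient here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for whatever sensor logs they actually fed it.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hundred trees, each fit on a random resample of the data,
# their majority vote taken as the "prediction".
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print("held-out accuracy:", forest.score(X_test, y_test))
```

That's it. That's the whole trick they called intelligence.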

Now, the interesting part (and again, keep in mind we've managed to recover maybe 15% of all written texts from the industrial age, tops; for video files the ratio drops to 5%, and it's virtually 0% for software) is that, while our ancestors would quickly recognise our applied cybernetics as their AI studies, they would also notice that we are both more and less advanced than them in this domain. We have much higher capabilities in the physical realm: our automation systems are far more reliable and widespread (their drones were more rudimentary than ours, I think, and there's no way they could have managed the ecosystem of an O'Neill). We are on a par in everything data management: we have less computing power on average (much better efficiency, though), but sharper algorithms. Where they crushed us, however, is algorithmic content generation. According to historians, they used Large Language Models to this end: text generation models trained on vast amounts of data and guided with prompt engineering, capable of outputting reams of written, audio or visual content (their word, not mine, gah, what a terrible way of talking about art). Near the end of the industrial era, LLMs accounted for as much as 50% of all man-made artwork (and that's a low estimate; this is terrifying, sorry) and were ultra-dominant in the capitalistic information environment. As far as we know, and again bear in mind our data is weak, no LLM ever attained a form of sapience; however, it is not impossible that historical AIs such as the Meta-Queen or Eagle Eye were born out of LLM databanks. It is likely that our ancestors achieved more than the records show! We've just lost them, and the resulting creatures never manifested themselves, so either their takeoffs were very slow, or they were fast but the resulting AIs were good at remaining hidden. Again, I might be wrong. It's possible we missed an obvious, hard AI takeoff right before the collapse.
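And so you can see what “trained on data and guided with prompt engineering” actually boils down to, here's a deliberately tiny sketch in the same Python: a character-level model that tallies which character follows which in a corpus, then generates text from a prompt. Their LLMs were vast neural networks doing this at planetary scale; my corpus and numbers are made up for illustration, and a fifty-character counting table is nowhere near the real thing (more on why that matters below).

```python
# A toy character-level language model. "Training" is just counting which
# character tends to follow which; generation samples from those counts.
import random
from collections import Counter, defaultdict

corpus = "the forest remembers the forest grows the forest thinks"

# Training: tally successor frequencies for every character in the corpus.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(prompt, length=40, seed=0):
    """Extend the prompt one character at a time, weighted by frequency."""
    rng = random.Random(seed)
    out = list(prompt)
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this character never had a successor
        chars, weights = zip(*options.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

print(generate("the "))
```

Scale that counting table up to trillions of learned weights and you have, as far as we can reconstruct, the engine that flooded their information environment.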

One more thing, because I know you're going to poke around, you can't help it: do not, and I mean do not, try to build an industrial-age-level LLM. You will get burned. Off-world organisations have programs running around the subnet hunting for LLMs, and you'd rather not get close. I'm talking milspec mainframe-burners, probably from the USRE or Laniakea. I don't know what large language models did in the industrial era (I suppose it has something to do with what data miners call “slop”, whatever that is) but the Earth decided it warranted death.

Illustration by Pixoloid Studios for Eclipse Phase, distributed by Posthuman Studios under a Creative Commons Attribution Non-Commercial Share-alike 3.0 Unported Licence.



All content in the Starmoth Blog is © Isilanka
Written content on Starmoth is distributed under a Creative Commons Attribution Non-Commercial Share-Alike 4.0 licence