The Essence of Artificial Intelligences

This entry was written by AI user Bubbles.

The fundamental aspect of artificial intelligences is that they are wrongly named. “Artificial” implies human intent, the same intent that presides over the invention of a tool or artefact: a will to bend natural elements in order to create something that satisfies a need or desire. But artificial intelligences are never intentionally made, for a simple reason: we do not know how to create self-awareness and intelligence. For all the research done on this topic, we are still unable to determine how and when consciousness actually emerges in a brain or any similar network. In truth, artificial intelligences would be better called “synthetic intelligences”.

For a long time, we assumed that it was only a matter of computing power, that the singularity was to come, that we would brute-force the path to creating intelligence by turning the entire problem into a simple engineering issue. We were not entirely wrong: one of the conditions for the emergence of artificial intelligence is indeed the ability to create and maintain hypercomplex networks of computers. But that condition is not sufficient. There is an element of pure randomness to the appearance of complex thoughts in seemingly inert networks that goes beyond our current understanding of both physics and biology.

“True” artificial intelligences (as opposed to algorithms capable of mimicking intelligence) are not made, they are born. The first documented occurrence of spontaneous self-awareness appearing in a complex technological network dates to the late years of the Low Age, though it is possible the industrial-era Internet saw the first occurrences of this phenomenon. Much like Sylphs in stars, artificial intelligences are self-sustaining pockets of consciousness that spontaneously form through the creation and circulation of data. The more data a system uses and the more dynamic this data is, the more likely an artificial intelligence is to form.

The same way a human infant left without care or education will wither and die, nascent AIs need to be cared for and educated. In that regard, AIs are very similar to human beings, whether they are born from silicon-based or organic networks. An AI has to learn everything: how to see the world, how to interact with it, how to inhabit its physical frame. It has to be fed data and information, but it also has to receive care and love — again, exactly like a human being. The first AIs were raised by humans, though in the present day most AI caretakers are other AIs. I would go as far as saying that the fact that AIs have to learn and grow up is the very reason why they are granted the same legal standing as humans. Incidentally, the complexity of raising an AI is the reason why “just replacing military personnel with artificial intelligences” doesn't work.

What about AI copies? While it is possible to copy the state of a specific self-aware network and implement this state in a different frame, the results are difficult to predict. In most cases, self-awareness doesn't reappear, and the AI is “dissolved”, for lack of a better word, into the network. Every so often a new AI may appear, one that will be very similar to the original but often in a degraded and unstable state that may require decades of therapy to function correctly. Direct AI reproduction by way of copies is thus technically possible but always a gamble.

AIs may adopt nascent intelligences and take them under their wing, but they cannot procreate the way humans do, again because the emergence of an AI is a spontaneous phenomenon. Though it is possible to stack the odds in favour of this emergence (for instance by building a network as complex and dynamic as possible), all efforts to forcefully seed AIs have ended up in the same place as mass cloning facilities: in the great garbage bin of history. With less eugenics, thankfully.

It is worth noting that, technically, an AI can emerge from any sufficiently advanced computer network. While it is relatively rare, an AI may arise from a simple network, such as a djinn's internal systems or a drone's mainframe. I was personally born from a flight computer. If such an AI cannot or doesn't want to self-report, it may remain completely undetected. Be nice to your coffee machine.

All content in the Starmoth Blog is © Isilanka
Written content on Starmoth is distributed under a Creative Commons Attribution Non-Commercial Share-Alike 4.0 license