
When it comes to intelligence, I reckon the best way to define the artificial type is imitative: by design, it is expected to mimic or replicate natural human intelligence. It is designed to simulate, to feign, to mirror, to emulate, to echo authentic intelligent behaviour.
A post recently caught my eye on a professional social network. An AI was asked to optimize a transatlantic flight route and, operating on a flat map with an implicitly Euclidean framing, produced a straight line between two points. The result looked intelligent because it mirrored a familiar human shortcut: the belief that the shortest distance is always a straight line. That assumption holds on a plane, but not on a sphere. The curved route airlines actually fly is close to the true great-circle path imposed by Earth’s geometry. The AI didn’t ignore physics—it was never given physics to reason about.
The AI was tasked with finding a shorter, more efficient, and more cost-effective route. It did what it was told: it mimicked human intelligence and devised a solution from its parameters and the superficial prompt it was most likely given, in complete ignorance of the laws of physics and, ultimately, of sixth-grade curved geometry.
But here comes the OMG moment. The solution was based on an authentic human mind's interpretation of a Euclidean postulate of flat geometry: a straight line is the shortest distance between two points. The AI simulated the most basic human rational process, deriving a solution from a flat-geometry postulate. Simply put, if the AI is not informed of the curved space in which the two points exist, of the Earth's dimensions, geometrical form, and movements, and of the basic laws of physics, it has nothing else to mirror, imitate, or simulate. Or, more accurately, it simulates only the processes it is fed; there is nothing else in the process to imitate.
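To make the geometry concrete, here is a minimal sketch of my own, not from the post that inspired this: the airport coordinates, the spherical-Earth radius, and the haversine formula are illustrative assumptions standing in for the physics the AI was never given. It compares the length of the "straight" line drawn on a flat equirectangular map with the great-circle distance the sphere actually imposes.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius, spherical approximation

# Approximate coordinates (latitude, longitude in degrees): New York JFK and London Heathrow.
jfk = (40.64, -73.78)
lhr = (51.47, -0.45)

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points on a sphere."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

def flat_map_km(a, b):
    """Length of the 'straight line' on a flat equirectangular map: degrees treated as
    planar coordinates, converted to kilometres at the mean latitude."""
    (lat1, lon1), (lat2, lon2) = a, b
    km_per_deg = math.pi * EARTH_RADIUS_KM / 180.0
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = (lon2 - lon1) * km_per_deg * math.cos(mean_lat)
    dy = (lat2 - lat1) * km_per_deg
    return math.hypot(dx, dy)

print(f"great-circle route: {haversine_km(jfk, lhr):.0f} km")
print(f"flat-map straight line: {flat_map_km(jfk, lhr):.0f} km")
```

Under these assumptions the line that looks straight on the flat map works out a few hundred kilometres longer than the great-circle route, which is exactly the gap the AI could not see without being told the space is curved.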
The death of a civilization is scientifically and masterfully demonstrated in the last part of the Remembrance of Earth's Past trilogy by computer engineer and science fiction writer Liu Cixin. To make a long and complex story short: faced with an alien civilization, Earth must come to the harsh realization that the dark forest hypothesis is true. In this universe, extraterrestrial life is highly probable, yet every civilization stays hidden and silent out of the still more probable fear that revealing itself would invite destruction by a more advanced one. In a dark forest where you know nothing and have never met anyone before, you don't reveal yourself to probe the other's intentions; you stay hidden or strike preemptively, even if doing so means exposing yourself to a third or fourth unknown. And the death of a solar system comes, roughly, from a dimensional strike: a fatal dimensionality reduction of space to two dimensions. The only chance of survival is to alter life so that it can persist in one dimension fewer.
My real concern is that the excessive, and ultimately unwarranted, uncertified use of an emulating, mirroring, imitating tool to reason, to think, and to trust its cognitive processes as authentic will, at some point, strike us all with a sort of dimensionality reduction. Human intelligence will become obsolete through lack of use because it is difficult, complex, flawed, unreliable, and full of consciousness and emotion. Adapting, and ultimately surviving, will require full acceptance of the mimicking tool because it simplifies truth and reality, generates efficient solutions, and saves time and energy. It thinks and acts on our behalf.



