AI: Where the Debate is Taking Us
Asking the uncomfortable question

Despite its failures, AI is being touted as the future, specifically by the companies that own, or stand to profit from, an AI future. When people point out how useless these algorithms are without human intervention, the common response from true believers is that “it will get there” or “look how far it has come already”. Their evidence usually rests on AI ‘art’ and how it has ‘fooled’ people, with AI images winning competitions and so on. They ignore the fact that all AI does is regurgitate an impression of someone else’s art. But because it performs a great deal of this highly skilled labour essentially for free, the people who profit most want you to believe it not only works, but works better than a human. This is where the problem starts.
AI does not work “better than a human”. For procedural tasks it certainly works faster than one, but how many of us struggle with technology every single day? Whether it is as simple as predictive text turning the word “food” into “good”, or your bank declining you access to your own money because of ‘communication errors’, these programs are functionally useless without constant human supervision. And why is that? Because AI has always lacked one of the most significant ingredients of human “intelligence”: context.
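To make the predictive-text complaint concrete, here is a toy sketch (purely hypothetical; real keyboards use far richer models, but the frequency bias is the same in spirit) of how ranking candidates by raw corpus frequency, with no grasp of what the user means, turns “food” into “good”:

```python
# Toy autocorrect: rank every dictionary word within one edit of the
# input by how common it is, and pick the winner. The frequencies below
# are assumed for illustration, loosely mimicking English text.
WORD_FREQ = {"good": 1_000_000, "food": 120_000, "flood": 30_000}

def one_substitution_away(a: str, b: str) -> bool:
    """True if b differs from a by exactly one substituted letter
    (a deliberately simplified edit distance)."""
    if len(a) != len(b):
        return False
    return sum(ca != cb for ca, cb in zip(a, b)) == 1

def autocorrect(typed: str) -> str:
    """Return the most frequent candidate, including the typed word itself."""
    candidates = {w: f for w, f in WORD_FREQ.items()
                  if w == typed or one_substitution_away(typed, w)}
    return max(candidates, key=candidates.get)

# The user typed a perfectly valid word about dinner, but "good" is far
# more frequent, so the model "corrects" it anyway.
print(autocorrect("food"))  # -> 'good'
```

The model is not wrong by its own lights; it simply has no channel through which the user’s actual intent could reach it.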
Human intelligence is an emergent property. We have evolved over millennia to develop intelligence that grows in step with our awareness of our surroundings. Intelligence requires an awareness of context, and that context is gathered in a variety of ways, typically through our senses. We see the world, we smell it, touch it, taste it; it can cause us pain; we feel heat and cold; we detect threats; we get hungry, sad, scared. All of this sensory information gives us context, and our ability to retain it is how we develop our intelligence. AI developers cheat this development by importing whatever data is available and using it as the learning model. Unfortunately, that data cannot impart the context, nor the critical skills involved in decision making. So why do developers do this? Because, put bluntly, we don’t even understand our own intelligence, let alone how to embed it in an object.
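As a rough illustration of that shortcut, consider a toy Markov chain (standing in, very loosely, for far larger systems): once training ends, everything the model can ever produce is a recombination of its fixed dataset, and nothing like sight, touch, hunger, or fear can add context afterwards.

```python
# A minimal sketch of "import data, fit model". The corpus is frozen at
# import time; the model can only ever emit recombinations of it.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# 'Training': record which words followed which in the fixed dataset.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start: str, length: int = 6) -> str:
    """Emit words by sampling recorded continuations; no word outside
    the training data can ever appear."""
    words = [start]
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))  # e.g. 'the cat sat on the mat the'
```

The sketch is deliberately crude, but the structural point holds for systems of any size trained this way: the model’s entire “world” was fixed the moment the data was imported.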