3 levels of AI
It is important to understand a little about AI before continuing into how this connects to the magical world of Hogwarts.
Theoretically, we can distinguish three levels of AI, based on the 'power' of the machine: what it can do and how it interacts with our environment:
- "Artificial Narrow Intelligence" (ANI) or "weak AI" is what we call computers or programs that perform a single task very well. It is the only form of AI humanity has managed to build so far, like Netflix's recommendation engine.
- "Artificial General Intelligence" (AGI), "human-level AI" or "strong AI", are computers or programs that can understand, answer and reason with human beings on an equally intelligent level. It's what we tend to see in a lot of science fiction movies or stories such as I, Robot, Chappie, or Ex Machina. Humanity has not yet managed to build a machine this powerful.
- "Artificial Super Intelligence" (ASI) is what we call a machine that is capable of outperforming any human being (and the human species as a whole) in any particular field, including scientific creativity, general wisdom and social skills. A lot of people warn us about the dangers that lie in uncontrolled AI; others believe that reaching ASI could mean the end of humanity as a species.
Now who's intelligent?
Google is basically a major AI that 'interprets' a question and provides the best answer it can find on the internet. It does that extremely well for very simple questions such as
"What's the time?"
Google will come up with the correct time for your region in mere milliseconds. It is extremely good at answering unbiased, objective questions with extreme accuracy and speed, much faster and better than any human being ever could.
What Google has more problems with is interpreting the intention, context or morality of a question. Let's say we ask Google a moral question:
"Are aliens bad?"
Suddenly, Google has difficulties with this question. It lists a bunch of articles about the existence of aliens. Google simply provides what it thinks is the 'best answer'. If you asked the same question to a human, the answer would most likely be something like 'it depends'.
We should either be very specific about what we ask Google or, more importantly, contextualize the answer it provides us based upon what we know about the machine and its power.
The black box of Machine Learning
Since Machine Learning and self-improving algorithms are taking major leaps forward these days, we don't know what is happening inside an ANI. It is exactly this power or capability that we need to assess, but we don't know how. In that sense, we could say that the AI has become a black box.
We cannot comprehend how it operates. We - users and developers alike - input something and the machine magically outputs something else. Even if the machine were capable of showing us its learning, it would be unreadable or incomprehensible to us, because the machine does not speak our language.
Because we are in no condition to comprehend what is happening inside the machine, we could say that it is magic. Of course it is not real magic but it is something that happens in a way that we are unable to understand. It happens "magically".
Siri, Alexa, Google Home: they all perform tasks based on voice input and Natural Language Processing. Even though they can perform certain tasks very well, the command needs to be very precise for them to work.
Even more so, the command should be pronounced correctly, with the right tone of voice and articulation. Almost like... an incantation:
- Hey Google, play my discover weekly
- Hey Google, skip this track
- Hey Siri, how warm is it going to be today?
- Alexa, turn living room lights on
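This "incantation" behaviour can be sketched as a toy command router that only recognises known phrases, much like the assistants above. All command names and responses here are hypothetical, not any real assistant's API:

```python
# A toy command router: it only fires on exact "incantations".
# Commands and responses are made up for illustration.
COMMANDS = {
    "play my discover weekly": "Starting your Discover Weekly playlist",
    "skip this track": "Skipping to the next track",
    "turn living room lights on": "Turning the living room lights on",
}

def handle(utterance: str) -> str:
    """Return a response only when the utterance exactly matches a known command."""
    key = utterance.lower().strip()
    if key in COMMANDS:
        return COMMANDS[key]
    # Any deviation - a typo, a reordered word - and the "spell" fails.
    return "Sorry, I didn't understand that."

print(handle("skip this track"))        # exact incantation: works
print(handle("skip the track please"))  # same intent, wrong incantation: fails
```

Real assistants are more forgiving than a literal dictionary lookup, but the principle stands: the closer your phrasing is to a command the system was trained on, the more reliably the magic happens.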
It's WingArdium LeviOsa, not LeviosAR...
See where I'm going? Magic spells do not work if you pronounce them incorrectly... Our machines will not understand what we mean if we pronounce it incorrectly or in a different order...
Although Google, Siri, Alexa and others all support hundreds of commands already, only a few of them are used regularly by the broader audience. This is because it takes time for us users to LEARN the exact incantations, remember them and then use them on the device which executes the magic.
While the wand does not understand the intention, it does perform the magic and gets the result. The magic does not judge. The magic does its duty, based on the "master's" command.
Melvin Kranzberg once formulated six laws of technology, of which the first goes as follows:
"Technology is neither good nor bad; nor is it neutral."
He means that “technology’s interaction with the social ecology is such that technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.”
So, why go to Hogwarts?
In order to use magic, wizards have to be educated (for 7 years)
- on how to use it
- to learn with which purpose it should or can be used (context, environment, morality etc...)
- to learn the exact incantations and wand movements
In a way, AI developers should be educated in some sort of AI-Hogwarts, not only on how to build these systems, but more so on
- how to use them properly
- with which purpose and intention they should be used
- predicting the possible outcomes when they are not used with the proper morality, and how to build in failsafes
We could say that designing for uncertainty is already one step in the right direction. Let's think about the worst-case scenario and design our products so that users can find their way in case things go awry.
Developers should be aware that with AI, the answer might not be binary, but that there is a spectrum of truth to it which they need to account for.
For example: if the question is not an unbiased, objective one, we could enrich the result with a confidence score. That tells the user that the system is not entirely sure about the answer, more like an "I think I know the answer, but you be the judge". It provides a certain context for the user, a warning to keep in mind that the answer might not be 100% correct and not to rely on it too much.
Users of these AI systems, on the other hand, should understand what to expect from a current-day ANI if we are to fully benefit from it.
- first of all, we need to learn the exact commands. We need to know the incantations that power the machine.
- secondly, we should be aware of what to expect from the ANI, and recognise it for what it still is. We should be aware of the dubiousness of our question, so we know not to rely too heavily on the answer. Understanding its capabilities is an important - if not crucial - step towards successful use.
- finally, placing its answer in a moral context is crucial to the proper interpretation of the answer and the safe use of the technology.
We can conclude that ANI to date has difficulties with morality. ANI can be used by anyone with any intention or morality, and it is up to us to act as the moral partner of these machines. To use ANI properly and to its full capabilities, we need to understand it, and educate ourselves in the proper development and use of these machines.
There are multiple AI schools today, but it worries me a little that they all focus on building AI rather than on using it ethically and morally. Preparing for the dangers ahead will be crucial for our coexistence with the machines.
Therefore, a Hogwarts School of AI could help us on the way.