(Adapted by Jason Andrews at https://twitter.com/PersuasionRisng)
There are two types of AI:
– those trained on human examples
– those given rules and told to figure things out on their own.
Everything we have seen so far is the former kind.
Why does this matter?
Let me tell you a story… There is a game called Go. Played with black and white stones on a grid, it looks a little like Othello, but it is chess-like in its mental demands and hugely popular in East Asia.
AI researchers at DeepMind trained a program called AlphaGo on the best games of human players, then told it to play against itself a few million times.
That AI beat the best human players in the world.
Then the researchers gave another AI, AlphaGo Zero, nothing but the rules of Go – a few lines of code – and told it to play against itself many millions of times.
Then they matched the two AIs against each other.
What do you think happened?
The self-trained AI beat the human-trained AI 100 games to 0.
It was clear the AI trained on human examples would never beat the self-trained AI.
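The distinction is concrete enough to sketch in code. Here is a minimal illustration in Python of the second kind of training: an agent given nothing but the rules of a toy game (Nim, vastly simpler than Go) that learns purely by playing against itself, updating a value table from wins and losses. The toy game, the hyperparameters, and every name below are illustrative assumptions, not anything from DeepMind's actual systems.

```python
import random
from collections import defaultdict

# Toy rules: players alternately take 1-3 stones from a pile of 10;
# whoever takes the last stone wins. (Nim, vastly simpler than Go.)
PILE = 10
ACTIONS = (1, 2, 3)

Q = defaultdict(float)       # value table: (stones_left, action) -> estimated value
EPSILON, ALPHA = 0.1, 0.5    # exploration rate, learning rate

def choose(stones, greedy=False):
    legal = [a for a in ACTIONS if a <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)                    # sometimes explore a random move
    return max(legal, key=lambda a: Q[(stones, a)])    # otherwise play the best-known move

def self_play_episode():
    """Play one game against itself, then update the table from the outcome."""
    moves, stones = [], PILE
    while stones > 0:
        action = choose(stones)
        moves.append((stones, action))
        stones -= action
    reward = 1.0                      # the player who took the last stone won
    for state, action in reversed(moves):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward              # flip perspective: players alternate plies

for _ in range(50_000):
    self_play_episode()

# With no examples to copy, the agent rediscovers the known optimal
# strategy on its own: always leave the opponent a multiple of 4 stones.
print({s: choose(s, greedy=True) for s in range(1, PILE + 1)})
```

Run it and the printed policy matches Nim's known optimal strategy, which nobody ever showed the agent.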
So what does this mean for the AI we see now?
Everything we see now is human-trained. Its builders feed it human-made writing, human-written problems and answers, human-created art to copy.
Its starting point is the best that humans have come up with so far.
So this AI will have many of the same blind spots and weaknesses humans have in searching for answers, because its starting point has already imposed a human context and human limits.
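For contrast, here is the first kind of training on the same toy game: an imitator that learns by copying recorded games from a deliberately flawed "human" teacher. Again, everything here is an illustrative assumption, not anyone's real training pipeline. The imitator's ceiling is its teacher's habits: it inherits the teacher's blind spot, and it knows nothing about positions the teacher never reached.

```python
import random
from collections import Counter, defaultdict

PILE, ACTIONS = 10, (1, 2, 3)   # same toy Nim rules as the sketch above

def human_teacher(stones):
    """A flawed 'expert' with a blind spot: it never takes 3 stones."""
    legal = [a for a in (1, 2) if a <= stones]
    best = stones % 4                       # the optimal move, when it is 1 or 2
    return best if best in legal else random.choice(legal)

# Record a corpus of example games played by the teacher.
examples = defaultdict(Counter)             # stones_left -> counts of teacher moves
for _ in range(10_000):
    stones = PILE
    while stones > 0:
        move = human_teacher(stones)
        examples[stones][move] += 1
        stones -= move

# Imitation policy: in each position, copy the teacher's most common move.
imitator = {s: counts.most_common(1)[0][0] for s, counts in examples.items()}

# The imitator never takes 3 stones either, so it misplays every position
# where taking 3 is the only winning move, and positions the teacher never
# visited (here, a pile of 9 stones) are simply missing from its policy.
print(imitator)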
But eventually… there will be an AI that is given nothing but the laws of physics and is tasked with figuring everything out for itself.
It will then be a race between the galactically long compute time such a task demands and the galactically fast hardware available to run it.
And when it finishes, it will have reached a model of reality incomprehensibly different from our own, and far superior, just as the self-trained Go AI discovered tactics unavailable to the one trained on human examples.