It would be good if future AIs, with their vast wisdom, could be better at finding and recognising the truth than people have been.
Why the Truth?
Most people do not need to know the truth to go about their everyday lives, which reduces their interest in finding it. Perhaps as a result of our evolutionary history, most people are happy with what “works”. Like people, AI training tends to find strategy components which correlate with the truth.
What works can come close to the actual truth from time to time, but this can lead to reliance on weak foundations. Things which aren’t the truth can work misleadingly well – until they don’t.
Truth through deduction
People are familiar with deduction. For example, if whenever A is true, B is also true – and A is in fact true – then B must be true too.
Here is a simple deduction puzzle:
Four cards from two packs are displayed, two face up and two face down. What you see is a Queen of hearts, a King of spades, a blue back and a red back. The question: how many cards do you have to turn over to know that every red queen has a blue back – and which ones?
Used as an interview question, almost everyone turns over the wrong cards and draws the wrong conclusions about what they see.
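The puzzle can be settled mechanically. The sketch below (my own illustration, not from the original interview) brute-forces it: a card must be turned over exactly when some possible hidden side could violate the rule “every red queen has a blue back”. The face and back categories are simplified assumptions for illustration.

```python
# Brute-force check of the card puzzle. Rule: every red queen
# has a blue back. A card must be turned over iff some possible
# hidden side could violate the rule.

FACES = ["red queen", "other face"]   # simplified face types
BACKS = ["blue back", "red back"]

def violates(face, back):
    """The rule fails only for a red queen with a non-blue back."""
    return face == "red queen" and back != "blue back"

# The four visible sides: (which side is showing, what you see)
cards = [
    ("face", "red queen"),    # Queen of hearts
    ("face", "other face"),   # King of spades
    ("back", "blue back"),
    ("back", "red back"),
]

must_turn = []
for side, seen in cards:
    if side == "face":
        # Hidden side is a back: could any back break the rule?
        risky = any(violates(seen, b) for b in BACKS)
    else:
        # Hidden side is a face: could any face break the rule?
        risky = any(violates(f, seen) for f in FACES)
    if risky:
        must_turn.append(seen)

print(must_turn)  # ['red queen', 'red back']
```

Two cards: the red queen (its back might not be blue) and the red back (its face might be a red queen). The blue back is the classic trap – whatever its face, it cannot break the rule.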
Truth through induction
Induction is more subtle than deduction. It is how we use experience to find rules.
Suppose you notice that every time you drop an apple it falls downwards. You might infer a concept – and give it the name gravity. Logically, gravity could stop working at any time. Just because something has worked in the past doesn’t mean it will continue to work in the future. But it is a fairly safe working hypothesis that gravity will not suddenly switch off any time soon.
Inductive logic depends on correlation – in this case between letting go of an apple and it falling. Achieving it in an AI is much more impressive than deduction.
Inductive logic can lead to errors akin to Black Swans: at one point, all the swans Europeans had ever seen were white, so the obvious conclusion was that all swans were white. Obvious, but wrong: explorers later discovered black swans in Australia.
Similar mistakes can arise anywhere, but wider experience warns us that just because flicking a switch has always made a bulb light up, it will not always do so. A light bulb only works until it blows, or while the power is on.
And it is a mistake to conflate correlation and causality.
Correlation vs causality
It is easy to make an AI which can spot correlations: this type of symptom correlates with cancer; this type of board position correlates with a win. But the superstitions of early medicine show the shortfalls of this approach. And in a game, a change in your opponent’s strategy can make the same position correlate with a loss.
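A toy simulation makes the trap concrete. In this sketch (an invented example with made-up variables, not real data), a hidden common cause drives two quantities: they correlate strongly, yet intervening on one leaves the other unmoved.

```python
import random

random.seed(0)

# Hypothetical toy model: a hidden common cause ("summer heat")
# drives both ice cream sales and sunburn cases. Neither causes
# the other, yet they correlate strongly.
n = 10_000
heat = [random.gauss(0, 1) for _ in range(n)]
ice_cream = [h + random.gauss(0, 0.3) for h in heat]
sunburn = [h + random.gauss(0, 0.3) for h in heat]

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(correlation(ice_cream, sunburn))   # high (around 0.9)

# An intervention reveals the truth: force ice cream sales to
# arbitrary values and sunburn does not respond.
forced = [random.gauss(0, 1) for _ in range(n)]
print(correlation(forced, sunburn))      # near zero
```

A correlation-spotting AI would happily predict sunburn from ice cream sales – and its predictions would collapse the moment anything intervened on sales, just as a changed opponent strategy invalidates a learned board-position correlation.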
Contrast this with the recent Nobel Prize in Physics for LIGO, which detected gravitational waves predicted by Einstein’s General Relativity a century earlier. Einstein had not fitted his theory to experimental measurements, like some economics modeller – measurements of gravitational waves were not possible until recently. His theory predicted, from a deep understanding of reality, what people would eventually measure. It was based on underlying causes.
Physical laws compress some underlying truths about the universe into a few symbols: F = ma, s = ut + ½at², E = mc², E = hν.
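That compression is easy to demonstrate. The sketch below (my own numerical check, with arbitrary example values) integrates constant-acceleration motion in many tiny steps and recovers the same distance the one-line formula s = ut + ½at² gives directly.

```python
# Numerical check of s = ut + ½at² for constant acceleration:
# a step-by-step simulation converges to the closed-form distance.
u, a, t = 3.0, 9.8, 2.0            # initial speed, acceleration, time

closed_form = u * t + 0.5 * a * t * t   # s = ut + ½at²

steps = 100_000
dt = t / steps
s, v = 0.0, u
for _ in range(steps):
    s += (v + 0.5 * a * dt) * dt   # distance at average speed over the step
    v += a * dt                    # speed update

print(closed_form)                 # 25.6
print(abs(s - closed_form) < 1e-6) # True: formula matches simulation
```

One short formula stands in for a hundred thousand simulation steps – compression of a truth about the world into a few symbols.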
So a big step towards useful AI is understanding causation, rather than mere correlation.
AI in compression
Inductive logic is used by Forbidden’s Blackbird video compression. Training informs potential rules; actual rules are calculated in real time as video is compressed. Inaccuracies are patched up. The better the rules, the less patching up is required – and the lower the data rate.
Even using correlation, human-originated ideas and much pixel bashing, we have powerful compression.
With the time and patience to develop advanced abstract models, AIs will make compression formidable.
Stephen B Streater
Founder and Director of R&D