In the last two decades the field of Artificial Intelligence (AI) has rapidly gained momentum. 'Deep learning' in particular, a family of computer programs loosely inspired by the brain, is a term most people have heard of. Many things that at first seemed impossible were achieved through deep learning, from recognizing objects in images to translating texts. But such algorithms are also used for more serious business. Corporations and governments use deep learning algorithms to predict insurance, credit and fraud risk. And police agencies and courts are experimenting with AI systems to predict where crime will happen and who is likely to commit crimes again in the future.
Deep learning is built on a type of computer algorithm called an Artificial Neural Network (ANN). ANNs are algorithms programmed to learn in a way loosely modelled on the human brain. Just as the brain consists of multiple layers of neurons, an ANN can have more than one layer of artificial neurons. Because computers have become much faster in recent decades, ANNs can now consist of many layers of neurons. The word "deep" is thus used to describe ANNs with more than just a few layers.
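To make the idea of layers concrete, here is a minimal sketch of such a network in plain Python with NumPy. It is not any real library's API, and the sizes and random weights are made up for illustration: each layer multiplies its input by a weight matrix and passes the result through a simple non-linearity, and stacking several such layers is what makes the network "deep".

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common non-linearity: keep positive values, zero out the rest.
    return np.maximum(0.0, x)

# Random (untrained) weights for three layers of artificial neurons:
# 4 inputs -> 8 neurons -> 8 neurons -> 1 output. All sizes are
# arbitrary choices for this example.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 8))
W3 = rng.normal(size=(8, 1))

def forward(x):
    h1 = relu(x @ W1)   # first layer of neurons
    h2 = relu(h1 @ W2)  # second, "deeper" layer
    return h2 @ W3      # output layer: one number per input row

x = rng.normal(size=(1, 4))  # one example with 4 input values
print(forward(x).shape)      # one prediction per input row
```

Training would mean adjusting the weight matrices so the outputs match known examples; here the weights are random, so the network computes something, but nothing meaningful yet.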
Artificial intelligence is often presented as neutral, in contrast to biased humans. Sometimes this may be true, but these algorithms are also regularly caught reproducing the exact same biases that humans have. The data from which an Artificial Neural Network learns determines how the algorithm will 'think'. Imagine how you learned as a child from what you saw, interacted with and were told by your parents. The same goes for an ANN: it learns only from the data fed to it by its 'parents'. These parents can be the engineers working for companies or governments. This works well when the data is balanced and contains genuine patterns to learn. But when the data is poorly prepared or unbalanced, the ANN can learn spurious patterns instead.
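A toy example, with made-up numbers, shows why unbalanced data is dangerous: if 95% of the examples belong to one class, a "model" that ignores its input entirely and always predicts that class still looks accurate, even though it has learned nothing about the actual problem.

```python
import numpy as np

# Made-up dataset: 95 examples of class 0, only 5 of class 1.
labels = np.array([0] * 95 + [1] * 5)

# A "model" that always predicts the majority class, class 0.
predictions = np.zeros_like(labels)

accuracy = (predictions == labels).mean()
print(accuracy)  # 0.95: looks impressive, yet every class-1 case is missed
```

This is the kind of spurious pattern an ANN can fall into when its training data is unbalanced: high overall accuracy while systematically failing on the underrepresented group.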