Barely a day goes by without some new story about AI, or artificial intelligence. The excitement about it is palpable – the possibilities, some say, are endless. Fears about it are spreading fast, too.
Much of the coverage assumes a level of knowledge and understanding of AI that can be bewildering for people who have not followed every twist and turn of the debate.
So, the Guardian’s technology editors, Dan Milmo and Alex Hern, are going back to basics – answering the questions that millions of readers may have been too afraid to ask.
What is artificial intelligence?
The term is almost as old as electronic computers themselves, coined back in 1955 by a team of researchers including John McCarthy and the legendary computer scientist Marvin Minsky.
In some respects, it is already in our lives in ways you may not realise. The special effects in some films and voice assistants like Amazon’s Alexa all use simple forms of artificial intelligence. But in the current debate, AI has come to mean something else.
It boils down to this: most old-school computers do what they are told. They follow instructions given to them in the form of code. But if we want computers to solve more complex tasks, they need to do more than that. To make them smarter, we are trying to train them to learn in a way that imitates human behaviour.
Computers cannot be taught to think for themselves, but they can be taught how to analyse information and draw inferences from patterns within datasets. And the more you give them – computer systems can now cope with truly vast amounts of information – the better they should get at it.
The most successful versions of machine learning in recent years have used a system known as a neural network, which is modelled at a very simple level on how we think a brain works.
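For readers who like to see the idea in practice, here is a very small sketch in Python of a neural network learning a pattern from examples. It uses only the numpy library, and the toy task (the XOR pattern), the layer sizes and the learning rate are invented for illustration rather than taken from any real system.

```python
# A minimal sketch of "learning from examples": a tiny neural network, written with
# numpy only, that learns the XOR pattern from four examples. All the numbers here
# (layer sizes, learning rate, training steps) are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Four input pairs and the pattern we want the network to learn (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights, started at random values.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: turn inputs into a prediction.
    hidden = sigmoid(X @ W1)
    pred = sigmoid(hidden @ W2)

    # Compare predictions with the right answers and nudge the weights
    # in the direction that reduces the error (gradient descent).
    error = pred - y
    delta2 = error * pred * (1 - pred)
    delta1 = (delta2 @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * (hidden.T @ delta2)
    W1 -= 0.5 * (X.T @ delta1)

print(np.round(pred, 2))  # after training, typically close to [[0], [1], [1], [0]]
```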
What are the different types of artificial intelligence?
With no strict definition of the phrase, and the lure of billions of dollars of funding for anyone who sprinkles AI into pitch documents, almost anything more complex than a calculator has been called artificial intelligence by someone.
There is no easy categorisation of artificial intelligence and the field is growing so quickly that even at the cutting edge, new approaches are being uncovered every month. Here are some of the main ones you may hear about:
Reinforcement learning
Perhaps the most basic form of training there is, reinforcement learning involves giving the system feedback each time it performs a task, so that it learns how to do the task correctly. It can be a slow and expensive process, but for systems that interact with the real world, there is sometimes no better way.
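A minimal sketch of that feedback loop, in Python: a program repeatedly picks one of three imaginary slot machines, receives a reward, and gradually learns which one pays out most often. The pay-out probabilities and the exploration rate below are invented for illustration.

```python
# A minimal sketch of reinforcement learning: an "agent" repeatedly picks one of
# three slot machines, gets a reward, and learns which machine pays best.
import random

payout_probability = [0.2, 0.5, 0.8]   # hidden from the agent
value_estimate = [0.0, 0.0, 0.0]       # what the agent believes each machine is worth
pulls = [0, 0, 0]

for step in range(2000):
    # Mostly pick the machine we currently think is best; occasionally explore.
    if random.random() < 0.1:
        choice = random.randrange(3)
    else:
        choice = value_estimate.index(max(value_estimate))

    # The feedback: a reward of 1 or 0.
    reward = 1 if random.random() < payout_probability[choice] else 0

    # Update our estimate of that machine, a little at a time.
    pulls[choice] += 1
    value_estimate[choice] += (reward - value_estimate[choice]) / pulls[choice]

print([round(v, 2) for v in value_estimate])  # should end up near [0.2, 0.5, 0.8]
```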
Large language models
These are a type of neural network. Large language models are trained by pouring into them billions of words of everyday text, gathered from sources ranging from books to tweets and everything in between. The LLMs draw on all this material to predict which words are likely to follow one another, and use that to generate sentences of their own.
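A crude way to see the prediction idea at work is to count, in a small piece of text, which word tends to follow which, and then use those counts to continue a sentence. Real LLMs use neural networks trained on vastly more text; the tiny corpus and the simple counting approach below are invented stand-ins for illustration.

```python
# A minimal sketch of next-word prediction: count which word tends to follow which
# in some text, then use those counts to continue a sentence.
from collections import Counter, defaultdict
import random

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# For each word, count the words that follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def continue_sentence(word, length=6):
    words = [word]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:
            break
        # Pick the next word in proportion to how often it followed the last one.
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_sentence("the"))
```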
Generative adversarial networks (GANs)
This is a way of pairing two neural networks together to make something new. The networks are used in creative work in music, visual art or film-making. One network is given the role of creator while a second is given the role of marker, and the first learns to create things that the second will approve of.
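Here is a minimal sketch of that creator-and-marker arrangement, written with the PyTorch library (assuming it is installed). The generator learns to produce numbers that resemble samples from a bell curve, and the discriminator learns to tell real samples from the generator's attempts; the network sizes, the target distribution and the training details are illustrative choices.

```python
# A minimal sketch of a GAN: a generator (the creator) and a discriminator (the
# marker) trained against each other on a simple 1-D task.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(3000):
    real = torch.randn(64, 1) + 4.0          # "real" samples: a bell curve centred on 4
    fake = generator(torch.randn(64, 1))      # the creator's attempts

    # Train the marker: approve real samples, reject the creator's fakes.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the creator: produce samples the marker will approve of.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, the generator's outputs typically cluster near 4.
print(generator(torch.randn(5, 1)).detach().flatten())
```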
Symbolic AI
There are even AI techniques that look to the past for inspiration. Symbolic AI is an approach that rejects the idea that a simple neural network is the best option, and tries to mix machine learning with carefully structured facts about the world.
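A minimal sketch of the symbolic style, in Python: explicit facts, an explicit "if this, then that" rule, and new facts inferred by applying the rule until nothing more follows. The facts and the rule themselves are invented for illustration.

```python
# A minimal sketch of symbolic AI: explicitly stated facts plus a rule of inference.
facts = {("cat", "is_a", "mammal"), ("mammal", "is_a", "animal"), ("cat", "has", "whiskers")}

# Rule: "is_a" is transitive. If X is a Y, and Y is a Z, then X is a Z.
def apply_rules(known):
    expanded = set(known)
    for (a, rel1, b) in known:
        for (c, rel2, d) in known:
            if rel1 == rel2 == "is_a" and b == c:
                expanded.add((a, "is_a", d))
    return expanded

# Keep applying the rule until no new facts appear.
while True:
    expanded = apply_rules(facts)
    if expanded == facts:
        break
    facts = expanded

print(("cat", "is_a", "animal") in facts)  # True: inferred, never stated directly
```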