What AlphaGo can teach us about how people learn

Of course, we are looking at ways to apply MuZero to real-world problems, and there are some encouraging initial results. To give a concrete example, internet traffic is dominated by video, and a major open problem is how to compress those videos as efficiently as possible. You can think of this as a reinforcement learning problem, because there are these very complicated programs that compress the video, but what you see next is unknown. When you plug something like MuZero into it, our initial results look very promising in terms of saving significant amounts of data, perhaps something like 5 percent of the bits used to compress a video.
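
To make that framing concrete, here is a minimal sketch of how per-frame codec rate control could be posed as a reinforcement learning loop. Everything in it, the toy environment, the bit-cost and quality models, and the reward weighting, is an illustrative assumption, not DeepMind's actual MuZero setup.

```python
import random

# Toy illustration: the agent picks a quantization parameter (QP) for each frame.
# Lower QP means higher quality but more bits; the agent never sees future frames
# in advance, which is what makes this a sequential decision problem.

class ToyVideoEnv:
    def __init__(self, num_frames=100):
        # The complexity of each upcoming frame is unknown to the agent beforehand.
        self.complexities = [random.random() for _ in range(num_frames)]
        self.t = 0

    def step(self, qp):
        """qp in [0, 51], as in common codecs: higher QP -> fewer bits, lower quality."""
        c = self.complexities[self.t]
        bits = c * (52 - qp) * 1000        # crude stand-in for the bit cost of the frame
        quality = 1.0 - c * qp / 51.0      # crude stand-in for perceptual quality
        reward = quality - 1e-5 * bits     # trade off quality against bits spent
        self.t += 1
        done = self.t >= len(self.complexities)
        return reward, done

env = ToyVideoEnv()
total_reward, done = 0.0, False
while not done:
    qp = random.randint(20, 40)            # a learned agent would choose this instead
    reward, done = env.step(qp)
    total_reward += reward
print(f"episode return: {total_reward:.2f}")
```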

In the long run, where do you think reinforcement learning will have the biggest impact?

I am thinking of a system that can help you as a user achieve your goals as effectively as possible. A truly powerful system that sees all the things you see, that has all the same senses you have, and that is able to help you achieve your goals in your life. I think that’s really important. Another transformative use, looking long-term, is something that can provide a personalized health-care solution. There are privacy and ethical issues that need to be addressed, but it would have enormous transformative value; it would change the face of medicine and people’s quality of life.

Is there anything you think machines will learn to do within your lifetime?

I will not put a timescale on it, but I will say that everything a human being can achieve, I ultimately believe a machine can too. The brain is a computational process; I do not think there is any magic going on there.

Can we reach the point where we can understand and implement algorithms as efficient and powerful as the human brain? I do not know what the time scale is. But I think the journey is exciting, and we should aim to achieve that. The first step in taking that journey is to try to understand what it even means to achieve intelligence. What problem are we trying to solve when we try to solve intelligence?

Practical applications aside, are you confident that you can go from mastering games like chess and Atari to true intelligence? What makes you believe that reinforcement learning will lead to machines with common-sense understanding?

There is a hypothesis, we call it the reward-is-enough hypothesis, which says that the essential process of intelligence could be as simple as a system seeking to maximize its reward, and that the process of trying to achieve a goal and to maximize reward is enough to give rise to all the qualities of intelligence that we see in natural intelligence. It’s a hypothesis; we do not know whether it is true, but it provides a direction for research.
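
For reference, "maximizing reward" here is the standard reinforcement learning objective: find a policy whose expected cumulative (discounted) reward is as large as possible. The formulation below is the textbook one, not anything specific to the hypothesis.

```latex
% Standard reinforcement-learning objective: find a policy \pi that maximizes
% the expected discounted sum of rewards r_t, with discount factor \gamma \in [0, 1).
\[
  \pi^{*} \;=\; \arg\max_{\pi} J(\pi),
  \qquad
  J(\pi) \;=\; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} r_{t} \right].
\]
```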

If we take common sense specifically, the reward-is-enough hypothesis says that if common sense is useful to a system, that means it should actually help the system better achieve its goals.

It sounds as if you believe that your area of expertise, reinforcement learning, is in some sense fundamental to understanding, or “solving,” intelligence. Is that right?

I really see it as very important. I think the big question is, is it true? Because it certainly flies in the face of how many people view AI, which is that there is this incredibly complex collection of mechanisms involved in intelligence, and each one of them has its own kind of problem that it solves, or its own special way that it works, or maybe there is not even a clear problem definition at all for something like common sense. This theory says, no, in fact there can be this very clear and simple way of thinking about all of intelligence, namely that it is a goal-optimizing system, and that if we find the way to optimize goals really, really well, then all these other things will come out of that process.
