
Why is chatroulette so bad now?
Well, it's not just the dangers of the platform itself, disturbing as those are. We have to face the fact that, as with any new technology, its underlying principles are insecure, and so are the products and assumptions built on top of them. But the solution is not to ban chatroulette or to force the underlying technology to change. That technology addresses one of the most important problems of our time: who has information, and how do they share it? And chatroulette is a great example of just how difficult that problem still is. It's 2015, so what's the big fuss? The fuss is that the technology that actually solves this problem is incredibly difficult to build, and it has to scale extremely well: it's a system in which hundreds of millions of people compete for a very small amount of computing time.
And so a few years ago I did a study in which I pitted two very different chat platforms against one another, and I collected hundreds of thousands of responses. In that study, we found something that essentially solves the underlying problem: a technique called deep reinforcement learning. And by deep I mean that we use learning algorithms to teach the chat system to do things like promote your posts when they earn more likes, or demote them when they earn fewer.
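To make that concrete, here is a toy sketch of the reward-driven idea, not the actual system from the study. Everything here is hypothetical: the `PostRanker` class, the promote/demote actions, and the simulated likes. It is a simple bandit-style learner, which is the smallest possible form of reinforcement learning, rather than a deep network.

```python
import random

class PostRanker:
    """Toy reinforcement learner (hypothetical): learns whether to
    promote or demote posts based on the likes each choice earns."""

    def __init__(self, actions=("promote", "demote"), lr=0.1, epsilon=0.1, seed=0):
        self.q = {a: 0.0 for a in actions}  # estimated likes per action
        self.lr = lr                        # how fast estimates move
        self.epsilon = epsilon              # how often to explore
        self.rng = random.Random(seed)

    def choose(self):
        # Epsilon-greedy: mostly pick the best-known action, sometimes explore.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, action, likes):
        # Nudge the estimate toward the reward we actually observed.
        self.q[action] += self.lr * (likes - self.q[action])


# Simulated feedback: promoted posts tend to collect more likes.
ranker = PostRanker()
for _ in range(500):
    action = ranker.choose()
    likes = 5 if action == "promote" else 1
    ranker.update(action, likes)

print(max(ranker.q, key=ranker.q.get))
```

The point of the sketch is only the loop structure: act, observe likes, update, repeat. A deep version replaces the lookup table `q` with a neural network, but the feedback loop is the same.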
And what this actually teaches you is that even though chat still behaves a lot like a game, it does so in a much more sophisticated, game-like way. You can imagine a little bot that you can prompt to get more examples in your chat. Or you can imagine a neural network that learns to play this game of chat. And what's amazing is that it learns nothing that doesn't already exist in the text.
How does it learn? Well, we recently demonstrated how this kind of learning is possible. The neural net learned to play a game of Jeopardy! by playing against itself, over and over. And the thing is, even though we can't simulate everything that happens in a person's head, we can simulate what happens when a computer plays against itself. And that is exactly what we saw with the neural net.
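Self-play is easier to see on a game far smaller than Jeopardy!, so here is a minimal sketch under assumed details of my own: a single value table plays both sides of a one-pile Nim game (take 1 to 3 stones, whoever takes the last stone wins) and learns a strategy purely from its own games. The function name, game, and parameters are all illustrative, not anything from the talk.

```python
import random

def self_play_nim(pile=5, episodes=5000, lr=0.1, eps=0.3, seed=1):
    """Toy tabular self-play (hypothetical setup): one Q-table plays
    both sides of Nim and learns from the outcomes of its own games."""
    rng = random.Random(seed)
    Q = {}  # (stones_left, stones_taken) -> value for the player to move

    def legal(s):
        return range(1, min(3, s) + 1)

    def choose(s):
        if rng.random() < eps:
            return rng.choice(list(legal(s)))               # explore
        return max(legal(s), key=lambda a: Q.get((s, a), 0.0))  # exploit

    for _ in range(episodes):
        s, history = pile, []
        while s > 0:
            a = choose(s)
            history.append((s, a))
            s -= a
        # The player who took the last stone wins. Walk the game backward,
        # flipping the reward's sign because the two sides alternate moves.
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + lr * (reward - old)
            reward = -reward

    return Q

Q = self_play_nim()
best = max(range(1, 4), key=lambda a: Q.get((5, a), 0.0))
print("opening move:", best)
```

Both players are the same table, so every game generates training signal for winner and loser at once; that is the whole trick of self-play, whether the game is Nim or something as large as Jeopardy!.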
So the lesson to take away from all of this is: if you want something to happen, model it. If you want a product to work, adopt it. If you want someone to respond to a message, accept it and de-escalate it. You have to play the game of its evolution.
Play the game of human evolution.
Thank you very much.
As you can imagine, our program has many features. Its primary one being, obviously,