Our brains run at much lower power. If we make digital models run at lower power they will have noise in them, but each particular system will adapt to the kind of noise in that particular system, so it will still work at lower power, even though it won't run exactly the way you intended.
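A rough sketch of what "adapting to its own noise" could look like during training: inject that device's noise into the forward pass so the weights learn to compensate. This is only an illustration; the Gaussian noise model, the `noise_std` value, and the layer itself are assumptions, not a description of any actual low-power hardware.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Linear layer whose weights are perturbed by fresh noise on every
    forward pass, standing in for an imprecise low-power device."""

    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.noise_std = noise_std

    def forward(self, x):
        # Sample device-like noise each call; gradients flow through the
        # noisy weights, so training adapts to this particular noise.
        w = self.linear.weight
        noisy_w = w + self.noise_std * torch.randn_like(w)
        return F.linear(x, noisy_w, self.linear.bias)
```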

Roughly 30 W for our brains versus about a megawatt (1 MW) for large AI models. So models might be trained at higher power and then a smaller, lower-power version produced for use.
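One standard way to get that smaller, lower-power version from a big trained model is knowledge distillation: train a small student to match the large teacher's softened outputs. A generic sketch, not any particular system's recipe:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend ordinary cross-entropy on the labels with a KL term that pulls
    the student's softened predictions toward the teacher's."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```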


ChatGPT doesn't know about truth. It has been trained on what people have written, and it's trying to predict what people will type next. It has to hold a kind of blend of all these opinions so that it can model what anybody might say. That's very different from a person who tries to have a consistent world view.
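"Predicting what people will type" is just next-token prediction over human-written text, which is why the model ends up carrying a blend of everyone's opinions rather than one consistent view. A minimal sketch of that objective (tensor shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits, tokens):
    """Language-modelling loss: at each position, predict the token that
    actually comes next in the human-written text.

    logits: (batch, seq_len, vocab) model outputs
    tokens: (batch, seq_len) the text itself
    """
    # Shift so position t predicts token t+1.
    pred = logits[:, :-1, :].reshape(-1, logits.size(-1))
    target = tokens[:, 1:].reshape(-1)
    return F.cross_entropy(pred, target)
```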

"I think we're going to move towards systems that can understand different world views. If you have this world view, then this is the answer, and if you have this other world view ..."

Do people get their own truths? What is 'a bad thing'?

There's a governance challenge: who makes these decisions?

Google currently does not do this. It refers you to relevant documents.
 
How do we make it synergistic, so that it helps people? Can we do this with the current political system? Would Putin be trusted with this power? Will there be treaties to prevent misuse, as we have for other technologies?

Does it require not just one person or a few people to be sensible, but everybody?