February 28
Low-quality MP3 recordings: intro, neocortex, building a brain, Q&A
Jeff Hawkins, you may recall, is a man with the most amazing story. As a college student, after reading an issue of Scientific American, he decided that he wanted to figure out how the brain worked so he could build one. Neuroscience departments turned him down — they were too busy collecting low-level details about what the brain looked like. AI departments turned him down — they already knew how the brain worked, it was just like a computer. (They were wrong, of course.) Companies like Intel turned him down — it wouldn’t turn a profit for many years.
So Hawkins decided that if no one would give him money to do this, he’d go out and make some money of his own. He invented the Palm Pilot, the Handspring, and the Treo, and made millions. And then he turned around and used the money to start a neuroscience institute here in Silicon Valley to do the work on brains that nobody else was doing.
OK, already you have a great story. Yet, to my knowledge, it was never covered anywhere. Hawkins didn’t get a 60 Minutes story or a magazine article. But it gets even better: it worked. Pretty quickly, they figured out how the brain actually worked, in a detailed theory that explains everything from your everyday experience down to the actual physical wiring of the brain. And then Hawkins wrote a book on it for the lay audience, explaining this ingenious theory, which actually turns out to be quite simple.
But still: hardly anything. I heard about Hawkins because he happened to be at the Stanford bookstore and I was sort of bored. He’s also gotten on a local NPR show, but there’s been very little coverage. Perhaps part of it is the title of his book. He named his book On Intelligence, which made me think it was about IQs and stuff. It turns out Hawkins is actually referring to the intelligence in Artificial Intelligence and so on, but it’d be much clearer if he called it “How Brains Work: A Huge New Discovery” or something really obvious.
It’s not that the media doesn’t cover science — when Stephen Wolfram came out with his book A New Kind of Science (see, that’s a clear name) he was all over the place. Big stories in the New York Times, appearances on Charlie Rose (my personal favorite moment was when he pulled out this completely incomprehensible chart from under Charlie’s table), and on and on. He was the talk of the intelligentsia for weeks. Not that anyone could understand what he was saying, figure out what it was supposed to do, or tell if it was true — his book was like 900 pages of boring and incomprehensible charts.
But Hawkins’s work is pretty easily comprehensible, has huge applications, is so obviously correct, and is just really fun to know. He needs a better publicist or something. But more importantly, we need better science coverage. Someone tell me: where’s the journal that covers things like this, where do I sign up to read big new ideas and discoveries explained simply?
I’m so lucky to be at Stanford so that the man who discovered how the brain works can explain it to me himself. He’s back, now teaching an Artificial Intelligence class about his discovery. It’s sort of a sad sight to tell the truth — Hawkins’s work is mindblowing stuff and these are probably exactly the kids who should be building it, but they’re also exactly the kids who have been indoctrinated with years of AI nonsense, so when he explains the theory they just don’t get it and when he discusses the results they just don’t believe him. Try to make artificial intelligence by studying the actual brain? It makes no sense to them. (It probably doesn’t help that Hawkins starts by telling them he calls his work “Real Intelligence” — “to distance itself from Artificial Intelligence”.)
Anyway, I won’t go over the whole of Hawkins’s explanation of his theory, but he did explain how the brain works in a new, simpler way that I thought I’d share. The brain, you will recall, is made up of a bunch of modules, which are wired together hierarchically. At the bottom there are lots of modules — one for components of sounds, one for lines in sight, etc. As you go up, the modules become more abstract — one combines vision and sound, etc. — forming a pyramid shape. At the top is the really high-level thought. Each big module follows a five-step process:
- Store sequences of patterns.
- Pass the name of that pattern upwards, to a higher module.
- Predict the next element of the sequence.
- Convert that invariant prediction into a specific prediction. (Invariant predictions are relative — if the sequence is a series of notes, it’s something like “a fifth”. A specific prediction would be the note itself — if you know the previous note, you can add a fifth to it to get the next note. The same is true for sight and touch and so on — we don’t notice things, we notice the relation between things. That’s why things look the same color in different shades of light and so on.)
- Pass this specific prediction down a level.
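To make the steps above concrete, here’s a toy sketch of one module — my own illustration, not Hawkins’s actual theory code or Dileep George’s implementation. It stores a melody as intervals (the invariant form), so it can recognize the same tune started on a different note, and it turns the invariant prediction (“down a step”) into a specific next note:

```python
class Module:
    """A toy version of one module: stores sequences, names them, predicts."""

    def __init__(self):
        self.sequences = {}  # name -> list of intervals (the invariant form)

    def store(self, name, notes):
        """Store a sequence as the intervals between notes, not the notes."""
        self.sequences[name] = [b - a for a, b in zip(notes, notes[1:])]

    def recognize(self, notes):
        """Return the name to pass upward if the intervals match so far."""
        steps = [b - a for a, b in zip(notes, notes[1:])]
        for name, stored in self.sequences.items():
            if stored[:len(steps)] == steps:
                return name
        return None

    def predict_next(self, name, notes):
        """Convert the invariant prediction (an interval) into a specific note."""
        steps = self.sequences[name]
        position = len(notes) - 1
        if position < len(steps):
            return notes[-1] + steps[position]
        return None

m = Module()
m.store("melody", [60, 67, 65])            # remembered as intervals [+7, -2]
print(m.recognize([62, 69]))               # transposed start still matches: "melody"
print(m.predict_next("melody", [62, 69]))  # 69 - 2 = 67
```

Because only the relations between notes are stored, the module recognizes the melody even when it starts two semitones higher — a crude stand-in for the invariance the theory describes.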
That may not be entirely comprehensible if you haven’t read the book. Sorry, but I want to get to the new stuff.
So a Stanford graduate student, Dileep George, saw this theory and decided to build a brain with it. (One new thing I learned is that Dileep’s code resolves the disagreement between various sections of the brain using something called “belief propagation”.) You have to start somewhere, so he decided to teach it to recognize little pictograms. Some look recognizable — a cat, a duck, a dog, a boat — others are just sort of random. He trained a simple three-level brain to recognize each of these symbols, sometimes with a few modifications (like flipping the image), and then he tried to attack it.
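For a rough sense of what belief propagation does here — this is a hedged sketch of the general idea, not Dileep’s actual code, and the categories and numbers are made up — each region sends up a likelihood over the possible causes of what it sees, and a parent reconciles disagreements by multiplying the messages together and renormalizing:

```python
def combine(prior, messages):
    """Multiply the prior by each child's likelihood message, then normalize."""
    belief = dict(prior)
    for msg in messages:
        for cause in belief:
            belief[cause] *= msg.get(cause, 0.0)
    total = sum(belief.values())
    return {cause: p / total for cause, p in belief.items()}

prior = {"cat": 0.5, "dog": 0.5}
region_a = {"cat": 0.8, "dog": 0.2}  # one region is fairly sure it's a cat
region_b = {"cat": 0.4, "dog": 0.6}  # another leans dog

belief = combine(prior, [region_a, region_b])
print(belief)  # "cat" wins: 0.8 * 0.4 beats 0.2 * 0.6
```

Real belief propagation passes messages both up and down a network of nodes rather than doing one combination step, but the core move — reconciling conflicting local evidence by multiplying likelihoods — is the same.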
He drew the symbols with dotted lines, with squiggly lines, he drew them too big, he drew them too small, he drew them with gaps and breaks, he drew them with heads chopped off, he drew them with all sorts of speckles and noise, and on and on. The brain could still recognize them.
This may not sound like much of an achievement, but it’s important to understand that artificial intelligence people have been working on vision systems for decades. Their technology is very specialized — written specifically for recognizing these pictures — and it can’t handle any of this. It looks for lines and corners. If you have squiggly lines and rounded corners, it’s useless. This is just orders of magnitude better than anything out there.
And it’s completely generic. The simulated brain has no idea it’s looking at pictograms — all it sees are zeroes and ones, just like the electrical pulses of the real brain. You’ve got to assume that you could show it sounds or smells or even weather patterns and it’d do just as well. (If you know Matlab, you can grab Dileep’s code and try it). But this is just concrete proof of a huge breakthrough.
What do the neuroscientists think? Well, Hawkins says, nobody has been able to prove him wrong. They just look at his book and say “Well, there’s some evidence [that this is true] and we have to test and it could take years [before we’re sure] and blah blah blah.” (Neuroscience experiments take a really long time for some reason. “A typical animal experiment, even a simple one…takes one to four years.”) Hawkins isn’t in the mood to wait. He did have one piece of good news — one neuroscientist in Japan who is working on a relatively obscure area of the brain had lunch with him and told him about her unpublished research. (She was excited; nobody ever really cared about her work before.) And her results, which nobody really knew about, definitely confirm his theory. So everything is looking good.
The response from computer science people (even though “the book wasn’t really written for them”) is much more positive. “They are the ones looking at it and they go ‘Oh, this is an algorithm and I understand algorithms — intuitively obvious! Yes, I get it! What else could it be?’ … ‘Oh, I get it. I can build this.’” That’s certainly the reaction I had: the theory is just so obvious and beautifully true. (beat) I have to build it. Right now.
Apparently, I’m not the only one. Hawkins is transforming the research institute into a company and moving it to Berkeley. I missed some of the details, but apparently the idea is to build toolkits that other people can use and plug stuff into and work on, so that people around the world can help out. The intellectual property, he says, won’t be completely free — it’s a business, with venture capital money and a need to turn a profit — but they want as many people as they can get helping. Some people have even offered to build the brain technology into silicon, which will probably eventually be a good idea for speed reasons.
Why a business? Because the “profit motive” is faster than academia. Hawkins can’t wait to see these ideas out there — he wants his discovery to have as much impact as possible. I don’t know if that’s true or not, but it makes starting a startup sound oddly noble.
Anyway, the business is just starting — they recently incorporated — and they’re looking for a small team of really smart people to help out. World-changing technology is coming soon.
posted March 26, 2005 05:06 PM (Education) (3 comments) #