HTC Seminar Series #10: The Future of Innovation in Artificial Intelligence and Robotics
In order to build the future, we need to understand the past and how we got to where we are today. Rodney takes that axiom to heart and runs us through a very informative history of the development of AI.
Rodney has been a pioneer in the development of robotics technology and a proponent of behavior-based robotics, in which a robot's behavior is programmed as reactions to its environment. You are probably familiar with a commercial robot based on these ideas, the Roomba, manufactured by iRobot, a company he co-founded. So he brings a unique perspective to this talk, presenting both the underlying academic theories and the practicalities of building commercial products from those ideas.
He authored a very influential paper in 1990 called "Elephants Don't Play Chess", which argues for reactive, behavior-based approaches to artificial intelligence and describes a series of insect-like robots programmed using his subsumption architecture. You can think of this research as an alternative to, and a reaction against, the more traditional (at the time) symbol-manipulation-based AI systems.
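To make the flavor of that architecture concrete, here's a toy Python sketch of layered, priority-based control in the subsumption style. The layer names, sensor fields, and thresholds are all invented for illustration and have nothing to do with Brooks's actual robot code; it's just meant to show how higher layers can suppress lower ones.

```python
# Toy subsumption-style control loop (illustrative only, not Brooks's code).
# Each layer either returns a command or defers; higher-priority layers
# suppress everything below them.

def seek_charger(sensors):
    """Highest layer: head for the dock when the battery is low."""
    if sensors["battery"] < 0.2:
        return "go_to_dock"
    return None  # no opinion; defer to lower layers

def avoid_obstacles(sensors):
    """Middle layer: reflexively turn away from nearby obstacles."""
    if sensors["obstacle_distance"] < 0.3:  # hypothetical range reading, metres
        return "turn_left"
    return None

def wander(sensors):
    """Lowest layer: keep moving when nothing more important is happening."""
    return "forward"

# Ordered from highest to lowest priority.
LAYERS = [seek_charger, avoid_obstacles, wander]

def control_step(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(control_step({"obstacle_distance": 1.0, "battery": 0.1}))  # go_to_dock
print(control_step({"obstacle_distance": 0.1, "battery": 0.9}))  # turn_left
```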
These ideas extend beyond robotics. Roughly two-thirds of current video games use some form of behavior tree programming (via the Unity and Unreal APIs), which is a direct outgrowth of Rodney's robotics research.
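If you haven't run into behavior trees before, here's a minimal sketch of the underlying data structure in plain Python. This is not the Unity or Unreal API, just the core idea: composite nodes "tick" their children and report success or failure, giving you prioritized, reactive decision making.

```python
# Minimal behavior tree: Selector = prioritized fallback, Sequence = do these
# steps in order. Purely illustrative; real engines add running states, timers, etc.

SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Succeeds only if every child succeeds, evaluated in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Succeeds as soon as any child succeeds (try options in priority order)."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, state):
        return SUCCESS if self.fn(state) else FAILURE

class Action:
    def __init__(self, name):
        self.name = name
    def tick(self, state):
        print("doing:", self.name)
        return SUCCESS

# A hypothetical enemy NPC: attack if the player is visible, otherwise patrol.
tree = Selector(
    Sequence(Condition(lambda s: s["player_visible"]), Action("attack")),
    Action("patrol"),
)
tree.tick({"player_visible": False})  # prints "doing: patrol"
```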
A lot of his talk is a bit Eeyore in nature, focusing on different issues that he thinks will make general AI harder to achieve than its proponents would lead us to believe. I think it's important to understand the different 'hard' things he's referring to, but I don't agree with everything he says about them.
I think some of his criticisms of AI systems that have no real understanding of what they are labeling go away if you think of those systems as modules in a much larger system, one in which different modules interact to build a more robust form of understanding.
He gives the example of neural net object recognition systems, which can recognize an object but have no real understanding of what the object is or how you could interact with it. But the questions he asks could probably be answered by another neural net system that models a large corpus of text. A larger system incorporating both kinds of specialized neural nets could perhaps answer the questions he asks about a frisbee (how big is it, can you eat it, what is its shape, what can you do with it) as well as recognize whether a frisbee is in a photo.
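Here's a hand-wavy sketch of what I mean by that kind of composition. Both "modules" below are canned stubs standing in for real neural nets; the only point is the coordinator that routes perception questions to a vision module and commonsense questions to a text-corpus module.

```python
# Sketch of the "modules in a larger system" idea, with stubs where real
# models would go. Nothing here is a real model; it's the routing that matters.

def vision_module(image, obj):
    """Stand-in for an object-recognition net: is obj in the photo?"""
    detected = {"frisbee", "person", "grass"}  # canned labels for illustration
    return obj in detected

def language_module(question):
    """Stand-in for a model trained on a large text corpus."""
    canned_knowledge = {
        "can you eat a frisbee": "no",
        "what shape is a frisbee": "a flat disc",
        "how big is a frisbee": "about 20-25 cm across",
    }
    return canned_knowledge.get(question.lower(), "unknown")

def answer(image, question):
    """Coordinator: route each question to the module best suited to it."""
    if question["type"] == "perception":
        return vision_module(image, question["object"])
    return language_module(question["text"])

print(answer("photo.jpg", {"type": "perception", "object": "frisbee"}))            # True
print(answer("photo.jpg", {"type": "commonsense", "text": "Can you eat a frisbee"}))  # no
```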
And computational models of color constancy have been around forever, so just because one isn't being used in a particular neural net based computer vision system doesn't mean that computer vision systems can't deal with the issue of color constancy.
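One of the oldest computational color constancy models is the gray world assumption: the average reflectance in a scene is roughly achromatic, so scaling each color channel to equalize the channel means removes much of the illuminant's color cast. A few lines of NumPy are enough to show the idea isn't exotic (this is just the textbook gray world algorithm, not anything from the talk).

```python
import numpy as np

def gray_world(image):
    """Gray-world white balance. image: float array (H, W, 3), values in [0, 1]."""
    channel_means = image.reshape(-1, 3).mean(axis=0)   # mean R, G, B
    gains = channel_means.mean() / channel_means        # per-channel correction gains
    return np.clip(image * gains, 0.0, 1.0)

# A synthetic scene lit by a reddish illuminant:
rng = np.random.default_rng(0)
scene = rng.random((64, 64, 3)) * np.array([1.0, 0.7, 0.6])
balanced = gray_world(scene)
print(scene.reshape(-1, 3).mean(axis=0))     # channel means biased toward red
print(balanced.reshape(-1, 3).mean(axis=0))  # roughly equal channel means
```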
There are a lot of great ideas brought up in the long question and answer session after the talk, so it's worth continuing to watch to take it all in. You can also reminisce about the good old days, when people could pack themselves into an auditorium to hear a seminar without worrying about fueling a viral pandemic.
For those of you youngsters who don't grok why Claude Shannon is a luminary: his information theory is the backbone of practically everything in your life these days. Streaming video, mobile phones, the compact disc, DNA sequencing, cryptography, the understanding of black holes; the list goes on and on.
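Shannon's central quantity fits in one line: the entropy of a source, in bits, is \(H = -\sum_i p_i \log_2 p_i\), and it sets the limit on how far data can be compressed and how much information a noisy channel can carry. A quick illustration (my own toy example):

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))   # 1.0 bit: a fair coin flip
print(entropy_bits([0.9, 0.1]))   # ~0.47 bits: a biased coin is more predictable
print(entropy_bits([0.25] * 4))   # 2.0 bits: two fair coin flips
```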