Embodied Cognition

language is made of the world

Posted by Martin Ames Harrison on Thu, Aug 24, 2017

I have been thinking about the importance of embodiment to AI since learning Brooks’ ideas on subsumption architecture and on the limitations of planning in robotics. Recently, I found “Embodied Cognition: A field guide” and was reminded of the best reason for reading survey papers, namely, that the good ones articulate for you thoughts that’ve been floating around your head, or that’ve been “in the air” in a research circle.

Two points, superficially at odds, were my main takeaways from this survey. So here I go, distilling a distillation.

Situated Goal-orientation

Among the fundamental considerations in building intelligent robots are relevance and dynamics. Given the state of existing hardware, signal processing and image recognition technologies, these two may be the most important to the advancement of robotics (and, since I think AI hinges on embodiment, of AI generally). The latter is more obvious than the former, and refers to the challenge of achieving goals in a real environment where obstacles and targets are shifting in unpredictable ways. But relevance may be even harder and more fundamental. How does a robot decide what matters, even if it has a clearly defined goal? Jordan Peterson, of all people, touches quite often on the embodiment problem, mentioning Brooks by name, and saying things like this:

When you look at a hammer, you don’t see a hammer. You see the ability to hammer things.

Something close to that has to be true. Here, let him speak for himself:

Anderson, the author of that “field guide”, argues that Brooks’ approach is a logical limit of the GOFAI approach (at least along the dynamics dimension).

The above problems seem to suggest their own solution: shorter plans, more frequent attention to the environment, and selective representation. But the logical end of shortening plan length is the plan-less, immediate action; likewise the limit of more frequent attention to the environment is constant attention, which is just to use the world as its own model. Finally, extending the notion of selective representation leads to closing the gap between perception and action, perhaps even casting perception largely in terms of action. Thus, the problems of dynamics and relevance push us toward adopting a more reactive, agent-relative model of real-world action, what I call situated goal-orientation.

This. Exactly what I was thinking from the opening of the paper. He goes on to quote Brooks for the umpteenth time.
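To make that reactive picture concrete, here is a toy sketch of a subsumption-style controller. It is my own illustration, not code from Anderson or Brooks: behaviors sit in priority layers, every control cycle reads the sensors afresh rather than consulting a stored plan, and a higher layer pre-empts the ones below it.

```python
# Toy subsumption-style controller: layered reactive behaviors, no plan,
# no world model beyond the latest sensor reading.
from typing import Callable, Optional

# A behavior maps raw sensor readings to an action, or returns None to pass
# control down to a lower layer.
Behavior = Callable[[dict], Optional[str]]

def avoid(sensors: dict) -> Optional[str]:
    # Highest layer: react immediately if something is too close.
    if sensors.get("obstacle_distance", float("inf")) < 0.3:
        return "turn_away"
    return None

def seek_light(sensors: dict) -> Optional[str]:
    # Middle layer: head toward a light source when one is visible.
    return "move_toward_light" if sensors.get("light_visible", False) else None

def wander(sensors: dict) -> Optional[str]:
    # Lowest layer: default exploratory motion.
    return "move_forward"

LAYERS: list[Behavior] = [avoid, seek_light, wander]  # highest priority first

def step(sensors: dict) -> str:
    # Each tick, the highest layer that wants control subsumes the rest.
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action
    return "idle"

if __name__ == "__main__":
    print(step({"obstacle_distance": 0.1, "light_visible": True}))   # turn_away
    print(step({"obstacle_distance": 2.0, "light_visible": True}))   # move_toward_light
    print(step({"obstacle_distance": 2.0, "light_visible": False}))  # move_forward
```

The point of the sketch is what is missing: there is no planner and no persistent model of the room. The loop just keeps asking the world what it looks like right now, which is the “world as its own model” idea in miniature.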

Representation

Brooks famously wrote “Intelligence without Representation”. He argued not that representation has no role in AI, but that the real world should serve as its own model. It is easy to get carried away with Brooks’ very elegant and effective ideas (the man runs an actual company building actual robots). But let’s revisit the idea of relevance for a second. Things have unseen and even unobservable relevance. An altar is not just another part of a room, and Anderson points to the subtle behavioral conventions surrounding such an artifact (which gives no indication of said conventions to a naïve observer). Others’ minds, future selves and social hierarchies are all real enough entities whose representations must enter into an intelligent system’s calculations, and yet they are imperceptible.

My conclusion is that representation is absolutely necessary, but that we may not be able to get our hands on it. I wonder whether the most profound steps needed to create AI can be engineered, and I suspect that they must evolve. Brooks writes something that seems to run counter to my claim:

It is instructive to reflect on the way in which earth-based biological evolution spent its time. Single-cell entities arose out of the primordial soup roughly 3.5 billion years ago. A billion years passed before photosynthetic plants appeared. After almost another billion and a half years, around 550 million years ago, the first fish and vertebrates appeared, and then insects 450 million years ago. Then things started moving fast. Reptiles arrived 370 million years ago, followed by dinosaurs at 330 and mammals at 250 million years ago. The first primates appeared 120 million years ago and the immediate predecessors to the great apes a mere 18 million years ago. Man arrived in roughly his present form 2.5 million years ago. He invented agriculture a mere 10,000 years ago, writing less than 5000 years ago and ‘expert’ knowledge only over the last few hundred years. This suggests that problem solving behavior, language, expert knowledge and application, and reason, are all pretty simple once the essence of being and reacting are available. That essence is the ability to move around in a dynamic environment, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction. This part of intelligence is where evolution has concentrated its time–it is much harder. This is the physically grounded part of animal systems; these groundings provide the constraints on symbols necessary for them to be truly useful.

It is true that evolution appears to have concentrated its time on sensing and reacting. But that impression owes a lot to how the statement above is framed. It may be true that human-level intelligence evolved very late because it was not as inevitable as, say, the eye (whose emergence is believed to have triggered the Cambrian Explosion). By this I mean just that plenty of creatures got along, and still get along, just fine without intelligence like ours. It is not obvious that developing those more rudimentary systems was harder, nor what “harder” means exactly. Furthermore, that X is harder than Y for biological evolution does not imply that X is harder than Y for human engineers.

Still, I think that we will need to implement a form or forms of evolution if we want to achieve real AI. Reaching it may require the creation of self-reproducing machines. Whether we want to achieve real AI is a separate, serious question. While I somewhat disagree with Brooks’ conclusion about the relative difficulty of sensing/reacting and language, I do like the above quotation very much. In combination with the clip below, it makes me wonder whether humans are in fact the best thing there will ever be.

And just like that, we land unavoidably on another CH maxim.

The sexual market is the one market to rule them all.

