Wednesday, July 18, 2007

Musings on Pattern Maps

A pattern can offer instructions. It can act as a 'map' for thinking and behaviour if a person, or a quasi-intelligent 'thing', can 'see' or receive the information of the pattern in an intelligible way.

What if our brains record pattern maps in their neuronal trees (exactly how a brain does this biochemically, we don't need to worry about right at the moment) and these pattern maps then act as guides for cognition and behaviour?

Are there pattern maps in cognition? And what might they 'look' like?

The reason I started thinking about this is that we know the mind records information in much the way a neural network records patterns. But it's not known exactly how this information is recalled or "re-membered". Some memory specialists theorize that each time we remember something, it is slightly different, because human memory is thought to be an active, context-sensitive process of re-membering or re-associating. Each time a brain remembers something, it's in a different predisposition, or state. What if a memory is stored -- fundamentally -- as a pattern image? A pattern is a very efficient way to store information.
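To make the idea concrete: here's a toy associative memory along Hopfield-network lines. This is my own illustrative sketch (not anything brain-accurate) of how a pattern can be stored as connection weights and then 're-membered' from a slightly different cue:

```python
# A minimal Hopfield-style associative memory: patterns are stored as
# Hebbian connection weights, and recall settles a noisy cue back toward
# the stored pattern. An illustrative toy, not a model of real neurons.
import numpy as np

def store(patterns):
    """Build a weight matrix from +1/-1 patterns via outer products."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w

def recall(w, cue, steps=5):
    """Iteratively settle a cue until it matches a stored pattern."""
    state = cue.copy()
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state

# Store one 8-bit pattern, then recall it from a corrupted cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
w = store(pattern[None, :])
cue = pattern.copy()
cue[0] = -cue[0]           # flip one bit: a 'context-shifted' cue
print(recall(w, cue))      # settles back to the stored pattern
```

The interesting property is exactly the one memory researchers describe: recall starts from an imperfect, context-dependent cue, and the network reconstructs the pattern rather than looking it up.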

I started to wonder what it would be like if a pattern map was used to give something or someone instructions. How would this work? Maps are good for instructions in a spatial context. And we live and move through space, right?

Someone can look at a map (after knowing their own orientation – e.g. 'You are HERE') and follow it to move to another place, in another space – using n-s-e-w coordinate orientation. Go up 2 and left 1. Go up 4 and right 6. This movement can be represented as a directional path and can be drawn as a picture.
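Those step sequences can be written down directly. Here's a small hypothetical sketch, assuming a simple (x, y) grid where 'up' increases y and 'right' increases x:

```python
# A toy walker that follows up/down/left/right steps on an (x, y) grid
# and records the directional path it traces -- the 'picture' of the route.
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def follow(start, steps):
    """Apply (direction, distance) steps and return every point visited."""
    x, y = start
    path = [(x, y)]
    for direction, distance in steps:
        dx, dy = MOVES[direction]
        x, y = x + dx * distance, y + dy * distance
        path.append((x, y))
    return path

# 'Go up 2 and left 1. Go up 4 and right 6.'
print(follow((0, 0), [("up", 2), ("left", 1), ("up", 4), ("right", 6)]))
# → [(0, 0), (0, 2), (-1, 2), (-1, 6), (5, 6)]
```

The returned path is the drawable picture: each pair of consecutive points is one line segment of the route.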

When people were first trying to give robots instructions, they'd write out the directions (in code the robot could understand) and the robot would follow them. But the instructions were fixed, linear in structure and lengthy. If a robot had an image that it could refer to and follow, how would this be different from what we already have? Would this be an improvement? What if computers and/or robots could follow maps without step-by-step instructions? What if they could traverse an image for instructions? How might this change things?

The first step is to have devices that can learn images and recall them accurately. (See Numenta www.numenta.com) The next step is to provide visual maps to these devices so that they can do things in various situations -- and yet learn in such a way as to adapt their 'understanding' of a space and self-modify their maps. For example, if a robot were given a map of a room to clean (and had a movement algorithm to traverse the space), it would first actively explore to verify that the physical space matched the virtual map. Then it would do its cleaning job. If a new piece of furniture was put into the room, it would adapt its movements as needed. It could learn. If an object was in the room that wasn't there before, the robot would discover it, navigate around it, and make a note in its room map.
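The map-verify-and-update loop above can be sketched in miniature. This is a hypothetical toy (real cleaning robots use far more sophisticated mapping), assuming the room is a small grid where 0 means open floor and 1 means obstacle:

```python
# A toy self-modifying room map: the robot 'senses' each cell as it sweeps
# and corrects its map wherever reality disagrees with it.
room_map = [
    [0, 0, 0],
    [0, 1, 0],   # a known table leg at (1, 1)
    [0, 0, 0],
]

def clean(room_map, sense):
    """Sweep every cell; update the map when sensing disagrees with it."""
    cleaned, updates = [], []
    for y, row in enumerate(room_map):
        for x, expected in enumerate(row):
            actual = sense(x, y)
            if actual != expected:
                room_map[y][x] = actual      # self-modify the map
                updates.append((x, y))
            if actual == 0:
                cleaned.append((x, y))       # open floor gets cleaned
    return cleaned, updates

# Simulate a new chair at (2, 2) that the map didn't know about.
actual_room = {(1, 1), (2, 2)}              # obstacle coordinates in reality
cleaned, updates = clean(room_map, lambda x, y: int((x, y) in actual_room))
print(updates)         # → [(2, 2)]  the map was corrected here
print(room_map[2][2])  # → 1
```

After one pass the robot's map matches the room again, which is the 'learning' in the scenario: the map is not a fixed instruction list but a record the robot keeps in agreement with the world.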

I know there are small robotic vacuum cleaners that many people have in their homes right now. But I think their behaviour is fairly localized. They interact with whatever is directly in front of them. I don't think these little robot vacuums can reproduce a map of the room they've just cleaned. But I may find soon enough that they can.