Robot Learning

Posted on July 18, 2006

[Photo: robot dog playpen]

This is very cool stuff:

Indeed, as opposed to work in classical artificial intelligence, in which engineers impose pre-defined anthropocentric tasks on robots, the techniques we develop endow robots with the capacity to decide for themselves which activities are best fitted to their current capabilities. Our developmental robots autonomously and actively choose their learning situations, beginning with simple ones and progressively increasing their complexity. No tasks are pre-specified to the robots, which are only provided with an abstract internal reward function. For example, in the case of the Intelligent Adaptive Curiosity system we developed, this internal reward function pushes the robot to search for situations where its learning progress is maximal.
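The core idea is that the reward is not for succeeding at an externally defined task, but for getting better at predicting the consequences of the robot's own actions. Below is a minimal Python sketch of such a learning-progress reward; the moving-average comparison and the window size are my own illustrative assumptions, not the authors' exact formulation.

```python
# A minimal sketch of an intrinsic reward based on learning progress,
# in the spirit of Intelligent Adaptive Curiosity as described above.
# The window size and the two-window comparison are illustrative
# assumptions, not the authors' implementation.
from collections import deque


class LearningProgressReward:
    """Intrinsic reward: the recent drop in prediction error for one activity."""

    def __init__(self, window: int = 20):
        self.window = window
        self.errors = deque(maxlen=2 * window)  # keep the last 2*window errors

    def update(self, prediction_error: float) -> float:
        """Record a new prediction error and return the learning-progress reward."""
        self.errors.append(prediction_error)
        if len(self.errors) < 2 * self.window:
            return 0.0  # not enough history yet to estimate progress
        older = list(self.errors)[: self.window]
        newer = list(self.errors)[self.window:]
        # Reward = how much the average error dropped from the older half
        # of the history to the newer half.
        return sum(older) / self.window - sum(newer) / self.window
```

As long as the robot's prediction errors on an activity keep falling, the reward stays positive; once the activity is mastered, or turns out to be pure noise, the two averages match and the reward fades to zero, pushing the robot toward something else.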

A very interesting article from Sony Computer Science Laboratory Paris (Developmental Robotics): Discovering Communication, by Pierre-Yves Oudeyer and Frederic Kaplan. From the abstract:

The considered robotic agent is intrinsically motivated towards situations in which it optimally progresses in learning. To experience optimal learning progress, it must avoid situations that are already familiar, but also situations where nothing can be learnt. The robot is placed in an environment in which both communicating and non-communicating objects are present. As a consequence of its intrinsic motivation, the robot explores this environment in an organized manner, focusing first on non-communicative activities and then discovering the learning potential of certain types of interactive behaviour. In this experiment, the agent ends up being interested in communication through vocal interactions without having a specific drive for communication.
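To see why that exploration pattern falls out of the drive, here is a toy simulation of my own construction (the three hypothetical activities and their error profiles are assumptions, not the paper's experimental setup). An already-mastered activity and an unlearnable one both yield no learning progress, so an agent that follows progress ends up spending most of its practice on the learnable one.

```python
# Toy simulation (illustrative assumptions, not the paper's setup) of an
# agent that allocates practice according to learning progress.
import random


def prediction_error(activity: str, practice: int) -> float:
    """Assumed error profiles for three hypothetical activities."""
    if activity == "familiar":     # already mastered: error is low and flat
        return 0.05
    if activity == "unlearnable":  # nothing can be learnt: error stays high
        return 0.8
    return 1.0 / (1 + 0.2 * practice)  # "learnable": error falls with practice


progress = {"familiar": 0.0, "unlearnable": 0.0, "learnable": 0.0}
practice = {a: 0 for a in progress}
last_error = {a: prediction_error(a, 0) for a in progress}

for step in range(300):
    # Mostly pick the activity with the highest recent progress, with
    # occasional random exploration so untried activities get sampled.
    if random.random() < 0.1:
        chosen = random.choice(list(progress))
    else:
        chosen = max(progress, key=progress.get)
    practice[chosen] += 1
    err = prediction_error(chosen, practice[chosen])
    progress[chosen] = last_error[chosen] - err  # progress = drop in error
    last_error[chosen] = err

print(practice)  # practice counts end up dominated by the learnable activity
```

Only the learnable activity ever produces a positive drop in error, so after a brief unstructured phase the agent concentrates there, which is the organized exploration the abstract describes; in the actual experiment, the rewarding "learnable" activity turns out to be vocal interaction.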
