MindRACES


MindRACES Demo Movies



Analogical AIBO


The video shows an AIBO robot which is able to find objects hidden behind shelters by reasoning by analogy with previously experienced situations. This is an example of an ‘embodied’ model of analogy: the robot forms internal representations based on its perception, manipulates these representations (i.e., makes analogies), and then moves on the basis of the analogical prediction.
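As a rough illustration of the mechanism described above, the Python sketch below retrieves the most similar previously experienced situation and transfers its outcome to the current scene. The scene encoding, the similarity measure, and all names are simplified assumptions made for illustration; this is not the analogy-making model actually used in the demo.

    # Toy sketch of analogy-driven prediction: scenes are sets of relations,
    # retrieval picks the most similar past episode, and its outcome is
    # transferred to the current situation. Purely illustrative.
    from dataclasses import dataclass

    @dataclass
    class Episode:
        relations: set          # e.g. {("behind", "ball", "shelter_A")}
        hidden_object_at: str   # where the object turned out to be

    def similarity(scene_a: set, scene_b: set) -> float:
        """Fraction of shared relations (a crude stand-in for structural mapping)."""
        if not scene_a and not scene_b:
            return 0.0
        return len(scene_a & scene_b) / len(scene_a | scene_b)

    def predict_hiding_place(current_scene: set, memory: list) -> str:
        """Retrieve the most analogous past episode and transfer its outcome."""
        best = max(memory, key=lambda ep: similarity(current_scene, ep.relations))
        return best.hidden_object_at

    memory = [
        Episode({("behind", "ball", "shelter_A"), ("near", "shelter_A", "wall")}, "shelter_A"),
        Episode({("behind", "ball", "shelter_B"), ("near", "shelter_B", "door")}, "shelter_B"),
    ]
    scene = {("near", "shelter_A", "wall"), ("visible", "shelter_A")}
    print(predict_hiding_place(scene, memory))   # -> "shelter_A"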


Reference:

Kiryazov, K., Petkov, G., Grinberg, M., Kokinov, B., and Balkenius, C. (2007). The Interplay of Analogy-Making with Active Vision and Motor Control in Anticipatory Robots. In Butz, M. et al. (Eds.), Anticipatory Behavior in Adaptive Learning Systems: From Brains to Individual and Social Behavior. LNAI 4520. Springer-Verlag.

Guard and thief



The video shows the navigation and planning capabilities of a simulated robot playing the role of a thief in a 3D guards-and-thieves scenario. The robot architecture is layered. The lower (sensorimotor) layer is composed of multiple anticipatory schemas (e.g., detect treasure, escape guard) which compete for execution. The higher (deliberative) layer plans on the basis of the robot’s goals (find the treasure and escape the guard) and of beliefs which are derived on the fly from the schemas’ predictive success or failure.
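The following Python sketch illustrates, under simplifying assumptions, the two-layer idea described above: schemas bid for execution at the sensorimotor level, and the deliberative level reads beliefs off their running record of predictive success. All class and variable names are hypothetical; this is not the DiPRA implementation.

    # Illustrative sketch only: anticipatory schemas compete for execution,
    # and their prediction record feeds the deliberative layer as beliefs.
    import random

    class Schema:
        def __init__(self, name):
            self.name = name
            self.reliability = 0.5        # running estimate of prediction success

        def activation(self, context_relevance):
            # more relevant and more reliable schemas bid higher
            return context_relevance * self.reliability

        def update(self, prediction_was_correct):
            # exponential moving average over prediction outcomes
            target = 1.0 if prediction_was_correct else 0.0
            self.reliability += 0.2 * (target - self.reliability)

    schemas = {"detect_treasure": Schema("detect_treasure"),
               "escape_guard": Schema("escape_guard")}

    def sensorimotor_step(context):
        """Lower layer: the schema with the highest bid wins the competition."""
        return max(schemas.values(), key=lambda s: s.activation(context[s.name]))

    def deliberative_beliefs():
        """Higher layer: beliefs such as 'the guard is nearby' are read off the
        schemas' recent predictive success rather than sensed directly."""
        return {name: s.reliability for name, s in schemas.items()}

    context = {"detect_treasure": 0.3, "escape_guard": 0.9}   # the guard is close
    winner = sensorimotor_step(context)
    winner.update(prediction_was_correct=random.random() < 0.8)
    print(winner.name, deliberative_beliefs())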


Reference:

Pezzulo, G., Calvi, G., and Castelfranchi, C. (2007). DiPRA: Distributed Practical Reasoning Architecture. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence, pages 1458–1464.

Sure moving arm (1 and 2)



These two videos (1 and 2) show the SURE_REACH architecture for flexible goal-directed control of action. SURE_REACH (a loose acronym for Sensorimotor, Unsupervised, REdundancy-REsolving control ArCHitecture) is a hierarchically structured control architecture which builds its internal representations from scratch: initially, it explores its environment by means of random motor babbling. The knowledge of SURE_REACH about its body and environment consists of two population-encoded spatial body representations (the neurons of the population code are currently uniformly distributed in space; adaptive spatial coverage methods are being investigated) and two learned associative structures. SURE_REACH has been applied to the control of a 3-DOF arm in a 2-D environment, so that each target position can be reached with various goal postures and along various paths.
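A toy Python sketch of the same idea follows, under strong simplifying assumptions: a 2-joint planar arm, Gaussian population codes, and a single Hebbian associative map learned from random motor babbling. The actual SURE_REACH architecture is considerably richer; all names and parameters here are illustrative.

    # Toy sketch: random motor babbling trains a Hebbian association between a
    # population-coded hand position and a population-coded arm posture, which
    # can then be read out to find a posture for a desired hand target.
    import numpy as np

    rng = np.random.default_rng(0)
    L1, L2 = 1.0, 1.0                      # link lengths of the planar arm

    def forward_kinematics(q):
        x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
        y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
        return np.array([x, y])

    def population_code(value, centers, width=0.3):
        """Gaussian population code over uniformly distributed preferred values."""
        d = np.linalg.norm(centers - value, axis=1)
        return np.exp(-(d / width) ** 2)

    # neurons uniformly covering posture space and hand space
    posture_centers = rng.uniform(-np.pi, np.pi, size=(200, 2))
    hand_centers = rng.uniform(-2, 2, size=(200, 2))

    W = np.zeros((200, 200))               # associative map: hand units -> posture units

    for _ in range(5000):                  # random motor babbling
        q = rng.uniform(-np.pi, np.pi, 2)
        p = population_code(q, posture_centers)
        h = population_code(forward_kinematics(q), hand_centers)
        W += np.outer(h, p)                # Hebbian association

    def goal_posture(target_xy):
        """Spread a hand-space goal into posture space; many postures may satisfy it."""
        h_goal = population_code(np.array(target_xy), hand_centers)
        posture_activity = h_goal @ W
        return posture_centers[np.argmax(posture_activity)]

    q_star = goal_posture([1.2, 0.8])
    print(q_star, forward_kinematics(q_star))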


Reference:

Butz, M.V., Herbort, O., & Hoffmann, J. (2007). Exploiting Redundancy for Flexible Behavior: Unsupervised Learning in a Modular Sensorimotor Control Architecture. Psychological Review, 114, 1015-1046.

Anticipatory Robotic Hand



The video illustrates a demo of a neural-network architecture which controls a robot composed of a webcam and a 2-link, 4-DOF robotic arm engaged in a reaching task. The demo shows the work carried out within the EU-funded project MindRACES with the aim of integrating the attention models and the arm-control models developed, respectively, by LUCS and CNR-ISTC within the project.

The webcam looks at the arm from above. The arm acts on a working plane formed by a computer screen that projects various trees in a sequence of learning trials. A tree is formed by coloured squares: green squares for the foliage, blue squares for the trunk, and red squares for the apple (the target of the arm). Although the webcam is fixed, at each time step a pixel sub-region is extracted from the camera image to mimic a moving eye composed of (a) a small fovea, which perceives the colour of the fixated square, and (b) a periphery, which perceives only the presence or absence of the squares surrounding the fovea (it perceives them as grey). The task of the system is to learn to move the eye to relevant parts of the image - in particular, to look below the foliage and beside the trunk, as the apples grow there - and to keep the eye on the apple, once found, for multiple saccades: this triggers an arm reaching movement (see below).

The eye movement is controlled by an architecture composed of neural maps which encode information in eye-centred coordinate frames. These maps implement: (a) a neural competition between alternative locations that the fovea might fixate; (b) a bottom-up attention process which leads the fovea to regions with high contrast; (c) a top-down attention process that learns, by reinforcement learning, to guide the fovea to regions with high information gain. This last component is also capable of accumulating, in a potential action map (the most innovative component of the system), information about multiple promising saccade targets on the basis of the various objects (squares) fixated over time.

The arm is controlled by a neural biased competition, fuelled by the current proprioception of the eye’s gaze direction, which selects possible targets for the arm-reaching movements and triggers them when the eye fixates the same spot for a prolonged time. Once the reaching target is selected, it is transformed into the corresponding desired arm posture by a previously trained neural network implementing an inverse model, and this posture is then issued to the arm motors for execution. Importantly, the reinforcement learning of the top-down attention system is guided by the reward obtained by the arm movement: in this respect the model integrates epistemic and pragmatic actions in an unprecedented fashion.

The system’s functioning is based on various anticipatory mechanisms: (a) the bottom-up attention component anticipates potentially interesting targets for the epistemic eye movements; (b) the top-down attention component learns to anticipate potentially interesting eye targets and builds a dynamic mapping of their locations; (c) both the eye and the arm are controlled not on the basis of movements but on the basis of action goals (i.e., desired anticipated states), in line with the ideomotor principle.
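The sketch below illustrates, in simplified Python, one strand of the description above: a bottom-up contrast map combined with a top-down map that is trained by the reward earned by the reaching movement. The grid "retina", the learning rule, and all names are assumptions made for illustration; this is not the published architecture.

    # Schematic sketch: fixation is chosen by biased competition between a
    # bottom-up saliency map and a learned top-down map; the reward produced by
    # the reach (finding the apple) trains the top-down attention map.
    import numpy as np

    rng = np.random.default_rng(1)
    GRID = 8
    top_down = np.zeros((GRID, GRID))      # learned desirability of fixating each cell
    alpha = 0.1                            # learning rate for the top-down map

    def bottom_up_saliency(image):
        """Crude contrast-based saliency: deviation from the mean intensity."""
        return np.abs(image - image.mean())

    def choose_fixation(image):
        """Biased competition: bottom-up contrast plus learned top-down bias."""
        combined = bottom_up_saliency(image) + top_down
        return np.unravel_index(np.argmax(combined), combined.shape)

    def reach_and_learn(image, apple_pos):
        fix = choose_fixation(image)
        # a reach is triggered at the fixated location; reward if the apple was there
        reward = 1.0 if tuple(int(i) for i in fix) == apple_pos else 0.0
        # the arm's reward trains the top-down (epistemic) attention map
        top_down[fix] += alpha * (reward - top_down[fix])
        return fix, reward

    # toy "tree": the apple is a bright pixel at a fixed location below the foliage
    apple_pos = (5, 3)
    for trial in range(200):
        image = rng.random((GRID, GRID)) * 0.2
        image[apple_pos] = 1.0
        reach_and_learn(image, apple_pos)
    print("learned bias at apple location:", round(top_down[apple_pos], 2))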

Reference:

Ognibene, D., Balkenius, C., & Baldassarre, G. (2008). Integrating epistemic action (active vision) and pragmatic action (reaching): a neural architecture for camera-arm robots. In From Animals to Animats 10: Proceedings of the Tenth International Conference on the Simulation of Adaptive Behavior (SAB 2008).

Anticipatory Grasping Hand


The video shows a robotic arm which is able to adapt to several sensorimotor contexts and to grasp objects of different sizes and weights (in this case, balloons filled with water) by autonomously arbitrating among different anticipatory schemas on the basis of their prediction error.
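A minimal Python sketch of prediction-error-based arbitration of the kind described above follows; the schemas, the sensed quantity (wrist load), and the grip-force mapping are hypothetical placeholders, not the controller used in the demo.

    # Toy sketch: each grasping schema predicts the sensory reading for its
    # assumed context, and control is handed to the schema whose prediction
    # error is currently lowest.
    class GraspSchema:
        def __init__(self, name, expected_load):
            self.name = name
            self.expected_load = expected_load   # predicted wrist load for this context

        def prediction_error(self, sensed_load):
            return abs(self.expected_load - sensed_load)

        def command(self):
            # hypothetical mapping from the assumed context to a grip force
            return {"grip_force": 2.0 * self.expected_load}

    schemas = [
        GraspSchema("light_balloon", expected_load=0.2),
        GraspSchema("heavy_balloon", expected_load=1.0),
    ]

    def arbitrate(sensed_load):
        """Pick the schema that currently predicts the sensorimotor context best."""
        return min(schemas, key=lambda s: s.prediction_error(sensed_load))

    for load in (0.25, 0.95):                 # two different balloons being lifted
        active = arbitrate(load)
        print(load, "->", active.name, active.command())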



