Abstracts of the Lectures


G.Adorni - S.Cagnoni Evolutionary Techniques for Vision and Robotics Friday, 29 9:30-11:00
L.Carlucci Aiello Introduction to Artificial Intelligence Monday, 25 9:30-11:00
L.Carlucci Aiello Symbolic Representation Wednesday, 27 17:00-18:30
I.Aleksander Neuromodelling vs. Neural Networks Monday, 25 17:00-18:30
I.Aleksander ASEIT Workshop Tuesday, 26 11:00-12:00
R.Arkin ASEIT Workshop Tuesday, 26 16:00-17:00
C.Balkenius Reinforcement Learning in Autonomous Robots Saturday, 30 11:30-13:00
A.Chella - M.Frixione - S.Gaglio Cognitive Architectures for Artificial Vision Friday, 29 11:30-13:00
V.Di Gesu' Algorithms for Computer Vision Friday, 29 15:00-16:30
P.Gardenfors ASEIT Workshop Tuesday, 26 9:30-10:30
P.Gardenfors Conceptual Spaces Wednesday, 27 9:30-11:00 and 11:30-13:00
M.Gori Connectionist Models for Data Structures Monday, 25 15:00-16:30
T.Kohonen ASEIT Workshop Tuesday, 26 12:00-13:00
T.Kohonen Self-Organizing Maps II: Topological Representation of Manifolds and Symbol Sets Wednesday, 27 15:00-16:30
P.Morasso Motor Maps and Motor Control Models: Learning and Performance Saturday, 30 9:30-11:00
L.Steels The Self-Organisation of Grounded Languages on Autonomous Robots Saturday, 30 15:00-16:30
L.Steels The Self-Organisation of Grounded Languages on Autonomous Robots - Part II Saturday, 30 17:00-18:30
J.Taylor Neural Modelling of Higher Order Cognitive Processes Monday, 25 11:30-13:00
J.Taylor ASEIT Workshop Tuesday, 26 14:30-15:30
J.Taylor ASEIT Workshop Tuesday, 26 14:30-15:30

NEUROMODELLING VS. NEURAL NETWORKS
Igor Aleksander

I draw a distinction between the two by letting the first refer directly to answering the question 'How do brain-like objects achieve their performance?'. I shall argue that, despite years of pleading, modular/architectural studies have not been pursued as much as they could have been during the last fifteen years of neural networks research. Recently solved problems in the area of visual awareness will be discussed, and unsolved problems presented.

NEURAL REPRESENTATION MODELLING
Igor Aleksander

This lecture will introduce a MECCANO-like technique (NRM, available as shareware over the Internet) which allows the user to build and assess modular structures of digital neural systems of large dimensions. The aim of the technique is to structure assemblies of recursive modules and to do rapid prototyping of systems. On-line demonstrations will be given, and the educational aspects of this approach will be stressed.

MULTIAGENT ROBOTIC SYSTEMS
Ronald Arkin

Research conducted within the Mobile Robot Laboratory at Georgia Tech has addressed many important issues involving multirobot teams.
We first review an early study on the role of communication in multiagent robotic systems. Results from more recent areas of multiagent research are then presented, time permitting.

BEHAVIOUR-BASED ROBOTICS
Ronald Arkin

Traditional approaches to planning and control using artificial intelligence techniques for the navigation of mobile robots have generally been based upon reasoning over abstract models of the world. These models are either created from a priori knowledge or are derived from sensory information. Decision-making is based upon this abstracted representation of reality. It has been shown that there are several pitfalls associated with this approach, not least of which are the inherently slow response of these systems and the inaccuracies present due to the lag between the real world and the abstracted model.

Behavior-based reactive control strategies have been created in response to the limitations of model-based planning and control techniques. For these systems, abstract models of the world are avoided in favor of the immediate utilization of sensory data. In the reactive approach, robot response is not mediated by a model but is directly invoked from one or more sensory sources.
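The reactive idea, that sensing maps directly to action without an intervening world model, can be sketched in a few lines; the sensor readings, threshold, and motor commands below are invented for illustration:

```python
# A minimal sketch of reactive control: one behavior maps the current
# sensor readings directly to a motor command, with no world model in
# between. Distances (in metres), the threshold, and the command names
# are illustrative assumptions.
def avoid_obstacle(left_dist, right_dist):
    """Turn away from the nearer obstacle; go forward if the way is clear."""
    if min(left_dist, right_dist) > 1.0:
        return "forward"
    return "turn_right" if left_dist < right_dist else "turn_left"

print(avoid_obstacle(0.4, 2.0))  # obstacle close on the left -> turn_right
```

In a full behavior-based system several such behaviors run concurrently and their outputs are combined, but each one keeps this direct sensor-to-action form.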

After an introductory exposition, we will briefly discuss the motivating influences for behavior-based robotic systems and their roots in neuroscience, psychology, and ethology. This discussion is followed by a presentation of the appropriate role of sensing and representation. A short survey of exemplary behavior-based systems is then presented, with a critical analysis of each approach.

ALGORITHMS FOR COMPUTER VISION
Vito Di Gesu'

The observation of visual forms and patterns has always been pre-eminent in most human activities. Images permeate our life: "stop the car at the red traffic light", "select ripe tomatoes, and discard bad ones", "read the newspaper to update one's knowledge" are examples from daily life. Moreover, image analysis is at the basis of many professional activities: astronomers analyze sky maps, radiologists perform diagnoses by means of, for example, MRI images, and robotic vision is necessary for autonomous driving.
The advent of digital computers has driven the development of automatic image analysis systems. The birth of "modern" computer vision is related to that of cybernetics, which can be dated to around 1940. In that period the mathematician Norbert Wiener and the physiologist Arturo Rosenblueth promoted, at the Harvard Medical School, meetings between young researchers to debate scientific topics. The guideline of those meetings was the formalization of biological systems (including human behavior).
Visual pattern recognition is a process that develops through several layers of increasing abstraction, corresponding to a sequence of iterated transformations. The purpose is to reach a given goal, starting from an input image scene.
The computational paradigm is conventionally divided into several phases: from low-level vision processing (examples are filtering and digital transformations) to interpretative-level vision processing (examples are semantic description and the extraction of physical models). Different levels of abstraction characterize each of these phases. For example, low-level vision uses mainly pixel and neighborhood operators, while intermediate-level vision uses operators that act on a structured feature space.
The real world is more complex and flexible: all phases interact during the vision process, and a clear distinction between them cannot be drawn. Their logical sequence therefore bears only a loose relation to the natural visual process; moreover, artificial vision may implement each visual procedure by using mathematics and physics, regardless of the neuro-physiological counterpart.
In this lecture, fundamental algorithms in vision are discussed from a "pragmatic" perspective. To this end, the active vision model of computation will be considered as a natural evolution of the feedback mechanism. Links between algorithms and "machine vision" architectures are also examined, and new directions in the design of algorithms for vision systems are described.
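As a toy illustration of a low-level neighborhood operator of the kind mentioned above, here is a sketch of a 3x3 mean (smoothing) filter on a grayscale image; the pixel values are invented:

```python
# A minimal example of a low-level vision operator: a 3x3 mean
# (smoothing) filter applied to a grayscale image stored as a list of
# lists of pixel intensities. Border pixels are left unchanged for
# simplicity; a real filter would handle borders explicitly.
def mean_filter(image):
    """Replace each interior pixel by the mean of its 3x3 neighborhood."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy; borders stay as they are
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [image[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(window) / 9.0
    return out

# A bright spot on a dark background is spread out by smoothing.
img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(mean_filter(img)[1][1])  # -> 1.0
```

Operators at the intermediate and interpretative levels would instead act on features or symbolic descriptions extracted from outputs like this one.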

SYMBOLIC, CONCEPTUAL AND SUBSYMBOLIC REPRESENTATIONS
Peter Gardenfors

Within cognitive science, there are currently two dominant approaches to the problem of representing information. The symbolic approach starts from the assumption that cognitive systems should be modelled as Turing machines. The second approach is subsymbolic, mainly instantiated by connectionism, which models cognitive systems as artificial neural networks. I will argue that there are aspects of cognitive phenomena for which neither symbolism nor connectionism offers appropriate modelling tools. I will advocate a third form of representing information that is based on geometrical structures rather than symbols or connections between neurons. I shall call this the conceptual form, since I believe that the essential aspects of concept formation are best described in this way.
Conceptual representations should not be seen as directly competing with symbolic or connectionist representations. Rather, the three approaches can be seen as three levels of representation of cognition with different scales of resolution. I will show that the three levels of representation motivate different types of computations.

CONCEPTUAL SPACES
Peter Gardenfors

A theory of conceptual spaces will be developed as a particular framework for representing information. I will first present the basic theory and some of the underlying mathematical notions. A conceptual space is built up from geometrical structures based on a number of quality dimensions. Representations in conceptual spaces will be contrasted to those in symbolic and connectionistic models.
The theory will then be used as a basis for a constructive analysis of several fundamental notions in cognitive science. Firstly, it will be argued that the traditional analysis of properties in terms of possible worlds semantics is misguided and that a much more natural account can be given with the aid of conceptual spaces. This analysis is then extended to concepts in general. Some experimental results concerning concept formation will be presented. In these analyses, the notion of similarity will be in focus.
Secondly, a general theory for cognitive semantics based on conceptual spaces is outlined. In contrast to traditional philosophical theories, this kind of semantics is connected to perception, imagination, memory, communication, and other cognitive mechanisms.
The problem of induction is an enigma for the philosophy of science, and it has turned out to be a problem for systems within artificial intelligence as well.
As a final topic, it is argued that the classical riddles of induction can be circumvented if inductive reasoning is studied on the conceptual level of representation instead of on the symbolic level.
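As a toy sketch of this geometrical style of representation (not part of the lecture itself), a property can be modelled by a prototype point in a space of quality dimensions, and categorisation by assignment to the nearest prototype, which partitions the space into a Voronoi tessellation; all dimensions and coordinates below are invented for illustration:

```python
import math

# Toy conceptual space for colour with two invented quality dimensions
# (hue-like and brightness-like), each scaled to [0, 1]. Each property
# is represented by a prototype point; categorising a stimulus means
# finding the nearest prototype. All values are illustrative assumptions.
prototypes = {
    "red":    (0.0, 0.5),
    "yellow": (0.2, 0.8),
    "blue":   (0.6, 0.4),
}

def categorise(point):
    """Assign a point in the quality space to the nearest prototype."""
    return min(prototypes, key=lambda name: math.dist(point, prototypes[name]))

print(categorise((0.15, 0.75)))  # nearest prototype is "yellow"
```

Similarity judgements fall out of the same structure: the closer two points lie in the space, the more similar the stimuli they represent.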

CONNECTIONIST MODELS FOR DATA STRUCTURES
Marco Gori

Many approaches to learning in connectionist models have two main drawbacks: first, they cannot process structured information and, second, they learn tabula rasa, neglecting useful prior knowledge. Whereas algorithms that manipulate symbolic information are capable of dealing with highly structured data, adaptive neural networks are mostly known as learning models for domains in which instances are organized into static data structures, like records or fixed-size arrays. Structured domains are characterized by complex patterns which are usually represented as lists, trees, and graphs of variable size and complexity. The ability to recognize and classify these patterns is fundamental for several applications that use, generate or manipulate structures (see e.g. applications to molecular biology, classification of chemical structures, automated reasoning, manipulation of logical terms, software engineering, recognition of highly structured patterns, and speech and natural language processing).

The purpose of this lecture is to review significant approaches to overcoming these limitations. After an introduction to traditional approaches to supervised learning in neural networks, a unified view of formalisms and tools for dealing with rich data representations will be presented, and early approaches to processing data structures will be reviewed briefly. Special emphasis will be given to recursive networks, a natural extension of recurrent networks conceived specifically to represent, classify, and store structured information.
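The core idea behind recursive networks, one shared transition function applied bottom-up over a tree so that arbitrary-size structures map to fixed-size codes, can be sketched as follows; the weights here are fixed, invented numbers rather than learned ones:

```python
import math

# Sketch of a recursive network encoding: a vector code for a binary
# tree is computed bottom-up by applying one shared transition function
# at every internal node. A real recursive network would learn W.
def tanh_vec(v):
    return [math.tanh(x) for x in v]

# Shared 2x4 weight matrix mapping [left_code; right_code] -> node code.
W = [[0.5, -0.3, 0.8, 0.1],
     [0.2, 0.7, -0.4, 0.6]]

def encode(tree):
    """Encode a binary tree; leaves are 2-d feature vectors,
    internal nodes are (left, right) pairs."""
    if isinstance(tree, tuple):
        child = encode(tree[0]) + encode(tree[1])  # concatenate child codes
        return tanh_vec([sum(w * x for w, x in zip(row, child)) for row in W])
    return tree  # leaf: already a feature vector

code = encode((([1.0, 0.0], [0.0, 1.0]), [1.0, 1.0]))
print(len(code))  # the whole tree is summarized by one 2-d code
```

Because the same function is reused at every node, trees of any size and shape are reduced to vectors of one fixed dimension, which a standard classifier can then process.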

SELF-ORGANIZING MAPS I: FUNDAMENTALS
Teuvo Kohonen

The Self-Organizing Map (SOM) is a software tool for the visualization of high-dimensional data. It converts complex, nonlinear statistical relationships between high-dimensional data items into simple geometric relationships on a low-dimensional display. Since it thereby compresses information while preserving the most important topological and metric relationships of the primary data elements on the display, it may also be thought of as producing a kind of abstraction. These two aspects, visualization and abstraction, can be utilized in a number of ways in complex tasks such as process analysis, machine perception, control, and communication. Numerous structural and computational versions of the SOM exist.
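The basic algorithm behind this tool can be sketched in a few lines; the map size, learning rate, neighbourhood radius, and uniform training data below are arbitrary illustrative choices:

```python
import random

# A minimal 1-D Self-Organizing Map: a row of units with 2-d weight
# vectors is fitted to 2-d data. Each step moves the best-matching unit
# (BMU) and its map neighbours toward the input; this neighbourhood
# update is what makes nearby units come to represent nearby regions
# of the data space.
random.seed(0)
n_units, steps, lr, radius = 10, 2000, 0.1, 2
weights = [[random.random(), random.random()] for _ in range(n_units)]

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

for _ in range(steps):
    x = [random.random(), random.random()]          # a training sample
    bmu = min(range(n_units), key=lambda i: dist2(weights[i], x))
    for i in range(max(0, bmu - radius), min(n_units, bmu + radius + 1)):
        for d in (0, 1):                            # move unit toward x
            weights[i][d] += lr * (x[d] - weights[i][d])
```

After training, neighbouring units tend to hold similar weight vectors, which is the topology preservation that makes the map usable as a display of the data.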

SELF-ORGANIZING MAPS II: TOPOLOGICAL REPRESENTATION OF MANIFOLDS AND SYMBOL SETS
Teuvo Kohonen

The processing elements, "neurons" of the Self-Organizing Map, need not be vector-valued models. Each "neuron" can be replaced by a small network that is able to represent manifolds such as linear subspaces. Also operator-valued "neurons," equivalent to dynamical filters, can be used as representations on the map.

The basic SOM usually carries out a clustering in a Euclidean vector space. Surprisingly, the same vector-space clustering methods sometimes apply even to entities that are basically symbolic in nature. For instance, it is possible to cluster free-text, natural-language documents if their contents are described statistically by the usage of different words in them. Various dimensionality-reduction methods can be used. Special SOMs for the organization of very large document collections, called WEBSOM, are described. The largest WEBSOM constructed so far contains over one million nodes and is able to map all the electronically available patent abstracts of the world, seven million in number.

Finally, clustering of completely nonvectorial data such as symbol strings is possible, too, as long as some distance measure such as the Levenshtein distance between the data items is definable.
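The Levenshtein distance mentioned here can be sketched directly:

```python
# Levenshtein (edit) distance between two symbol strings: the kind of
# distance measure that makes clustering of nonvectorial data possible.
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    turning string a into string b (dynamic programming, row by row)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # -> 3
```

Any clustering method that only needs pairwise distances between items can then be applied to strings through this function.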

MOTOR MAPS AND MOTOR CONTROL MODELS:
LEARNING AND PERFORMANCE
Pietro G. Morasso

With the advent of technical means for capturing motion sequences and the pioneering work of Marey and Muybridge, the attempt to describe, model and understand the organization of movement has become a scientific topic. The fact that human movements are part of everyday life paradoxically hides their intrinsic complexity, and it justified initial expectations that complete knowledge could be achieved simply by improving the measurement techniques and carrying out a few carefully designed experiments. However, this has not been the case: each experiment is frequently the source of more questions than answers, and thus the attempt to capture the complexity of purposive action and adaptive behavior is, after a century of extensive multidisciplinary research, far from over. ( continue...)

THE SELF-ORGANISATION OF GROUNDED LANGUAGES ON AUTONOMOUS ROBOTS
Luc Steels

The past decade has seen important progress in a behavior-based, bottom-up approach to sensori-motor intelligence, obtained by directly coupling perception with action at a subsymbolic level. However, the problem remains of how to bridge the gap between the symbolic level of language, reasoning and problem solving, which is the domain of "classical AI", and the subsymbolic world of perception and action. This is the problem that we have been addressing in recent work, based on the hypothesis that language might be a key: language pushes the development of an individual's conceptual complexity in a co-evolutionary process, and through language a distributed group of autonomous agents may share conceptualisations of the world.

I will present as a case study the "Talking Heads Experiment", which we conducted in the summer of 1999 as a Turing-test-like public experiment ( http://talking-heads.csl.sony.fr/). The experiment is based on a set of robot bodies located in several places in the world and connected through the Internet. Software agents can teleport themselves between these robot bodies and thus experience different realities and engage in grounded interaction with other agents. The interaction takes the form of a language game in which one robot attempts to identify an object in the environment to the other robot through verbal means and through pointing gestures.

Neither the language nor the ontology of the robots has been built in; both must be invented and acquired by the robots autonomously through playing the game. When a game fails, the robots expand their conceptual repertoires and/or lexicons. In the experiment, a shared lexicon and a shared ontology gradually emerge through self-organisation. The lexicon is grounded in the visual experiences of the robots.
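The emergence of a shared lexicon can be illustrated with a minimal naming game, a strong simplification of the language games in the experiment; the population size, number of objects, and adoption rules below are invented for illustration:

```python
import random

# Minimal naming game: in each round a speaker names a randomly chosen
# object for a hearer. On failure the hearer adopts the speaker's word;
# on success both discard all competing words for that object. No
# lexicon is built in; agreement self-organises from the interactions.
random.seed(1)
objects = ["obj0", "obj1", "obj2"]
agents = [{o: [] for o in objects} for _ in range(5)]  # words per object
counter = 0

def new_word():
    global counter
    counter += 1
    return f"w{counter}"

for _ in range(3000):
    speaker, hearer = random.sample(agents, 2)
    obj = random.choice(objects)
    if not speaker[obj]:
        speaker[obj].append(new_word())   # invent a word if none is known
    word = random.choice(speaker[obj])
    if word in hearer[obj]:
        speaker[obj] = [word]             # success: both align on this word
        hearer[obj] = [word]
    else:
        hearer[obj].append(word)          # failure: hearer adopts the word

# After many games the agents typically converge on one word per object.
```

The real experiment differs in that meanings themselves are also constructed, and that words are grounded in each robot's visual perception rather than in predefined object labels.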

Humans may interact with the robots by suggesting words they should use in certain situations. Thus we have been able to couple the semiotic dynamics of the artificial language with the dynamics of human natural language. Constant ontological and lexical evolution is observed, since both the set of agents is open (new agents may enter at any time and others may leave) and the environment is open (new objects may enter the environment at any time).

The talk will present the main principles behind the agent architecture, methods for studying the collective semiotic dynamics, and results from the experiments.

NEURAL MODELLING OF HIGHER ORDER COGNITIVE PROCESSING
John Taylor

Much is now being learned from brain imaging about the global networks of the brain as they are used to solve different tasks. After a brief overview of the imaging machines, and of how data analysis is performed, the results being uncovered will be reviewed. The technique of structural modelling will be described as a method for clarifying the related networks and their levels of connectivity in the imaging data. The problem of bridging the gap between the underpinning neural networks and the observed structural models will then be addressed. The nature of new paradigms for neural networks enabling higher order cognitive processing to be modelled, as observed from this and related data, will be explored. Finally, the problem of the representation supporting consciousness will be discussed.

FROM SUBSYMBOLIC TO SYMBOLIC PROCESSING
John Taylor

The problem of obtaining symbolic processing from underlying neural networks has aroused much controversy. Since the human brain possesses a solution to this problem, the manner in which it achieves this will be considered. The account will be based on a cartoon model (the ACTION network) of the frontal lobes (including the basal ganglia and thalamus). The ability of the ACTION network to learn and regenerate temporal sequences will be described, and a mathematical analysis (using dynamical systems theory) given to explicate the underlying mechanisms. The manner in which this learning ability of the ACTION net provides a basis for learning symbol processing, including learning the rules of syntax and the ability to manipulate symbols to achieve goals, will then be explored. To conclude, the possibility of giving a neurophysiological underpinning to the 'deep structures' of Chomsky, and possible applications of the resulting system to developing a symbol learning and processing system, will be explored.
