Stephen Grossberg is Wang Professor of Cognitive and Neural Systems; Professor of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering; and Director of the Center for Adaptive Systems at Boston University. He is a principal founder and current research leader in computational neuroscience, theoretical psychology and cognitive science, and neuromorphic technology and AI. In 1957-1958, he introduced the paradigm of using systems of nonlinear differential equations to develop models that link brain mechanisms to mental functions, including widely used equations for short-term memory (STM), or neuronal activation; medium-term memory (MTM), or activity-dependent habituation; and long-term memory (LTM), or neuronal learning. His work focuses on how individuals, algorithms, or machines adapt autonomously in real time to unexpected environmental challenges. These discoveries together provide a blueprint for designing autonomous adaptive intelligent agents. They include models of vision and visual cognition; object, scene, and event learning and recognition; audition, speech, and language learning and recognition; development; cognitive information processing; reinforcement learning and cognitive-emotional interactions; consciousness; visual and path integration navigational learning and performance; social cognition and imitation learning; sensory-motor learning, control, and planning; mental disorders; mathematical analysis of neural networks; experimental design and collaborations; and applications to neuromorphic technology and AI. Grossberg founded key infrastructure of the field of neural networks, including the International Neural Network Society and the journal Neural Networks, and has served on the editorial boards of 30 journals. His lecture series at MIT Lincoln Lab led to the national DARPA Study of Neural Networks. He is a fellow of AERA, APA, APS, IEEE, INNS, MDRS, and SEP.
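As an illustrative sketch only (the rate constants A through L and the signal functions f, h here are placeholders, not quotations from any particular paper), the three memory laws mentioned above are commonly written as coupled nonlinear differential equations of the following general form:

```latex
% STM: shunting (membrane-equation) activation dynamics of cell i,
% driven by total excitatory input E_i and total inhibitory input F_i;
% activity x_i stays bounded between -C and B.
\frac{dx_i}{dt} = -A\,x_i + (B - x_i)\,E_i - (C + x_i)\,F_i

% MTM: habituative transmitter gate y_i accumulates toward a target
% level K and is depleted at a rate proportional to the signal f(x_i)
% that it gates.
\frac{dy_i}{dt} = H\,(K - y_i) - L\,f(x_i)\,y_i

% LTM: adaptive weight z_ij changes only when the sampling signal
% f(x_i) is active, and then tracks the sampled activity h(x_j).
\frac{dz_{ij}}{dt} = G\,f(x_i)\,\bigl[-z_{ij} + h(x_j)\bigr]
```

The key qualitative properties are that STM traces are fast and bounded, MTM gates habituate on an intermediate time scale, and LTM traces change slowly and only when gated by active sampling signals.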
He has published 17 books or journal special issues and over 550 research articles, and holds 7 patents. He was most recently awarded the 2015 Norman Anderson Lifetime Achievement Award of the Society of Experimental Psychologists (SEP) and the 2017 Frank Rosenblatt computational neuroscience award of the Institute of Electrical and Electronics Engineers (IEEE). See the following web pages for further information:
sites.bu.edu/steveg
http://en.wikipedia.org/wiki/Stephen_Grossberg
http://cns.bu.edu/~steve/GrossbergNNeditorial2010.pdf
http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en
http://www.bu.edu/research/articles/steve-grossberg-psychologist-brain-research/
http://www.bu.edu/research/articles/stephen-grossberg-ieee-frank-rosenblatt-award/
Workshop: How the revolution in understanding brains can revolutionize AI: Designs for autonomous intelligence in a rapidly changing world
The brain has been an inspiration for AI since its inception. During the past 50 years, there has been enormous progress in theoretically and computationally understanding brain mechanisms and how they give rise to intelligent mental functions. However, current popular AI models, such as Deep Learning, do not include any of the major system designs that lead to human intelligence. Unlike Deep Learning, advanced brains are unexcelled at autonomously adapting in real time to changing environments that are filled with unexpected events. The computational principles that enable this autonomy offer a major opportunity for designing more autonomous adaptive intelligent algorithms, machines, and robots in future AI applications. Autonomy is based on revolutionary computational paradigms, including complementary computing, which clarifies the global organization of brain architectures; hierarchical resolution of uncertainty, which clarifies why multiple processing stages are needed to overcome complementary computational deficiencies; and laminar computing, which clarifies why the cerebral cortex uses laminar circuits to support all higher biological intelligence. These paradigms have helped to explain how brains can rapidly and autonomously learn to recognize even rare events in real time; selectively pay attention to valued objects and goals; predict what to expect next; and choose actions that maximize realization of valued goals. Current neural models also explain what happens in each of our brains when we consciously see, hear, feel, or know something; how all of these types of awareness are combined into coherent moments of conscious experience; and how unconscious events can influence our decisions and actions. These models also explain, from a deep computational perspective, why evolution was driven to discover conscious states, and how conscious awareness is linked to successful actions.
Consciousness can hereby be understood as a natural outcome of the brain’s way of computing. The link between consciousness and action may have revolutionary implications for the design of future autonomous adaptive mobile robots. This talk will summarize some of these scientific developments, as well as neural algorithms that have been applied in large-scale engineering and technological applications.