COMPUTATION AND NEURAL SYSTEMS SERIES

SERIES EDITOR
Christof Koch, California Institute of Technology

EDITORIAL ADVISORY BOARD MEMBERS
Dana Anderson, University of Colorado, Boulder
Michael Arbib, University of Southern California
Dana Ballard, University of Rochester
James Bower, California Institute of Technology
Gerard Dreyfus, École Supérieure de Physique et de Chimie Industrielles de la Ville de Paris
Rolf Eckmiller, University of Düsseldorf
Kunihiko Fukushima, Osaka University
Walter Heiligenberg, Scripps Institute of Oceanography, La Jolla
Shaul Hochstein, Hebrew University, Jerusalem
Alan Lapedes, Los Alamos National Laboratory
Carver Mead, California Institute of Technology
Guy Orban, Catholic University of Leuven
Haim Sompolinsky, Hebrew University, Jerusalem
John Wyatt, Jr., Massachusetts Institute of Technology
The series editor, Dr. Christof Koch, is Assistant Professor of Computation and Neural Systems at the California Institute of Technology. Dr. Koch works at both the biophysical level, investigating information processing in single neurons and in networks such as the visual cortex, as well as studying and implementing simple resistive networks for computing motion, stereo, and color in biological and artificial systems.
Neural Networks Algorithms, Applications, and Programming Techniques
James A. Freeman
David M. Skapura
Loral Space Information Systems and Adjunct Faculty, School of Natural and Applied Sciences, University of Houston at Clear Lake
Addison-Wesley Publishing Company
Reading, Massachusetts • Menlo Park, California • New York • Don Mills, Ontario • Wokingham, England • Amsterdam • Bonn • Sydney • Singapore • Tokyo • Madrid • San Juan • Milan • Paris
Library of Congress Cataloging-in-Publication Data
Freeman, James A.
Neural networks : algorithms, applications, and programming techniques / James A. Freeman and David M. Skapura.
p. cm.
Includes bibliographical references and index.
ISBN 0-201-51376-5
1. Neural networks (Computer science) 2. Algorithms. I. Skapura, David M. II. Title.
QA76.87.F74 1991
006.3—dc20    90-23758
CIP
Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and Addison-Wesley was aware of a trademark claim, the designations have been printed in initial caps or all caps. The programs and applications presented in this book have been included for their instructional value. They have been tested with care, but are not guaranteed for any particular purpose. The publisher does not offer any warranties or representations, nor does it accept any liabilities with respect to the programs or applications.
Copyright © 1991 by Addison-Wesley Publishing Company, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. Printed in the United States of America.

1 2 3 4 5 6 7 8 9 10-MA-95 94 93 92 91
Preface
The appearance of digital computers and the development of modern theories of learning and neural processing both occurred at about the same time, during the late 1940s. Since that time, the digital computer has been used as a tool to model individual neurons as well as clusters of neurons, which are called neural networks. A large body of neurophysiological research has accumulated since then. For a good review of this research, see Neural and Brain Modeling by Ronald J. MacGregor [21]. The study of artificial neural systems (ANS) on computers remains an active field of biomedical research. Our interest in this text is not primarily neurological research. Rather, we wish to borrow concepts and ideas from the neuroscience field and to apply them to the solution of problems in other areas of science and engineering. The ANS models that are developed here may or may not have neurological relevance. Therefore, we have broadened the scope of the definition of ANS to include models that have been inspired by our current understanding of the brain, but that do not necessarily conform strictly to that understanding. The first examples of these new systems appeared in the late 1950s. The most common historical reference is to the work done by Frank Rosenblatt on a device called the perceptron. There are other examples, however, such as the development of the Adaline by Professor Bernard Widrow. Unfortunately, ANS technology has not always enjoyed the status in the fields of engineering or computer science that it has gained in the neuroscience community. Early pessimism concerning the limited capability of the perceptron effectively curtailed most research that might have paralleled the neurological research into ANS. From 1969 until the early 1980s, the field languished. The appearance, in 1969, of the book, Perceptrons, by Marvin Minsky and Seymour Papert [26], is often credited with causing the demise of this technology. 
Whether this causal connection actually holds continues to be a subject for debate. Still, during those years, isolated pockets of research continued. Many of
the network architectures discussed in this book were developed by researchers who remained active through the lean years. We owe the modern renaissance of neural-network technology to the successful efforts of those persistent workers. Today, we are witnessing substantial growth in funding for neural-network research and development. Conferences dedicated to neural networks and a
new professional society have appeared, and many new educational programs at colleges and universities are beginning to train students in neural-network technology.

In 1986, another book appeared that has had a significant positive effect on the field. Parallel Distributed Processing (PDP), Vols. I and II, by David Rumelhart and James McClelland [23], and the accompanying handbook [22] are the place most often recommended to begin a study of neural networks. Although biased toward physiological and cognitive-psychology issues, it is highly readable and contains a large amount of basic background material. PDP is certainly not the only book in the field, although many others tend to be compilations of individual papers from professional journals and conferences. That statement is not a criticism of these texts. Researchers in the field publish in a wide variety of journals, making accessibility a problem. Collecting a series of related papers in a single volume can overcome that problem. Nevertheless, there is a continuing need for books that survey the field and are more suitable to be used as textbooks. In this book, we attempt to address that need.

The material from which this book was written was originally developed for a series of short courses and seminars for practicing engineers. For many of our students, the courses provided a first exposure to the technology. Some were computer-science majors with specialties in artificial intelligence, but many came from a variety of engineering backgrounds. Some were recent graduates; others held Ph.D.s. Since it was impossible to prepare separate courses tailored to individual backgrounds, we were faced with the challenge of designing material that would meet the needs of the entire spectrum of our student population. We retain that ambition for the material presented in this book. This text contains a survey of neural-network architectures that we believe represents a core of knowledge that all practitioners should have.
We have attempted, in this text, to supply readers with solid background information, rather than to present the latest research results; the latter task is left to the proceedings and compendia, as described later. Our choice of topics was based on this philosophy. It is significant that we refer to the readers of this book as practitioners. We expect that most of the people who use this book will be using neural networks to solve real problems. For that reason, we have included material on the application of neural networks to engineering problems. Moreover, we have included sections that describe suitable methodologies for simulating neural-network architectures on traditional digital computing systems. We have done so because we believe that the bulk of ANS research and applications will be developed on traditional computers, even though analog VLSI and optical implementations will play key roles in the future.

The book is suitable both for self-study and as a classroom text. The level is appropriate for an advanced undergraduate or beginning graduate course in neural networks. The material should be accessible to students and professionals in a variety of technical disciplines. The mathematical prerequisites are the
standard set of courses in calculus, differential equations, and advanced engineering mathematics normally taken during the first 3 years in an engineering curriculum. These prerequisites may make computer-science students uneasy, but the material can easily be tailored by an instructor to suit students' backgrounds. There are mathematical derivations and exercises in the text; however, our approach is to give an understanding of how the networks operate, rather than to concentrate on pure theory. There is a sufficient amount of material in the text to support a two-semester course. Because each chapter is virtually self-contained, there is considerable flexibility in the choice of topics that could be presented in a single semester.
Chapter 1 provides necessary background material for all the remaining chapters; it should be the first chapter studied in any course. The first part of Chapter 6 (Section 6.1) contains background material that is necessary for a complete understanding of Chapters 7 (Self-Organizing Maps) and 8 (Adaptive Resonance Theory). Other than these two dependencies, you are free to move around at will without being concerned about missing required background material. Chapter 3 (Backpropagation) naturally follows Chapter 2 (Adaline and Madaline) because of the relationship between the delta rule, derived in Chapter 2, and the generalized delta rule, derived in Chapter 3. Nevertheless, these two chapters are sufficiently self-contained that there is no need to treat them in order.

To achieve full benefit from the material, you must do programming of neural-network simulation software and must carry out experiments training the networks to solve problems. For this reason, you should have the ability to program in a high-level language, such as Ada or C. Prior familiarity with the concepts of pointers, arrays, linked lists, and dynamic memory management will be of value. Furthermore, because our simulators emphasize efficiency in order to reduce the amount of time needed to simulate large neural networks, you will find it helpful to have a basic understanding of computer architecture, data structures, and assembly language concepts.

In view of the availability of commercial hardware and software that comes with a development environment for building and experimenting with ANS models, our emphasis on the need to program from scratch requires explanation. Our experience has been that large-scale ANS applications require highly optimized software due to the extreme computational load that neural networks place on computing systems. Specialized environments often place a significant overhead on the system, resulting in decreased performance.
Moreover, certain issues—such as design flexibility, portability, and the ability to embed neural-network software into an application—become much less of a concern when programming is done directly in a language such as C.

Chapter 1, Introduction to ANS Technology, provides background material that is common to many of the discussions in following chapters. The two major topics in this chapter are a description of a general neural-network processing model and an overview of simulation techniques.
In the description of the
processing model, we have adhered, as much as possible, to the notation in
the PDP series. The simulation overview presents a general framework for the simulations discussed in subsequent chapters. Following this introductory chapter is a series of chapters, each devoted to a specific network or class of networks. There are nine such chapters:

Chapter 2, Adaline and Madaline
Chapter 3, Backpropagation
Chapter 4, The BAM and the Hopfield Memory
Chapter 5, Simulated Annealing: Networks discussed include the Boltzmann completion and input-output networks
Chapter 6, The Counterpropagation Network
Chapter 7, Self-Organizing Maps: includes the Kohonen topology-preserving map and the feature-map classifier
Chapter 8, Adaptive Resonance Theory: Networks discussed include both ART1 and ART2
Chapter 9, Spatiotemporal Pattern Classification: discusses Hecht-Nielsen's spatiotemporal network
Chapter 10, The Neocognitron

Each of these nine chapters contains a general description of the network architecture and a detailed discussion of the theory of operation of the network. Most chapters contain examples of applications that use the particular network. Chapters 2 through 9 include detailed instructions on how to build software
simulations of the networks within the general framework given in Chapter 1. Exercises based on the material are interspersed throughout the text. A list of suggested programming exercises and projects appears at the end of each chapter. We have chosen not to include the usual pseudocode for the neocognitron network described in Chapter 10. We believe that the complexity of this network
makes the neocognitron inappropriate as a programming exercise for students. To compile this survey, we had to borrow ideas from many different sources.
We have attempted to give credit to the original developers of these networks, but it was impossible to define a source for every idea in the text. To help alleviate this deficiency, we have included a list of suggested readings after each chapter. We have not, however, attempted to provide anything approaching an exhaustive bibliography for each of the topics that we discuss. Each chapter bibliography contains a few references to key sources and supplementary material in support of the chapter. Often, the sources we quote are older references, rather than the newest research on a particular topic. Many of the later research results are easy to find: Since 1987, the majority of technical papers on ANS-related topics has congregated in a few journals and conference
proceedings. In particular, the journals Neural Networks, published by the International Neural Network Society (INNS), and Neural Computation, published by MIT Press, are two important periodicals. A newcomer at the time of this writing is the IEEE special-interest group on neural networks, which has its own periodical. The primary conference in the United States is the International Joint Conference on Neural Networks, sponsored by the IEEE and INNS. This conference series was inaugurated in June of 1987, sponsored by the IEEE. The conferences have produced a number of large proceedings, which should be the primary source for anyone interested in the field. The proceedings of the annual conference on Neural Information Processing Systems (NIPS), published by Morgan Kaufmann, is another good source. There are other conferences as well, both in the United States and in Europe. As a comprehensive bibliography of the field, Casey Klimasauskas has compiled The 1989 Neuro-Computing Bibliography, published by MIT Press [17]. Finally, we believe this book will be successful if our readers gain
• A firm understanding of the operation of the specific networks presented
• The ability to program simulations of those networks successfully
• The ability to apply neural networks to real engineering and scientific problems
• A sufficient background to permit access to the professional literature
• The enthusiasm that we feel for this relatively new technology and the respect we have for its ability to solve problems that have eluded other approaches
ACKNOWLEDGMENTS

As this page is being written, several associates are outside our offices, discussing the New York Giants' win over the Buffalo Bills in Super Bowl XXV last night. Their comments describing the affair range from the typical superlatives, "The Giants' offensive line overwhelmed the Bills' defense," to denials of any skill, training, or teamwork attributable to the participants, "They were just plain lucky." By way of analogy, we have now arrived at our Super Bowl. The text is written, the artwork done, the manuscript reviewed, the editing completed, and the book is now ready for typesetting. Undoubtedly, after the book is published many will comment on the quality of the effort, although we hope no one will attribute the quality to "just plain luck." We have survived the arduous process of publishing a textbook, and like the teams that went to the Super Bowl, we have succeeded because of the combined efforts of many, many people. Space does not allow us to mention each person by name, but we are deeply grateful to everyone who has been associated with this project.
There are, however, several individuals who have gone well beyond the normal call of duty, and we would now like to thank these people by name. First of all, Dr. John Engvall and Mr. John Frere of Loral Space Information Systems were kind enough to encourage us in the exploration of neural-network technology and in the development of this book. Mr. Gary McIntire, Ms. Sheryl Knotts, and Mr. Matt Hanson, all of the Loral Space Information Systems Artificial Intelligence Laboratory, proofread early versions of the manuscript and helped us to debug our algorithms. We would also like to thank our reviewers: Dr. Marijke Augusteijn, Department of Computer Science, University of Colorado; Dr. Daniel Kammen, Division of Biology, California Institute of Technology; Dr. E. L. Perry, Loral Command and Control Systems; Dr. Gerald Tesauro, IBM Thomas J. Watson Research Center; and Dr. John Vittal, GTE Laboratories, Inc. We found their many comments and suggestions quite useful, and we believe that the end product is much better because of their efforts. We received funding for several of the applications described in the text from sources outside our own company. In that regard, we would like to thank Dr. Hossein Nivi of the Ford Motor Company, and Dr. Jon Erickson, Mr. Ken Baker, and Mr. Robert Savely of the NASA Johnson Space Center. We are also deeply grateful to our publishers, particularly Mr. Peter Gordon, Ms. Helen Goldstein, and Mr. Mark McFarland, all of whom offered helpful insights and suggestions and also took the risk of publishing two unknown authors. We also owe a great debt to our production staff, specifically, Ms. Loren Hilgenhurst Stevens, Ms. Mona Zeftel, and Ms. Mary Dyer, who guided us through the maze of details associated with publishing a book, and to our patient copy editor, Ms. Lyn Dupre, who taught us much about the craft of writing.
Finally, to Peggy, Carolyn, Geoffrey, Deborah, and Danielle, our wives and children, who patiently accepted the fact that we could not be all things to them and published authors, we offer our deepest and most heartfelt thanks.

Houston, Texas
J. A. F.
D. M. S.
Contents

Chapter 1  Introduction to ANS Technology  1
1.1 Elementary Neurophysiology  8
1.2 From Neurons to ANS  17
1.3 ANS Simulation  30
Bibliography  41

Chapter 2  Adaline and Madaline  45
2.1 Review of Signal Processing  45
2.2 Adaline and the Adaptive Linear Combiner  55
2.3 Applications of Adaptive Signal Processing  68
2.4 The Madaline  72
2.5 Simulating the Adaline  79
Bibliography  86

Chapter 3  Backpropagation  89
3.1 The Backpropagation Network  89
3.2 The Generalized Delta Rule  93
3.3 Practical Considerations  103
3.4 BPN Applications  106
3.5 The Backpropagation Simulator  114
Bibliography  124

Chapter 4  The BAM and the Hopfield Memory  127
4.1 Associative-Memory Definitions  128
4.2 The BAM  131
4.3 The Hopfield Memory  141
4.4 Simulating the BAM  156
Bibliography  167

Chapter 5  Simulated Annealing  169
5.1 Information Theory and Statistical Mechanics  171
5.2 The Boltzmann Machine  179
5.3 The Boltzmann Simulator  189
5.4 Using the Boltzmann Simulator  207
Bibliography  212

Chapter 6  The Counterpropagation Network  213
6.1 CPN Building Blocks  215
6.2 CPN Data Processing  235
6.3 An Image-Classification Example  244
6.4 The CPN Simulator  247
Bibliography  262

Chapter 7  Self-Organizing Maps  263
7.1 SOM Data Processing  265
7.2 Applications of Self-Organizing Maps  274
7.3 Simulating the SOM  279
Bibliography  289

Chapter 8  Adaptive Resonance Theory  291
8.1 ART Network Description  293
8.2 ART1  298
8.3 ART2  316
8.4 The ART1 Simulator  327
8.5 ART2 Simulation  336
Bibliography  338

Chapter 9  Spatiotemporal Pattern Classification  341
9.1 The Formal Avalanche  342
9.2 Architectures of Spatiotemporal Networks (STNs)  345
9.3 The Sequential Competitive Avalanche Field  355
9.4 Applications of STNs  363
9.5 STN Simulation  364
Bibliography  371

Chapter 10  The Neocognitron  373
10.1 Neocognitron Architecture  376
10.2 Neocognitron Data Processing  381
10.3 Performance of the Neocognitron  389
10.4 Addition of Lateral Inhibition and Feedback to the Neocognitron  390
Bibliography  393
Chapter 1
Introduction to ANS Technology
When the only tool you have is a hammer, every problem you encounter tends to resemble a nail. —Source unknown
Why can't we build a computer that thinks? Why can't we expect machines that can perform 100 million floating-point calculations per second to be able to comprehend the meaning of shapes in visual images, or even to distinguish between different kinds of similar objects? Why can't that same machine learn from experience, rather than repeating forever an explicit set of instructions generated by a human programmer? These are only a few of the many questions facing computer designers, engineers, and programmers, all of whom are striving to create more "intelligent" computer systems.

The inability of the current generation of computer systems to interpret the world at large does not, however, indicate that these machines are completely inadequate. There are many tasks that are ideally suited to solution by conventional computers: scientific and mathematical problem solving; database creation, manipulation, and maintenance; electronic communication; word processing, graphics, and desktop publication; even the simple control functions that add intelligence to and simplify our household tools and appliances are handled quite effectively by today's computers.

In contrast, there are many applications that we would like to automate, but have not automated due to the complexities associated with programming a computer to perform the tasks. To a large extent, the problems are not unsolvable; rather, they are difficult to solve using sequential computer systems. This distinction is important. If the only tool we have is a sequential computer, then we will naturally try to cast every problem in terms of sequential algorithms. Many problems are not suited to this approach, however, causing us to expend
a great deal of effort on the development of sophisticated algorithms, perhaps even failing to find an acceptable solution. In the remainder of this text, we will examine many parallel-processing architectures that provide us with new tools that can be used in a variety of applications. Perhaps, with these tools, we will be able to solve more easily currently difficult-to-solve, or unsolved, problems. Of course, our proverbial
hammer will still be extremely useful, but with a full toolbox we should be able to accomplish much more. As an example of the difficulties we encounter when we try to make a sequential computer system perform an inherently parallel task, consider the problem of visual pattern recognition. Complex patterns consisting of numerous elements that, individually, reveal little of the total pattern, yet collectively represent easily recognizable (by humans) objects, are typical of the kinds of patterns that have proven most difficult for computers to recognize. For example, examine the illustration presented in Figure 1.1. If we focus strictly on the black splotches, the picture is devoid of meaning. Yet, if we allow our perspective to encompass all the components, we can see the image of a commonly recognizable object in the picture. Furthermore, once we see the image, it is difficult for us not to see it whenever we again see this picture. Now, let's consider the techniques we would apply were we to program a conventional computer to recognize the object in that picture. The first thing our program would attempt to do is to locate the primary area or areas of interest in the picture. That is, we would try to segment or cluster the splotches into groups, such that each group could be uniquely associated with one object. We might then attempt to find edges in the image by completing line segments. We could continue by examining the resulting set of edges for consistency, trying to determine whether or not the edges found made sense in the context of the other line segments. Lines that did not abide by some predefined rules describing the way lines and edges appear in the real world would then be attributed to noise in the image and thus would be eliminated. Finally, we would attempt to isolate regions that indicated common textures, thus filling in the holes and completing the image. 
The illustration of Figure 1.1 is one of a dalmatian seen in profile, facing left, with head lowered to sniff at the ground. The image indicates the complexity of the type of problem we have been discussing. Since the dog is illustrated as a series of black spots on a white background, how can we write a computer program to determine accurately which spots form the outline of the dog, which spots can be attributed to the spots on his coat, and which spots are simply distractions? An even better question is this: How is it that we can see the dog in the image quickly, yet a computer cannot perform this discrimination? This question is especially poignant when we consider that the switching times of the components in modern electronic computers are more than seven orders of magnitude faster than those of the cells that comprise our neurobiological systems. This
Figure 1.1  The picture is an example of a complex pattern. Notice how the image of the object in the foreground blends with the background clutter. Yet, there is enough information in this picture to enable us to perceive the image of a commonly recognizable object. Source: Photo courtesy of Ron James.
question is partially answered by the fact that the architecture of the human brain is significantly different from the architecture of a conventional computer. Whereas the response time of the individual neural cells is typically on the order of a few tens of milliseconds, the massive parallelism and interconnectivity observed in the biological systems evidently account for the ability of the brain to perform complex pattern recognition in a few hundred milliseconds. In many real-world applications, we want our computers to perform complex pattern-recognition tasks, such as the one just described. Since our conventional computers are obviously not suited to this type of problem, we therefore borrow features from the physiology of the brain as the basis for our new processing models. Hence, the technology has come to be known as artificial neural systems (ANS) technology, or simply neural networks. Perhaps the models we discuss here will enable us eventually to produce machines that can interpret complex patterns such as the one in Figure 1.1. In the next section, we will discuss aspects of neurophysiology that contribute to the ANS models we will examine. Before we do that, let's first consider how an ANS might be used to formulate a computer solution to a pattern-matching problem similar to, but much simpler than, the problem of
recognizing the dalmatian in Figure 1.1. Specifically, the problem we will address is recognition of hand-drawn alphanumeric characters. This example is particularly interesting for two reasons:

• Even though a character set can be defined rigorously, people tend to personalize the manner in which they write the characters. This subtle variation in style is difficult to deal with when an algorithmic pattern-matching approach is used, because it combinatorially increases the size of the legal input space to be examined.

• As we will see in later chapters, the neural-network approach to solving the problem not only can provide a feasible solution, but also can be used to gain insight into the nature of the problem.
We begin by defining a neural-network structure as a collection of parallel processors connected together in the form of a directed graph, organized such that the network structure lends itself to the problem being considered. Referring to Figure 1.2 as a typical network diagram, we can schematically represent each processing element (or unit) in the network as a node, with connections between units indicated by the arcs. We shall indicate the direction of information flow in the network through the use of the arrowheads on the connections.

To simplify our example, we will restrict the number of characters the neural network must recognize to the 10 decimal digits, 0, 1, ..., 9, rather than using the full ASCII character set. We adopt this constraint only to clarify the example; there is no reason why an ANS could not be used to recognize all characters, regardless of case or style. Since our objective is to have the neural network determine which of the 10 digits a particular hand-drawn character is, we can create a network structure that has 10 discrete output units (or processors), one for each character to be identified. This strategy simplifies the character-discrimination function of the network, as it allows us to use a network that contains binary units on the output
layer (e.g., for any given input pattern, our network should activate one and only one of the 10 output units, representing which of the 10 digits that we are attempting to recognize the input most resembles). Furthermore, if we insist that the output units behave according to a simple on-off strategy, the process of converting an input signal to an output signal becomes a simple majority function. Based on these considerations, we now know that our network should contain 10 binary units as its output structure. Similarly, we must determine how we will model the character input for the network. Keeping in mind that we have already indicated a preference for binary output units, we can again simplify our task if we model the input data as a vector containing binary elements, which will allow us to use a network with only one type of processing unit. To create this type of input, we borrow an idea from the video world and pixelize the character. We will arbitrarily size the pixel image as a 10 x 8 matrix, using a 1 to represent a pixel that is "on," and a 0 to represent a pixel that is "off."
Figure 1.2  This schematic represents the character-recognition problem described in the text. (The layers of units in the figure are labeled, from bottom to top: Inputs, Hiddens, Outputs.) In this example, application of an input pattern on the bottom layer of processors can cause many of the second-layer, or hidden-layer, units to activate. The activity on the hidden layer should then cause exactly one of the output-layer units to activate—the one associated with the pattern being identified. You should also note the large number of connections needed for this relatively small network.
Furthermore, we can dissect this matrix into a set of row vectors, which can then be concatenated into a single row vector of dimension 80. Thus, we have now defined the dimension and characteristics of the input pattern for our network. At this point, all that remains is to size the number of processing units (called hidden units) that must be used internally, to connect them to the input and output units already defined using weighted connections, and to train the network with example data pairs.¹ This concept of learning by example is extremely important. As we shall see, a significant advantage of an ANS approach to solving a problem is that we need not have a well-defined process for algorithmically converting an input to an output. Rather, all that we need for most
¹Details of how this training is accomplished will occupy much of the remainder of the text.
networks is a collection of representative examples of the desired translation. The ANS then adapts itself to reproduce the desired outputs when presented with the example inputs. In addition, as our example network illustrates, an ANS is robust in the sense that it will respond with an output even when presented with inputs that it
has never seen before, such as patterns containing noise. If the input noise has not obliterated the image of the character, the network will produce a good guess using those portions of the image that were not obscured and the information that it has stored about how the characters are supposed to look. The inherent ability to deal with noisy or obscured patterns is a significant advantage of an ANS approach over a traditional algorithmic solution. It also illustrates a neural-network maxim: The power of an ANS approach lies not necessarily in the elegance of the particular solution, but rather in the generality of the network to find its own solution to particular problems, given only examples of the desired behavior. Once our network is trained adequately, we can show it images of numerals written by people whose writing was not used to train the network. If the training has been adequate, the information propagating through the network will result in a single element at the output having a binary 1 value, and that unit will be the one that corresponds to the numeral that was written. Figure 1.3 illustrates characters that the trained network can recognize, as well as several it cannot. In the previous discussion, we alluded to two different types of network operation: training mode and production mode. The distinct nature of these two modes of operation is another useful feature of ANS technology. If we note that the process of training the network is simply a means of encoding information about the problem to be solved, and that the network spends most of its productive time being exercised after the training has completed, we will have uncovered a means of allowing automated systems to evolve without explicit reprogramming. As an example of how we might benefit from this separation, consider a system that utilizes a software simulation of a neural network as part of its programming.
In this case, the network would be modeled in the host computer system as a set of data structures that represents the current state of the network. The process of training the network is simply a matter of altering the connection weights systematically to encode the desired inputoutput relationships. If we code the network simulator such that the data structures used by the network are allocated dynamically, and are initialized by reading of connectionweight data from a disk file, we can also create a network simulator with a similar structure in another, offline computer system. When the online system must change to satisfy new operational requirements, we can develop the new connection weights offline by training the network simulator in the remote system. Later, we can update the operational system by simply changing the connectionweight initialization file from the previous version to the new version produced by the offline system.
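The weight-file scheme described above might be sketched as follows. The file format and function names here are invented for illustration; the point is that the connection weights are the only state that must move between the offline and online simulators.

```python
# Sketch of the offline-update scheme described above: the trained
# connection weights are serialized to a flat text file, which the
# online simulator reads at initialization time.
# (File format and function names are invented for illustration.)

def save_weights(path, weights):
    """Write one line per unit: its connection weights, space-separated."""
    with open(path, "w") as f:
        for row in weights:
            f.write(" ".join(repr(w) for w in row) + "\n")

def load_weights(path):
    """Rebuild the weight matrix from the initialization file."""
    with open(path) as f:
        return [[float(w) for w in line.split()] for line in f]

# Train offline, ship the file, then the online simulator re-reads it:
trained = [[0.25, -0.5], [1.0, 0.75]]
save_weights("weights.dat", trained)
print(load_weights("weights.dat") == trained)  # True
```
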
Figure 1.3
Handwritten characters vary greatly. (a) These characters were recognized by the network in Figure 1.2; (b) these characters were not recognized.
These examples hint at the ability of neural networks to deal with complex patternrecognition problems, but they are by no means indicative of the limits of the technology. In later chapters, we will describe networks that can be used to diagnose problems from symptoms, networks that can adapt themselves to model a topological mapping accurately, and even networks that can learn to recognize and reproduce a temporal sequence of patterns. All these networks are based on the simple building blocks discussed previously, and derived from the topics we shall discuss in the next two sections. Finally, the distinction made between the artificial and natural systems is intentional. We cannot overemphasize the fact that the ANS models we will
examine bear only a perfunctory resemblance to their biological counterparts. What is important about these models is that they all exhibit the useful behaviors of learning, recognizing, and applying relationships between objects and patterns of objects in the real world. In this regard, they provide us with a whole new set of tools that we can use to solve "difficult" problems.
1.1 ELEMENTARY NEUROPHYSIOLOGY
From time to time throughout this text, we shall cite specific results from neurobiology that pertain to a particular ANS architecture. There are also basic concepts that have a more universal significance. In this regard, we look first at individual neurons, then at the synaptic junctions between neurons. We describe the McCulloch-Pitts model of neural computation, and examine its specific relationship to our neural-network models. We finish the section with a look at
Hebb's theory of learning. Bear in mind that the following discussion is a simplified overview; the subject of neurophysiology is vastly more complicated than is the picture we paint here.
1.1.1 Single-Neuron Physiology
Figure 1.4 depicts the major components of a typical nerve cell in the central nervous system. The membrane of a neuron separates the intracellular plasma from the interstitial fluid external to the cell. The membrane is permeable to certain ionic species, and acts to maintain a potential difference between the
[Figure labels: myelin sheath, axon hillock, nucleus, dendrites]

Figure 1.4
The major structures of a typical nerve cell include dendrites, the cell body, and a single axon. The axon of many neurons is surrounded by a membrane called the myelin sheath. Nodes of Ranvier interrupt the myelin sheath periodically along the length of the axon. Synapses connect the axons of one neuron to various parts of other neurons.
[Figure labels: cell membrane, Na+, external electrode]

Figure 1.5
This figure illustrates the resting potential developed across the cell membrane of a neuron. The relative sizes of the labels for the ionic species indicate roughly the relative concentration of each species in the regions internal and external to the cell.
intracellular fluid and the extracellular fluid. It accomplishes this task primarily by the action of a sodium-potassium pump. This mechanism transports sodium ions out of the cell and potassium ions into the cell. Other ionic species present are chloride ions and negative organic ions. All the ionic species can diffuse across the cell membrane, with the exception of the organic ions, which are too large. Since the organic ions cannot diffuse out of the cell, their net negative charge makes chloride diffusion into the cell unfavorable; thus, there will be a higher concentration of chloride ions outside of the cell. The sodium-potassium pump forces a higher concentration of potassium inside the cell and a higher concentration of sodium outside the cell. The cell membrane is selectively more permeable to potassium ions than to sodium ions. The chemical gradient of potassium tends to cause potassium ions to diffuse out of the cell, but the strong attraction of the negative organic ions tends to keep the potassium inside. The result of these opposing forces is that an equilibrium is reached where there are significantly more sodium and chloride ions outside the cell, and more potassium and organic ions inside the cell. Moreover, the resulting equilibrium leaves a potential difference across the cell membrane of about 70 to 100 millivolts (mV), with the intracellular fluid being more negative. This potential, called the resting potential of the cell, is depicted schematically in Figure 1.5. Figure 1.6 illustrates a neuron with several incoming connections, and the potentials that occur at various locations. The figure shows the axon with a covering called a myelin sheath. This insulating layer is interrupted at various points by the nodes of Ranvier. Excitatory inputs to the cell reduce the potential difference across the cell membrane. The resulting depolarization at the axon hillock alters the permeability of the cell membrane to sodium ions.
As a result, there is a large influx
[Figure labels: action potential spike propagates along axon; excitatory, depolarizing potential; inhibitory, polarizing potential]

Figure 1.6
Connections to the neuron from other neurons occur at various locations on the cell that are known as synapses. Nerve impulses through these connecting neurons can result in local changes in the potential in the cell body of the receiving neuron. These potentials, called graded potentials or input potentials, can spread through the main body of the cell. They can be either excitatory (decreasing the polarization of the cell) or inhibitory (increasing the polarization of the cell). The input potentials are summed at the axon hillock. If the amount of depolarization at the axon hillock is sufficient, an action potential is generated; it travels down the axon away from the main cell body.
of positive sodium ions into the cell, contributing further to the depolarization. This self-generating effect results in the action potential. Nerve fibers themselves are poor conductors. The transmission of the action potential down the axon is a result of a sequence of depolarizations that occur at the nodes of Ranvier. As one node depolarizes, it triggers the depolarization of the next node. The action potential travels down the fiber in a discontinuous fashion, from node to node. Once an action potential has passed a given point,
[Figure labels: presynaptic membrane, neurotransmitter release, postsynaptic membrane, synaptic vesicle]

Figure 1.7
Neurotransmitters are held in vesicles near the presynaptic membrane. These chemicals are released into the synaptic cleft and diffuse to the postsynaptic membrane, where they are subsequently absorbed.
that point is incapable of being re-excited for about 1 millisecond, while it is restored to its resting potential. This refractory period limits the frequency of nerve-pulse transmission to about 1000 per second.
1.1.2 The Synaptic Junction
Let's take a brief look at the activity that occurs at the connection between two neurons, called the synaptic junction or synapse. Communication between neurons occurs as a result of the release by the presynaptic cell of substances called neurotransmitters, and of the subsequent absorption of these substances by the postsynaptic cell. Figure 1.7 shows this activity. When the action potential arrives at the presynaptic membrane, changes in the permeability of the membrane cause an influx of calcium ions. These ions cause the vesicles containing the neurotransmitters to fuse with the presynaptic membrane and to release their neurotransmitters into the synaptic cleft.
The neurotransmitters diffuse across the junction and join to the postsynaptic membrane at certain receptor sites. The chemical action at the receptor sites results in changes in the permeability of the postsynaptic membrane to certain ionic species. An influx of positive species into the cell will tend to depolarize the resting potential; this effect is excitatory. If negative ions enter, a hyperpolarization effect occurs; this effect is inhibitory. Both effects are local
effects that spread a short distance into the cell body and are summed at the axon hillock. If the sum is greater than a certain threshold, an action potential is generated.
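As a toy illustration of this summation (all numbers invented; real graded potentials are continuous in time and space):

```python
# A toy sketch of the summation described above: graded synaptic
# potentials (positive = excitatory, negative = inhibitory) are summed
# at the axon hillock, and an action potential is generated only if
# the sum exceeds a threshold. All numbers are invented.

FIRING_THRESHOLD = 10.0   # arbitrary units of depolarization

def fires(graded_potentials):
    """True if the summed input potentials exceed the threshold."""
    return sum(graded_potentials) > FIRING_THRESHOLD

print(fires([6.0, 5.5, -1.0]))   # True: 10.5 exceeds the threshold
print(fires([6.0, 5.5, -2.0]))   # False: inhibition keeps the sum at 9.5
```
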
1.1.3 Neural Circuits and Computation
Figure 1.8 illustrates several basic neural circuits that are found in the central nervous system. Figures 1.8(a) and (b) illustrate the principles of divergence and convergence in neural circuitry. Each neuron sends impulses to many other neurons (divergence), and receives impulses from many neurons (convergence). This simple idea appears to be the foundation for all activity in the central nervous system, and forms the basis for most neural-network models that we shall discuss in later chapters. Notice the feedback paths in the circuits of Figure 1.8(b), (c), and (d). Since synaptic connections can be either excitatory or inhibitory, these circuits facilitate control systems having either positive or negative feedback. Of course, these simple circuits do not adequately portray the vast complexity of neuroanatomy. Now that we have an idea of how individual neurons operate and of how they are put together, we can pose a fundamental question: How do these relatively simple concepts combine to give the brain its enormous abilities? The first significant attempt to answer this question was made in 1943, through the seminal work by McCulloch and Pitts [24]. This work is important for many reasons, not the least of which is that the investigators were the first people to treat the brain as a computational organism. The McCulloch-Pitts theory is founded on five assumptions:
1. The activity of a neuron is an all-or-none process.

2. A certain fixed number of synapses (> 1) must be excited within a period of latent addition for a neuron to be excited.

3. The only significant delay within the nervous system is synaptic delay.

4. The activity of any inhibitory synapse absolutely prevents excitation of the neuron at that time.

5. The structure of the interconnection network does not change with time.

Assumption 1 identifies the neurons as being binary: They are either on or off. We can therefore define a predicate, N_i(t), which denotes the assertion that the ith neuron fires at time t. The notation ¬N_i(t) denotes the assertion that the ith neuron did not fire at time t. Using this notation, we can describe
Figure 1.8
These schematics show examples of neural circuits in the central nervous system. The cell bodies (including the dendrites) are represented by the large circles. Small circles appear at the ends of the axons. Illustrated in (a) and (b) are the concepts of divergence and convergence. Shown in (b), (c), and (d) are examples of circuits with feedback paths.
the action of certain networks using propositional logic. Figure 1.9 shows five simple networks. We can write simple propositional expressions to describe the behavior of the first four (the fifth one appears in Exercise 1.1). Figure 1.9(a) describes precession: neuron 2 fires after neuron 1. The expression is N_2(t) = N_1(t − 1). Similarly, the expressions for parts (b) through (d) of this figure are

• N_3(t) = N_1(t − 1) ∨ N_2(t − 1) (disjunction),

• N_3(t) = N_1(t − 1) & N_2(t − 1) (conjunction), and

• N_3(t) = N_1(t − 1) & ¬N_2(t − 1) (conjoined negation).
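These four expressions are easy to check in code. The sketch below treats each neuron's firing history as a list of 0/1 values indexed by time step; the function names are ours, not McCulloch and Pitts'.

```python
# A minimal sketch of the four McCulloch-Pitts expressions above,
# treating each neuron's history as a list of 0/1 firings indexed by t.

def precession(n1, t):              # N2(t) = N1(t-1)
    return n1[t - 1]

def disjunction(n1, n2, t):         # N3(t) = N1(t-1) OR N2(t-1)
    return int(n1[t - 1] or n2[t - 1])

def conjunction(n1, n2, t):         # N3(t) = N1(t-1) AND N2(t-1)
    return int(n1[t - 1] and n2[t - 1])

def conjoined_negation(n1, n2, t):  # N3(t) = N1(t-1) AND NOT N2(t-1)
    return int(n1[t - 1] and not n2[t - 1])

n1 = [0, 1, 1, 0]   # firing history of neuron 1
n2 = [0, 0, 1, 1]   # firing history of neuron 2
print(precession(n1, 2))              # 1: neuron 1 fired at t = 1
print(conjoined_negation(n1, n2, 2))  # 1: N1 fired at t = 1, N2 did not
```
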
One of the powerful proofs in this theory was that any network that does not have feedback connections can be described in terms of combinations of these four
Figure 1.9
These drawings are examples of simple McCulloch-Pitts networks that can be defined in terms of the notation of propositional logic. Large circles with labels represent cell bodies. The small, filled circles represent excitatory connections; the small, open circles represent inhibitory connections. The networks illustrate (a) precession, (b) disjunction, (c) conjunction, and (d) conjoined negation. Shown in (e) is a combination of networks (a) through (d).
simple expressions, and vice versa. Figure 1.9(e) is an example of a network made from a combination of the networks in parts (a) through (d). Although the McCulloch-Pitts theory has turned out not to be an accurate model of brain activity, the importance of the work cannot be overstated. The theory helped to shape the thinking of many people who were influential in the development of modern computer science. As Anderson and Rosenfeld point out, one critical idea was left unstated in the McCulloch-Pitts paper: Although neurons are simple devices, great computational power can be realized
when these neurons are suitably connected and are embedded within the nervous system [2].

Exercise 1.1: Write the propositional expressions for N_3(t) and N_4(t) of Figure 1.9(e).

Exercise 1.2: Construct McCulloch-Pitts networks for the following expressions:

1. N_3(t) = N_2(t − 2) & ¬N_1(t − 3)

2. N_4(t) = [N_2(t − 1) & ¬N_1(t − 1)] ∨ [N_3(t − 1) & ¬N_1(t − 1)] ∨ [N_2(t − 1) & N_3(t − 1)]
1.1.4 Hebbian Learning
Biological neural systems are not born preprogrammed with all the knowledge and abilities that they will eventually have. A learning process that takes place over a period of time somehow modifies the network to incorporate new information. In the previous section, we began to see how a relatively simple neuron might result in a sophisticated computational device. In this section, we shall explore a relatively simple learning theory that suggests an elegant answer to this question: How do we learn? The basic theory comes from a 1949 book by Hebb, Organization of Behavior. The main idea was stated in the form of an assumption, which we reproduce here for historical interest: When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased. [10, p. 50]
As with the McCulloch-Pitts model, this learning law does not tell the whole story. Nevertheless, it appears in one form or another in many of the neural-network models that exist today. To illustrate the basic idea, we consider the example of classical conditioning, using the familiar experiment of Pavlov. Figure 1.10 shows three idealized neurons that participate in the process. Suppose that the excitation of C, caused by the sight of food, is sufficient to excite B, causing salivation. Furthermore, suppose that, in the absence of additional stimulation, the excitation of A, resulting from hearing a bell, is not sufficient to cause the firing of B. Let's allow C to cause B to fire by showing food to the subject, and, while B is still firing, stimulate A by ringing a bell. Because B is still firing, A is now participating in the excitation of B, even though by itself A would be insufficient to cause B to fire. In this situation, Hebb's assumption dictates that some change occur between A and B, so that A's influence on B is increased.
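A toy simulation of this conditioning experiment, using the simplest possible form of Hebb's rule (the threshold, learning rate, and initial weights are invented; Hebb's own statement specifies no numbers):

```python
# A toy simulation of the conditioning experiment above: when units
# A and B fire together, the weight from A to B grows (Hebb's rule in
# its simplest form). Threshold, learning rate, and initial weights
# are invented for illustration.

THRESHOLD = 1.0
ETA = 0.2                 # weight growth per paired trial

w_ab = 0.5                # bell alone: 0.5 < 1.0, B stays silent
w_cb = 1.5                # food alone: 1.5 >= 1.0, B fires (salivation)

def b_fires(bell, food):
    """B fires when its summed, weighted input reaches threshold."""
    return bell * w_ab + food * w_cb >= THRESHOLD

print(b_fires(bell=1, food=0))   # False: bell alone is not enough yet

# Pair the bell with food; B fires, so the A->B synapse is reinforced:
for trial in range(5):
    if b_fires(bell=1, food=1):
        w_ab += ETA

print(b_fires(bell=1, food=0))   # True: bell alone now causes salivation
```
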
[Figure labels: salivation signal, sight input]

Figure 1.10
Two neurons, A and C, are stimulated by the sensory inputs of sound and sight, respectively. The third neuron, B, causes salivation. The two synaptic junctions are labeled S_BA and S_BC.
If the experiment is repeated often enough, A will eventually be able to cause B to fire even in the absence of the visual stimulation from C. Then, if the bell is rung, but no food is shown, salivation will still occur, because the excitation due to A alone is now sufficient to cause B to fire. Because the connection between neurons is through the synapse, it is reasonable to guess that whatever changes occur during learning take place there. Hebb theorized that the area of the synaptic junction increased. More recent theories assert that an increase in the rate of neurotransmitter release by the presynaptic cell is responsible. In any event, changes certainly occur at the synapse. If either the pre- or postsynaptic cell were altered as a whole, other responses could be reinforced that are unrelated to the conditioning experiment. Thus we conclude our brief look at neurophysiology. Before moving on, however, we reiterate a caution and issue a challenge to you. On the one hand, although there are many analogies between the basic concepts of neurophysiology and the neural-network models described in this book, we caution you not to portray these systems as actually modeling the brain. We prefer to say that these networks have been inspired by our current understanding of neurophysiology. On the other hand, it is often too easy for engineers, in their pursuit of solutions to specific problems, to ignore completely the neurophysiological foundations of the technology. We believe that this tendency is unfortunate. Therefore, we challenge ANS practitioners to keep abreast of the developments in neurobiology so as to be able to incorporate significant results into their systems. After all, what better model is there than the one example of a neural network with existing capabilities that far surpass any of our artificial systems?
Exercise 1.3: The analysis of high-dimensional data sets is often a complex task. One way to simplify the task is to use the Karhunen-Loeve (KL) matrix, which is defined as

w_ij = (1/N) Σ_μ x_i^μ x_j^μ

where N is the number of vectors, and x_i^μ is the ith component of the μth vector. The KL matrix extracts the principal components, or directions of maximum information (correlation), from a data set. Determine the relationship between the KL formulation and the popular version of the Hebb rule known as the Oja rule:

dφ_i(t)/dt = O(t) I_i(t) − O²(t) φ_i(t)

where O(t) is the output of a simple, linear processing element; I_i(t) are the inputs; and φ_i(t) are the synaptic strengths. (This exercise was suggested by Dr. Daniel Kammen, California Institute of Technology.)
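For readers who want to experiment before attacking the exercise, here is a small numerical sketch of the Oja rule in discrete time. The inputs, learning rate, and iteration count are invented; the behavior to notice is that the weight vector settles onto the principal direction of the data with unit length.

```python
# A numerical sketch of the Oja rule above (discrete-time version,
# with invented inputs and learning rate). The weight vector phi is
# driven by dphi_i/dt = O*I_i - O^2*phi_i and settles onto the
# principal component of the input data, with unit length.

eta = 0.05
phi = [1.0, 0.0]                      # initial synaptic strengths
inputs = [(1.0, 1.0), (-1.0, -1.0)]   # data correlated along (1, 1)

for step in range(500):
    I = inputs[step % 2]
    O = phi[0] * I[0] + phi[1] * I[1]                 # linear unit output
    phi = [p + eta * (O * i - O * O * p) for p, i in zip(phi, I)]

print(phi)  # approaches (1/sqrt(2), 1/sqrt(2)): the principal direction
```
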
1.2 FROM NEURONS TO ANS
In this section, we make a transition from some of the ideas gleaned from neurobiology to the idealized structures that form the basis of most ANS models. We first describe a general artificial neuron that incorporates most features we shall need for future discussions of specific models. Later in the section, we take a brief look at a particular example of an ANS called the perceptron. The perceptron was the result of an early attempt to simulate neural computation in order to perform complex tasks. We shall examine several limitations of this approach, and how they might be overcome.
1.2.1 The General Processing Element
The individual computational elements that make up most artificial neural-system models are rarely called artificial neurons; they are more often referred to as nodes, units, or processing elements (PEs). All these terms are used interchangeably throughout this book. Another point to bear in mind is that it is not always appropriate to think of the processing elements in a neural network as being in a one-to-one relationship with actual biological neurons. It is sometimes better to imagine a single processing element as representative of the collective activity of a group of neurons. Not only will this interpretation help us to avoid the trap of speaking as though our systems were actual brain models, but also it will make the problem more tractable when we are attempting to model the behavior of some biological structure. Figure 1.11 shows our general PE model. Each PE is numbered, the one in the figure being the ith. Having cautioned you not to make too many biological
[Figure labels: output; type 1, type 2, …, type n inputs]

Figure 1.11
This structure represents a single PE in a network. The input connections are modeled as arrows from other processing elements. Each input connection has associated with it a quantity, w_ij, called a weight. There is a single output value, which can fan out to other units.
analogies, we shall now ignore our own advice and make a few ourselves. For example, like a real neuron, the PE has many inputs, but has only a single output, which can fan out to many other PEs in the network. The input that the ith PE receives from the jth PE is indicated as x_j (note that this value is also the output of the jth node, just as the output generated by the ith node is labeled x_i). Each connection to the ith PE has associated with it a quantity called a weight or connection strength. The weight on the connection from the jth node to the ith node is denoted w_ij. All these quantities have analogues in the standard neuron model: The output of the PE corresponds to the firing frequency of the neuron, and the weight corresponds to the strength of the synaptic connection between neurons. In our models, these quantities will be represented as real numbers.
Notice that the inputs to the PE are segregated into various types. This segregation acknowledges that a particular input connection may have one of several effects. An input connection may be excitatory or inhibitory, for example. In our models, excitatory connections have positive weights, and inhibitory connections have negative weights. Other types are possible. The terms gain, quenching, and nonspecific arousal describe other, special-purpose connections; the characteristics of these other connections will be described later in the book. Excitatory and inhibitory connections are usually considered together, and constitute the most common forms of input to a PE. Each PE determines a net-input value based on all its input connections.
In the absence of special connections, we typically calculate the net input by summing the input values, gated (multiplied) by their corresponding weights. In other words, the net input to the ith unit can be written as
net_i = Σ_j x_j w_ij    (1.1)
where the index, j, runs over all connections to the PE. Note that excitation
and inhibition are accounted for automatically by the sign of the weights. This sum-of-products calculation plays an important role in the network simulations that we will be describing later. Because there is often a very large number of interconnects in a network, the speed at which this calculation can be performed usually determines the performance of any given network simulation. Once the net input is calculated, it is converted to an activation value, or simply activation, for the PE. We can write this activation value as

a_i(t) = F_i(a_i(t − 1), net_i(t))    (1.2)
to denote that the activation is an explicit function of the net input. Notice that the current activation may depend on the previous value of the activation, a_i(t − 1).² We include this dependence in the definition for generality. In the majority of cases, the activation and net input are identical, and the terms often are used interchangeably. Sometimes, activation and net input are not the same, and we must pay attention to the difference. For the most part, however, we will be able to use activation to mean net input, and vice versa. Once the activation of the PE is calculated, we can determine the output value by applying an output function:
x_i = f_i(a_i)    (1.3)

Since, usually, a_i = net_i, this function is normally written as

x_i = f_i(net_i)    (1.4)
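Equations (1.1), (1.3), and (1.4) can be collected into a few lines of code. The particular output functions shown (a hard threshold and a logistic sigmoid) are common choices offered only as examples; the text has not yet committed to a specific f_i. All numeric values are invented.

```python
import math

# Eqs. (1.1), (1.3), and (1.4) as code: a unit forms its net input as
# a weighted sum, then passes it through an output function f.
# The two output functions below are illustrative examples only.

def net_input(x, w_i):
    """Eq. (1.1): sum of x_j * w_ij over all connections j to unit i."""
    return sum(xj * wij for xj, wij in zip(x, w_i))

def threshold_output(net, theta=0.0):
    """A binary output function: on if the net input reaches theta."""
    return 1 if net >= theta else 0

def sigmoid_output(net):
    """A smooth output function (the logistic sigmoid)."""
    return 1.0 / (1.0 + math.exp(-net))

x   = [1.0, 0.0, 1.0]     # outputs of the sending units (invented)
w_i = [0.5, -0.3, 0.2]    # weights into unit i; negative = inhibitory

net = net_input(x, w_i)   # 0.5 + 0.0 + 0.2 = 0.7
print(threshold_output(net))            # 1
print(round(sigmoid_output(net), 3))
```
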
²Because of the emphasis on digital simulations in this text, we generally consider time to be measured in discrete steps. The notation t − 1 indicates one time-step prior to time t.

One reason for belaboring the issue of activation versus net input is that the term activation function is sometimes used to refer to the function, f_i, that
converts the net input value, net_i, to the node's output value, x_i. In this text, we shall consistently use the term output function for f_i(·) of Eqs. (1.3) and (1.4). Be aware, however, that the literature is not always consistent in this respect. When we are describing the mathematical basis for network models, it will often be useful to think of the network as a dynamical system; that is, as a system that evolves over time. To describe such a network, we shall write differential equations that describe the time rate of change of the outputs of the various PEs. For example, ẋ_i = g_i(x_i, net_i) represents a general differential equation for the output of the ith PE, where the dot above the x refers to
differentiation with respect to time. Since net_i depends on the outputs of many other units, we actually have a system of coupled differential equations. As an example, let's look at the equation

ẋ_i = −x_i + f_i(net_i)

for the output of the ith processing element. We apply some input values to the PE so that net_i > 0. If the inputs remain for a sufficiently long time, the output value will reach an equilibrium value, when ẋ_i = 0, given by

x_i = f_i(net_i)

which is identical to Eq. (1.4). We can often assume that input values remain until equilibrium has been achieved. Once the unit has a nonzero output value, removal of the inputs will cause the output to return to zero. If net_i = 0, then

ẋ_i = −x_i

which means that x_i → 0. It is also useful to view the collection of weight values as a dynamical system. Recall the discussion in the previous section, where we asserted that learning is a result of the modification of the strength of synaptic junctions between neurons. In an ANS, learning usually is accomplished by modification of the weight values. We can write a system of differential equations for the weight values, ẇ_ij = G_i(w_ij, x_i, x_j, ...), where G_i represents the learning law. The learning process consists of finding weights that encode the knowledge that we want the system to learn. For most realistic systems, it is not easy to determine a closed-form solution for this system of equations. Techniques exist, however, that result in an acceptable approximation to a solution. Proving the existence of stable solutions to such systems of equations is an active area of research in neural networks today, and probably will continue to be so for some time.
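A simple Euler integration illustrates this behavior. The step size, output function, and input value below are invented for the sketch; the point is that the output settles at x_i = f_i(net_i) while the input is present and decays to zero when it is removed.

```python
# A discrete (Euler) sketch of the dynamical-system view above:
# integrating x_dot = -x + f(net) drives the output to the
# equilibrium x = f(net), and removing the input lets it decay to 0.
# Step size, output function, and input value are invented.

def f(net):
    return max(0.0, net)     # any output function will do for the sketch

def settle(x, net, dt=0.01, steps=2000):
    """Integrate x_dot = -x + f(net) with a fixed net input."""
    for _ in range(steps):
        x += dt * (-x + f(net))
    return x

x = settle(0.0, net=0.8)     # input applied long enough to equilibrate
print(round(x, 3))           # near f(0.8) = 0.8

x = settle(x, net=0.0)       # input removed: output relaxes back
print(round(x, 3))           # near 0.0
```
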
1.2.2 Vector Formulation
In many of the network models that we shall discuss, it is useful to describe certain quantities in terms of vectors. Think of a neural network composed of several layers of identical processing elements. If a particular layer contains n
units, the outputs of that layer can be thought of as an n-dimensional vector, x = (x_1, x_2, ..., x_n)^t, where the t superscript means transpose. In our notation, vectors written in boldface type, such as x, will be assumed to be column vectors. When they are written in row form, the transpose symbol will be added to indicate that the vector is actually to be thought of as a column vector. Conversely, the notation x^t indicates a row vector. Suppose the n-dimensional output vector of the previous paragraph provides
the input values to each unit in an m-dimensional layer (a layer with m units). Each unit on the m-dimensional layer will have n weights associated with the
connections from the previous layer. Thus, there are m n-dimensional weight vectors associated with this layer; there is one n-dimensional weight vector for each of the m units. The weight vector of the ith unit can be written as w_i = (w_i1, w_i2, ..., w_in)^t. A superscript can be added to the weight notation to distinguish between weights on different layers. The net input to the ith unit can be written in terms of the inner product, or dot product, of the input vector and the weight vector. For vectors of equal dimensions, the inner product is defined as the sum of the products of the corresponding components of the two vectors. In the notation of the previous section,

net_i = Σ_{j=1}^{n} x_j w_ij

where n is the number of connections to the ith unit. This equation can be written succinctly in vector notation as

net_i = x · w_i    or    net_i = x^t w_i

Also note that, because of the rules of multiplication of vectors,

x^t w_i = w_i^t x
We shall often speak of input vectors and output vectors and weight vectors,
but we tend to reserve the vector notation for cases where it is particularly appropriate. Additional vector concepts will be introduced later as needed. In the next section, we shall use the notation presented here to describe a neural-network model that has an important place in history: the perceptron.
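The net-input computation just described is, concretely, one inner product per unit. A minimal Python sketch (the function names are ours, chosen for illustration):

```python
def net_input(x, w):
    """Inner product of input vector x and weight vector w (equal dimensions)."""
    assert len(x) == len(w)
    return sum(xj * wj for xj, wj in zip(x, w))

def layer_net_inputs(x, W):
    """One n-dimensional weight vector per unit: W holds m weight vectors."""
    return [net_input(x, w_i) for w_i in W]

x = [1.0, 0.5, -1.0]                       # n = 3 inputs
W = [[0.2, 0.4, 0.1], [-0.3, 0.0, 0.5]]    # m = 2 units
nets = layer_net_inputs(x, W)              # net input for each of the m units
```

Each unit's net input is independent of the others, which is why the m inner products can be computed in parallel in hardware or with a single matrix-vector product in vectorized code.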
1.2.3
The Perceptron: Part 1
The device known as the perceptron was invented by psychologist Frank Rosenblatt in the late 1950s. It represented his attempt to "illustrate some of the fundamental properties of intelligent systems in general, without becoming too
Introduction to ANS Technology
Figure 1.12 A simple photoperceptron has a sensory (S) area, an association (A) area, and a response (R) area. The connections shown between units in the various areas (inhibitory, excitatory, or either) are illustrative, and are not meant to be an exhaustive representation.
deeply enmeshed in the special, and frequently unknown, conditions which hold for particular biological organisms" [29, p. 387]. Rosenblatt believed that the connectivity that develops in biological networks contains a large random element. Thus, he took exception to previous analyses, such as the McCulloch-Pitts model, where symbolic logic was employed to analyze rather idealized structures. Rather, Rosenblatt believed that the most appropriate analysis tool was probability theory. He developed a theory of statistical separability that he used to characterize the gross properties of these somewhat randomly interconnected networks.

The photoperceptron is a device that responds to optical patterns. We show an example in Figure 1.12. In this device, light impinges on the sensory (S) points of the retina structure. Each S point responds in an all-or-nothing manner to the incoming light. Impulses generated by the S points are transmitted to the associator (A) units in the association layer. Each A unit is connected to a random set of S points, called the A unit's source set, and the connections may be either excitatory or inhibitory. The connections have the possible values +1, -1, and 0. When a stimulus pattern appears on the retina, an A unit becomes active if the sum of its inputs exceeds some threshold value. If active, the A unit produces an output, which is sent to the next layer of units.

In a similar manner, A units are connected to response (R) units in the response layer. The pattern of connectivity is again random between the layers, but there is the addition of inhibitory feedback connections from the response
Figure 1.13 This Venn diagram shows the connectivity scheme for a simple perceptron. Each R unit receives excitatory connections from a group of units in the association area that is called the source set of the R unit. Notice that some A units are in the source set for both R units.
layer to the association layer, and of inhibitory connections between R units. The entire connectivity scheme is depicted in the form of a Venn diagram in Figure 1.13 for a simple perceptron with two R units. This drawing shows that each R unit inhibits the A units in the complement to its own source set. Furthermore, each R unit inhibits the other. These factors aid in the establishment of a single, winning R unit for each stimulus pattern appearing on the retina. The R units respond in much the same way as do the A units: If the sum of their inputs exceeds a threshold, they give an output value of +1; otherwise, the output is -1. An alternative feedback mechanism would connect excitatory feedback connections from each R unit to that R unit's respective source set in the association layer.
A system such as the one just described can be used to classify patterns appearing on the retina into categories, according to the number of response units in the system. Patterns that are sufficiently similar should excite the same R unit. Thus, the problem is one of separability: Is it possible to construct a perceptron such that it can successfully distinguish between different pattern classes? The answer is "yes," but with certain conditions that we shall explore later. The perceptron was a learning device. In its initial configuration, the perceptron was incapable of distinguishing the patterns of interest; through a training process, however, it could learn this capability. In essence, training involved
Introduction to ANS Technology
24
a reinforcement process whereby the output of A units was either increased or decreased depending on whether or not the A units contributed to the correct response of the perceptron for a given pattern. A pattern was applied to the retina, and the stimulus was propagated through the layers until a response unit was activated. If the correct response unit was active, the output of the contributing A units was increased. If the incorrect R unit was active, the output of the contributing A units was decreased. Using such a scheme, Rosenblatt was able to show that the perceptron could classify patterns successfully in what he termed a differentiated environment, where each class consisted of patterns that were in some sense similar to
one another. The perceptron was also able to respond consistently to random patterns, but its accuracy diminished as the number of patterns that it attempted to learn increased.
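The reinforcement scheme just described can be illustrated with the modern weight-update formulation of perceptron training, which is a simplification of Rosenblatt's scheme of reinforcing A-unit outputs. The Python sketch below is our illustration; all names and the toy data are assumptions, not material from the text.

```python
def train_perceptron(samples, n, epochs=50, lr=1.0):
    """Train a single R unit with threshold 0 on (x, target) pairs, target +/-1.

    On each mistake the weights are reinforced toward the correct response,
    mirroring Rosenblatt's increase/decrease of contributing unit outputs."""
    w = [0.0] * n
    for _ in range(epochs):
        for x, t in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
            if out != t:                     # wrong response: reinforce
                w = [wi + lr * t * xi for wi, xi in zip(w, x)]
    return w

# A linearly separable toy problem: class is the sign of the first component.
data = [([1, 0.5], 1), ([0.7, -1], 1), ([-1, 0.3], -1), ([-0.5, -0.2], -1)]
w = train_perceptron(data, n=2)
```

Because this toy problem is linearly separable, the convergence theorem discussed next guarantees that the loop stops making corrections after finitely many training cycles.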
Rosenblatt's work resulted in the proof of an important result known as the perceptron convergence theorem. The theorem is proved for a perceptron with one R unit that is learning to differentiate patterns of two distinct classes. It states, in essence, that, if the classification can be learned by the perceptron, then the procedure we have described guarantees that it will be learned in a finite number of training cycles.

Unfortunately, perceptrons caused a fair amount of controversy at the time they were described. Unrealistic expectations and exaggerated claims no doubt played a part in this controversy. The end result was that the field of artificial neural networks was almost entirely abandoned, except by a few die-hard researchers. We hinted at one of the major problems with perceptrons when we suggested that there were conditions attached to the successful operation of the perceptron. In the next section, we explore and evaluate these considerations.

Exercise 1.4: Consider a perceptron with one R unit and N_a association units, a_μ, which is attempting to learn to differentiate patterns, S_i, each of which falls into one of two categories. For one category, the R unit gives an output of +1; for the other, it gives an output of -1. Let w_μ be the output of the μth A unit. Further, let ρ_i be ±1, depending on the class of S_i, and let ε_μi be 1 if a_μ is in the source set for S_i, and 0 otherwise. Show that the successful classification of patterns S_i requires that the following condition be satisfied:

    ρ_i ( Σ_{μ=1}^{N_a} ε_μi w_μ - θ ) > 0

where θ is the threshold value of the R unit.
1.2.4
The Perceptron: Part 2
In 1969, a book appeared that some people consider to have sounded the death knell for neural networks. The book was aptly entitled Perceptrons: An Introduction to Computational Geometry and was written by Marvin Minsky and
Figure 1.14 The simple perceptron structure is similar in structure to the general processing element shown in Figure 1.11. Note the addition of a threshold condition on the output. If the net input is greater than the threshold value, the output of the device is +1; otherwise, the output is 0.
Seymour Papert, both of MIT [26]. They presented an astute and detailed analysis of the perceptron in terms of its capabilities and limitations. Whether their intention was to defuse popular support for neural-network research remains a matter for debate. Nevertheless, the analysis is as timely today as it was in 1969, and many of the conclusions and concerns raised continue to be valid.

In particular, one of the points made in the previous section—a point treated in detail in Minsky and Papert's book—is the idea that there are certain restrictions on the class of problems for which the perceptron is suitable. Perceptrons can differentiate patterns only if the patterns are linearly separable. The meaning of the term linearly separable should become clear shortly. Because many classification problems do not possess linearly separable classes, this condition places a severe restriction on the applicability of the perceptron.

Minsky and Papert departed from the probabilistic approach championed by Rosenblatt, and returned to the ideas of predicate calculus in their analysis of the perceptron. Their idealized perceptron appears in Figure 1.14. The set Φ = {φ_1, φ_2, ..., φ_n} is a set of predicates. In the predicates' simplest form, φ_i = 1 if the ith point of the retina is on, and φ_i = 0 otherwise. Each of the input predicates is weighted by a number from the set {α_1, α_2, ..., α_n}.

function compute_output (NET : float; ACT : integer) return float
{ACT selects the unit's output function; only the binary case is shown here}
begin
    if (NET >= 0.0)          {if the input is positive}
    then return (1.0)        {then return a binary true}
    else return (-1.0);      {else return a binary false}
end function;
2.5.3
Adapting the Adaline
Now that our simulator can forward propagate signal information, we turn our attention to the implementation of the learning algorithms. Here again we assume that the input signal pattern is placed in the appropriate array by an application-specific process. During training, however, we will need to know what the target output, d_k, is for every input vector, so that we can compute the error term for the Adaline. Recall that, during training, the LMS algorithm requires that the Adaline update its weights after every forward propagation for a new input pattern. We must also consider that the Adaline application may need to adapt the
2.5
Simulating the Adaline
Adaline while it is running. Based on these observations, there is no need to store or accumulate errors across all patterns within the training algorithm. Thus, we can design the training algorithm merely to adapt the weights for a single pattern. However, this design decision places on the application program the responsibility for determining when the Adaline has trained sufficiently. This approach is usually acceptable because of the advantages it offers over the implementation of a self-contained training loop. Specifically, it means that we can use the same training function to adapt the Adaline initially or while it is online. The generality of the algorithm is a particularly useful feature, in that the application program merely needs to detect a condition requiring adaptation. It can then sample the input that caused the error and generate the correct response "on the fly," provided we have some way of knowing that the error is increasing and can generate the correct desired values to accommodate retraining. These values, in turn, can then be input to the Adaline training algorithm, thus allowing adaptation at run time. Finally, it also reduces the housekeeping chores that must be performed by the simulator, since we will not need to maintain a list of expected outputs for all training patterns.

We must now define algorithms to compute the squared error term, ε²(t), and the approximation of the gradient of the error surface, and to update the connection weights of the Adaline. We can again simplify matters by combining the computation of the error and the update of the connection weights into one function, as there is no need to compute the former without performing the latter. We now present the algorithms to accomplish these functions:

function compute_error (A : Adaline; TARGET : float) return float
var tempi : float;          {scratch memory}
    temp2 : float;          {scratch memory}
    err   : float;          {error term for unit}
begin
    tempi = sum_inputs (A.inputs^.outs, A.output^.weights);
    temp2 = compute_output (tempi, A.output^.activation);
    err = TARGET - temp2;   {signed error; the sign is needed by the update}
    return (err);           {return error}
end function;

function update_weights (A : Adaline; ERR : float) return void
var grad : float;           {the gradient of the error}
    ins  : ^float[];        {pointer to inputs array}
    wts  : ^float[];        {pointer to weights array}
    i    : integer;         {iteration counter}
begin
    ins = A.inputs^.outs;              {locate start of input vector}
    wts = A.output^.weights;           {locate start of connections}
    for i = 1 to length(wts) do        {for all connections, do}
        grad = -2 * ERR * ins[i];      {approximate gradient of squared error}
        wts[i] = wts[i] - grad * A.mu; {update connection}
    end do;
end function;
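For concreteness, the same LMS step can be written in Python. We assume, as labeled in the comments, a signed error TARGET − output and a learning-rate constant μ; the function names are ours.

```python
def lms_update(weights, inputs, err, mu=0.1):
    """One LMS step: move each weight opposite the gradient of err**2.

    The gradient of err**2 with respect to weights[i] is approximated by
    -2 * err * inputs[i], so subtracting mu * grad increases the weight
    when err and inputs[i] have the same sign."""
    return [w - mu * (-2.0 * err * x) for w, x in zip(weights, inputs)]

w = [0.0, 0.0]
x = [1.0, -1.0]
target = 1.0
out = sum(wi * xi for wi, xi in zip(w, x))  # forward propagate: 0.0
w = lms_update(w, x, target - out)          # one step with err = 1.0
```

After the step the output moves toward the target, which is the behavior the repeated per-pattern updates rely on.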
2.5.4
Completing the Adaline Simulator
The algorithms we have just defined are sufficient to implement an Adaline simulator in both learning and operational modes. To offer a clean interface to any external program that must call our simulator to perform an Adaline function, we can combine the modules we have described into two higher-level functions. These functions will perform the two types of activities the Adaline must perform: forward_propagate and adapt_Adaline.

function forward_propagate (A : Adaline) return void
var tempi : float;          {scratch memory}
begin
    tempi = sum_inputs (A.inputs^.outs, A.output^.weights);
    A.output^.outs[1] = compute_output (tempi, A.output^.activation);
end function;
function adapt_Adaline (A : Adaline; TARGET : float) return float
var err : float;            {signed error for this pattern}
begin
    forward_propagate (A);              {apply input signal}
    err = compute_error (A, TARGET);    {compute error}
    update_weights (A, err);            {adapt Adaline}
    return (err);                       {caller repeats until error is small}
end function;
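A calling application might drive this interface with a loop such as the following Python sketch, where the Adaline is reduced to a bare weight list; the names mirror the pseudocode but are our own simplification.

```python
def adapt_adaline(w, x, target, mu=0.05):
    """Forward propagate, compute the signed error, update the weights in
    place, and return the error so the caller can decide when to stop."""
    out = sum(wi * xi for wi, xi in zip(w, x))
    err = target - out
    for i in range(len(w)):
        w[i] += 2.0 * mu * err * x[i]   # LMS weight update
    return err

# The application trains until the error for this pattern is small.
w = [0.0, 0.0]
x, target = [1.0, 2.0], 3.0
err = adapt_adaline(w, x, target)
while abs(err) > 1e-6:
    err = adapt_adaline(w, x, target)
```

Note that the stopping test lives in the caller, not in the training routine, which is exactly the design decision discussed above.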
2.5.5
Madaline Simulator Implementation
As we have discussed earlier, the Madaline network is simply a collection of binary Adaline units, connected together in a layered structure. However, even though they share the same type of processing unit, the learning strategies implemented for the Madaline are significantly different, as described in Section 2.5.2.
Using that discussion as a guide, along with the discussion of the data structures needed, we leave the algorithm development for the Madaline network to you as an exercise. In this regard, you should note that the layered structure of the Madaline lends itself directly to our simulator data structures. As illustrated in Figure 2.24, we can implement a layer of Adaline units as easily as we created a single Adaline. The major differences here will be the length of the outs arrays in the layer records (since there will be more than one Adaline output per layer), and the length and number of connection arrays (there will be one weights array for each Adaline in the layer, and the weight_ptr array will be extended by one slot for each new weights array). Similarly, there will be more layer records as the depth of the Madaline increases and, for each layer, there will be a corresponding increase in the number of outs, weights, and weight_ptr arrays. Based on these observations, one fact that becomes immediately apparent is the combinatorial growth of both the memory consumed and the computer time required to support a linear growth in network size. This relationship between computer resources and model sizing is true not only for the Madaline, but for all ANS models we will study. It is for these reasons that we have stressed optimization in data structures.
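To make the layered structure concrete before you write the simulator routines, here is a Python sketch of a Madaline forward pass. The layer representation (one list of weight vectors per layer, each with a trailing bias weight) and the hand-set XOR weights are our illustrative assumptions, not the book's data structures.

```python
def binary_adaline(x, w):
    """Binary Adaline: threshold the inner product to +1 or -1."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 if net >= 0.0 else -1.0

def madaline_forward(x, layers):
    """Each layer is a list of weight vectors, one per Adaline in the layer.
    A constant bias input of 1.0 is appended at every layer; the outputs of
    one layer are the inputs to the next."""
    for layer_weights in layers:
        x = [binary_adaline(x + [1.0], w) for w in layer_weights]
    return x

# Two inputs -> two hidden Adalines -> one output Adaline.  The weights
# (last entry of each vector is the bias) happen to realize XOR on +/-1
# inputs: hidden units detect "x AND NOT y" and "NOT x AND y", the output
# unit ORs them together.
layers = [[[1.0, -1.0, -1.0], [-1.0, 1.0, -1.0]],   # hidden layer
          [[1.0, 1.0, 1.0]]]                         # output layer
out = madaline_forward([1.0, -1.0], layers)
```

Reading off the nested lists, you can see the growth the text warns about: every added unit adds a weights array, and every added layer adds a full set of outs and weights structures.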
Figure 2.24 Madaline data structures are shown.
Adaline and Madaline
Programming Exercises

2.1. Extend the Adaline simulator to include the bias unit, θ, as described in the text.

2.2. Extend the simulator to implement a three-layer Madaline using the algorithms discussed in Section 2.3.2. Be sure to use the binary Adaline type. Test the operation of your simulator by training it to solve the XOR problem described in the text.

2.3. We have indicated that the network stability term, μ, can greatly affect the ability of the Adaline to converge on a solution. Using four different values for μ of your own choosing, train an Adaline to eliminate noise from an input sinusoid ranging from 0 to 2π (one way to do this is to use a scaled random-number generator to provide the noise). Graph the curve of training iterations versus μ.
Suggested Readings

The authoritative text by Widrow and Stearns is the standard reference for the material contained in this chapter [9]. The original delta-rule derivation is contained in a 1960 paper by Widrow and Hoff [6], which is also reprinted in the collection edited by Anderson and Rosenfeld [1].
Bibliography

[1] James A. Anderson and Edward Rosenfeld, editors. Neurocomputing: Foundations of Research. MIT Press, Cambridge, MA, 1988.

[2] David Andes, Bernard Widrow, Michael Lehr, and Eric Wan. MRIII: A robust algorithm for training analog neural networks. In Proceedings of the International Joint Conference on Neural Networks, pages I-533–I-536, January 1990.

[3] Richard W. Hamming. Digital Filters. Prentice-Hall, Englewood Cliffs, NJ, 1983.

[4] Wilfred Kaplan. Advanced Calculus, 3rd edition. Addison-Wesley, Reading, MA, 1984.

[5] Alan V. Oppenheim and Ronald W. Schafer. Digital Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1975.

[6] Bernard Widrow and Marcian E. Hoff. Adaptive switching circuits. In 1960 IRE WESCON Convention Record, New York, pages 96–104, 1960. IRE.

[7] Bernard Widrow and Rodney Winter. Neural nets for adaptive filtering and adaptive pattern recognition. Computer, 21(3):25–39, March 1988.
[8] Rodney Winter and Bernard Widrow. MADALINE RULE II: A training algorithm for neural networks. In Proceedings of the IEEE Second International Conference on Neural Networks, San Diego, CA, 1:401–408, July 1988.

[9] Bernard Widrow and Samuel D. Stearns. Adaptive Signal Processing. Signal Processing Series. Prentice-Hall, Englewood Cliffs, NJ, 1985.
Backpropagation
There are many potential computer applications that are difficult to implement because many problems are unsuited to solution by a sequential process. Applications that must perform some complex data translation, yet have no
predefined mapping function to describe the translation process, or those that must provide a "best guess" as output when presented with noisy input data are
but two examples of problems of this type. An ANS that we have found to be useful in addressing problems requiring recognition of complex patterns and performing nontrivial mapping functions is the backpropagation network (BPN), formalized first by Werbos [11], and later by Parker [8] and by Rumelhart and McClelland [7]. This network, illustrated generically in Figure 3.1, is designed to operate as a multilayer, feedforward network, using the supervised mode of learning.

The chapter begins with a discussion of an example of a problem—mapping a character image to ASCII—which appears simple, but can quickly overwhelm traditional approaches. Then, we look at how the backpropagation network operates to solve such a problem. Following that discussion is a detailed derivation of the equations that govern the learning process in the backpropagation network. From there, we describe some practical applications of the BPN as described in the literature. The chapter concludes with details of the BPN software simulator within the context of the general design given in Chapter 1.
3.1
THE BACKPROPAGATION NETWORK
To illustrate some problems that often arise when we are attempting to automate complex pattern-recognition applications, let us consider the design of a computer program that must translate a 5 × 7 matrix of binary numbers representing the bit-mapped pixel image of an alphanumeric character to its equivalent eight-bit ASCII code. This basic problem, pictured in Figure 3.2, appears to be relatively trivial at first glance. Since there is no obvious mathematical function
Figure 3.1 The general backpropagation network architecture is shown. (Input is applied in parallel to the bottom layer, output is read in parallel from the top layer, and each layer includes a bias unit whose output is always 1.)
that will perform the desired translation, and because it would undoubtedly take too much time (both human and computer time) to perform a pixel-by-pixel correlation, the best algorithmic solution would be to use a lookup table.

The lookup table needed to solve this problem would be a one-dimensional linear array of ordered pairs, each taking the form:

record AELEMENT =
    pattern : long integer;
    ascii   : byte;
end record;
Figure 3.2 Each character image is mapped to its corresponding ASCII code. (In the example, the 35-bit pixel pattern, read as a binary number, is 0951FC631_16, which is mapped to 65_10, the ASCII code for "A".)
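The row-concatenation that produces the 35-bit code can be sketched as follows; the bitmap shown is a hypothetical rendering of the letter "A", not copied from the figure.

```python
def pack_image(rows):
    """Concatenate seven 5-bit rows (lists of 0/1 pixels) into one
    35-bit integer, most significant row first."""
    value = 0
    for row in rows:
        for pixel in row:
            value = (value << 1) | pixel
    return value

# A hypothetical 5x7 bitmap of the letter 'A' (illustrative, not the figure's).
letter_a = [[0, 0, 1, 0, 0],
            [0, 1, 0, 1, 0],
            [1, 0, 0, 0, 1],
            [1, 1, 1, 1, 1],
            [1, 0, 0, 0, 1],
            [1, 0, 0, 0, 1],
            [1, 0, 0, 0, 1]]
key = pack_image(letter_a)   # the 35-bit lookup key, printable as hex
```

The resulting integer is exactly the `pattern` field of an AELEMENT record, so it can serve directly as the search key for the table lookup described next.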
The first is the numeric equivalent of the bit-pattern code, which we generate by moving the seven rows of the matrix to a single row and considering the result to be a 35-bit binary number. The second is the ASCII code associated with the character. The array would contain exactly the same number of ordered pairs as there were characters to convert. The algorithm needed to perform the conversion process would take the following form:

function TRANSLATE (INPUT : long integer; LUT : ^AELEMENT[]) return ascii;
{performs pixel-matrix to ASCII character conversion}
var TABLE : ^AELEMENT[];
    found : boolean;
    i     : integer;
begin
    TABLE = LUT;                      {locate translation table}
    found = false;                    {translation not found yet}
    for i = 1 to length(TABLE) do     {for all items in table}
        if TABLE[i].pattern = INPUT
        then
            found = true;             {translation found, quit loop}
            exit;
        end;
    end;
    if found
    then return TABLE[i].ascii        {return ascii}
    else return 0;
end;
Although the lookup-table approach is reasonably fast and easy to maintain, there are many situations that occur in real systems that cannot be handled by this method. For example, consider the same pixel-image-to-ASCII conversion process in a more realistic environment. Let's suppose that our character image scanner alters a random pixel in the input image matrix due to noise when the image was read. This single pixel error would cause the lookup algorithm to return either a null or the wrong ASCII code, since the match between the input pattern and the target pattern must be exact.

Now consider the amount of additional software (and, hence, CPU time) that must be added to the lookup-table algorithm to improve the ability of the computer to "guess" at which character the noisy image should have been. Single-bit errors are fairly easy to find and correct. Multibit errors become increasingly difficult as the number of bit errors grows. To complicate matters even further, how could our software compensate for noise on the image if that noise happened to make an "O" look like a "Q", or an "E" look like an "F"? If our character-conversion system had to produce an accurate output all the time,
an inordinate amount of CPU time would be spent eliminating noise from the input pattern prior to attempting to translate it to ASCII.
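One conventional way to quantify such "guessing" is a nearest-match search by Hamming distance over the same lookup table; the sketch below is our illustration of that idea, not an algorithm from the text, and it still pays the sequential cost the next paragraphs discuss.

```python
def hamming(a, b):
    """Number of differing bits between two pattern codes."""
    return bin(a ^ b).count("1")

def translate_noisy(pattern, table):
    """Return the ASCII code whose stored pattern is nearest to the input
    in Hamming distance.  table maps 35-bit pattern -> ASCII code."""
    best = min(table, key=lambda stored: hamming(stored, pattern))
    return table[best]

table = {0b10101: 65, 0b01010: 66}    # tiny illustrative table
noisy = 0b10111                       # one bit flipped from 0b10101
code = translate_noisy(noisy, table)
```

This handles single-bit errors gracefully, but it scans every stored pattern for every input, and it still cannot tell whether a near-miss is noise or a genuinely different character—the weaknesses that motivate the parallel, learned approach described next.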
One solution to this dilemma is to take advantage of the parallel nature of neural networks to reduce the time required by a sequential processor to perform the mapping. In addition, system-development time can be reduced because the network can learn the proper algorithm without having someone deduce that algorithm in advance.
3.1.1
The Backpropagation Approach
Problems such as the noisy image-to-ASCII example are difficult to solve by computer due to the basic incompatibility between the machine and the problem. Most of today's computer systems have been designed to perform mathematical and logic functions at speeds that are incomprehensible to humans. Even the relatively unsophisticated desktop microcomputers commonplace today can perform hundreds of thousands of numeric comparisons or combinations every second.

However, as our previous example illustrated, mathematical prowess is not what is needed to recognize complex patterns in noisy environments. In fact, an algorithmic search of even a relatively small input space can prove to be time-consuming. The problem is the sequential nature of the computer itself; the "fetch-execute" cycle of the von Neumann architecture allows the machine to perform only one operation at a time. In most cases, the time required by the computer to perform each instruction is so short (typically about one-millionth of a second) that the aggregate time required for even a large program is insignificant to the human users. However, for applications that must search through a large input space, or attempt to correlate all possible permutations of a complex pattern, the time required by even a very fast machine can quickly become intolerable.

What we need is a new processing system that can examine all the pixels in the image in parallel. Ideally, such a system would not have to be programmed explicitly; rather, it would adapt itself to "learn" the relationship between a set of example patterns, and would be able to apply the same relationship to new input patterns. This system would be able to focus on the features of an arbitrary input that resemble other patterns seen previously, such as those pixels in the noisy image that "look" like a known character, and to ignore the noise. Fortunately, such a system exists; we call this system the backpropagation network (BPN).
3.1.2
BPN Operation

In Section 3.2, we will cover the details of the mechanics of backpropagation. A summary description of the network operation is appropriate here, to illustrate how the BPN can be used to solve complex pattern-matching problems. To begin with, the network learns a predefined set of input-output example pairs by using a two-phase propagate-adapt cycle. After an input pattern has been applied as a stimulus to the first layer of network units, it is propagated through each upper
layer until an output is generated. This output pattern is then compared to the desired output, and an error signal is computed for each output unit. The error signals are then transmitted backward from the output layer to each node in the intermediate layer that contributes directly to the output. However, each unit in the intermediate layer receives only a portion of the total error signal, based roughly on the relative contribution the unit made to the original output. This process repeats, layer by layer, until each node in the network has received an error signal that describes its relative contribution to the total error. Based on the error signal received, connection weights are then updated by each unit to cause the network to converge toward a state that allows all the training patterns to be encoded.
The significance of this process is that, as the network trains, the nodes in the intermediate layers organize themselves such that different nodes learn to recognize different features of the total input space. After training, when presented with an arbitrary input pattern that is noisy or incomplete, the units in the hidden layers of the network will respond with an active output if the new input contains a pattern that resembles the feature the individual units learned to recognize during training. Conversely, hidden-layer units have a tendency to inhibit their outputs if the input pattern does not contain the feature that they were trained to recognize.

As the signals propagate through the different layers in the network, the activity pattern present at each upper layer can be thought of as a pattern with features that can be recognized by units in the subsequent layer. The output pattern generated can be thought of as a feature map that provides an indication of the presence or absence of many different feature combinations at the input. The total effect of this behavior is that the BPN provides an effective means of allowing a computer system to examine data patterns that may be incomplete or noisy, and to recognize subtle patterns from the partial input.

Several researchers have shown that during training, BPNs tend to develop internal relationships between nodes so as to organize the training data into classes of patterns [5]. This tendency can be extrapolated to the hypothesis that all hidden-layer units in the BPN are somehow associated with specific features of the input pattern as a result of training. Exactly what the association is may or may not be evident to the human observer. What is important is that the network has found an internal representation that enables it to generate the desired outputs when given the training inputs. This same internal representation can be applied to inputs that were not used during training.
The BPN will classify these previously unseen inputs according to the features they share with the training examples.
3.2
THE GENERALIZED DELTA RULE
In this section, we present the formal mathematical description of BPN operation. We shall present a detailed derivation of the generalized delta rule (GDR), which is the learning algorithm for the network.
Figure 3.3 serves as the reference for most of the discussion. The BPN is a layered, feedforward network that is fully interconnected by layers. Thus, there are no feedback connections and no connections that bypass one layer to go directly to a later layer. Although only three layers are used in the discussion, more than one hidden layer is permissible. A neural network is called a mapping network if it is able to compute
some functional relationship between its input and its output. For example, if the input to a network is the value of an angle, and the output is the cosine of
that angle, the network performs the mapping θ → cos(θ). For such a simple function, we do not need a neural network; however, we might want to perform a complicated mapping where we do not know how to describe the functional relationship in advance, but we do know of examples of the correct mapping.
Figure 3.3 The three-layer BPN architecture follows closely the general network description given in Chapter 1. The bias weights, θ_j^h and θ_k^o, and the bias units are optional. The bias units provide a fictitious input value of 1 on a connection to the bias weight. We can then treat the bias weight (or simply, bias) like any other weight: It contributes to the net-input value to the unit, and it participates in the learning process like any other weight.
In this situation, the power of a neural network to discover its own algorithms is extremely useful. Suppose we have a set of P vector-pairs, (x_1, y_1), (x_2, y_2), ..., (x_P, y_P), which are examples of a functional mapping y = φ(x) : x ∈ R^N, y ∈ R^M. We want to train the network so that it will learn an approximation o = y' = φ'(x). We shall derive a method of doing this training that usually works, provided the training-vector pairs have been chosen properly and there is a sufficient number of them. (Definitions of properly and sufficient will be given
in Section 3.3.) Remember that learning in a neural network means finding an appropriate set of weights. The learning technique that we describe here resembles the problem of finding the equation of a line that best fits a number of known points. Moreover, it is a generalization of the LMS rule that we discussed in Chapter 2. For a linefitting problem, we would probably use a leastsquares approximation. Because the relationship we are trying to map is likely to be nonlinear, as well as multidimensional, we employ an iterative version of the simple leastsquares method, called a steepestdescent technique. To begin, let's review the equations for information processing in the threelayer network in Figure 3.3. An input vector, xp = (xp\,xp2,...,xpN)t, is applied to the input layer of the network. The input units distribute the values to the hiddenlayer units. The net input to the jth hidden unit is (3.1)
where w^h_ji is the weight on the connection from the ith input unit, and θ^h_j is the bias term discussed in Chapter 2. The "h" superscript refers to quantities on the hidden layer. Assume that the activation of this node is equal to the net input; then, the output of this node is

    i_pj = f^h_j(net^h_pj)        (3.2)
The equations for the output nodes are

    net^o_pk = Σ_{j=1}^L w^o_kj i_pj + θ^o_k        (3.3)

    o_pk = f^o_k(net^o_pk)        (3.4)
where the "o" superscript refers to quantities on the output layer. The initial set of weight values represents a first guess as to the proper weights for the problem. Unlike some methods, the technique we employ here does not depend on making a good first guess. There are guidelines for selecting the initial weights, however, and we shall discuss them in Section 3.3. The basic procedure for training the network is embodied in the following description:
1. Apply an input vector to the network and calculate the corresponding output values.
2. Compare the actual outputs with the correct outputs and determine a measure of the error.
3. Determine in which direction (+ or −) to change each weight in order to reduce the error.
4. Determine the amount by which to change each weight.
5. Apply the corrections to the weights.
6. Repeat items 1 through 5 with all the training vectors until the error for all vectors in the training set is reduced to an acceptable value.
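The six steps above can be sketched in a few lines of Python. This is a minimal illustration using a single linear unit trained by the delta rule (the learning rate `lr`, the tolerance, and the toy target function are our own choices, not values from the text):

```python
import random

random.seed(0)

# Training pairs for the mapping y = 2*x1 - 3*x2, which a linear unit can represent.
patterns = [((x1, x2), 2*x1 - 3*x2)
            for x1 in (-1.0, 0.0, 1.0) for x2 in (-1.0, 0.0, 1.0)]

w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # initial guess at the weights
lr = 0.1

for epoch in range(1000):
    total_error = 0.0
    for x, d in patterns:
        y = sum(wi * xi for wi, xi in zip(w, x))  # step 1: forward pass
        err = d - y                               # step 2: measure the error
        for i in range(2):                        # steps 3-5: signed correction
            w[i] += lr * err * x[i]
        total_error += err * err
    if total_error < 1e-8:                        # step 6: repeat until acceptable
        break

print(w)  # approaches [2.0, -3.0]
```

The same loop structure carries over to the multilayer case; only the weight-change rule in steps 3 through 5 becomes more elaborate.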
In Chapter 2, we described an iterative weight-change law, called the LMS rule or delta rule, for a network with no hidden units and linear output units:

    w_i(t + 1) = w_i(t) + μ ε_k x_ki        (3.5)

where μ is a positive constant, x_ki is the ith component of the kth training vector, and ε_k is the difference between the actual output and the correct value, ε_k = (d_k − y_k). Equation (3.5) is just the component form of Eq. (2.15).
A similar equation results when the network has more than two layers, or when the output functions are nonlinear. We shall derive the results explicitly in the next sections.
3.2.1 Update of Output-Layer Weights
In the derivation of the delta rule, the error for the kth input vector is ε_k = (d_k − y_k), where the desired output is d_k and the actual output is y_k. In this chapter, we adopt a slightly different notation that is somewhat inconsistent with the notation we used in Chapter 2. Because there are multiple units in a layer, a single error value, such as ε_k, will not suffice for the BPN. We shall define the error at a single output unit to be δ_pk = (y_pk − o_pk), where the subscript "p" refers to the pth training vector, and "k" refers to the kth output unit. In this case, y_pk is the desired output value, and o_pk is the actual output from the kth unit. The error that is minimized by the GDR is the sum of the squares of the errors for all output units:
    E_p = (1/2) Σ_k δ²_pk        (3.6)

The factor of 1/2 in Eq. (3.6) is there for convenience in calculating derivatives later. Since an arbitrary constant will appear in the final result, the presence of this factor does not invalidate the derivation. To determine the direction in which to change the weights, we calculate the negative of the gradient of E_p, ∇E_p, with respect to the weights, w^o_kj. Then,
we can adjust the values of the weights such that the total error is reduced. It is often useful to think of E_p as a surface in weight space. Figure 3.4 shows a simple example where the network has only two weights.

To keep things simple, we consider each component of ∇E_p separately. From Eq. (3.6) and the definition of δ_pk,

    ∂E_p/∂w^o_kj = −(y_pk − o_pk) ∂o_pk/∂w^o_kj        (3.7)
and

    ∂E_p/∂w^o_kj = −(y_pk − o_pk) (∂f^o_k/∂(net^o_pk)) (∂(net^o_pk)/∂w^o_kj)        (3.8)

Figure 3.4  This hypothetical surface in weight space hints at the complexity of these surfaces in comparison with the relatively simple hyperparaboloid of the Adaline (see Chapter 2). The gradient, ∇E_p, at point z appears along with the negative of the gradient. Weight changes should occur in the direction of the negative gradient, which is the direction of the steepest descent of the surface at the point z. Furthermore, weight changes should be made iteratively until E_p reaches the minimum point.
where we have used Eq. (3.4) for the output value, o_pk, and the chain rule for partial derivatives. For the moment, we shall not try to evaluate the derivative of f^o_k, but instead will write it simply as f^o'_k(net^o_pk). The last factor in Eq. (3.8) is

    ∂(net^o_pk)/∂w^o_kj = i_pj        (3.9)

Combining Eqs. (3.8) and (3.9), we have for the negative gradient
    −∂E_p/∂w^o_kj = (y_pk − o_pk) f^o'_k(net^o_pk) i_pj        (3.10)
As far as the magnitude of the weight change is concerned, we take it to be proportional to the negative gradient. Thus, the weights on the output layer are updated according to

    w^o_kj(t + 1) = w^o_kj(t) + Δ_p w^o_kj(t)        (3.11)

where

    Δ_p w^o_kj(t) = η (y_pk − o_pk) f^o'_k(net^o_pk) i_pj        (3.12)
The factor η is called the learning-rate parameter. We shall discuss the value of η in Section 3.3. For now, it is sufficient to note that it is positive and is usually less than 1.

Let's go back to look at the function f^o_k. First, notice the requirement that the function f^o_k be differentiable. This requirement eliminates the possibility of using a linear threshold unit such as we described in Chapter 2, since the output function for such a unit is not differentiable at the threshold value. There are two forms of the output function that are of interest here:

    f^o_k(net^o_pk) = net^o_pk
    f^o_k(net^o_pk) = (1 + e^(−net^o_pk))^(−1)

The first function defines the linear output unit. The latter function is called a sigmoid, or logistic function; it is illustrated in Figure 3.5. The choice of output function depends on how you choose to represent the output data. For example, if you want the output units to be binary, you use a sigmoid output function, since the sigmoid is output-limiting and quasi-bistable, but is also differentiable. In other cases, either a linear or a sigmoid output function is appropriate. In the first case, f^o'_k = 1; in the second case, f^o'_k = f^o_k(1 − f^o_k) = o_pk(1 − o_pk). For these two cases, we have

    w^o_kj(t + 1) = w^o_kj(t) + η (y_pk − o_pk) i_pj        (3.13)
Figure 3.5  This graph shows the characteristic S-shape of the sigmoid function.
for the linear output, and

    w^o_kj(t + 1) = w^o_kj(t) + η (y_pk − o_pk) o_pk(1 − o_pk) i_pj        (3.14)
for the sigmoidal output. We want to summarize the weight-update equations by defining a quantity

    δ^o_pk = (y_pk − o_pk) f^o'_k(net^o_pk)        (3.15)
We can then write the weight-update equation as

    w^o_kj(t + 1) = w^o_kj(t) + η δ^o_pk i_pj        (3.16)
regardless of the functional form of the output function, f^o_k.

We wish to make a comment regarding the relationship between the gradient-descent method described here and the least-squares technique. If we were trying to make the generalized delta rule entirely analogous to a least-squares method, we would not actually change any of the weight values until all of the training patterns had been presented to the network once. We would simply accumulate the changes as each pattern was processed, sum them, and make one update to the weights. We would then repeat the process until the error was acceptably low. The error that this process minimizes is

    E = Σ_{p=1}^P E_p        (3.17)

where P is the number of patterns in the training set. In practice, we have found little advantage to this strict adherence to analogy with the least-squares method. Moreover, you must store a large amount of information to use this method. We recommend that you perform weight updates as each training pattern is processed.
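The distinction between the two update regimes can be sketched side by side. The one-weight toy model and the learning rate below are our own illustrative assumptions:

```python
def online_epoch(w, patterns, lr):
    # Update after every pattern, as recommended in the text.
    for x, d in patterns:
        err = d - w * x
        w += lr * err * x
    return w

def batch_epoch(w, patterns, lr):
    # Accumulate the corrections and apply one summed update per epoch,
    # the strict analog of the least-squares method (Eq. 3.17).
    delta = sum(lr * (d - w * x) * x for x, d in patterns)
    return w + delta

patterns = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]  # consistent with w = 2
w_on, w_ba = 0.0, 0.0
for _ in range(200):
    w_on = online_epoch(w_on, patterns, lr=0.1)
    w_ba = batch_epoch(w_ba, patterns, lr=0.1)
print(w_on, w_ba)  # both approach 2.0
```

For this consistent data set both versions reach the same answer; the per-pattern version simply avoids storing the accumulated corrections.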
Exercise 3.1: A certain network has output nodes called quadratic neurons. The net input to such a neuron is

    net_k = Σ_j w_kj (i_j − v_kj)²

The output function is sigmoidal. Both w_kj and v_kj are weights, and i_j is the jth input value. Assume that the w weights and v weights are independent.

a. Determine the weight-update equations for both types of weights.
b. What is the significance of this type of node? Hint: Consider a single unit of the type described here, having two inputs and a linear-threshold output function. With what geometric figure does this unit partition the input space?

(This exercise was suggested by Gary McIntire, Loral Space Information Systems, who derived and implemented this network.)
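The convenient identity f' = f(1 − f), used for the sigmoid in Eq. (3.14), is easy to verify numerically. The helper names below are our own:

```python
import math

def sigmoid(net):
    # Logistic output function: f(net) = 1 / (1 + e^(-net))
    return 1.0 / (1.0 + math.exp(-net))

def sigmoid_prime(net):
    # Derivative expressed through the output itself: f' = f * (1 - f)
    f = sigmoid(net)
    return f * (1.0 - f)

# Compare against a central finite-difference estimate at a few points.
h = 1e-6
for net in (-2.0, 0.0, 1.5):
    numeric = (sigmoid(net + h) - sigmoid(net - h)) / (2 * h)
    assert abs(numeric - sigmoid_prime(net)) < 1e-6
```

This identity is why the sigmoid's derivative can be written directly in terms of the unit's output, o_pk(1 − o_pk), with no extra computation during training.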
3.2.2 Updates of Hidden-Layer Weights
We would like to repeat for the hidden layer the same type of calculation as we did for the output layer. A problem arises when we try to determine a measure of the error of the outputs of the hidden-layer units. We know what the actual output is, but we have no way of knowing in advance what the correct output should be for these units. Intuitively, the total error, E_p, must somehow be related to the output values on the hidden layer. We can verify our intuition by going back to Eq. (3.6):
    E_p = (1/2) Σ_k (y_pk − o_pk)²
We know that o_pk depends on the weights on the hidden layer through Eqs. (3.1) and (3.2). We can exploit this fact to calculate the gradient of E_p with respect to the hidden-layer weights.
    ∂E_p/∂w^h_ji = (1/2) Σ_k ∂(y_pk − o_pk)²/∂w^h_ji
                 = −Σ_k (y_pk − o_pk) (∂o_pk/∂(net^o_pk)) (∂(net^o_pk)/∂i_pj) (∂i_pj/∂(net^h_pj)) (∂(net^h_pj)/∂w^h_ji)        (3.18)
Each of the factors in Eq. (3.18) can be calculated explicitly from previous equations. The result is

    ∂E_p/∂w^h_ji = −Σ_k (y_pk − o_pk) f^o'_k(net^o_pk) w^o_kj f^h'_j(net^h_pj) x_pi        (3.19)
Exercise 3.2: Verify the steps between Eqs. (3.18) and (3.19).

We update the hidden-layer weights in proportion to the negative of Eq. (3.19):

    Δ_p w^h_ji = η f^h'_j(net^h_pj) x_pi Σ_k (y_pk − o_pk) f^o'_k(net^o_pk) w^o_kj        (3.20)

where η is again the learning-rate parameter. Notice that every weight update on the hidden layer depends on all the error terms, δ_pk, on the output layer. This result is where the notion of backpropagation arises. The known errors on the output layer are propagated back to the hidden layer to determine the appropriate weight changes on that layer. By defining a hidden-layer error term
    δ^h_pj = f^h'_j(net^h_pj) Σ_k δ^o_pk w^o_kj

The BAM and the Hopfield Memory

We associate the face of a friend with that friend's name, or a name with a telephone number. Many devices exhibit associative-memory characteristics. For example, the memory bank in a computer is a type of associative memory: it associates addresses with data. An object-oriented program (OOP) with inheritance can exhibit another type of associative memory. Given a datum, the OOP associates other data with it, through the OOP's inheritance network. This type of memory is called a content-addressable memory (CAM). The CAM associates data with addresses of other data; it does the opposite of the computer memory bank. The Hopfield memory, in particular, played an important role in the current resurgence of interest in the field of ANS. Probably as much as any other single factor, the efforts of John Hopfield, of the California Institute of Technology, have had a profound, stimulating effect on the scientific community in the area of ANS. Before describing the BAM and the Hopfield memory, we shall present a few definitions in the next section.
4.1 ASSOCIATIVE-MEMORY DEFINITIONS
In this section, we review some basic definitions and concepts related to associative memories. We shall begin with a discussion of Hamming distance, not because the concept is likely to be new to you, but because we want to relate it to the more familiar Euclidean distance, in order to make the notion of Hamming distance more plausible. Then we shall discuss a simple associative memory called the linear associator.
4.1.1 Hamming Distance
Figure 4.1 shows a set of points that form the three-dimensional Hamming cube. In general, Hamming space can be defined by the expression

    H^n = {x = (x_1, x_2, ..., x_n)^t ∈ R^n : x_i ∈ {+1, −1}}        (4.1)
In words, n-dimensional Hamming space is the set of n-dimensional vectors, with each component an element of the real numbers, R, subject to the condition that each component is restricted to the values ±1. This space has 2^n points, all equidistant from the origin of Euclidean space.

Many neural-network models use the concept of the distance between two vectors. There are, however, many different measures of distance. In this section, we shall define the distance measure known as Hamming distance and shall show its relationship to the familiar Euclidean distance between points. In later chapters, we shall explore other distance measures.

Let x = (x_1, x_2, ..., x_n)^t and y = (y_1, y_2, ..., y_n)^t be two vectors in n-dimensional Euclidean space, subject to the restriction that x_i, y_i ∈ {±1}, so that x and y are also vectors in n-dimensional Hamming space. The Euclidean distance between the two vector endpoints is

    d = √((x_1 − y_1)² + (x_2 − y_2)² + ... + (x_n − y_n)²)

Since x_i, y_i ∈ {±1}, (x_i − y_i)² ∈ {0, 4}. Thus, the Euclidean distance can be written as

    d = √(4 × (number of mismatched components of x and y))
Figure 4.1  This figure shows the Hamming cube in three-dimensional space. The entire three-dimensional Hamming space, H³, comprises the eight points having coordinate values of either −1 or +1. In this three-dimensional space, no other points exist.
We define the Hamming distance as

    h = number of mismatched components of x and y        (4.2)

or the number of bits that are different between x and y.¹ The Hamming distance is related to the Euclidean distance by the equation

    d = 2√h        (4.3)

or

    h = d²/4        (4.4)

¹Even though the components of the vectors are ±1, rather than 0 and 1, we shall use the term bits to represent one of the vector components. We shall refer to vectors having components of ±1 as being bipolar, rather than binary. We shall reserve the term binary for vectors whose components are 0 and 1.
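Equations (4.2) through (4.4) are easy to check in a few lines; the helper names here are our own:

```python
import math

def hamming(x, y):
    # Eq. (4.2): number of mismatched components of two bipolar vectors.
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

def euclidean(x, y):
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

x = (1, 1, 1, 1, 1)
y = (1, -1, 1, 1, 1)
h = hamming(x, y)            # one mismatch
d = euclidean(x, y)          # sqrt(4 * h) = 2
assert d == 2 * math.sqrt(h)       # Eq. (4.3)
assert h == round(d * d / 4)       # Eq. (4.4)
```

The pair of vectors used above is the same kind of one-bit-apart pair that appears in Exercise 4.1.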
We shall use the concept of Hamming distance a little later in our discussion of the BAM. In the next section, we shall take a look at the formal definition of the associative memory and the details of the linear-associator model.

Exercise 4.1: Determine the Euclidean distance between (1, 1, 1, 1, 1)^t and (1, −1, 1, 1, 1)^t. Use this result to determine the Hamming distance with Eq. (4.4).
4.1.2 The Linear Associator
Suppose we have L pairs of vectors, {(x_1, y_1), (x_2, y_2), ..., (x_L, y_L)}, with x_i ∈ R^n and y_i ∈ R^m. We call these vectors exemplars, because we will use them as examples of correct associations. We can distinguish three types of associative memories:

1. Heteroassociative memory: Implements a mapping, Φ, of x to y such that Φ(x_i) = y_i, and, if an arbitrary x is closer to x_i than to any other x_j, j = 1, ..., L, then Φ(x) = y_i. In this and the following definitions, closer means with respect to Hamming distance.

2. Interpolative associative memory: Implements a mapping, Φ, of x to y such that Φ(x_i) = y_i, but, if the input vector differs from one of the exemplars by the vector d, such that x = x_i + d, then the output of the memory also differs from one of the exemplars by some vector e: Φ(x) = Φ(x_i + d) = y_i + e.

3. Autoassociative memory: Assumes y_i = x_i and implements a mapping, Φ, of x to x such that Φ(x_i) = x_i, and, if some arbitrary x is closer to x_i than to any other x_j, j = 1, ..., L, then Φ(x) = x_i.
Building such a memory is not so difficult a task mathematically if we make the further restriction that the vectors, x_i, form an orthonormal set.² To build an interpolative associative memory, we define the function

    Φ(x) = (y_1 x_1^t + y_2 x_2^t + ... + y_L x_L^t) x        (4.5)

Suppose x_2 is the input vector. Because the x_i are orthonormal, x_i^t x_2 = δ_i2, where δ_ij = 1 if i = j, and δ_ij = 0 if i ≠ j.
All the δ_ij terms in the preceding expression vanish, except for δ_22, which is equal to 1. The result is perfect recall of y_2.
If the input vector is different from one of the exemplars, such that x = x_i + d, then the output is

    Φ(x) = Φ(x_i + d) = y_i + e

where

    e = (y_1 x_1^t + y_2 x_2^t + ... + y_L x_L^t) d

Note that there is nothing in the discussion of the linear associator that requires that the input or output vectors be members of Hamming space: The only requirement is that they be orthonormal. Furthermore, notice that there was no training involved in the definition of the linear associator. The function that mapped x into y was defined by the mathematical expression in Eq. (4.5). Most of the models we discuss in this chapter share this characteristic; that is, they are not trained in the sense that an Adaline or a backpropagation network is trained. In the next section, we take up the discussion of the BAM. This model utilizes the distributed-processing approach, discussed in the previous chapters, to implement an associative memory.
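The perfect-recall property of Eq. (4.5) can be demonstrated directly. The orthonormal pair of x vectors below is our own illustration:

```python
import math

def outer(y, x):
    # y x^t as a list-of-rows matrix.
    return [[yi * xj for xj in x] for yi in y]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

s = 1 / math.sqrt(2)
x1 = (s, s, 0.0, 0.0)    # orthonormal exemplar inputs: x1 . x2 = 0, |xi| = 1
x2 = (0.0, 0.0, s, s)
y1 = (1.0, -1.0)
y2 = (-1.0, 1.0)

# Phi(x) = (y1 x1^t + y2 x2^t) x   -- Eq. (4.5)
phi = [[a + b for a, b in zip(r1, r2)]
       for r1, r2 in zip(outer(y1, x1), outer(y2, x2))]

recalled = matvec(phi, x2)
print(recalled)  # recovers y2, i.e. about [-1.0, 1.0], up to rounding
```

Because x_1^t x_2 = 0, the y_1 term vanishes and only y_2 survives; feeding in x_2 + d instead would add the cross-talk term e = Φ(d).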
4.2 THE BAM
The BAM consists of two layers of processing elements that are fully interconnected between the layers. The units may, or may not, have feedback connections to themselves. The general case is illustrated in Figure 4.2.
4.2.1 BAM Architecture
As in other neural-network architectures, in the BAM architecture there are weights associated with the connections between processing elements. Unlike in many other architectures, these weights can be determined in advance if all of the training vectors can be identified. We can borrow the procedure from the linear-associator model to construct the weight matrix. Given L vector pairs that constitute the set of exemplars that we would like to store, we can construct the matrix:

    w = y_1 x_1^t + y_2 x_2^t + ... + y_L x_L^t        (4.6)
This equation gives the weights on the connections from the x layer to the y layer. For example, the value w_23 is the weight on the connection from the
Figure 4.2  The BAM shown here has n units on the x layer, and m units on the y layer. For convenience, we shall call the x vector the input vector, and call the y vector the output vector. In this network, x ∈ H^n, and y ∈ H^m. All connections between units are bidirectional, with weights at each end. Information passes back and forth from one layer to the other, through these connections. Feedback connections at each unit may not be present in all BAM architectures.
third unit on the x layer to the second unit on the y layer. To construct the weights for the x-layer units, we simply take the transpose of the weight matrix, w^t. We can make the BAM into an autoassociative memory by constructing the weight matrix as

    w = x_1 x_1^t + x_2 x_2^t + ... + x_L x_L^t

In this case, the weight matrix is square and symmetric.
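The construction of Eq. (4.6) is a sum of outer products. A small sketch (the two tiny exemplar pairs are our own, chosen only to keep the arithmetic visible):

```python
def bam_weights(pairs):
    # w = sum_i y_i x_i^t (Eq. 4.6); w has one row per y unit, one column per x unit.
    n = len(pairs[0][0])
    m = len(pairs[0][1])
    w = [[0] * n for _ in range(m)]
    for x, y in pairs:
        for k in range(m):
            for j in range(n):
                w[k][j] += y[k] * x[j]
    return w

pairs = [((1, -1, 1, -1), (1, 1)),
         ((1, 1, -1, -1), (-1, 1))]
w = bam_weights(pairs)
print(w)  # [[0, -2, 2, 0], [2, 0, 0, -2]]
```

Passing autoassociative pairs (x_i, x_i) to the same function produces the square, symmetric matrix described above.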
4.2.2 BAM Processing
Once the weight matrix has been constructed, the BAM can be used to recall information (e.g., a telephone number), when presented with some key information (a name corresponding to a particular telephone number). If the desired information is only partially known in advance or is noisy (a misspelled name such as "Simth"), the BAM may be able to complete the information (giving the proper spelling, "Smith," and the correct telephone number).
To recall information using the BAM, we perform the following steps:

1. Apply an initial vector pair, (x_0, y_0), to the processing elements of the BAM.
2. Propagate the information from the x layer to the y layer, and update the values on the y-layer units. We shall see shortly how this propagation is done.³
3. Propagate the updated y information back to the x layer, and update the units there.
4. Repeat steps 2 and 3 until there is no further change in the units on each layer.
This algorithm is what gives the BAM its bidirectional nature. The terms input and output refer to different quantities, depending on the current direction of the propagation. For example, in going from y to x, the y vector is considered as the input to the network, and the x vector is the output. The opposite is true when propagating from x to y. If all goes well, the final, stable state will recall one of the exemplars used to construct the weight matrix. Since, in this example, we assume we know something about the desired x vector, but perhaps know nothing about the associated y vector, we hope that the final output is the exemplar whose x_i vector is closest in Hamming distance to the original input vector, x_0. This scenario works well provided we have not overloaded the BAM with exemplars. If we try to put too much information in a given BAM, a phenomenon known as crosstalk occurs between exemplar patterns. Crosstalk occurs when exemplar patterns are too close to each other. The interaction between these patterns can result in the creation of spurious stable states. In that case, the BAM could stabilize on meaningless vectors. If we think in terms of a surface in weight space, as we did in Chapters 2 and 3, the spurious stable states correspond to minima that appear between the minima that correspond to the exemplars.
4.2.3 BAM Mathematics

The basic processing done by each unit of the BAM is similar to that done by the general processing element discussed in the first chapter. The units compute sums of products of the inputs and weights to determine a net-input value, net. On the y layer,

    net^y = w x        (4.7)

where net^y is the vector of net-input values on the y layer. In terms of the individual units, y_k,

    net^y_k = Σ_{j=1}^n w_kj x_j        (4.8)

³Although we consistently begin with the x-to-y propagation, you could begin in the other direction.
On the x layer,

    net^x = w^t y        (4.9)

    net^x_j = Σ_{k=1}^m w_kj y_k        (4.10)
The quantities n and m are the dimensions of the x and y layers, respectively. The output value for each processing element depends on the net-input value, and on the current output value on its layer. The new value of y at timestep t + 1, y(t + 1), is related to the value of y at timestep t, y(t), by

    y_k(t + 1) = +1      if net^y_k > 0
                 y_k(t)  if net^y_k = 0        (4.11)
                 −1      if net^y_k < 0

Similarly, x(t + 1) is related to x(t) by

    x_j(t + 1) = +1      if net^x_j > 0
                 x_j(t)  if net^x_j = 0        (4.12)
                 −1      if net^x_j < 0
Let's illustrate BAM processing with a specific example. Let

    x_1 = (1, −1, −1, 1, −1, 1, 1, −1, −1, 1)^t   and   y_1 = (1, −1, −1, −1, −1, 1)^t
    x_2 = (1, 1, 1, −1, −1, −1, 1, 1, −1, −1)^t   and   y_2 = (1, 1, 1, 1, −1, −1)^t
We have purposely made these vectors rather long to minimize the possibility of crosstalk. Hand calculation of the weight matrix is tedious when the vectors are long, but the weight matrix is fairly sparse. The weight matrix is calculated from Eq. (4.6). The result is
    w =
         2   0   0   0  -2   0   2   0  -2   0
         0   2   2  -2   0  -2   0   2   0  -2
         0   2   2  -2   0  -2   0   2   0  -2
         0   2   2  -2   0  -2   0   2   0  -2
        -2   0   0   0   2   0  -2   0   2   0
         0  -2  -2   2   0   2   0  -2   0   2
For our first trial, we choose an x vector with a Hamming distance of 1 from x_1: x_0 = (−1, −1, −1, 1, −1, 1, 1, −1, −1, 1)^t. This situation could represent noise on the input vector. The starting y_0 vector is one of the training vectors, y_2: y_0 = (1, 1, 1, 1, −1, −1)^t. (Note that, in a realistic problem, you may not have prior knowledge of the output vector. Use a random bipolar vector if necessary.) We will propagate first from x to y. The net inputs to the y units are net^y = (4, −12, −12, −12, −4, 12)^t. The new y vector is y_new = (1, −1, −1, −1, −1, 1)^t, which is also one of the training vectors. Propagating back to the x layer, we get x_new = (1, −1, −1, 1, −1, 1, 1, −1, −1, 1)^t. Further passes result in no change, so we are finished. The BAM successfully recalled the first training set.
Exercise 4.2: Repeat the calculation just shown, but begin with the ytox propagation. Is the result what you expected?
For our second example, we choose the following initial vectors:

    x_0 = (−1, 1, 1, −1, 1, 1, −1, 1, −1, 1)^t
    y_0 = (1, 1, −1, 1, 1, −1)^t

The Hamming distances of the x_0 vector from the training vectors are h(x_0, x_1) = 7 and h(x_0, x_2) = 5. For the y_0 vector, the values are h(y_0, y_1) = 4 and h(y_0, y_2) = 2. Based on these results, we might expect that the BAM would settle on the second exemplar as a final solution. We start again by propagating from x to y, and the new y vector is y_new = (−1, 1, 1, 1, 1, −1)^t. Propagating back from y to x, we get x_new = (−1, 1, 1, −1, 1, −1, −1, 1, 1, −1)^t. Further propagation does not change the results. If you examine these output vectors, you will notice that they do not match any of the exemplars. Furthermore, they are actually the complement of the first training set, (x_new, y_new) = (x_1^c, y_1^c), where the "c" superscript refers to the complement. This example illustrates a basic property of the BAM: If you encode an exemplar, (x, y), you also encode its complement, (x^c, y^c). The best way to familiarize yourself with the properties of a BAM is to work through many examples. Thus, we recommend the following exercises.
Exercise 4.3: Using the same weight matrix as in Exercise 4.2, experiment with several different input vectors to investigate the characteristics of the BAM. In particular, evaluate the difference between starting with xtoy propagation, and ytox propagation. Pick starting vectors that have various Hamming distances from the exemplar vectors. In addition, try adding more exemplars to the weight matrix. You can add more exemplars to the weight matrix by a simple additive process. How many exemplars can you add before crosstalk becomes a significant problem?
Exercise 4.4: Construct an autoassociative BAM using the following training vectors:

    x_1 = (1, −1, −1, 1, −1, 1)^t   and   x_2 = (1, 1, 1, −1, −1, −1)^t

Determine the output using x_0 = (1, 1, 1, 1, −1, 1)^t, which is a Hamming distance of two from each training vector. Try x_0 = (−1, 1, 1, −1, 1, −1)^t, which is a complement of one of the training vectors. Experiment with this network in accordance with the instructions in Exercise 4.3. In addition, try setting the diagonal elements of the weight matrix equal to zero. Does doing so have any effect on the operation of the BAM?
4.2.4 BAM Energy Function
In the previous two chapters, we discussed an iterative process for finding weight values that are appropriate for a particular application. During those discussions, each point in weight space had associated with it a certain error value. The learning process was an iterative attempt to find the weights that minimized the error. To gain an understanding of the process, we examined simple cases having two weights, so that each weight vector corresponded to a point on an error surface in three dimensions. The height of the surface at each point determined the error associated with that weight vector. To minimize the error, we began at some given starting point and moved along the surface until we reached the deepest valley on the surface. This minimum point corresponded to the weights that resulted in the smallest error value. Once these weights were found, no further changes were permitted and training was complete.

During the training process, the weights form a dynamical system. That is, the weights change as a function of time, and those changes can be represented as a set of coupled differential equations. For the BAM that we have been discussing in the last few sections, a slightly different situation occurs. The weights are calculated in advance, and are not part of a dynamical system. On the other hand, an unknown pattern presented to the BAM may require several passes before the network stabilizes on a final result. In this situation, the x and y vectors change as a function of time, and they form a dynamical system.

In both of the dynamical systems described, we are interested in several aspects of system behavior: Does a solution exist? If it does, will the system converge to it in a finite time? What is the solution? Up to now, we have been primarily concerned with the last of those three questions. We shall now look at the first two. For the simple examples discussed so far, the question of the existence of a solution is academic. We found solutions; therefore, they must exist. Nevertheless, we may have been simply lucky in our choice of problems. It is still a valid question to ask whether a BAM, or, for that matter, any other network, will always converge to a stable solution. The technique discussed here
is fairly easy to apply to the BAM. Unfortunately, many network architectures do not have convergence proofs. The lack of such a proof does not mean that
the network will not function properly, but there is no guarantee that it will converge for any given problem.
In the theory of dynamical systems, a theorem can be proved concerning the existence of stable states that uses the concept of a function called a Lyapunov function, or energy function. We shall present a nonrigorous version here, which is useful for our purposes: If a bounded function of the state variables of a dynamical system can be found, such that all state changes result in a decrease in the value of the function, then the system has a stable solution.⁴ This function is called a Lyapunov function, or energy function. In the case of the BAM, such a function exists. We shall call it the BAM energy function; it has the form

    E(x, y) = −y^t w x        (4.13)
or, in terms of components,

    E = −Σ_{i=1}^m Σ_{j=1}^n y_i w_ij x_j        (4.14)
We shall now state an important theorem about the BAM energy function that will help to answer our questions about the existence of stable solutions of the BAM processing equations. The theorem has three parts:

1. Any change in x or y during BAM processing results in a decrease in E.
2. E is bounded below by E_min = −Σ_ij |w_ij|.
3. When E changes, it must change by a finite amount.

Items 1 and 2 prove that E is a Lyapunov function, and that the dynamical system has a stable state. In particular, item 2 shows that E can decrease only to a certain value; it cannot continue down to negative infinity, so eventually the x and y vectors must stop changing. Item 3 prevents the possibility that changes in E might be infinitesimally small, resulting in an infinite amount of time spent before the minimum E is reached.

In essence, the weight matrix determines the contour of a surface, or landscape, with hills and valleys, much like the ones we have discussed in previous chapters. Figure 4.3 illustrates a cross-sectional view of such a surface. The analogy of the E function as an energy function results from an analysis of how the BAM operates. The initial state of the BAM is determined by the choice of the starting vectors, x and y. As the BAM processes the data, x and y change, resulting in movement of the energy over the landscape, which is guaranteed by the BAM energy theorem to be downward.

⁴See Hirsch and Smale [3] or Beltrami [1] for a more rigorous version of the theorem.
Figure 4.3  This figure shows a cross-section of a BAM energy landscape in two dimensions. The particular topography results from the choice of exemplar vectors that go into making up the weight matrix. During processing, the BAM energy value will move from its starting point down the energy hill to the nearest minimum, while the BAM outputs move from state a to state b. Notice that the minima reached need not be the global, or lowest, minima on the landscape.
Initially, the changes in the calculated values of E(x, y) are large. As the x and y vectors reach their stable state, the value of E changes by smaller amounts, and eventually stops changing when the minimum point is reached. This situation corresponds to a physical system such as a ball rolling down a hill into a valley, but with enough friction that, by the time the ball reaches the bottom, it has no more energy and therefore it stops. Thus, the BAM resembles a dissipative dynamical system in which the E function corresponds to the energy of the physical system. Remember that the weight matrix determines the contour of this energy landscape; that is, it determines how many energy valleys there are, how far apart they are, how deep they are, and whether there are any unexpected valleys (i.e., spurious states). We need to clarify one point. We have been illustrating these concepts using a two-dimensional cross-section of an energy landscape, and using the familiar
term valley to refer to the locations of the minima. A more precise term would be basin. In fact, the literature on dynamical systems refers to these locations as basins of attraction.

To solidify the concept of BAM energy, we return to the examples of the previous section. First, notice that, according to part two of the BAM energy theorem, the minimum value of E is −64, found by summing the negatives of all the magnitudes of the components of the weight matrix. A calculation of E for each of the two training pairs shows that both pairs sit at the bottom of basins having this same value of E. Our first trial vectors were x_0 = (−1, −1, −1, 1, −1, 1, 1, −1, −1, 1)^t and y_0 = (1, 1, 1, 1, −1, −1)^t. The energy of this system is E = −y_0^t w x_0 = 40. The first propagation results in y_new = (1, −1, −1, −1, −1, 1)^t, and a new energy value, E = −y_new^t w x_0 = −56. Propagation back to the x layer results in x_new = (1, −1, −1, 1, −1, 1, 1, −1, −1, 1)^t. The energy is now E = −y_new^t w x_new = −64. At this point, no further passes through the system are necessary, since −64 is the lowest possible energy. Since any further change in x or y would have to lower the energy below this minimum, according to the theorem, no such changes are possible.

Exercise 4.5: Perform the BAM energy calculation on the second example from Section 4.2.3.
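Monitoring E during recall makes the theorem concrete. The toy autoassociative matrix and helper functions below are our own illustration, not the ten-component example from the text:

```python
def energy(w, x, y):
    # E(x, y) = -y^t w x   (Eq. 4.13)
    return -sum(y[k] * w[k][j] * x[j]
                for k in range(len(w)) for j in range(len(w[0])))

def step(w, x, y):
    # One x-to-y pass followed by one y-to-x pass, Eqs. (4.11)/(4.12).
    def thr(net, prev):
        return 1 if net > 0 else (-1 if net < 0 else prev)
    y = [thr(sum(w[k][j] * x[j] for j in range(len(x))), y[k])
         for k in range(len(y))]
    x = [thr(sum(w[k][j] * y[k] for k in range(len(y))), x[j])
         for j in range(len(x))]
    return x, y

w = [[1, -1, 1], [-1, 1, -1], [1, -1, 1]]   # stores the exemplar (1, -1, 1)
x, y = [1, 1, 1], [1, 1, 1]
energies = [energy(w, x, y)]
for _ in range(3):
    x, y = step(w, x, y)
    energies.append(energy(w, x, y))
print(energies)  # [-1, -9, -9, -9]: non-increasing, ending at the minimum -9
```

Here −9 is exactly −Σ|w_ij| for this matrix, so the run ends at the energy floor promised by part 2 of the theorem.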
Proof of the BAM Energy Theorem. In this section, we prove the first part of the BAM energy theorem. We present this proof because it is both clever and easy to understand. The proof is not essential to your understanding of the remaining material, so you may skip it if you wish. We begin with Eq. (4.14), which is reproduced here:
According to the theorem, any change in x or y must result in a decrease in the value of E. For simplicity, we first consider a change in a single component of y, specifically y_k. We can rewrite Eq. (4.14) showing the term with y_k explicitly:

E = - y_k Σ_{j=1}^{n} w_kj x_j - Σ_{i≠k} Σ_{j=1}^{n} y_i w_ij x_j     (4.15)
Now, make the change y_k → y_k^new. The new energy value is

E^new = - y_k^new Σ_{j=1}^{n} w_kj x_j - Σ_{i≠k} Σ_{j=1}^{n} y_i w_ij x_j     (4.16)
140
The BAM and the Hopfield Memory
Since only y_k has changed, the second terms on the right-hand sides of Eqs. (4.15) and (4.16) are identical. In that case, we can write the change in energy as

ΔE = E^new - E = (y_k - y_k^new) Σ_{j=1}^{n} w_kj x_j     (4.17)
For convenience, we recall the state-change equations that determine the new value of y_k:

y_k^new = +1 if Σ_{j=1}^{n} w_kj x_j > 0;  y_k^new = y_k if Σ_{j=1}^{n} w_kj x_j = 0;  y_k^new = -1 if Σ_{j=1}^{n} w_kj x_j < 0

The first possibility is that y_k = +1 and y_k^new = -1; then (y_k - y_k^new) > 0. But, according to the procedure for calculating y_k^new, this transition can occur only if Σ_{j=1}^{n} w_kj x_j < 0. Therefore, the value of ΔE is the product of one factor that is greater than zero and one that is less than zero. The result is that ΔE < 0. The second possibility is that y_k = -1 and y_k^new = +1. Then, (y_k - y_k^new) < 0, but this transition occurs only if Σ_{j=1}^{n} w_kj x_j > 0. Again, ΔE is the product of one factor less than zero and one greater than zero. In both cases where y_k changes, E decreases. Note that, for the case where y_k does not change, both factors in the equation for ΔE are zero, so the energy does not change unless one of the vectors changes. Equation (4.17) can be extended to cover the situation where more than one component of the y vector changes. If we write Δy_i = (y_i - y_i^new), then the equation that replaces Eq. (4.17) for the general case, where any or all components of y can change, is

ΔE = E^new - E = Σ_{i=1}^{m} Δy_i Σ_{j=1}^{n} w_ij x_j     (4.18)

This equation is a sum of m terms, one for each possible Δy_i, each of which is either negative or zero depending on whether or not y_i changed. Thus, in the general case, E must decrease if y changes. Exercise 4.6: Prove part 2 of the BAM energy theorem.
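The inequality just proved can be checked numerically. The sketch below applies the state-change rule to a random bipolar state and confirms that the energy never increases; the weight matrix and vectors are hypothetical stand-ins, not the chapter's example:

```python
import random

def update_y(w, x, y):
    """One pass of the state-change rule: y_k -> +1 if net > 0,
    -1 if net < 0, unchanged if net == 0 (x held fixed)."""
    new_y = []
    for k in range(len(y)):
        net = sum(w[k][j] * x[j] for j in range(len(x)))
        new_y.append(1 if net > 0 else (-1 if net < 0 else y[k]))
    return new_y

def energy(w, x, y):
    return -sum(y[i] * w[i][j] * x[j]
                for i in range(len(y)) for j in range(len(x)))

random.seed(0)
w = [[random.choice([-2, 0, 2]) for _ in range(10)] for _ in range(6)]
x = [random.choice([-1, 1]) for _ in range(10)]
y = [random.choice([-1, 1]) for _ in range(6)]

e_before = energy(w, x, y)
e_after = energy(w, x, update_y(w, x, y))
print(e_after <= e_before)   # True: E never increases when y changes
```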
Exercise 4.7: Prove part 3 of the BAM energy theorem.

Exercise 4.8: Suppose we have defined an autoassociative BAM whose weight matrix is calculated according to

w = x_1 x_1^t + x_2 x_2^t + · · · + x_L x_L^t

where the x_i are not necessarily orthonormal. Show that the weight matrix can be written as

w = aI + S
where a is a constant, I is the identity matrix, and S is identical to the weight matrix but with zeros on the diagonal. For an arbitrary input vector, x, show that the value of the BAM energy function is

E = β - x^t S x

where β is a constant. From this result, deduce that the change in energy, ΔE, during a state change is independent of the diagonal elements of the weight matrix.
4.3 THE HOPFIELD MEMORY

In this section, we describe two versions of an ANS that we call the Hopfield memory. We shall show that you can consider the Hopfield memory to be a derivative of the BAM, although we doubt that that was the way the Hopfield memory originated. The two versions are the discrete Hopfield memory and the continuous Hopfield memory, depending on whether the unit outputs are a discrete or a continuous function of the inputs, respectively.
4.3.1 Discrete Hopfield Memory
In the discussion of the previous sections, we defined an autoassociative BAM as one that stored and recalled a set of vectors {x_1, x_2, ..., x_L}. The prescription for determining the weights was to calculate the correlation matrix:

w = Σ_{i=1}^{L} x_i x_i^t

Figure 4.4 illustrates a BAM that performs this autoassociative function. We pointed out in the previous section that the weight matrix for an autoassociative memory is square and symmetric, which means, for example, that the weights w_12 and w_21 are equal. Since each of the two layers has the same number of units, and the connection weight from the nth unit on layer 1 to the nth unit on layer 2 is the same as the connection weight from the nth unit on layer 2 back to the nth unit on layer 1, it is possible to reduce the autoassociative BAM structure to one having only a single layer. Figure 4.5 illustrates this structure. A somewhat different rendering appears in Figure 4.6. The figure shows a fully connected network, without the feedback from each unit to itself. The major difference between the architecture of Figure 4.5 and that of Figure 4.6 is the existence of the external input signals, I_i. This addition modifies the calculation of the net input to a given unit by the inclusion of the I_i term. In this case,

net_i = Σ_{j=1}^{n} w_ij x_j + I_i     (4.19)
Figure 4.4 The autoassociative BAM architecture has an equal number of units on each layer. Note that we have omitted the feedback terms to each unit.
Figure 4.5 The autoassociative BAM can be reduced to a single-layer structure. Notice that, when the reduction is carried out, the feedback connections to each unit reappear.
Figure 4.6 This figure shows the Hopfield-memory architecture without the feedback connections to each unit. Eliminating these connections explicitly forces the weight matrix to have zeros on the diagonal. We have also added external input signals, I_i, to each unit.
Moreover, we can allow the threshold condition on the output value to take on a value other than zero. Then, instead of Eq. (4.12), we would have

x_i(t+1) = +1 if net_i > T_i;  x_i(t+1) = x_i(t) if net_i = T_i;  x_i(t+1) = -1 if net_i < T_i

where T_i is the threshold for unit i.
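A minimal sketch of this thresholded state-change rule, with hypothetical weights, external inputs, and thresholds:

```python
# Discrete Hopfield update with external input I_i and threshold T_i:
# x_i -> +1 if net_i > T_i, unchanged if equal, -1 otherwise,
# where net_i = sum_j w_ij x_j + I_i. All data here are hypothetical.
def update(w, x, I, T):
    new = []
    for i in range(len(x)):
        net = sum(w[i][j] * x[j] for j in range(len(x))) + I[i]
        new.append(1 if net > T[i] else (-1 if net < T[i] else x[i]))
    return new

w = [[0, 1], [1, 0]]            # symmetric, zero-diagonal weights
print(update(w, [1, -1], I=[2, 0], T=[0, 0]))   # [1, 1]
```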
In the last term of this equation, if i = 1, then the quantity v_Y0 appears. If i = n, then the quantity v_Y,n+1 appears. These results establish the need for the definitions, v_X(n+1) = v_X1 and v_X0 = v_Xn, discussed previously. Before we dissect this rather formidable energy equation, first note that d_XY represents the distance between city X and city Y, and that d_XY = d_YX. Furthermore, if the parameters A, B, C, and D are positive, then E is nonnegative. (The parameters A, B, C, and D in Eq. (4.29) have nothing to do with the cities labeled A, B, C, and D.) Let's take the first term in Eq. (4.29) and consider the five-city tour proposed in Figure 4.10. The product terms, v_Xi v_Xj, refer to a single city, X. The inner two sums result in a sum of products, v_Xi v_Xj, for all combinations of i and j as long as i ≠ j. After the network has stabilized on a solution, and in the high-gain limit where v ∈ {0, 1}, all such terms will be zero if and only if there is a single v_Xi = 1 and all other v_Xi = 0. That is, the contribution of the first term in Eq. (4.29) will be zero if and only if a single city appears in each row of the PE matrix. This condition corresponds to the constraint that each city appear only once on the tour. Any other situation results in a positive value for this term in the energy equation. By a similar analysis, the second term in Eq. (4.29) can be shown to be zero if and only if each column of the PE matrix contains a single value of 1. This condition corresponds to the second constraint on the problem: that each position on the tour has a unique city associated with it. In the third term, Σ_X Σ_i v_Xi is a simple summation of all n^2 output values. There should be only n of these terms that have a value of 1; all others
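The row and column constraints can be checked directly. The sketch below evaluates the first three penalty terms for an output matrix; the 1/2 factors follow Hopfield and Tank's formulation, and the parameter values are the illustrative ones quoted later in this section:

```python
# First three penalty terms of the TSP energy for an output matrix v,
# where v[X][i] = 1 if city X occupies tour position i (all 0/1 here).
def constraint_terms(v, A, B, C):
    n = len(v)
    t1 = (A / 2) * sum(v[X][i] * v[X][j]          # one city per row
                       for X in range(n) for i in range(n)
                       for j in range(n) if i != j)
    t2 = (B / 2) * sum(v[X][i] * v[Y][i]          # one city per column
                       for i in range(n) for X in range(n)
                       for Y in range(n) if X != Y)
    total = sum(v[X][i] for X in range(n) for i in range(n))
    t3 = (C / 2) * (total - n) ** 2               # exactly n units on
    return t1, t2, t3

# A valid tour (one 1 per row and per column) zeroes all three terms.
tour = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
print(constraint_terms(tour, 500, 500, 200))   # (0.0, 0.0, 0.0)
```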
should be zero. Then the third term will be zero, since Σ_X Σ_i v_Xi = n. If more or fewer than n terms are equal to 1, then the sum will be greater or less than n, and the contribution of the third term in Eq. (4.29) will be greater than zero. The final term in Eq. (4.29) computes a value proportional to the distance traveled on the tour. Thus, a minimum-distance tour results in a minimum contribution of this term to the energy function. Exercise 4.10: For the five-city example in Figure 4.10(b), show that the final term in Eq. (4.29) is

D(d_BA + · · · + d_DB)
The result of this analysis is that the energy function of Eq. (4.29) will be minimized only when a solution satisfies all four of the constraints (three strong, one weak) listed previously. Now we wish to construct a weight matrix that corresponds to this energy function so that the network will compute solutions properly. As is often the case, it is easier to specify what the network should not do than to say what it should do. Therefore, the connection-weight matrix is defined solely in terms of inhibitions between PEs. Instead of the double index that has been used up to now to describe the weight matrix, we adopt a four-index scheme that corresponds to the double-index scheme on the output vectors. The connection-matrix elements are the n^2-by-n^2 quantities, T_Xi,Yj, where X and Y refer to the cities, and i and j refer to the positions on the tour. The first term of the energy function is zero if and only if a single element in each row of the output-unit matrix is 1. This situation is favored if, when one unit in a row is on (i.e., it has a large output value relative to the others), it inhibits the other units in the same row. This situation is essentially a winner-take-all competition, where the rich get richer at the expense of the poor. Consider the quantity

-A δ_XY (1 - δ_ij)

where δ_uv = 1 if u = v, and δ_uv = 0 if u ≠ v. The first delta is zero except on a single row, where X = Y. The quantity in parentheses is 1 unless i = j; that factor ensures that a unit inhibits all other units on its row but does not inhibit itself. Therefore, if this quantity represented a connection weight between units, all units on a particular row would have an inhibitory connection of strength -A to all other units on the same row. As the network evolved toward a solution, if one unit on a row began to show a larger output value than the others, it would tend to inhibit the other units on the same row. This situation is called lateral inhibition.
The second term in the energy function is zero if and only if a single unit in each column of the output-unit matrix is 1. The quantity

-B δ_ij (1 - δ_XY)
causes a lateral inhibition between units on each column. The first delta ensures that this inhibition is confined to each column, where i = j. The second delta ensures that each unit does not inhibit itself. The contribution of the third term in the energy equation is perhaps not so intuitive as those of the first two. Because it involves a sum of all of the outputs, it has a rather global character, unlike the first two terms, which are localized to rows and columns. Thus, we include a global inhibition, -C, such that each unit in the network is inhibited by this constant amount. Finally, recall that the last term in the energy function contains information about the distance traveled on the tour. The desire to minimize this term can be translated into connections between units that inhibit the selection of adjacent cities in proportion to the distance between those cities. Consider the term

-D d_XY (δ_j,i+1 + δ_j,i-1)
For a given column, j (i.e., for a given position on the tour), the two delta terms ensure that inhibitory connections are made only to units on adjacent columns. Units on adjacent columns represent cities that might come either before or after the cities in column j. The factor D d_XY ensures that the units representing cities farther apart will receive the largest inhibitory signal. We can now define the entire connection matrix by adding the contributions of the previous four paragraphs:
T_Xi,Yj = -A δ_XY (1 - δ_ij) - B δ_ij (1 - δ_XY) - C - D d_XY (δ_j,i+1 + δ_j,i-1)     (4.30)
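Equation (4.30) translates directly into code. In the sketch below, the distance matrix d is hypothetical, and the (zero-based) position indices wrap modulo n so that the position after the last is the first, matching the definitions v_X(n+1) = v_X1 and v_X0 = v_Xn:

```python
def delta(a, b):
    return 1 if a == b else 0

def t_weight(X, i, Y, j, d, n, A, B, C, D):
    """Connection strength T_{Xi,Yj} from Eq. (4.30); d is a
    (hypothetical) symmetric city-distance matrix."""
    return (-A * delta(X, Y) * (1 - delta(i, j))      # row inhibition
            - B * delta(i, j) * (1 - delta(X, Y))     # column inhibition
            - C                                       # global inhibition
            - D * d[X][Y] * (delta(j, (i + 1) % n)    # distance term,
                             + delta(j, (i - 1) % n)))  # adjacent columns

d = [[0, 2], [2, 0]]          # two-city toy distances
# Same city, different positions: row inhibition plus global term.
print(t_weight(0, 0, 0, 1, d, 2, 500, 500, 200, 500))   # -700
```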
The inhibitory connections between units are illustrated graphically in Figure 4.11. To find a solution to the TSP, we must return to the equations that describe the time evolution of the network. Equation (4.24) is the one we want:

C (du_i/dt) = Σ_{j=1}^{N} T_ij v_j - u_i/R_i + I_i     (4.24)
Here, we have used N as the summation limit to avoid confusion with the n previously defined. Because all of the terms in T_ij contain arbitrary constants, and I_i can be adjusted to any desired values, we can divide this equation by C and write
du_i/dt = Σ_{j=1}^{N} T_ij v_j - u_i/τ + I_i

where τ = RC, the system time constant, and we have assumed that R_i = R for all i.
Figure 4.11 This schematic illustrates the pattern of inhibitory connections between PEs for the TSP: unit a illustrates the inhibition between units on a single row, unit b shows the inhibition within a single column, and unit c shows the inhibition of units in adjacent columns. The global inhibition is not shown.
A digital simulation of this system requires that we integrate the above set of equations numerically. For a sufficiently small value of Δt, we can write

Δu_i = Δt ( Σ_{j=1}^{N} T_ij v_j - u_i/τ + I_i )     (4.31)

Then, we can iteratively update the u_i values according to

u_i(t + 1) = u_i(t) + Δu_i     (4.32)
where Δu_i is given by Eq. (4.31). The final output values are then calculated using the output function

v_i = g(λu_i) = (1/2)(1 + tanh(λu_i))

Notice that, in these equations, we have returned to the subscript notation used in the discussion of the general system: v_i rather than v_Xi. In the double-subscript notation, we have
u_Xi(t + 1) = u_Xi(t) + Δu_Xi     (4.33)

and

v_Xi = g(λu_Xi) = (1/2)(1 + tanh(λu_Xi))     (4.34)
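One Euler step of Eqs. (4.31) through (4.34) can be sketched as follows; the two-unit network, weights, and inputs are hypothetical, and serve only to show that the outputs remain between 0 and 1:

```python
import math

def step(u, T, I, dt=1e-5, tau=1.0, lam=50.0):
    """One Euler step of du_i/dt = sum_j T_ij v_j - u_i/tau + I_i,
    followed by the output function v = (1/2)(1 + tanh(lam * u))."""
    v = [0.5 * (1.0 + math.tanh(lam * ui)) for ui in u]
    du = [dt * (sum(T[i][j] * v[j] for j in range(len(u)))
                - u[i] / tau + I[i]) for i in range(len(u))]
    u_new = [ui + dui for ui, dui in zip(u, du)]
    v_new = [0.5 * (1.0 + math.tanh(lam * ui)) for ui in u_new]
    return u_new, v_new

# Tiny two-unit example with mutual inhibition and equal inputs.
T = [[0, -1], [-1, 0]]
u, v = step([0.01, -0.01], T, I=[1.0, 1.0])
print(all(0.0 < vi < 1.0 for vi in v))   # True: outputs stay in (0, 1)
```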
If we substitute T_Xi,Yj from Eq. (4.30) into Eq. (4.31), and define the external inputs as I_Xi = Cn′, with n′ a constant, and C equal to the C in Eq. (4.30), the result takes on an interesting form (see Exercise 4.11):

Δu_Xi = Δt ( -u_Xi/τ - A Σ_{j≠i} v_Xj - B Σ_{Y≠X} v_Yi - C ( Σ_Y Σ_j v_Yj - n′ ) - D Σ_Y d_XY (v_Y,i+1 + v_Y,i-1) )     (4.35)
Exercise 4.11: Assume that n′ = n in Eq. (4.35). Then, the sum of terms, -A(...) - B(...) - C(...) - D(...), has a simple relationship to the TSP energy function in Eq. (4.29). What is that relationship?

Exercise 4.12: Using the double-subscript notation on the outputs of the PEs, v_C3 refers to the output of the unit that represents city C in position 3 of the tour. This unit is also element v_33 of the output-unit matrix. What is the general equation that converts the dual subscripts of the matrix notation, v_jk, into the proper single subscript of the vector notation, v_i?
Exercise 4.13: There are 25 possible connections to unit v_C3 = v_33 from other units in the five-city tour problem. Determine the values of the resistors, R_ij = 1/|T_ij|, that form those connections.
To complete the solution of the TSP, suitable values for the constants must be chosen, along with the initial values of the u_Xi. Hopfield [6] provides parameters suitable for a 10-city problem: A = B = 500, C = 200, D = 500, τ = 1, λ = 50, and n′ = 15. Notice that it is not necessary to choose n′ = n. Because n′ enters the equations through the external inputs, I_i = Cn′, it can be used as another adjustable parameter. These parameters must be empirically chosen, and those for a 10-city tour will not necessarily work for tours of different sizes.
We might be tempted to make all of the initial values of the u_Xi equal to a constant u_00 such that, at t = 0,

Σ_X Σ_i v_Xi = n

because that is what we expect that particular sum to be when the network has stabilized on a solution. Assigning initial values in that manner, however, has the effect of placing the system on an unstable equilibrium point, much like a ball placed at the exact top of a hill. Without at least a slight nudge, the ball would remain there forever. Given that nudge, however, the ball would roll down the hill. We can give our TSP system a nudge by adding a random noise term to the u_00 values, so that u_Xi = u_00 + δu_Xi, where δu_Xi is the random noise term, which may be different for each unit. In the ball-on-the-hill analogy, the direction of the nudge determines the direction in which the ball rolls off the hill. Likewise, different random-noise selections for the initial u_Xi values may result in different final stable states. Refer back to the discussion of optimization problems earlier in this section, where we said that a good solution now may be better than the best solution later. Hopfield's solution to the TSP may not always find the best solution (the one with the shortest possible distance), but repeated trials have shown that the network generally settles on tours at or near the minimum distance. Figure 4.12 shows a graphical representation of how a network would evolve toward a solution. We have discussed this example at great length to show both the power and the complexity of the Hopfield network. The example also illustrates a general principle about neural networks: for a given problem, finding an appropriate representation of the data or constraints is often the most difficult part of the solution.
Figure 4.12 This sequence of diagrams illustrates the convergence of the Hopfield network for a 10-city TSP tour. The output values, v_Xi, are represented as squares at each location in the output-unit matrix; the size of each square is proportional to the magnitude of the output value. (a, b, c) At the intermediate steps, the system has not yet settled on a valid tour. The magnitude of the output values for these intermediate steps can be thought of as the current estimate of the confidence that a particular city will end up in a particular position on the tour. (d) The network has stabilized on the valid tour, DHIFGEAJCB. Source: Reprinted with permission of Springer-Verlag, Heidelberg, from J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems." Biological Cybernetics, 52:141-152, 1985.

4.4 SIMULATING THE BAM

As you may already suspect, the implementation of the BAM network simulator will be straightforward. The only difficulty is the implementation of bidirectional connections between the layers, and, with a little finesse, this is a relatively easy problem to overcome. We shall begin by describing the general nature of the problems associated with modeling bidirectional connections in a sequential memory array. From there, we will present the data structures needed to overcome these problems while remaining compatible with our basic simulator. We conclude this section with a presentation of the algorithms needed to implement the BAM.

4.4.1 Bidirectional-Connection Considerations

Let us first consider the basic data structures we have defined for our simulator. We have assumed that all network PEs will be organized into layers, with connections primarily between the layers. Further, we have decided that the individual PEs within any layer will be simulated by processing inputs, with no provision for processing output connections. With respect to modeling bidirectional connections, we are therefore faced with the dilemma of using a single connection as input to two different PEs. Thus, our parallel array structures for modeling network connections are no longer valid. As an example, consider the weight matrix illustrated on page 136 as part of the discussion in Section 4.2. For clarity, we will consider this matrix as being an R × C array, where R = rows = 6 and C = columns = 10. Next, consider the implementation of this matrix in computer memory, as depicted in Figure 4.13. Since memory is organized as a one-dimensional linear array of cells (or bytes, words, etc.), most modern computer languages will allocate and maintain this matrix as a one-dimensional array of R vectors, each C cells long, arranged sequentially in the computer memory.⁷ In this implementation, access to each row vector requires at least one multiplication (row index × number of columns per row) and an addition (to determine the memory address of the row, offset from the base address of the array). However, once the beginning of the row has been located, access to the individual components within the vector is simply an increment operation. In the column-vector case, access to the data is not quite as easy. Simply put, each component of the column vector must be accessed by performance of a multiplication (as before, to access the appropriate row), plus an addition to locate the appropriate cell. The penalty imposed by this approach is such that, for the entire column vector to be accessed, R multiplications must be performed. To access each element in the matrix as a component of a column vector, we must do R × C multiplications, one for each element, a time-consuming process.
⁷FORTRAN, which uses a column-major array organization, is the notable exception.

4.4.2 BAM Simulator Data Structures

Since we have chosen to use the array-based model for our basic network data structure, we are faced with the complicated (and CPU-time-consuming) problem of accessing the network weight matrix first as a set of row vectors for the propagation from layer x to layer y, then as a set of column vectors for the propagation in the other direction. Further complicating the situation is the fact that we have chosen to isolate the weight vectors in our network data structure, accessing each array indirectly through the intermediate
weight_ptr array. If we hold strictly to this scheme, we must significantly modify the design of our simulator to allow access to the connections from both layers of PEs, a situation illustrated in Figure 4.14. As shown in this diagram, all the connection weights will be contained in a set of arrays associated with one layer of PEs. The connections back to the other layer must then be individually accessed by indexing into each array to extract the appropriate element. To solve this dilemma, let's now consider a slight modification to the conceptual model of the BAM. Until now, we have considered the connections between the layers as one set of bidirectional paths; that is, signals can pass
Figure 4.13 The row-major structure used to implement a matrix is shown. In this technique, memory is allocated sequentially so that column values within the same row are adjacent: row 0 begins at the base address (BA), row 1 at BA + 1(10), row 2 at BA + 2(10), and so on up through row 5 at BA + 5(10), each row occupying 10 columns. This structure allows the computer to step through all values in a single row by simply incrementing a memory pointer.
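The addressing arithmetic described in the caption can be demonstrated with an ordinary list standing in for linear memory:

```python
# Row-major layout: element (r, c) of an R x C matrix lives at
# offset r*C + c from the base address.
R, C = 6, 10
flat = list(range(R * C))            # stand-in for linear memory

def at(r, c):
    return flat[r * C + c]

# A row is one contiguous slice; a column must hop C cells per element.
row3 = flat[3 * C : 4 * C]
col2 = [flat[r * C + 2] for r in range(R)]
print(at(3, 2) == row3[2] == col2[3])   # True
```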
Figure 4.14 This bidirectional-connection implementation uses our standard data structures. Here, the connection arrays located by the layer y structure are identical to those previously described for the backpropagation simulator. However, the pointers associated with the layer x structure locate the connection in the first weights array that is associated with the column weight vector. Hence, stepping through connections to layer x requires locating the connection in each weights array at the same offset from the beginning of the array as the first connection.
from layer x to layer y as well as from layer y to layer x. If we instead consider the connections as two sets of unidirectional paths, we can logically implement the same network if we simply connect the outputs of the x layer to the inputs
on the y layer, and, similarly, connect the outputs of the y layer to the inputs on the x layer. To complete this model, we must initialize the connections from x to y with the predetermined weight matrix, while the connections from y to x must contain the transpose of the weight matrix. This strategy allows us to process only inputs at each PE, and, since the connections are always accessed in the desired rowmajor form, allows efficient signal propagation through the simulator, regardless of direction.
The disadvantage to this approach is that it consumes twice as much memory as does the single-matrix implementation. There is not much that we can do to solve this problem other than reverting to the single-matrix model. Even a linked-list implementation will not solve the problem, as it will require approximately three times the memory of the single-matrix model. Thus, in terms of memory consumption, the single-matrix model is the most efficient implementation. However, as we have already seen, there are performance issues that must be considered when we use the single matrix. We therefore choose to implement the double matrix, because run-time performance, especially in a large network application, must be good enough to prevent long periods of dead time while the human operator waits for the computer to arrive at a solution. The remainder of the network is completely compatible with our generic network data structures. For the BAM, we begin by defining a network with two layers:

record BAM =
  X : ^layer;    {pointer to first layer record}
  Y : ^layer;    {pointer to second layer record}
end record;
As before, we now consider the implementation of the layers themselves. In the case of the BAM, a layer structure is simply a record used to contain pointers to the outputs and weight_ptr arrays. Such a record is defined by the structure

record LAYER =
  OUTS    : ^integer[];    {pointer to node outputs array}
  WEIGHTS : ^^integer[];   {pointer to weight_ptr array}
end record;
Notice that we have specified integer values for the outputs and weights in the network. This is a benefit derived from the binary nature of the network, and from the fact that each individual connection weight is given by the dot product between two integer vectors, resulting in an integer value. We use integers in this model, since most computers can process integer values much faster than they can floating-point values. Hence, the performance improvement of the simulator for large BAM applications justifies the use of integers. We now define the three arrays needed to store the node outputs, the connection weights, and the intermediate weight_ptrs. These arrays will be sized dynamically to conform to the desired BAM network structure. In the case of the outputs arrays, one will contain x integer values, whereas the other must be sized to contain y integers. The weight_ptr array will contain a memory pointer for each PE on the layer; that is, x pointers will be required to locate the connection arrays for each node on the x layer, and y pointers for the connections to the y layer. Conversely, each of the weights arrays must be sized to accommodate an integer value for each connection to the layer from the input layer. Thus, each
weights array on the x layer will contain y values, whereas the weights arrays on the y layer will each contain x values. The complete BAM data structure is illustrated in Figure 4.15.
4.4.3 BAM Initialization Algorithms

As we have noted earlier, the BAM is different from most of the other ANS networks discussed in this text, in that it is not trained; rather, it is initialized. Specifically, it is initialized from the set of training vectors that it will be required to recall. To develop this algorithm, we use the formula used previously to generate the weight matrix for the BAM, given by Eq. (4.6) and repeated here
Figure 4.15 The data structures for the BAM simulator are shown. Notice the difference between the implementation of the connection arrays in this model and in the single-matrix model described earlier.
for reference:

w = Σ_{i=1}^{L} y_i x_i^t     (4.6)

We can translate this general formula into one that can be used to determine any specific connection weight, given L training pairs to be encoded in the BAM. This new equation, which will form the basis of the routine that we will use to initialize the connection weights in the BAM simulation, is given by

w_rc = Σ_{i=1}^{L} y_i[r] x_i[c]     (4.36)
where the variables r and c denote the row and column position of the weight value of interest. We assume that, for purposes of computer simulation, each of the training vectors x and y are one-dimensional arrays of length C and R, respectively. We also presume that the calculation will be performed only to determine the weights for the connections from layer x to layer y. Once the values for these connections are determined, the connections from y to x are simply the transpose of this weight matrix. Using this equation, we can now write a routine to determine any weight value for the BAM. The following algorithm presumes that all the training pairs to be encoded are contained in two external, two-dimensional matrices named XT and YT. These arrays will contain the patterns to be encoded in the BAM, organized as L instances of either x or y vectors. Thus, the dimensions of the XT and YT initialization matrices are L × C and L × R, respectively.

function weight (r,c,L:integer; XT,YT:^integer[][]) return integer;
{determine and return weight value for position r,c}
var i    : integer;        {loop iteration counter}
    x,y  : ^integer[][];   {local array pointers}
    sum  : integer;        {local accumulator}
begin
  sum = 0;                 {initialize accumulator}
  x = XT;                  {initialize x pointer}
  y = YT;                  {initialize y pointer}
  for i = 1 to L do        {for all training pairs}
    sum = sum + y[i][r] * x[i][c];
  end do;
  return (sum);            {return the result}
end function;
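A compact equivalent of the weight routine, with hypothetical training data, might look like this (the L argument is implicit in the length of the training arrays):

```python
# Python analogue of the weight routine: w_rc = sum_i y_i[r] * x_i[c].
# XT and YT hold the L paired training vectors as rows (hypothetical data).
def weight(r, c, XT, YT):
    return sum(y[r] * x[c] for x, y in zip(XT, YT))

XT = [[1, -1, 1, -1], [1, 1, -1, -1]]   # two x training vectors
YT = [[1, -1], [-1, 1]]                 # the paired y vectors
print(weight(0, 0, XT, YT), weight(1, 1, XT, YT))   # 0 2
```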
The weight function allows us to compute the value to be associated with any particular connection. We will now extend that basic function into a general routine to initialize all the weights arrays for all the input connections
to the PEs in layer y. This algorithm uses two functions, called rows_in and cols_in, that return the number of rows and columns in a given matrix. The implementation of these two algorithms is left to the reader as an exercise.

procedure initialize (Y:^layer; XT,YT:^integer[][]);
{initialize all input connections to a layer, Y}
var connects : ^integer[];    {connection pointer}
    units    : ^^integer[];   {pointers into weight_ptrs}
    L,R,C    : integer;       {size-of variables}
    i,j      : integer;       {iteration counters}
begin
  units = Y^.WEIGHTS;         {locate weight_ptr array}
  L = rows_in (XT);           {number of training patterns}
  R = cols_in (YT);           {dimension of Y vector}
  C = cols_in (XT);           {dimension of X vector}
  for i = 1 to length(units) do        {for all units on layer}
    connects = units[i];               {get pointer to weight array}
    for j = 1 to length(connects) do   {for all connections to unit}
      connects[j] = weight (i, j, L, XT, YT);   {initialize weight w_ij}
    end do;
  end do;
end procedure;
We indicated earlier that the connections from layer y to layer x could be initialized by use of the transpose of the weight matrix computed for the inputs to layer y. We could develop another routine to copy data from one set of arrays to another, but inspection of the initialize algorithm just described indicates that another algorithm will not be needed. By substituting the x layer record for the y, and swapping the order of the two input training arrays, we can use initialize to initialize the input connections to the x layer as well, and we will have reduced the amount of code needed to implement the simulator. On the other hand, the transpose operation is a relatively easy algorithm to write, and, since it involves only copying data from one array to another, it is also extremely fast. We therefore leave to you the choice of which of these two approaches to use to complete the BAM initialization.
4.4.4 BAM Signal Propagation
Now that we have created and initialized the BAM, we need only to implement an algorithm to perform the signal propagation through the network. Here again, we would like this routine to be general enough to propagate signals to either layer of the BAM. We will therefore design the routine such that the direction
of signal propagation will be determined by the order of the input arguments to the routine. For simplicity, we also assume that the layer outputs arrays have been initialized to contain the patterns to be propagated. Before we proceed, however, note that the desired algorithm generality implies that this routine will not be sufficient to implement completely the iterated signal-propagation function needed to allow the BAM to stabilize. This iteration must be performed by a higher-level routine. We will therefore design the unidirectional BAM propagation routine as a function that returns the number of patterns changed in the receiving layer, so that the iterated propagation routine can easily determine when to stop. With these concerns in mind, we can now design the unidirectional signal-propagation routine. Such a routine will take this general form:

function propagate (X, Y : ^layer) return integer;
{propagate signals from layer X to layer Y}
var changes, i, j : integer;         {local counters}
    ins, outs : ^integer[];          {local pointers}
    connects : ^integer[];           {locate connections}
    sum : integer;                   {sum of products}

begin
  outs = Y^.OUTS;                    {locate start of Y array}
  changes = 0;                       {initialize counter}

  for i = 1 to length (outs) do      {for all output units}
    ins = X^.OUTS;                   {locate X outputs}
    connects = Y^.WEIGHTS[i];        {find connections}
    sum = 0;                         {initial sum}

    for j = 1 to length (ins) do     {for all inputs}
      sum = sum + ins[j] * connects[j];
    end do;

    if (sum < 0)                     {if negative sum}
      then sum = -1                  {use -1 as output}
    else if (sum > 0)                {if positive sum}
      then sum = 1                   {use +1 as output}
    else sum = outs[i];              {else use old output}

    if (sum != outs[i])              {if unit changed}
      then changes = changes + 1;

    outs[i] = sum;                   {store new output}
  end do;

  return (changes);                  {number of changes}
end function;
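The same logic can be sketched in ordinary Python (a hedged illustration, not the book's code; the layer records are reduced to plain lists, and the tiny weight matrix at the end is invented):

```python
def propagate(ins, outs, weights):
    """Propagate signals from a sending layer into a receiving layer.

    ins     -- outputs of the sending layer (bipolar +/-1 values)
    outs    -- outputs of the receiving layer, updated in place
    weights -- weights[i][j] connects sending unit j to receiving unit i
    Returns the number of receiving units that changed state.
    """
    changes = 0
    for i in range(len(outs)):
        total = sum(w * s for w, s in zip(weights[i], ins))
        if total < 0:
            new = -1
        elif total > 0:
            new = 1
        else:
            new = outs[i]              # zero net input: keep the old output
        if new != outs[i]:
            changes += 1
        outs[i] = new
    return changes

# Tiny invented example: two sending units, two receiving units.
outs = [-1, -1]
changed = propagate([1, -1], outs, [[1, -1], [-1, 1]])
print(changed, outs)    # 1 [1, -1]
```

Note that the function deliberately mirrors the pseudocode's convention of returning the change count, which the higher-level recall loop uses as its stopping test.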
To complete the BAM simulator, we will need a top-level routine to perform the bidirectional signal propagation. We will use the propagate routine described previously to perform the signal propagation between layers, and we will iterate until no units change state on two successive passes, as that will indicate that the BAM has stabilized. Here again, we assume that the input vectors have been initialized by an external process prior to calling recall.

procedure recall (net : BAM);
{propagate signals in the BAM until stabilized}
var delta : integer;                 {how many units change}

begin
  delta = 100;                       {arbitrary nonzero value}

  while (delta != 0) do              {until two successive passes}
    delta = 0;                       {reset to zero}
    delta = delta + propagate (net^.X, net^.Y);
    delta = delta + propagate (net^.Y, net^.X);
  end do;
end procedure;
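As a cross-check on the whole recall cycle, here is a compact Python rendering (a hedged sketch: the two training pairs are invented and chosen orthogonal so that recall is clean, and the layer records are reduced to plain lists):

```python
def propagate(ins, outs, weights):
    """One unidirectional pass; returns how many receiving units changed."""
    changes = 0
    for i in range(len(outs)):
        total = sum(w * s for w, s in zip(weights[i], ins))
        new = -1 if total < 0 else (1 if total > 0 else outs[i])
        if new != outs[i]:
            changes += 1
        outs[i] = new
    return changes

def recall(x, y, w_to_y, w_to_x):
    """Bounce signals between the layers until neither layer changes."""
    delta = 1                                  # arbitrary nonzero value
    while delta != 0:
        delta = propagate(x, y, w_to_y)
        delta += propagate(y, x, w_to_x)
    return x, y

# Two invented, mutually orthogonal training pairs.
x1, y1 = [1, 1, 1, -1, -1, -1], [1, 1, -1, -1]
x2, y2 = [1, 1, -1, 1, -1, 1], [1, -1, 1, -1]

# Hebbian construction: w_to_y[i][j] = sum over pairs of y[i] * x[j].
w_to_y = [[y1[i] * x1[j] + y2[i] * x2[j] for j in range(6)] for i in range(4)]
w_to_x = [list(col) for col in zip(*w_to_y)]   # transpose for the return path

noisy = [-1, 1, 1, -1, -1, -1]                 # x1 with its first bit flipped
x, y = recall(noisy, [1, 1, 1, 1], w_to_y, w_to_x)
print(x == x1, y == y1)                        # True True: the pair is restored
```

Starting from the corrupted x pattern, one bidirectional pass recovers the stored pair and the next pass produces no changes, so the loop terminates.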
Programming Exercises

4.1. Define the pseudocode algorithms for the functions rows_in and cols_in as described in the text.

4.2. Implement the BAM simulator described in Section 4.4, adding a routine to initialize the input vectors from patterns read from a data file. Test the BAM with the two training vectors described in Exercise 4.4 in Section 4.2.3.

4.3. Modify the BAM simulator so that the initial direction of signal propagation can be specified by the user at run time. Repeat Exercise 4.2, starting signal propagation first from x to y, then from y to x. Describe the results for each case.

4.4. Develop an encoding scheme to represent the following training pairs for a BAM application. Initialize your simulator with the training data, and then apply a "noisy" input pattern to the input. (Hint: One way to do this exercise is to encode each character as a seven-bit ASCII code, letting -1 represent a logic 0 and +1 represent a logic 1.) Does your BAM return the correct results?

    x      y
    CAT    TABBY
    DOG    ROVER
    FLY    PESKY
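The encoding suggested in the hint of Exercise 4.4 can be sketched as follows (a hedged illustration; the function name is our own):

```python
def encode(word):
    """Encode a word as a bipolar vector, 7 components per character.

    Each character becomes its 7-bit ASCII code, with a 0 bit mapped
    to -1 and a 1 bit mapped to +1, as the exercise hint suggests.
    """
    vec = []
    for ch in word:
        bits = format(ord(ch), '07b')          # 7-bit ASCII, MSB first
        vec.extend(1 if b == '1' else -1 for b in bits)
    return vec

x = encode('CAT')    # 21 components for a three-letter name
y = encode('TABBY')  # 35 components for a five-letter name
```

With this scheme every x pattern in the table has 21 components and every y pattern has 35, so a single BAM can store all three pairs.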
Suggested Readings

Introductory articles on the BAM, by Bart Kosko, appear in the IEEE ICNN proceedings and Byte magazine [9, 10]. Two of Kosko's papers discuss how to make the BAM weights adaptive [8, 11]. The Scientific American article by Tank and Hopfield provides a good introduction to the Hopfield network as we have discussed it in this chapter [13]. It is also worthwhile to review some of the earlier papers that discuss the development of the network and its use for optimization problems such as the TSP [4, 5, 6, 7]. The issue of the information storage capacity of associative memories is treated in detail in the paper by Kuh and Dickinson [12]. The paper by Tagliarini and Page on solving constraint satisfaction problems with neural networks is a good complement to the discussion of the TSP in this chapter [14].
Bibliography

[1] Edward J. Beltrami. Mathematics for Dynamic Modeling. Academic Press, Orlando, FL, 1987.

[2] M. R. Garey and D. S. Johnson. Computers and Intractability. W. H. Freeman, New York, 1979.

[3] Morris W. Hirsch and Stephen Smale. Differential Equations, Dynamical Systems, and Linear Algebra. Academic Press, New York, 1974.

[4] John J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA, 79:2554-2558, April 1982. Biophysics.

[5] John J. Hopfield. Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA, 81:3088-3092, May 1984. Biophysics.

[6] John J. Hopfield and David W. Tank. "Neural" computation of decisions in optimization problems. Biological Cybernetics, 52:141-152, 1985.

[7] John J. Hopfield and David W. Tank. Computing with neural circuits: A model. Science, 233:625-633, August 1986.

[8] Bart Kosko. Adaptive bidirectional associative memories. Applied Optics, 26(23):4947-4960, December 1987.

[9] Bart Kosko. Competitive bidirectional associative memories. In Proceedings of the IEEE First International Conference on Neural Networks, San Diego, CA, II:759-766, June 1987.

[10] Bart Kosko. Constructing an associative memory. Byte, 12(10):137-144, September 1987.
[11] Bart Kosko. Bidirectional associative memories. IEEE Transactions on Systems, Man, and Cybernetics, 18(1):49-60, January-February 1988.

[12] Anthony Kuh and Bradley W. Dickinson. Information capacity of associative memories. IEEE Transactions on Information Theory, 35(1):59-68, January 1989.

[13] David W. Tank and John J. Hopfield. Collective computation in neuronlike circuits. Scientific American, 257(6):104-114, December 1987.
[14] Gene A. Tagliarini and Edward W. Page. Solving constraint satisfaction problems with neural networks. In Proceedings of the First IEEE Conference on Neural Networks, San Diego, CA, III:741-747, June 1987.
CHAPTER 5
Simulated Annealing
The neural networks discussed in Chapters 2, 3, and 4 relied on the minimization of some function during either the learning process (Adaline and backpropagation) or the recall process (BAM and Hopfield network). The technique employed to perform this minimization is essentially the opposite of a standard heuristic used to find the maximum of a function. That technique is known as hill climbing.

The term hill climbing derives from a simple analogy. Imagine that you are standing at some unknown location in an area of hilly terrain, with the goal of walking up to the highest peak. The problem is that it is foggy and you cannot see more than a few feet in any direction. Barring obvious solutions, such as waiting for the fog to lift, a logical way to proceed would be to begin walking in the steepest upward direction possible. If you walk only upward at each step, you will eventually reach a spot where the only possible way to go is down. At this point, you are at the top of a hill. The question that remains is whether this hill is indeed the highest hill possible. Unfortunately, without further extensive exploration, that question cannot be answered.

The methods we have used to minimize energy or error functions in previous chapters often suffer from a similar problem: If only downward steps are allowed, the minimum that is reached may not be the lowest minimum possible. The lowest minimum is referred to as the global minimum, and any other minimum that exists is called a local minimum.

It is not always necessary, or even desirable, to reach the global minimum during a search. In one instance, it is impossible to reach any but the global minimum: in the case of the Adaline, the error surface was shown to be a hyperparaboloid with a single minimum, so finding a local minimum is impossible. In the BAM and discrete Hopfield model, we store items at the vertices of the Hamming hypercube, each of which occupies a minimum of the energy surface.
When recalling an item, we begin with some partial information and seek the local minimum nearest to the starting point. Hopefully, the item stored at that local minimum will represent the complete item
of interest. The point we reach may or may not lie at the global minimum of the energy function. Thus, we do not care whether the minimum that we reach is global; we desire only that it correspond to the data in which we are interested.

The generalized delta rule, used as the learning algorithm for the backpropagation network, performs gradient descent down an error surface with a topology that is not well understood. It is possible, as is seen occasionally in practice, that the system will end up in a local minimum. The effect is that the network appears to stop learning; that is, the error does not continue to decrease with additional training. Whether or not this situation is acceptable depends on the value of the error when the minimum is reached. If the error is acceptable, then it does not matter whether or not the minimum is global. If the error is unacceptable, the problem often can be remedied by retraining the network with different learning parameters, or with a different random weight initialization. In the case of backpropagation, we see that finding the global minimum is desirable, but we can live with a local minimum in many cases.

A further example of local-minima effects is found in the continuous Hopfield memory as it is used to perform an optimization calculation. The traveling-salesperson problem is a well-defined problem subject to certain constraints. The salesperson must visit each city once and only once on the tour. This restriction is known as a strong constraint; a violation of this constraint is not permitted in any real solution. An additional constraint is that the total distance traveled must be minimized. Failure to find the solution with the absolute minimum distance does not invalidate the solution completely. Any solution that does not have the minimum distance results in a penalty or cost increase. It is up to the individual to decide how much cost is acceptable in return for a relatively quick solution.
The minimum-distance requirement is an example of a weak constraint; it is desirable, but not absolutely necessary. Finding the absolute shortest route corresponds to finding the global minimum of the energy function. As with backpropagation, we would like to find the global minimum, but will settle for a local minimum, provided the cost is not too high.

In the following sections, we shall present one method for reducing the possibility of falling into a local minimum. That method is called simulated annealing because of its strong analogy to the physical annealing process applied to metals and other substances. Along the way, we shall briefly explore a few concepts in information theory, and discuss the relationship between information theory and a branch of physics known as statistical mechanics. Because we do not expect that you are an information theorist or a physicist, the discussion is somewhat brief. However, we do assume a knowledge of basic probability theory, a discussion of which can be found in many fundamental texts.
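The local-minimum trap that motivates simulated annealing can be made concrete with a few lines of gradient descent (a hedged, illustrative sketch; the function, step size, and iteration count are invented for the example):

```python
# Gradient descent on f(x) = x^4 - 3x^2 + x, which has two minima:
# a global one near x = -1.30 and a local one near x = 1.13.
# Pure downhill steps converge to whichever basin the start lies in.

def descend(x, rate=0.01, steps=5000):
    for _ in range(steps):
        grad = 4 * x**3 - 6 * x + 1    # f'(x)
        x -= rate * grad               # step strictly downhill
    return x

left = descend(-1.0)    # settles near the global minimum, x ~ -1.30
right = descend(1.0)    # trapped in the local minimum,   x ~  1.13
```

A start on the wrong side of the central hump never escapes its basin, no matter how many downhill steps are taken; this is precisely the behavior that an occasional uphill move, as in annealing, is designed to remedy.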
5.1 INFORMATION THEORY AND STATISTICAL MECHANICS In this section we shall present a few topics from the fields of information theory and statistical mechanics. We choose to discuss only those topics that have relevance to the discussion of simulated annealing, so that the treatment is brief.
5.1.1
Information-Theory Concepts
Every computer scientist understands what a bit is. It is a binary digit, a thing that has a value of either 1 or 0. Memory in a digital computer is implemented as a series of bits joined together logically to form bytes, or words. In the mathematical discipline of information theory, however, a bit is something else. Suppose some event, e, occurs with some probability, P(e). If we observe that e has occurred, then, according to information theory, we have received
    I(e) = log2 [1 / P(e)]                                          (5.1)

bits of information, where log2 refers to the log to the base 2. You may need to get used to this notion. For example, suppose that P(e) = 1/2, so there is a 50-percent chance that the event occurs. In that case, I(e) = log2 2 = 1 bit. We can, therefore, define a bit as the amount of information received when one of two equally probable alternatives is specified. If we know for sure that an event will occur, its occurrence provides us with no information: log2 1 = 0. Some reflection on these ideas will help you to understand the intent of Eq. (5.1). The most information is received when we have absolutely no clue regarding whether the event will occur. Notice also that bits can occur in fractional quantities.

Suppose we have an information source, which has a sequential output of symbols from the set, S = {s_1, s_2, ..., s_q}, with each symbol occurring with a fixed probability, {P(s_1), P(s_2), ..., P(s_q)}. A simple example would be an automatic character generator that types letters according to a certain probability distribution. If the probability of sending each symbol is independent of symbols previously sent, then we have what is called a zero-memory source. For such an information source, the amount of information received from each symbol is

    I(s_i) = log2 [1 / P(s_i)]                                      (5.2)

The average amount of information received per symbol is

    Σ_{i=1}^{q} P(s_i) I(s_i)                                       (5.3)
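Before moving on, Eqs. (5.1) and (5.2) are easy to check numerically (a hedged sketch; the probabilities are invented):

```python
import math

def info(p):
    """Information, in bits, received when an event of probability p occurs."""
    return math.log2(1.0 / p)

print(info(0.5))    # one of two equally likely alternatives -> 1.0 bit
print(info(1.0))    # a certain event carries no information -> 0.0
print(info(0.25))   # rarer events carry more information    -> 2.0 bits
```

The fractional case mentioned in the text arises for any probability that is not a power of 1/2; info(1/3), for instance, is roughly 1.585 bits.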
Equation (5.3) is the definition of the entropy, H(S), of the source, S:
    H(S) = Σ_{i=1}^{q} P(s_i) log2 [1 / P(s_i)]                     (5.4)

In a physical system, entropy is associated with a measure of disorder in the system. It is the same in information theory. The most disorderly system is the one where all symbols occur with equal probability. Thus, the maximum information (maximum entropy) is received from such a system.

Exercise 5.1: Show that the average amount of information received from a zero-memory source having q symbols occurring with equal probabilities, 1/q, is

    H = log2 q                                                      (5.5)

Exercise 5.2: Consider two sources, each of which sends a sequence of symbols whose possible values are the 26 letters of the English alphabet and the "space" character. The first source, S_1, sends the letters with equal probability. The second source, S_2, sends letters with the probabilities equal to their relative frequency of occurrence in English text. Which source transmits the most information? How many bits of information per symbol are transmitted by each source on the average?
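The entropy definition of Eq. (5.4), and the claim that equal probabilities maximize it, can be checked directly (a hedged sketch; the two distributions are invented):

```python
import math

def entropy(probs):
    """Entropy H(S) in bits per symbol, per Eq. (5.4); zero terms are skipped."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

uniform = [1.0 / 4] * 4           # four equally likely symbols
skewed = [0.7, 0.1, 0.1, 0.1]     # same alphabet, biased source

print(entropy(uniform))           # log2(4) = 2.0 bits, the maximum
print(entropy(skewed))            # strictly less than 2.0 bits
```

The uniform source attains H = log2 q exactly, which is the result Exercise 5.1 asks you to prove; any bias toward particular symbols lowers the average information per symbol.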
We can demonstrate explicitly that the maximum entropy occurs for a source whose symbol probabilities are all equal. Suppose we have two sources, S_1 and S_2, each containing q symbols, where the symbol probabilities are {P_{1i}} and {P_{2i}}, i = 1, ..., q, and the probabilities are normalized so that Σ_i P_{1i} = Σ_i P_{2i} = 1. The difference in entropy between these two sources is

    H_1 - H_2 = Σ_{i=1}^{q} [P_{1i} log2 (1/P_{1i}) - P_{2i} log2 (1/P_{2i})]

By using the trick of adding and subtracting the same quantity from the right side of the equation, we can write

    H_1 - H_2 = Σ_{i=1}^{q} [P_{1i} log2 (1/P_{1i}) + P_{1i} log2 P_{2i}
                             - P_{1i} log2 P_{2i} - P_{2i} log2 (1/P_{2i})]

              = -Σ_{i=1}^{q} P_{1i} log2 (P_{1i}/P_{2i})
                - Σ_{i=1}^{q} (P_{1i} - P_{2i}) log2 P_{2i}          (5.6)

If we identify S_2 as a source with equiprobable symbols, then H_2 = log2 q, as in Eq. (5.5). Since log2 P_{2i} = log2 (1/q) is independent of i, and
    Σ_i (P_{1i} - P_{2i}) = Σ_i P_{1i} - Σ_i P_{2i}
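The conclusion this derivation is heading toward, that no source can exceed the entropy of the equiprobable one, can be verified numerically (a hedged sketch with an invented distribution):

```python
import math

def entropy(probs):
    """H(S) in bits per symbol, per Eq. (5.4)."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

q = 4
p1 = [0.4, 0.3, 0.2, 0.1]    # an arbitrary normalized source S1
h1 = entropy(p1)
h2 = math.log2(q)            # the equiprobable source S2, Eq. (5.5)

print(h1 <= h2)              # True: H1 - H2 <= 0 for any choice of p1
```

Trying other normalized distributions for p1 always yields h1 <= h2, with equality only when every P_{1i} = 1/q.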