


A Neuromorphic Approach to Computer Vision (Solicited Paper)

Thomas Serre
McGovern Institute for Brain Research, Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Bldg 46-5155B
+1 (617) 253 0548
serre@mit.edu

Tomaso Poggio
McGovern Institute for Brain Research, Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Bldg 46-5177B
+1 (617) 253 5230
tp@ai.mit.edu

ABSTRACT
If physics was "the" science of the first half of the last century, biology was certainly the science of the second half. Neuroscience is often mentioned as the focus of the present century. The field of neuroscience has indeed grown very rapidly over the last several years, spanning a broad range of approaches, from molecular neurobiology to neuro-informatics and computational neuroscience. Computer science gave biology powerful new data-analysis tools that created bioinformatics and genomics, and thereby made possible the sequencing of the human genome. In a similar way, computer science techniques are at the heart of brain imaging and other branches of neuroscience. Computers are critical for the neurosciences at a much deeper level, however: they represent the best metaphor for the central mystery of how the brain produces intelligent behavior and intelligence itself. They also provide tools for performing experiments in information processing, effectively testing theories of the brain, in particular theories of aspects of intelligence such as sensory perception. The contribution of computer science to neuroscience happens at a variety of levels and is well recognized. Perhaps less obvious is that neuroscience is beginning to contribute powerful new ideas and approaches to artificial intelligence and computer science. Modern computational neuroscience models are no longer toy models: they are quantitatively detailed and, at the same time, they are starting to compete with state-of-the-art computer vision systems. In fact, we will argue in this review that over the next decades computational neuroscience may be a major source of new ideas and approaches in artificial intelligence.

Keywords
Computational neuroscience, neurobiology, models, cortex, theory, computer vision, artificial intelligence

Introduction
Understanding the processing of information in our cortex is a significant part of understanding how the brain works and, in a sense, of understanding intelligence itself. One of our most developed senses is vision. Primates can easily categorize images or parts of them, for instance as an office scene or as a face within that scene, and identify a specific object. Our visual capabilities are exceptional: despite decades of engineering effort, no computer algorithm has been able to match the level of performance of the primate visual system. It has been argued that vision is a form of intelligence; it is suggestive that the sentence "I see" is often used to mean "I understand"! Our visual cortex may serve as a proxy for the rest of the cortex and thus for intelligence itself. There is little doubt that even a partial solution to the question of which computations are performed by the visual cortex would be a major breakthrough in computational neuroscience and in neuroscience more broadly. It would begin to explain one of the most amazing abilities of the brain and open doors to other aspects of intelligence such as language and planning.
It would also bridge the gap between neurobiology and the information sciences, making it possible to develop computer algorithms that follow the information processing principles used by biological organisms and honed by natural evolution.
The past fifty years of experimental work in visual neuroscience has generated a large and rapidly increasing amount of data. Today's quantitative models bridge several levels of understanding, from biophysics to physiology and behavior. Some of these models already compete with state-of-the-art computer vision systems and are close to human-level performance for specific visual tasks. In this review, we describe recent work in our group towards a theory of cortical visual processing. In contrast to other models that address the computations in any one given brain area (such as primary visual cortex) or attempt to explain a particular phenomenon (such as contrast adaptation or a specific visual illusion), we describe a large-scale model that attempts to mimic the main information processing steps across multiple brain areas and millions of neuron-like units. We believe that a first step towards understanding cortical function may take the form of a detailed, neurobiologically plausible model taking into account the connectivity, the biophysics and the physiology of cortex. Models can provide a much-needed framework for summarizing and integrating existing data and for planning, coordinating and interpreting new experiments. Models can be powerful tools in basic research, integrating knowledge across several levels of analysis – from molecular to synaptic, cellular, systems and complex visual behavior. Models, however, as we will discuss at the end of the paper, are limited in their explanatory power; ideally they should eventually lead to a deeper and more general theory.
Figure 1: The problem of sample complexity. A hypothetical two-dimensional (face) classification problem (red line): one category is represented with "+" and the other with "–". Insets show 2D transformations (translations and scalings) applied to examples from the two classes. Panels (A) and (B) illustrate two different representations of the same set of images. The representation in (B), which is tolerant to the exact position and scale of the object within the image, leads to a simpler decision function (e.g., a linear classifier) and requires fewer training examples to achieve a similar level of performance, thus lowering the sample complexity of the classification problem. In the limit, learning in panel (B) could be done with only two training examples (illustrated in blue).
We first consider the role of the visual cortex and review some of the key computational principles underlying the processing of information during visual recognition. We then describe a computational neuroscience model – representative of a whole class of older models – that implements those principles, and we discuss some of the evidence in its favor. When tested with natural images, the model is able to perform robust object recognition on par with then-current computer vision systems and at the level of human performance for a specific class of rapid visual recognition tasks.
The initial success of this research represents a case in point for arguing that, over the next decade, progress in computer vision and artificial intelligence may benefit directly from progress in neuroscience.
GOAL OF THE VISUAL SYSTEM
One key computational issue in object recognition is the specificity-invariance trade-off. On the one hand, recognition must be able to finely discriminate between different objects or object classes (such as the faces illustrated in insets A and B of Figure 1). At the same time, recognition must be tolerant to object transformations such as scaling, translation, illumination, changes in viewpoint and clutter, as well as to non-rigid transformations such as variations in shape within a class (for instance, changes of facial expression for the recognition of faces). Though the tolerance shown by our visual system is not complete, it is still significant. A key challenge posed by the visual cortex is how well it deals with the poverty-of-stimulus problem: primates can learn to recognize an object across quite different images from far fewer labeled examples than our present learning theory and learning algorithms predict. For instance, discriminative algorithms such as Support Vector Machines (SVMs) can learn a complex object recognition task from a few hundred labeled images. This is a small number compared with the apparent dimensionality of the problem (millions of pixels), but a child, or even a monkey, can apparently learn the same task from just a handful of examples. As an example of the prototypical problem in visual recognition, imagine that a (naïve) machine is shown one image of a given person and one image of another person. The system's task is to discriminate future images of these two people. The system has not seen other images of these two people, though it has seen many images of other people and other objects and their transformations, and may have learned from them in an unsupervised way. Can the system learn to perform the classification task correctly with just two (or very few) labeled examples?
For simplicity, imagine trying to build such a classifier from the output of two cortical cells (as illustrated in Figure 1). Here the responses of these two cells define a 2D feature space in which visual stimuli are represented. In a more realistic setting, objects would be represented by the response patterns of thousands of such neurons. Here we denote visual examples from the two people with "+" and "–" signs. Panels (A) and (B) illustrate what the recognition problem looks like when these two neurons are sensitive vs. invariant to the precise position of the object within their receptive fields. In both cases it is possible to find a separation (the red lines indicate one such possible separation) between the two classes. In fact, it has been shown that certain learning algorithms, such as SVMs with Gaussian kernels, can solve any discrimination task of arbitrary difficulty (in the limit of an infinite number of training examples). In other words, with certain classes of learning algorithms we are guaranteed to be able to find a separation for the problem at hand, irrespective of the difficulty of the recognition task. However, learning to solve the problem may require a prohibitively large number of training examples.
In that respect, the two representations in panels (A) and (B) are not equal: the representation in panel (B) is far superior to the one in panel (A). With no prior assumptions on the class of functions to be learned, the "simplest" classifier that can separate the data in panel (B) is much simpler than the "simplest" classifier that can separate the data in panel (A). The number of wiggles of the separation line gives an informal estimate of the complexity of a classifier, which is related to the number of parameters to be learned. The sample complexity of the problem derived from the invariant representation in panel (B) is much lower than that of the problem in panel (A): learning to categorize the data points in panel (B) will require far fewer training examples than in panel (A), and may be done with as few as two examples. Thus the key problem in vision is what can be learned with a small number of examples, and how.
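To make the sample-complexity argument concrete, the sketch below (ours, not part of the original study; the one-dimensional "images", the two class templates and all parameters are invented for illustration) trains the same linear classifier on a raw pixel representation and on a simple position-tolerant representation of a toy recognition problem, in the spirit of panels (A) and (B) of Figure 1. With the tolerant representation a handful of labeled examples is typically enough; with raw pixels many more are needed.

# Toy illustration of sample complexity (illustrative assumptions throughout):
# each "image" is a 1-D array containing one of two class templates placed at
# a random position. Representation A: raw pixels. Representation B: the max
# correlation with each template over all positions (position-tolerant).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
T = 64                                            # image length
templates = {0: np.array([1., 2., 3., 2., 1.]),   # hypothetical class-0 pattern
             1: np.array([3., 1., 1., 1., 3.])}   # hypothetical class-1 pattern

def make_image(label):
    """Place the class template at a random position (the nuisance variable)."""
    img = 0.1 * rng.standard_normal(T)
    pos = rng.integers(0, T - 5)
    img[pos:pos + 5] += templates[label]
    return img

def invariant_features(img):
    """Max response to each template over all positions: a 2-D tolerant code."""
    return np.array([max(img[i:i + 5] @ t for i in range(T - 5))
                     for t in templates.values()])

def dataset(n):
    labels = np.arange(n) % 2                     # balanced labels 0,1,0,1,...
    return np.stack([make_image(y) for y in labels]), labels

for n_train in (2, 10, 100):
    Xtr, ytr = dataset(n_train)
    Xte, yte = dataset(1000)
    acc_raw = SVC(kernel="linear").fit(Xtr, ytr).score(Xte, yte)
    Ftr = np.array([invariant_features(x) for x in Xtr])
    Fte = np.array([invariant_features(x) for x in Xte])
    acc_inv = SVC(kernel="linear").fit(Ftr, ytr).score(Fte, yte)
    print(f"{n_train:>3} labeled examples: raw pixels {acc_raw:.2f}, "
          f"position-tolerant features {acc_inv:.2f}")

The classifier and the decision rule are identical in the two cases; only the representation changes, which is exactly the point made in Figure 1.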
Our main argument is not that a low-level representation such as the one provided by the retina would be unable to support robust object recognition. Indeed, relatively good computer vision systems developed in the 1990s were based on simple retina-like representations and on rather complex decision functions (such as Radial Basis Function (RBF) networks). The main problem with these systems was that they required a prohibitively large number of training examples compared to humans. More recent work in computer vision suggests that a hierarchical architecture may provide a better solution to this problem (see also [2] for a related argument).
For instance, Heisele et al. (see [3] for a recent review) designed a hierarchical system for the detection and recognition of faces. The approach is based on a hierarchy of "component experts" performing a local search for one facial component (e.g., an eye or a nose) over a range of positions and scales.
Experimental evidence from [3] suggests that such a hierarchical system, based exclusively on linear (SVM) classifiers, significantly outperformed a shallow architecture that tries to classify a face as a whole, even though the latter relied on more complex kernels. Here we suggest that the visual system may be using a similar strategy to recognize objects, with the goal of reducing the sample complexity of the classification problem. In this view, the visual cortex transforms the raw image into a position- and scale-tolerant representation through a hierarchy of processing stages, whereby each layer gradually increases the tolerance of the image representation to position and scale. After several such processing stages, the resulting image representation can be used much more efficiently for task-dependent learning and classification by higher brain areas. Such processing stages can be learned during development from temporal streams of natural images by exploiting the statistics of natural environments in two ways: correlations over images provide information-rich features at various levels of complexity and size, while correlations over time are used to learn equivalence classes of these features under transformations such as shifts in position and changes in scale. The combination of these two learning processes allows the efficient sharing of visual features between object categories and makes the learning of new objects and categories easier, since they inherit the invariance properties of the representation learned from previous experience in the form of basic features common to other objects.
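The second mechanism, learning equivalence classes from correlations over time, can be sketched as follows (a toy illustration under our own assumptions: the one-dimensional stimuli, the single template and the trace-gated Hebbian rule below are not the specific learning rule used in the model). A pattern drifts across the input over successive frames; position-specific feature detectors that fire in temporally adjacent frames become wired onto the same pooling unit, which thereby comes to represent the feature as an equivalence class over positions.

# Minimal sketch (illustrative only): temporal continuity groups
# position-specific features into one pooling ("C") unit.
import numpy as np

T, W = 32, 5                               # image length, template width
template = np.array([1., 2., 3., 2., 1.])  # hypothetical local feature
n_pos = T - W + 1                          # one position-specific S unit per location

def s_responses(img):
    """Template match at every image location: one 'S' unit per position."""
    return np.array([img[i:i + W] @ template for i in range(n_pos)])

# Temporal sequence: the same pattern slides by one pixel per frame.
frames = []
for pos in range(n_pos):
    img = np.zeros(T)
    img[pos:pos + W] = template
    frames.append(img)

# Trace-gated Hebbian learning: a slowly decaying memory of recent activity
# gates the weight update, so S units active in nearby frames end up wired to
# the same C unit -- an equivalence class of the feature under translation.
w = np.zeros(n_pos)          # weights from the S units onto a single C unit
trace, lam, eta = 0.0, 0.9, 0.1
for img in frames:
    s = s_responses(img)
    trace = lam * trace + (1 - lam) * s.max()
    w += eta * trace * s

w /= w.max()
print("positions pooled by the C unit:", int(np.count_nonzero(w > 0.1)), "of", n_pos)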
Below we review evidence for this hierarchical architecture and for the two mechanisms described above.
Hierarchical Architecture and invariant recognition
Several lines of evidence (from both human psychophysics and monkey electrophysiology studies) suggest that the primate visual system exhibits at least some invariance to position and scale. While the precise amount of invariance is still under debate, there is general agreement that there is at least some generalization to position and scale. The neural mechanisms underlying such invariant visual recognition have been the subject of much computational and experimental work over the past decades. One general class of computational models postulates that the hierarchical organization of the visual cortex is key to this process (see [4] for an alternative viewpoint). The processing of shape information in the visual cortex follows a series of stages, starting from the retina, through the Lateral Geniculate Nucleus (LGN) of the thalamus, to primary visual cortex (V1) and the extrastriate visual areas V2, V4 and the inferotemporal (IT) cortex. In turn, IT provides a major source of input to prefrontal cortex (PFC), which is involved in linking perception to memory and action (see [5] for references).
As one progresses along the ventral stream of the visual cortex, neurons become selective for increasingly complex stimuli: from simple oriented bars and edges in early visual area V1, to moderately complex features in intermediate areas (such as combinations of orientations), to complex objects and faces in higher visual areas such as IT. In parallel with this increase in the complexity of the preferred stimulus, the invariance properties of neurons also seem to increase.
Neurons become more and more tolerant to the exact position and scale of the stimulus within their receptive fields. As a result of this increase in invariance, the receptive field size of neurons increases, from about one degree or less in V1 to several degrees in IT.
There is increasing evidence that IT, which has been critically linked with the monkey's ability to recognize objects, provides a representation of the image that supports recognition tolerant to image transformations. For instance, Logothetis and colleagues showed that monkeys could be trained to recognize paperclip-like wireframe objects at one specific location and scale [6]. After training, recordings in the IT cortex of these animals revealed significant selectivity for the trained objects. Because the monkeys were unlikely to have seen the specific paperclips prior to training, this experiment provides indirect evidence of learning. More importantly, the selective neurons also exhibited some range of invariance with respect to the exact position (between 2 and 4 degrees) and scale (around 2 octaves) of the stimulus, even though the objects had never been presented at these new positions and scales before testing. More recently, work by Hung et al. [7] showed that it is possible to train a (linear) classifier to robustly read out, from a population of IT neurons, the category of a briefly flashed stimulus.
Furthermore, it was shown that the classifier was able to generalize to a range of positions and scales (similar to Logothetis's data) that had never been presented during the training of the classifier. This suggests that the observed tolerance to 2D transformations is a property of the population of neurons learned from visual experience, but that it is available for a novel object without the need for object-specific learning (depending on the difficulty of the task).
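The logic of such a population read-out can be sketched with simulated data (ours; the population size, tuning widths and noise level below are arbitrary assumptions, not fits to the recordings in [7]). A linear classifier is trained on the responses of object-selective, position-tolerant units to stimuli shown at a single position and is then tested at positions it never saw during training; because the representation itself is tolerant, the read-out generalizes.

# Minimal sketch (simulated units, not recordings): linear read-out from a
# population of position-tolerant, object-selective units generalizes to
# stimulus positions never seen during training.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_units, n_objects = 200, 2
pref_obj = rng.integers(0, n_objects, n_units)   # each unit prefers one object
pref_pos = rng.uniform(-4, 4, n_units)           # and one retinal position (deg)
tolerance = 3.0                                  # broad position tuning (deg)

def population_response(obj, pos, n_trials):
    """Noisy responses of all units to object `obj` shown at position `pos`."""
    gain = np.where(pref_obj == obj, 1.0, 0.3)                    # object selectivity
    tuning = np.exp(-(pos - pref_pos) ** 2 / (2 * tolerance**2))  # position tolerance
    return gain * tuning + 0.2 * rng.standard_normal((n_trials, n_units))

def trials(pos, n_per_obj=50):
    X = np.vstack([population_response(o, pos, n_per_obj) for o in range(n_objects)])
    y = np.repeat(np.arange(n_objects), n_per_obj)
    return X, y

clf = SVC(kernel="linear").fit(*trials(pos=0.0))   # train at one position only
for test_pos in (0.0, 2.0, 4.0):                   # test at novel positions
    X, y = trials(pos=test_pos)
    print(f"position {test_pos:+.1f} deg: accuracy {clf.score(X, y):.2f}")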


Figure 2: Hierarchical feedforward models of the visual cortex (see text for details).
COMPUTATIONAL MODELS OF OBJECT RECOGNITION IN CORTEX
We have developed [5, 8] – in close cooperation with experimental labs – an initial quantitative model of feedforward hierarchical processing in the ventral stream of the visual cortex (see Figure 2). The resulting model effectively integrates the large body of neuroscience data (summarized earlier) that characterizes the properties of neurons along the object recognition processing hierarchy. In addition, the model is sufficient to mimic human performance on difficult visual recognition tasks [9], while performing at least as well as most current computer vision systems [10]. Feedforward hierarchical models have a long history, starting with Marko and Giebel's homogeneous multi-layered architecture
[11] in the 1970s and, later, Fukushima's Neocognitron [12]. One of the key computational mechanisms in these and other hierarchical models of visual processing originates from the pioneering physiological studies and models of Hubel and Wiesel (see Box 1). The basic idea in these models is to build an increasingly complex and invariant object representation in a hierarchy of stages by progressively integrating (i.e., pooling) convergent inputs from lower levels.
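A minimal sketch of this simple-to-complex motif is given below (our illustration; the toy image size, the two hand-made edge templates and the pooling neighborhood are assumptions, not the parameters of the actual model). "S" units perform template matching on their afferents, and "C" units pool the S responses with a max over a local neighborhood, so the output keeps its selectivity for the template while becoming tolerant to the exact position of the stimulus.

# Minimal sketch (toy dimensions): one simple/complex stage of a
# Hubel-Wiesel style hierarchy. "S" units do template matching; "C" units
# pool S responses with a local max, trading precision for tolerance.
import numpy as np

def s_layer(image, templates):
    """Template matching at every location: one response map per template."""
    H, W = image.shape
    k = templates.shape[1]
    maps = np.zeros((len(templates), H - k + 1, W - k + 1))
    for t, tpl in enumerate(templates):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                maps[t, i, j] = np.sum(image[i:i + k, j:j + k] * tpl)
    return maps

def c_layer(s_maps, pool=4):
    """Max pooling over a local spatial neighborhood: position-tolerant units."""
    n, H, W = s_maps.shape
    out = np.zeros((n, H // pool, W // pool))
    for t in range(n):
        for i in range(0, H - pool + 1, pool):
            for j in range(0, W - pool + 1, pool):
                out[t, i // pool, j // pool] = s_maps[t, i:i + pool, j:j + pool].max()
    return out

# Two 3x3 "afferent" templates: a vertical and a horizontal edge detector.
templates = np.stack([
    np.array([[-1., 0., 1.]] * 3),      # vertical edge detector
    np.array([[-1., 0., 1.]] * 3).T,    # horizontal edge detector
])

# The same vertical bar at two nearby positions: the S responses shift with
# the bar, while the max-pooled C responses are unchanged (position tolerance)
# and the horizontal unit stays silent (selectivity is preserved).
for shift in (0, 2):
    img = np.zeros((16, 16))
    img[:, 6 + shift] = 1.0
    c = c_layer(s_layer(img, templates))
    print(f"shift={shift}: C vertical {c[0].max():.1f}, C horizontal {c[1].max():.1f}")

Stacking such S/C pairs, with the templates of higher S stages defined over combinations of lower-level C outputs, yields the gradual and parallel increase in selectivity and invariance described above.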


Building upon several existing neurobiological models [13-19], conceptual proposals


MTk4NzwvWWVhcj48UmVjTnVtPjU2PC9SZWNOdW0+PHJlY29yZD48cmVjLW51bWJlcj41NjwvcmVj

LW51bWJlcj48Zm9yZWlnbi1rZXlzPjxrZXkgYXBwPSJFTiIgZGItaWQ9IndleHR3cncwYmZlcHd2

ZXN4MG94MGE5ODJmZXNmd3R3d3dmZiI+NTY8L2tleT48L2ZvcmVpZ24ta2V5cz48cmVmLXR5cGUg

bmFtZT0iSm91cm5hbCBBcnRpY2xlIj4xNzwvcmVmLXR5cGU+PGNvbnRyaWJ1dG9ycz48YXV0aG9y

cz48YXV0aG9yPkkuIEJpZWRlcm1hbjwvYXV0aG9yPjwvYXV0aG9ycz48L2NvbnRyaWJ1dG9ycz48

dGl0bGVzPjx0aXRsZT5SZWNvZ25pdGlvbi1ieS1Db21wb25lbnRzOiBBIFRoZW9yeSBvZiBIdW1h

biBJbWFnZSBVbmRlcnN0YW5kaW5nPC90aXRsZT48c2Vjb25kYXJ5LXRpdGxlPlBzeWNoLiBSZXYu

PC9zZWNvbmRhcnktdGl0bGU+PC90aXRsZXM+PHBhZ2VzPjExNS0tMTQ3PC9wYWdlcz48dm9sdW1l

Pjk0PC92b2x1bWU+PGRhdGVzPjx5ZWFyPjE5ODc8L3llYXI+PC9kYXRlcz48dXJscz48L3VybHM+

PC9yZWNvcmQ+PC9DaXRlPjwvRW5kTm90ZT4A

[20-23] and computer vision systems [12, 24], we have been developing [5, 15] (see also [25, 26]) a similar computational theory (see Fig. 1) that attempts to quantitatively account for a host of recent anatomical and physiological data.

The feedforward hierarchical model of Figure 2 assumes two classes of functional units: simple and complex units. Simple units act as local template-matching operators: they increase the complexity of the image representation by pooling over local afferent units with selectivities for different image features (for instance, edges at different orientations). Complex units, on the other hand, increase the tolerance of the representation with respect to 2D transformations by pooling over afferent units with similar selectivity but slightly different preferred positions and scales.
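To make these two operations concrete, here is a minimal numpy sketch (an illustration in the spirit of this class of models, not the authors' implementation) of one simple unit performing Gaussian template matching on a patch of its afferents, and one complex unit taking a max over simple units that share a template but differ in position. The function names, the tuning width sigma and the toy one-dimensional "image" are illustrative assumptions.

    import numpy as np

    def simple_unit(afferents, template, sigma=1.0):
        """Template matching (an and-like operation): the response is a
        Gaussian function of the distance between the afferent activity
        pattern and the unit's stored template."""
        d2 = np.sum((afferents - template) ** 2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def complex_unit(simple_responses):
        """Invariance pooling: a complex unit takes the max over simple
        units that share a template but differ in position or scale."""
        return np.max(simple_responses)

    # Toy usage: one template matched at several positions of a 1-D "image".
    rng = np.random.default_rng(0)
    image = rng.random(16)
    template = image[5:9].copy()          # imprint a template from the input itself
    positions = range(len(image) - len(template) + 1)
    s_responses = np.array([simple_unit(image[p:p + 4], template) for p in positions])
    c_response = complex_unit(s_responses)  # tolerant to where the pattern occurs
    print(s_responses.round(2), c_response)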
Learning and plasticity

How much of the organization of the visual cortex is shaped by visual experience as opposed to genetics remains a matter of debate. A recent fMRI study [27] showed that the patterns of neural activity elicited by certain ecologically important classes of objects, such as faces and places, are significantly more similar in monozygotic than in dizygotic twins. These results suggest that genes may play a significant role in the way the visual cortex is wired to process certain object classes. At the same time, several electrophysiological studies have demonstrated learning and plasticity in the adult monkey (see for instance [28]). Learning is likely to be both faster and easier to elicit in higher visually responsive areas such as PFC or IT [28] than in lower areas. This makes intuitive sense: for the visual system to remain stable, the time scale of learning should increase as one ascends the ventral stream. In the model of Fig. 2, we assume that unsupervised learning from V1 to IT happens during development, in a sequence that starts with the lower areas. In reality, learning may continue throughout adulthood, certainly at the level of IT and perhaps also in intermediate and lower areas.
Box 1: Functional classes of cells and learning.

Simple and complex cells. Following their work on the striate cortex [20], Hubel & Wiesel first described two classes of functional cells. Simple cells respond best to bar-like (or edge-like) stimuli at a particular orientation, position and phase (i.e., a white bar on a black background or a dark bar on a white background) within their relatively small receptive fields. Complex cells, while also selective for oriented bars, tend to have larger receptive fields (about twice as large) and exhibit some tolerance to the exact position (and phase) of the bar within their receptive fields. Hubel & Wiesel proposed specific pooling mechanisms that could explain the response properties of these cells. Simple-cell-like receptive fields could be obtained by pooling the activity of a small set of cells tuned to spots of light (as observed for ganglion cells in the retina and in the Lateral Geniculate Nucleus) aligned along a preferred axis of orientation (not shown on the figure). Similarly, position tolerance at the complex-cell level (green on the figure) could be obtained by pooling over afferent simple cells (at the level below) with the same preferred orientation but slightly different positions. Recent work has provided evidence for such selective pooling mechanisms in V1 [30].
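As a rough illustration of the wiring scheme just described, the following sketch builds an oriented, simple-cell-like receptive field by summing a few center-surround (difference-of-Gaussians) subunits aligned along a common axis; the subunit profile, sizes and spacing are assumed values chosen only for illustration.

    import numpy as np

    def dog(size, cx, cy, sc=1.0, ss=2.0):
        """Center-surround (difference-of-Gaussians) subunit, akin to an
        LGN-type receptive field centered at (cx, cy)."""
        y, x = np.mgrid[0:size, 0:size]
        r2 = (x - cx) ** 2 + (y - cy) ** 2
        center = np.exp(-r2 / (2 * sc ** 2)) / (2 * np.pi * sc ** 2)
        surround = np.exp(-r2 / (2 * ss ** 2)) / (2 * np.pi * ss ** 2)
        return center - surround

    size = 33
    # Three subunits aligned along the vertical axis -> an orientation-selective,
    # simple-cell-like receptive field elongated along the alignment axis.
    centers = [(16, 10), (16, 16), (16, 22)]
    simple_rf = sum(dog(size, cx, cy) for cx, cy in centers)
    print(simple_rf.shape, float(simple_rf.max()))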
Extending these ideas from primary visual cortex to higher areas of the visual cortex led to a class of models of object recognition, the feedforward hierarchical models (see [5] for a recent review). Illustrated at the top of the figure on the left is a V2-like simple cell obtained by combining several V1 complex cells tuned to bars at different orientations. Iterating these selective pooling mechanisms leads to a hierarchical architecture like the one described in Figure 2. Along the hierarchy, units become selective for increasingly complex stimuli while at the same time exhibiting more and more invariance with respect to position (and scale).

Learning of selectivity and invariance. In the model of Figure 2, simple units are selective for specific conjunctions of their inputs (similar to an and-like operation). Their wiring thus corresponds to learning correlations between inputs at the same point in time (i.e., for simple cells in V1, the bar-like arrangements of LGN inputs; beyond V1, more elaborate arrangements of bar-like subunits, etc.). This amounts to learning which combinations of features appear most frequently in images (i.e., which sets of inputs are consistently co-active) and becoming selective to these patterns. Conversely, the wiring of complex units may correspond to learning how to associate frequent transformations in time – such as translations and changes in scale – of specific image features coded by afferent (simple) units. The wiring of the complex units thus reflects learning of correlations across time (because of object motion): for V1-like complex units, for example, it means learning which afferent units with the same preferred orientation and neighboring positions should be wired together, because such a pattern often changes smoothly in time under translation [31].
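The two learning processes described above can be caricatured in a few lines. This is only a schematic sketch consistent with the description, not the model's actual learning procedure: the simple-unit template is "imprinted" from a snapshot of co-active afferents, and the complex-unit wiring uses a Foldiak-style trace rule [31], in which a slowly decaying memory trace links afferents that become active in close temporal succession. The seed weight, trace constant and learning rate are arbitrary illustrative values.

    import numpy as np

    rng = np.random.default_rng(1)

    # --- Simple-unit learning: become selective to a co-active input pattern. ---
    afferent_snapshot = rng.random(20)       # afferent activities at one instant
    template = afferent_snapshot.copy()      # "imprint": the unit now prefers this conjunction

    # --- Complex-unit learning: Foldiak-style trace rule [31]. ---
    # A bar sweeping across 10 positions: one afferent is active per frame.
    n = 10
    frames = np.eye(n)                       # frame t activates afferent t
    w = np.zeros(n)
    w[0] = 1.0                               # seed: the unit initially responds to position 0
    trace, lam, eta = 0.0, 0.8, 0.5          # trace decay and learning rate (assumed values)
    for x in frames:
        drive = float(w @ x)                 # instantaneous response to the current frame
        trace = lam * trace + (1 - lam) * drive
        w += eta * trace * (x - w)           # wire in afferents active while the trace is high
    print(np.round(w, 2))  # afferents at positions visited soon after the seed gain weight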
Unsupervised learning in the ventral stream of the visual cortex

With the exception of the task-specific units at the top of the hierarchy (denoted "visual routines"), learning in the model described in Figure 2 remains unsupervised, thus closely mimicking a developmental learning stage. As emphasized by several authors, statistical regularities in natural visual scenes may provide critical cues that allow the visual system to learn with very limited or no supervision. One of the key goals of the visual system may be to adapt to the statistics of its natural environment through visual experience and perhaps evolution. In the model of Figure 2, the selectivity of simple and complex units can be learned from natural video sequences (see Box 1 for details).

Supervised learning in higher areas

After this initial developmental learning stage, learning a new object category only requires training task-specific circuits at the top of the ventral stream hierarchy. The ventral stream hierarchy thus provides a position- and scale-invariant representation to task-specific circuits beyond IT, which learn to generalize over transformations other than image-plane transformations, such as 3D rotations, that have to be learned anew for each object (or category). For instance, pose-invariant face categorization circuits may be built, possibly in PFC, by combining several units tuned to different face examples, including different people, views and lighting conditions (possibly in IT). In a default state (when no specific visual task is set), a default routine may be running (perhaps the routine: What is there?). As an example of a simple routine, consider a classifier that receives the activity of a few hundred IT-like units tuned to examples of the target object and of distractors. While learning in the layers below is stimulus-driven, the PFC-like classification units are trained in a supervised way (using a perceptron-like learning rule).
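A minimal sketch of such a task-specific readout, assuming a generic perceptron update on synthetic "IT-like" activity (the number of units, the learning rate and the toy data are illustrative choices, not values from the text):

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy stand-in for the activity of a few hundred IT-like units on each image:
    # a small subset of units responds more strongly when the target is present.
    n_units, n_images = 300, 200
    labels = rng.integers(0, 2, n_images) * 2 - 1          # +1 = target present, -1 = distractor
    informative = rng.random(n_units) < 0.1                # units carrying target information
    X = rng.normal(0.0, 1.0, (n_images, n_units))
    X[np.ix_(labels == 1, informative)] += 1.0             # targets elevate the informative units

    # Perceptron-like supervised learning of a PFC-like classification unit.
    w, b, lr = np.zeros(n_units), 0.0, 0.1
    for _ in range(20):                                    # a few passes over the training set
        for x, y in zip(X, labels):
            if y * (w @ x + b) <= 0:                       # misclassified -> update the weights
                w += lr * y * x
                b += lr * y
    accuracy = np.mean(np.sign(X @ w + b) == labels)
    print(f"training accuracy: {accuracy:.2f}")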
Immediate Recognition

Box 2: Summary of quantitative data that are compatible with the model described above. Black corresponds to data that were used to derive the parameters of the model, red to data that are consistent with the model (but were not used to fit model parameters) and blue to correct predictions by the model. Notation: PFC = prefrontal cortex, V1 = visual area I (primary visual cortex), V4 = visual area IV, IT = inferotemporal cortex. Data from these areas come from monkey electrophysiology studies; LOC (Lateral Occipital Complex) data come from fMRI in humans; the Psych. entries are psychophysics on human subjects.

An important aspect of the visual object recognition hierarchy (see Figure 2), namely the role of the anatomical back-projections abundantly present between almost all areas of visual cortex, remains a matter of debate. A commonly accepted hypothesis is that the basic processing of information is feedforward [32]. This is supported most directly by the short times required for a selective response to appear in cells at all stages of the hierarchy. Neural recordings from IT in the monkey [7] show that the activity of small neuronal populations, over very short time intervals (as small as 12.5 ms) and only about 100 ms after stimulus onset, contains surprisingly accurate and robust information supporting a variety of recognition tasks.
While this does not rule out local feedback loops within an area, it does suggest that a core hierarchical feedforward architecture like the one described here may be a reasonable starting point for a theory of visual cortex aiming to explain immediate recognition, the initial phase of recognition before eye movements and high-level processes take place.

Agreement with experimental data

Since it was originally developed [5, 15], the model of Fig. 2 has been able to explain a range of new experimental findings, including data that were not used to derive or fit the model parameters.
The model seems to be qualitatively and quantitatively consistent with (and in some cases actually predicts, see [5]) several properties of subpopulations of cells in V1, V4, IT and PFC, as well as fMRI and psychophysical data (see Box 2 for a complete list of findings). We recently compared the performance of this model with that of human observers in a rapid animal vs. non-animal recognition task [9], for which recognition is fast and cortical back-projections are presumably less relevant. The results indicate that the model predicts human performance quite well on this task, suggesting that it may provide a satisfactory description of the feedforward path. In particular, for this experiment we broke down the performance of the model and of the human observers into four image categories with varying amounts of clutter. The performance of both the model and the human observers was highest (~90% correct for both) on images in which the amount of information is maximal and the amount of clutter minimal, and it decreased monotonically as the amount of clutter in the image increased. This decrease in performance with increasing clutter is likely to reflect a key limitation of this type of feedforward architecture.
This result is in agreement with the reduced selectivity of neurons in V4 and IT when multiple stimuli are presented within their receptive fields, an effect for which the model provides a good quantitative fit [5] to the neurophysiological data [33].
Application to Computer Vision

How does the model [5] perform in real-world recognition tasks, and how does it compare to state-of-the-art AI systems? Given the many specific biological constraints that the theory had to satisfy (e.g., using only biophysically plausible operations, receptive field sizes, ranges of invariance, etc.), it was not clear how well the model implementation described above would perform in comparison to systems that have been heuristically engineered for these complex tasks. At the time – about five years ago – we were surprised to find that the model was capable of recognizing complex images quite well (see [10]). The model performed at a level comparable to some of the best existing systems on the CalTech-101 database of 101 object categories, with a recognition rate of about 55% (chance level < 1%; see [10] and also the extension by Mutch & Lowe [25]). A related system with fewer layers, less invariance and more units achieves an even better recognition rate on the CalTech data set [34].
In parallel, we also developed an automated system for parsing street-scene images [10], based in part on the class of models described above. The system is able to recognize seven different object categories (including cars, bikes, skies, roads, buildings and trees) in natural images of street scenes, despite very large variations in shape (e.g., trees in summer and winter, SUVs as well as compact cars under arbitrary viewpoints).
An emerging application of computer vision is content-based recognition and search in videos. Again, neuroscience may suggest an avenue for approaching this problem. We have developed an initial model for the recognition of biological motion and actions from video sequences. The system is based on the organization of the dorsal stream of the visual cortex [35], which has been critically linked to the processing of motion information, from V1 and MT to higher motion-selective areas MST/FST and STS. The system relies on computational principles very similar to those used in the ventral stream model described above, but it starts with spatio-temporal filters modeled after motion-sensitive cells in the primary visual cortex. We recently evaluated the performance of the system on the recognition of actions (of both humans and animals) in real-world video sequences [35] and found that this model of the dorsal stream competed with a state-of-the-art computer vision system (which itself outperforms many other systems) on all three datasets tested (see [35] for details). In addition, we found that learning in this model produces a large dictionary of optic-flow patterns that seems consistent with the response properties of cells in the middle temporal (MT) area to both isolated gratings and plaids (i.e., two superimposed gratings).
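To give a flavor of such a front end, here is a minimal sketch of a space-time Gabor filter of the kind commonly used to model direction-selective cells in primary visual cortex, combined as a quadrature pair into a phase-insensitive motion "energy" response; the filter sizes, frequencies and the toy clip are generic illustrative choices, not the parameters of the system described here.

    import numpy as np

    def spacetime_gabor(size=9, frames=7, sf=0.25, tf=0.15, sigma=2.5, tau=1.5, phase=0.0):
        """Space-time Gabor tuned to a drifting vertical grating: a Gaussian
        envelope in (x, y, t) times a carrier whose phase advances jointly
        in space (x) and time (t)."""
        t, y, x = np.mgrid[:frames, :size, :size].astype(float)
        x -= size // 2; y -= size // 2; t -= frames // 2
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2) - t**2 / (2 * tau**2))
        carrier = np.cos(2 * np.pi * (sf * x - tf * t) + phase)
        return envelope * carrier

    # Quadrature pair -> phase-insensitive motion "energy" for a small video clip.
    clip = np.random.default_rng(3).random((7, 9, 9))       # toy 7-frame, 9x9-pixel clip
    even, odd = spacetime_gabor(phase=0.0), spacetime_gabor(phase=np.pi / 2)
    energy = np.sum(clip * even) ** 2 + np.sum(clip * odd) ** 2
    print(round(float(energy), 3))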
CONCLUSION AND FUTURE DIRECTIONS

The demonstration that a model designed to mimic the known anatomy and physiology of the visual system performs well on computer vision benchmarks may suggest that neuroscience is on the verge of providing novel and useful paradigms to computer vision, and perhaps to other areas of computer science. The model we described can obviously be modified and improved by taking into account new experimental data (for instance, more detailed properties of specific visual areas such as V1 [36]), by implementing several of its implicit assumptions such as the learning of invariances from sequences of natural images, by taking into account additional sources of visual information such as binocular disparity and color, and by extending it to describe the dynamics of neural responses. The recognition performance of models of this general type can also be improved by exploring the space of parameters (e.g., receptive field sizes, connectivity, etc.), for instance through computer-intensive iterations of a mutation-and-test cycle (Cox et al., abstract #164 presented at Cosyne, 2008).

It is important, however, to recognize the intrinsic limitations of the specific computational framework we have described here and why it is at best a first step towards understanding the visual cortex. First, from the anatomical and physiological point of view, the class of feedforward models described here is incomplete, as it does not take into account the massive back-projections found in the cortex. To date, the role of cortical feedback remains poorly understood; it likely underlies top-down signals related to attention, task-dependent biases and memory. Back-projections have to be taken into account in order to describe visual perception beyond the first 100-200 ms. Given enough time, humans make eye movements to scan an image, and performance in many object recognition tasks can increase significantly over that obtained during rapid presentations.
Extensions of the model to incorporate feedback are possible and under way [37]. We think that feedforward models may well turn out to be approximate descriptions of the first 100-200 ms of the processing required by more complex theories of vision based on back-projections.


ZD48cmVjLW51bWJlcj4zMjwvcmVjLW51bWJlcj48Zm9yZWlnbi1rZXlzPjxrZXkgYXBwPSJFTiIg

ZGItaWQ9IndleHR3cncwYmZlcHd2ZXN4MG94MGE5ODJmZXNmd3R3d3dmZiI+MzI8L2tleT48L2Zv

cmVpZ24ta2V5cz48cmVmLXR5cGUgbmFtZT0iSm91cm5hbCBBcnRpY2xlIj4xNzwvcmVmLXR5cGU+

PGNvbnRyaWJ1dG9ycz48YXV0aG9ycz48YXV0aG9yPkxlZSwgVC4gUy48L2F1dGhvcj48YXV0aG9y

Pk11bWZvcmQsIEQuPC9hdXRob3I+PC9hdXRob3JzPjwvY29udHJpYnV0b3JzPjxhdXRoLWFkZHJl

c3M+Q29tcHV0ZXIgU2NpZW5jZSBEZXBhcnRtZW50LCBDZW50ZXIgZm9yIHRoZSBOZXVyYWwgQmFz

aXMgb2YgQ29nbml0aW9uLCBDYXJuZWdpZSBNZWxsb24gVW5pdmVyc2l0eSwgUGl0dHNidXJnaCwg

UGVubnN5bHZhbmlhIDE1MjEzLCBVU0EuIHRhaUBjcy5jbXUuZWR1PC9hdXRoLWFkZHJlc3M+PHRp

dGxlcz48dGl0bGU+SGllcmFyY2hpY2FsIEJheWVzaWFuIGluZmVyZW5jZSBpbiB0aGUgdmlzdWFs

IGNvcnRleDwvdGl0bGU+PHNlY29uZGFyeS10aXRsZT5KIE9wdCBTb2MgQW0gQTwvc2Vjb25kYXJ5

LXRpdGxlPjwvdGl0bGVzPjxwZXJpb2RpY2FsPjxmdWxsLXRpdGxlPkogT3B0IFNvYyBBbSBBPC9m

dWxsLXRpdGxlPjwvcGVyaW9kaWNhbD48cGFnZXM+MTQzNC00ODwvcGFnZXM+PHZvbHVtZT4yMDwv

dm9sdW1lPjxudW1iZXI+NzwvbnVtYmVyPjxrZXl3b3Jkcz48a2V5d29yZD5BbmltYWxzPC9rZXl3

b3JkPjxrZXl3b3JkPkJheWVzIFRoZW9yZW08L2tleXdvcmQ+PGtleXdvcmQ+SGFwbG9yaGluaTwv

a2V5d29yZD48a2V5d29yZD4qTW9kZWxzLCBOZXVyb2xvZ2ljYWw8L2tleXdvcmQ+PGtleXdvcmQ+

UmVzZWFyY2ggU3VwcG9ydCwgTm9uLVUuUy4gR292JmFwb3M7dDwva2V5d29yZD48a2V5d29yZD5S

ZXNlYXJjaCBTdXBwb3J0LCBVLlMuIEdvdiZhcG9zO3QsIE5vbi1QLkguUy48L2tleXdvcmQ+PGtl

eXdvcmQ+UmVzZWFyY2ggU3VwcG9ydCwgVS5TLiBHb3YmYXBvczt0LCBQLkguUy48L2tleXdvcmQ+

PGtleXdvcmQ+VmlzdWFsIENvcnRleC8qcGh5c2lvbG9neTwva2V5d29yZD48L2tleXdvcmRzPjxk

YXRlcz48eWVhcj4yMDAzPC95ZWFyPjxwdWItZGF0ZXM+PGRhdGU+SnVsPC9kYXRlPjwvcHViLWRh

dGVzPjwvZGF0ZXM+PGFjY2Vzc2lvbi1udW0+MTI4Njg2NDc8L2FjY2Vzc2lvbi1udW0+PHVybHM+

PHJlbGF0ZWQtdXJscz48dXJsPjxzdHlsZSBmYWNlPSJ1bmRlcmxpbmUiIGZvbnQ9ImRlZmF1bHQi

IHNpemU9IjEwMCUiPmh0dHA6Ly93d3cubmNiaS5ubG0ubmloLmdvdi9lbnRyZXovcXVlcnkuZmNn

aT9jbWQ9UmV0cmlldmUmYW1wO2RiPVB1Yk1lZCZhbXA7ZG9wdD1DaXRhdGlvbiZhbXA7bGlzdF91

aWRzPTEyODY4NjQ3IDwvc3R5bGU+PC91cmw+PC9yZWxhdGVkLXVybHM+PC91cmxzPjwvcmVjb3Jk

PjwvQ2l0ZT48Q2l0ZSBFeGNsdWRlWWVhcj0iMSI+PEF1dGhvcj5EZWFuPC9BdXRob3I+PFJlY051

bT42MDwvUmVjTnVtPjxyZWNvcmQ+PHJlYy1udW1iZXI+NjA8L3JlYy1udW1iZXI+PGZvcmVpZ24t

a2V5cz48a2V5IGFwcD0iRU4iIGRiLWlkPSJ3ZXh0d3J3MGJmZXB3dmVzeDBveDBhOTgyZmVzZnd0

d3d3ZmYiPjYwPC9rZXk+PC9mb3JlaWduLWtleXM+PHJlZi10eXBlIG5hbWU9IkNvbmZlcmVuY2Ug

UGFwZXIiPjQ3PC9yZWYtdHlwZT48Y29udHJpYnV0b3JzPjxhdXRob3JzPjxhdXRob3I+RGVhbiwg

VC4gPC9hdXRob3I+PC9hdXRob3JzPjwvY29udHJpYnV0b3JzPjx0aXRsZXM+PHRpdGxlPkEgQ29t

cHV0YXRpb25hbCBNb2RlbCBvZiB0aGUgQ2VyZWJyYWwgQ29ydGV4PC90aXRsZT48c2Vjb25kYXJ5

LXRpdGxlPlByb2NlZWRpbmdzIG9mIHRoZSBUd2VudGlldGggTmF0aW9uYWwgQ29uZmVyZW5jZSBv

biBBcnRpZmljaWFsIEludGVsbGlnZW5jZSAoQUFBSS0wNSk8L3NlY29uZGFyeS10aXRsZT48L3Rp

dGxlcz48cGFnZXM+OTM4LTk0MzwvcGFnZXM+PGRhdGVzPjx5ZWFyPjIwMDU8L3llYXI+PHB1Yi1k

YXRlcz48ZGF0ZT4yMDA1PC9kYXRlPjwvcHViLWRhdGVzPjwvZGF0ZXM+PHB1Yi1sb2NhdGlvbj5D

YW1icmlkZ2UsIE1hc3NhY2h1c2V0dHM8L3B1Yi1sb2NhdGlvbj48cHVibGlzaGVyPk1JVCBQcmVz

czwvcHVibGlzaGVyPjx1cmxzPjwvdXJscz48L3JlY29yZD48L0NpdGU+PENpdGUgRXhjbHVkZVll

YXI9IjEiPjxBdXRob3I+R2VvcmdlPC9BdXRob3I+PFJlY051bT4zMzwvUmVjTnVtPjxyZWNvcmQ+

PHJlYy1udW1iZXI+MzM8L3JlYy1udW1iZXI+PGZvcmVpZ24ta2V5cz48a2V5IGFwcD0iRU4iIGRi

LWlkPSJ3ZXh0d3J3MGJmZXB3dmVzeDBveDBhOTgyZmVzZnd0d3d3ZmYiPjMzPC9rZXk+PC9mb3Jl

aWduLWtleXM+PHJlZi10eXBlIG5hbWU9IkNvbmZlcmVuY2UgUGFwZXIiPjQ3PC9yZWYtdHlwZT48

Y29udHJpYnV0b3JzPjxhdXRob3JzPjxhdXRob3I+RC4gR2VvcmdlPC9hdXRob3I+PGF1dGhvcj5K

LiBIYXdraW5zPC9hdXRob3I+PC9hdXRob3JzPjwvY29udHJpYnV0b3JzPjx0aXRsZXM+PHRpdGxl

PkEgaGllcmFyY2hpY2FsIEJheWVzaWFuIG1vZGVsIG9mIGludmFyaWFudCBwYXR0ZXJuIHJlY29n

bml0aW9uIGluIHRoZSB2aXN1YWwgY29ydGV4LjwvdGl0bGU+PHNlY29uZGFyeS10aXRsZT5JbnRl

cm5hdGlvbmFsIEpvaW50IENvbmZlcmVuY2Ugb24gTmV1cmFsIE5ldHdvcmtzPC9zZWNvbmRhcnkt

dGl0bGU+PC90aXRsZXM+PGRhdGVzPjx5ZWFyPjIwMDU8L3llYXI+PHB1Yi1kYXRlcz48ZGF0ZT4y

MDA1PC9kYXRlPjwvcHViLWRhdGVzPjwvZGF0ZXM+PHVybHM+PC91cmxzPjwvcmVjb3JkPjwvQ2l0

ZT48Q2l0ZSBFeGNsdWRlWWVhcj0iMSI+PEF1dGhvcj5ZdWlsbGU8L0F1dGhvcj48UmVjTnVtPjU4

PC9SZWNOdW0+PHJlY29yZD48cmVjLW51bWJlcj41ODwvcmVjLW51bWJlcj48Zm9yZWlnbi1rZXlz

PjxrZXkgYXBwPSJFTiIgZGItaWQ9IndleHR3cncwYmZlcHd2ZXN4MG94MGE5ODJmZXNmd3R3d3dm

ZiI+NTg8L2tleT48L2ZvcmVpZ24ta2V5cz48cmVmLXR5cGUgbmFtZT0iSm91cm5hbCBBcnRpY2xl

Ij4xNzwvcmVmLXR5cGU+PGNvbnRyaWJ1dG9ycz48YXV0aG9ycz48YXV0aG9yPll1aWxsZSwgQS48

L2F1dGhvcj48YXV0aG9yPktlcnN0ZW4sIEQuPC9hdXRob3I+PC9hdXRob3JzPjwvY29udHJpYnV0

b3JzPjxhdXRoLWFkZHJlc3M+RGVwYXJ0bWVudCBvZiBTdGF0aXN0aWNzLCBVQ0xBLCBTYW4gRnJh

bmNpc2NvLCBDQSA5NDExNSwgVVNBLiB5dWlsbGVAc3RhdC51Y2xhLmVkdTwvYXV0aC1hZGRyZXNz

Pjx0aXRsZXM+PHRpdGxlPlZpc2lvbiBhcyBCYXllc2lhbiBpbmZlcmVuY2U6IGFuYWx5c2lzIGJ5

IHN5bnRoZXNpcz88L3RpdGxlPjxzZWNvbmRhcnktdGl0bGU+VHJlbmRzIENvZ24gU2NpPC9zZWNv

bmRhcnktdGl0bGU+PC90aXRsZXM+PHBlcmlvZGljYWw+PGZ1bGwtdGl0bGU+VHJlbmRzIENvZ24g

U2NpPC9mdWxsLXRpdGxlPjwvcGVyaW9kaWNhbD48cGFnZXM+MzAxLTg8L3BhZ2VzPjx2b2x1bWU+

MTA8L3ZvbHVtZT48bnVtYmVyPjc8L251bWJlcj48ZWRpdGlvbj4yMDA2LzA2LzIxPC9lZGl0aW9u

PjxrZXl3b3Jkcz48a2V5d29yZD5BbGdvcml0aG1zPC9rZXl3b3JkPjxrZXl3b3JkPipCYXllcyBU

aGVvcmVtPC9rZXl3b3JkPjxrZXl3b3JkPkJyYWluLypwaHlzaW9sb2d5PC9rZXl3b3JkPjxrZXl3

b3JkPkJyYWluIE1hcHBpbmc8L2tleXdvcmQ+PGtleXdvcmQ+RmllbGQgRGVwZW5kZW5jZS1JbmRl

cGVuZGVuY2U8L2tleXdvcmQ+PGtleXdvcmQ+SHVtYW5zPC9rZXl3b3JkPjxrZXl3b3JkPk1hcmtv

diBDaGFpbnM8L2tleXdvcmQ+PGtleXdvcmQ+Kk1vZGVscywgU3RhdGlzdGljYWw8L2tleXdvcmQ+

PGtleXdvcmQ+TW9udGUgQ2FybG8gTWV0aG9kPC9rZXl3b3JkPjxrZXl3b3JkPk9yaWVudGF0aW9u

L3BoeXNpb2xvZ3k8L2tleXdvcmQ+PGtleXdvcmQ+UGF0dGVybiBSZWNvZ25pdGlvbiwgVmlzdWFs

L3BoeXNpb2xvZ3k8L2tleXdvcmQ+PGtleXdvcmQ+UHJvYmFiaWxpdHkgVGhlb3J5PC9rZXl3b3Jk

PjxrZXl3b3JkPlNpZ25hbCBEZXRlY3Rpb24sIFBzeWNob2xvZ2ljYWw8L2tleXdvcmQ+PGtleXdv

cmQ+VmlzaW9uLCBPY3VsYXIvKnBoeXNpb2xvZ3k8L2tleXdvcmQ+PGtleXdvcmQ+VmlzdWFsIFBh

dGh3YXlzL3BoeXNpb2xvZ3k8L2tleXdvcmQ+PC9rZXl3b3Jkcz48ZGF0ZXM+PHllYXI+MjAwNjwv

eWVhcj48cHViLWRhdGVzPjxkYXRlPkp1bDwvZGF0ZT48L3B1Yi1kYXRlcz48L2RhdGVzPjxpc2Ju

PjEzNjQtNjYxMyAoUHJpbnQpPC9pc2JuPjxhY2Nlc3Npb24tbnVtPjE2Nzg0ODgyPC9hY2Nlc3Np

b24tbnVtPjx1cmxzPjxyZWxhdGVkLXVybHM+PHVybD5odHRwOi8vd3d3Lm5jYmkubmxtLm5paC5n

b3YvZW50cmV6L3F1ZXJ5LmZjZ2k/Y21kPVJldHJpZXZlJmFtcDtkYj1QdWJNZWQmYW1wO2RvcHQ9

Q2l0YXRpb24mYW1wO2xpc3RfdWlkcz0xNjc4NDg4MjwvdXJsPjwvcmVsYXRlZC11cmxzPjwvdXJs

cz48ZWxlY3Ryb25pYy1yZXNvdXJjZS1udW0+UzEzNjQtNjYxMygwNikwMDEyNi00IFtwaWldJiN4

RDsxMC4xMDE2L2oudGljcy4yMDA2LjA1LjAwMjwvZWxlY3Ryb25pYy1yZXNvdXJjZS1udW0+PGxh

bmd1YWdlPmVuZzwvbGFuZ3VhZ2U+PC9yZWNvcmQ+PC9DaXRlPjxDaXRlIEV4Y2x1ZGVZZWFyPSIx

Ij48QXV0aG9yPkVwc2h0ZWluPC9BdXRob3I+PFJlY051bT4zNDwvUmVjTnVtPjxyZWNvcmQ+PHJl

Yy1udW1iZXI+MzQ8L3JlYy1udW1iZXI+PGZvcmVpZ24ta2V5cz48a2V5IGFwcD0iRU4iIGRiLWlk

PSJ3ZXh0d3J3MGJmZXB3dmVzeDBveDBhOTgyZmVzZnd0d3d3ZmYiPjM0PC9rZXk+PC9mb3JlaWdu

LWtleXM+PHJlZi10eXBlIG5hbWU9IkpvdXJuYWwgQXJ0aWNsZSI+MTc8L3JlZi10eXBlPjxjb250

cmlidXRvcnM+PGF1dGhvcnM+PGF1dGhvcj5FcHNodGVpbiwgQi48L2F1dGhvcj48YXV0aG9yPkxp

ZnNoaXR6LCBJLjwvYXV0aG9yPjxhdXRob3I+VWxsbWFuLCBTLjwvYXV0aG9yPjwvYXV0aG9ycz48

L2NvbnRyaWJ1dG9ycz48dGl0bGVzPjx0aXRsZT5JbWFnZSBpbnRlcnByZXRhdGlvbiBieSBhIHNp

bmdsZSBib3R0b20tdXAgdG9wLWRvd24gY3ljbGU8L3RpdGxlPjxzZWNvbmRhcnktdGl0bGU+UHJv

Y2VlZGluZ3Mgb2YgdGhlIE5hdGlvbmFsIEFjYWRlbXkgb2YgU2NpZW5jZXM8L3NlY29uZGFyeS10

aXRsZT48L3RpdGxlcz48cGVyaW9kaWNhbD48ZnVsbC10aXRsZT5Qcm9jZWVkaW5ncyBvZiB0aGUg

TmF0aW9uYWwgQWNhZGVteSBvZiBTY2llbmNlczwvZnVsbC10aXRsZT48L3BlcmlvZGljYWw+PGtl

eXdvcmRzPjxrZXl3b3JkPiZhbXA7bW9kZWwgJmFtcDtuZXVyb3NjaWVuY2U8L2tleXdvcmQ+PC9r

ZXl3b3Jkcz48ZGF0ZXM+PHllYXI+MjAwODwveWVhcj48L2RhdGVzPjxsYWJlbD5wMDAwMDE8L2xh

YmVsPjx1cmxzPjxwZGYtdXJscz48dXJsPmZpbGU6Ly9sb2NhbGhvc3QvVXNlcnMvc2VycmUvRG9j

dW1lbnRzL1BhcGVycy9FcHNodGVpbi8yMDA4L0Vwc2h0ZWluUHJvY2VlZGluZ3MlMjBvZiUyMHRo

ZSUyME5hdGlvbmFsJTIwQWNhZGVteSUyMG9mJTIwU2NpZW5jZXMyMDA4LnBkZjwvdXJsPjwvcGRm

LXVybHM+PC91cmxzPjxjdXN0b20zPnBhcGVyczovLzgyODdDNUYxLTkxNTUtNEY0Ri1BRDU1LUMw

MzM1RUExQzVDMi9QYXBlci9wMTwvY3VzdG9tMz48L3JlY29yZD48L0NpdGU+PENpdGU+PEF1dGhv

cj5ncm9zc2Jlcmc8L0F1dGhvcj48WWVhcj4yMDA3PC9ZZWFyPjxSZWNOdW0+NjQ8L1JlY051bT48

cmVjb3JkPjxyZWMtbnVtYmVyPjY0PC9yZWMtbnVtYmVyPjxmb3JlaWduLWtleXM+PGtleSBhcHA9

IkVOIiBkYi1pZD0id2V4dHdydzBiZmVwd3Zlc3gwb3gwYTk4MmZlc2Z3dHd3d2ZmIj42NDwva2V5

PjwvZm9yZWlnbi1rZXlzPjxyZWYtdHlwZSBuYW1lPSJKb3VybmFsIEFydGljbGUiPjE3PC9yZWYt

dHlwZT48Y29udHJpYnV0b3JzPjxhdXRob3JzPjxhdXRob3I+R3Jvc3NiZXJnLCBTLiA8L2F1dGhv

cj48L2F1dGhvcnM+PHNlY29uZGFyeS1hdXRob3JzPjxhdXRob3I+UGF1bCBDaXNlaywgVHJldm9y

IERyZXcsIEpvaG4gS2FsYXNrYTwvYXV0aG9yPjwvc2Vjb25kYXJ5LWF1dGhvcnM+PC9jb250cmli

dXRvcnM+PHRpdGxlcz48dGl0bGU+VG93YXJkcyBhIHVuaWZpZWQgdGhlb3J5IG9mIG5lb2NvcnRl

eDogTGFtaW5hciBjb3J0aWNhbCBjaXJjdWl0cyBmb3IgdmlzaW9uIGFuZCBjb2duaXRpb248L3Rp

dGxlPjxzZWNvbmRhcnktdGl0bGU+UHJvZyBCcmFpbiBSZXM8L3NlY29uZGFyeS10aXRsZT48L3Rp

dGxlcz48cGVyaW9kaWNhbD48ZnVsbC10aXRsZT5Qcm9nIEJyYWluIFJlczwvZnVsbC10aXRsZT48

L3BlcmlvZGljYWw+PHBhZ2VzPjc5LTEwNDwvcGFnZXM+PHZvbHVtZT4xNjU8L3ZvbHVtZT48ZGF0

ZXM+PHllYXI+MjAwNzwveWVhcj48L2RhdGVzPjxwdWItbG9jYXRpb24+QW1zdGVyZGFtPC9wdWIt

bG9jYXRpb24+PHVybHM+PC91cmxzPjwvcmVjb3JkPjwvQ2l0ZT48L0VuZE5vdGU+

[38-44]. The computations involved in the initial feedforward phase are, however, non-trivial, and they are essential for any scheme involving feedback to work. A second, related point is that normal visual perception is much more than classification: it involves interpreting and parsing visual scenes. In this sense again, the class of models we described is limited, since it deals with classification tasks only. Thus, more complex architectures are needed (see [8] for a discussion).
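To make the kind of computation performed in that initial phase concrete, the sketch below implements a toy version of this class of feedforward models: oriented filtering ("S"-type template matching) alternating with local max pooling ("C"-type tolerance), followed by a simple linear read-out. It is a minimal illustration under assumed parameters (filter sizes, pooling ranges, a nearest-centroid read-out), not the model described in this paper or in [8, 10].

import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import maximum_filter

def gabor(size=11, wavelength=5.0, sigma=3.0, theta=0.0):
    """Oriented Gabor filter, a standard description of V1 simple-cell tuning."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)            # coordinate along the preferred orientation
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    g = envelope * carrier
    return g - g.mean()                                    # zero-mean filter

def c1_features(image, n_orientations=4, pool=8):
    """'S1' template matching with oriented filters, then 'C1' local max pooling."""
    feats = []
    for i in range(n_orientations):
        f = gabor(theta=i * np.pi / n_orientations)
        s1 = np.abs(convolve2d(image, f, mode="same", boundary="symm"))   # S1: tuning
        c1 = maximum_filter(s1, size=pool)[::pool, ::pool]                # C1: tolerance
        feats.append(c1.ravel())
    return np.concatenate(feats)

if __name__ == "__main__":
    # Random placeholder data: the point is only to show the shape of the pipeline.
    rng = np.random.default_rng(0)
    images = rng.random((20, 64, 64))
    labels = rng.integers(0, 2, size=20)
    X = np.stack([c1_features(im) for im in images])
    # Trivial nearest-centroid (linear) read-out on top of the C1 representation.
    centroids = np.stack([X[labels == c].mean(axis=0) for c in (0, 1)])
    preds = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1), axis=1)
    print("training accuracy on random data (chance ~0.5):", (preds == labels).mean())

The classifier at the top of this sketch is deliberately trivial; whatever tolerance to position and scale the representation achieves comes from the feedforward feature computation, which is where the non-trivial work is done.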
Finally, we described a class of models, not a theory. Computational models are not sufficient on their own. Our model, despite describing quantitatively several aspects of monkey physiology and of human recognition, does not yield a good understanding of the computational principles of cortex and of their power. What is needed is a mathematical theory that explains the hierarchical organization of the cortex.

ACKNOWLEDGMENTS
We would like to thank Jake Bouvrie as well as the referees for valuable comments on this manuscript.

REFERENCES
[1] DiCarlo, J. J. and Cox, D. D. Untangling invariant object recognition. Trends Cogn Sci, 11, 8 (Aug 2007), 333-341.
[2] Bengio, Y. and LeCun, Y. Scaling learning algorithms towards AI, 2007.
[3] Heisele, B., Serre, T. and Poggio, T. A component-based framework for face detection and identification. Int J Comput Vis, 74, 2 (Jan 2007), 167-181.
[4] Hegdé, J. and Felleman, D. J. Reappraising the functional implications of the primate visual anatomical hierarchy. The Neuroscientist, 13, 5 (2007), 416-421.
[5] Serre, T., Kouh, M., Cadieu, C., Knoblich, U., Kreiman, G. and Poggio, T. A theory of object recognition: computations and circuits in the feedforward path of the ventral stream in primate visual cortex. MIT AI Memo 2005-036 (2005).
[6] Logothetis, N. K., Pauls, J. and Poggio, T. Shape representation in the inferior temporal cortex of monkeys. Curr Biol, 5 (May 1995), 552-563.
[7] Hung, C. P., Kreiman, G., Poggio, T. and DiCarlo, J. J. Fast read-out of object identity from macaque inferior temporal cortex. Science, 310 (2005), 863-866.
[8] Serre, T., Kreiman, G., Kouh, M., Cadieu, C., Knoblich, U. and Poggio, T. A quantitative theory of immediate visual recognition. Prog Brain Res, 165 (2007), 33-56.
[9] Serre, T., Oliva, A. and Poggio, T. A feedforward architecture accounts for rapid categorization. Proc Natl Acad Sci, 104, 15 (Apr 2007), 6424-6429.
[10] Serre, T., Wolf, L., Bileschi, S., Riesenhuber, M. and Poggio, T. Robust object recognition with cortex-like mechanisms. IEEE TPAMI, 29, 3 (2007), 411-426.
[11] Marko, H. and Giebel, H. Recognition of handwritten characters with a system of homogeneous layers. Nachrichtentechnische Zeitschrift, 23 (1970), 455-459.
[12] Fukushima, K. Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cyb, 36 (1980), 193-202.
[13] Wallis, G. and Rolls, E. T. A model of invariant recognition in the visual system. Prog Neurobiol, 51 (1997), 167-194.
[14] Mel, B. W. SEEMORE: combining color, shape and texture histogramming in a neurally-inspired approach to visual object recognition. Neural Comp, 9, 4 (1997), 777-804.
[15] Riesenhuber, M. and Poggio, T. Hierarchical models of object recognition in cortex. Nat Neurosci, 2, 11 (1999), 1019-1025.
[16] Ullman, S., Vidal-Naquet, M. and Sali, E. Visual features of intermediate complexity and their use in classification. Nat Neurosci, 5, 7 (Jul 2002), 682-687.
[17] Thorpe, S. Ultra-rapid scene categorization with a wave of spikes. In Proc of BMCV (2002).
[18] Amit, Y. and Mascaro, M. An integrated network for invariant visual detection and recognition. Vision Research, 43, 19 (2003), 2073-2088.
[19] Wersing, H. and Koerner, E. Learning optimized features for hierarchical models of invariant recognition. Neural Comp, 15, 7 (2003), 1559-1588.
[20] Hubel, D. H. and Wiesel, T. N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J Physiol, 160 (Jan 1962), 106-154.
[21] Perrett, D. and Oram, M. Neurophysiology of shape processing. Image Vision Comput, 11 (1993), 317-333.
[22] Hochstein, S. and Ahissar, M. View from the top: hierarchies and reverse hierarchies in the visual system. Neuron, 36, 5 (Dec 2002), 791-804.
[23] Biederman, I. Recognition-by-components: a theory of human image understanding. Psych Rev, 94 (1987), 115-147.
[24] LeCun, Y., Bottou, L., Bengio, Y. and Haffner, P. Gradient-based learning applied to document recognition. Proc of the IEEE, 86, 11 (1998), 2278-2324.
[25] Mutch, J. and Lowe, D. Multiclass object recognition using sparse, localized features. In Proc of IEEE CVPR (2006).
[26] Masquelier, T., Serre, T., Thorpe, S. and Poggio, T. Learning complex cell invariance from natural videos: a plausibility proof. MIT-CSAIL-TR-2007-060 (2007).
[27] Polk, T. A., Park, J. E., Smith, M. R. and Park, D. C. Nature versus nurture in ventral visual cortex: a functional magnetic resonance imaging study of twins. J Neurosci, 27, 51 (2007), 13921-13925.
[28] Li, N. and DiCarlo, J. J. Unsupervised natural experience rapidly alters invariant object representation in visual cortex. Science, 321, 5895 (Sep 2008), 1502-1507.
[29] Hinton, G. E. Learning multiple layers of representation. Trends Cogn Sci, 11, 10 (Oct 2007), 428-434.
[30] Rust, N., Schwartz, O., Simoncelli, E. P. and Movshon, J. A. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46, 6 (Jun 2005), 945-956.
[31] Foldiak, P. Learning invariance from transformation sequences. Neural Comp, 3 (1991), 194-200.
[32] Thorpe, S., Fize, D. and Marlot, C. Speed of processing in the human visual system. Nature, 381, 6582 (1996), 520-522.
[33] Reynolds, J. H., Chelazzi, L. and Desimone, R. Competitive mechanisms subserve attention in macaque areas V2 and V4. J Neurosci, 19, 5 (Mar 1999), 1736-1753.
[34] Pinto, N., Cox, D. D. and DiCarlo, J. J. Why is real-world visual object recognition hard? PLoS Comp Biol, 4, 1 (2008).
[35] Jhuang, H., Serre, T., Wolf, L. and Poggio, T. A biologically inspired system for action recognition. In Proc of IEEE ICCV (2007).
[36] Rolls, E. T. and Deco, G. Computational Neuroscience of Vision. Oxford University Press, Oxford, 2002.
[37] Chikkerur, S., Tan, C., Serre, T. and Poggio, T. An integrated model of visual attention using shape-based features. MIT-CSAIL-TR-2009-029 (2009).
[38] Rao, R. P. and Ballard, D. H. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat Neurosci, 2, 1 (1999), 79-87.
[39] Lee, T. S. and Mumford, D. Hierarchical Bayesian inference in the visual cortex. J Opt Soc Am A, 20, 7 (Jul 2003), 1434-1448.
[40] Dean, T. A computational model of the cerebral cortex. In Proc of AAAI (2005).
[41] George, D. and Hawkins, J. A hierarchical Bayesian model of invariant pattern recognition in the visual cortex. In Proc of IJCNN (2005).
[42] Yuille, A. and Kersten, D. Vision as Bayesian inference: analysis by synthesis? Trends Cogn Sci, 10, 7 (Jul 2006), 301-308.
[43] Epshtein, B., Lifshitz, I. and Ullman, S. Image interpretation by a single bottom-up top-down cycle. Proc Natl Acad Sci (2008).
[44] Grossberg, S. Towards a unified theory of neocortex: laminar cortical circuits for vision and cognition. Prog Brain Res, 165 (2007), 79-104.