The ultimate goal of a system of visual perception is to represent visual scenes. It is generally assumed that this requires an initial ‘break-down’ of complex visual stimuli into some kind of “discrete subunits” (De Valois & De Valois, 1980, p. 316), which can then be passed on and further processed by the brain. The task thus arises of identifying these subunits, as well as the means by which the visual system interprets and processes sensory input. An approach to visual scene analysis that prevailed for many years held that individual cortical cells are ‘feature detectors’ with particular response criteria. Though never described as such by its authors, Hubel and Wiesel’s theory of a hierarchical visual system employs a form of such feature detectors.
Applying this notion to mammalian vision is, however, problematic; humans, for example, are capable of visually perceiving far greater detail and variety than a frog and would thus require considerably more of these uniquely coded feature detectors. The notion of a ‘grandmother cell’ was introduced to highlight what such a theory entails: if every unique stimulus requires its own feature-detector cell, an absurdly high number of neurons would be needed for humans to represent the vast variety of visual scenes encountered in a lifetime.
Aware of this shortcoming, Hubel and Wiesel (1962, 1965, 1968) were cautious not to refer to ‘feature detectors’ when examining the receptive fields of the mammalian visual cortex in live cats (Hubel & Wiesel, 1962) and monkeys (Hubel & Wiesel, 1968). Nonetheless, it is now widely accepted that Hubel and Wiesel’s theory of simple, complex and hyper-complex cells remains a form of the original feature-detector theory, albeit formulated into a more economical hierarchical structure (Lennie, 2003; Martin, 1994). Hubel and Wiesel (1962, 1965) concluded that vision involves a hierarchical process starting in the retina, continuing through the lateral geniculate body and the primary visual cortex, and possibly extending into areas V2 and V3. As sensory information travels further up the hierarchy, it passes through progressively higher-order cells that become increasingly selective about the stimulus features to which they respond.
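The hierarchy described above can be sketched computationally. The snippet below is a minimal, illustrative analogy (not Hubel and Wiesel’s actual model): a “simple cell” responds to an oriented edge at a fixed position, and a “complex cell” pools over many simple cells to gain position invariance. The filter weights and image are hypothetical toy values.

```python
import numpy as np

def simple_cell_response(patch, orientation_filter):
    """Simple cell: fires for an oriented edge at one fixed position.
    Modeled (illustratively) as a rectified dot product with a filter."""
    return max(0.0, float(np.sum(patch * orientation_filter)))

def complex_cell_response(image, orientation_filter, size=3):
    """Complex cell: pools over simple cells sharing a preferred
    orientation but differing in position; max pooling gives the
    position invariance the hierarchy is meant to explain."""
    h, w = image.shape
    responses = []
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            patch = image[r:r + size, c:c + size]
            responses.append(simple_cell_response(patch, orientation_filter))
    return max(responses)

# A 3x3 filter preferring vertical edges (hypothetical weights).
vertical = np.array([[-1, 0, 1],
                     [-1, 0, 1],
                     [-1, 0, 1]], dtype=float)

# An image containing a vertical edge: dark left, bright right.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

print(complex_cell_response(img, vertical))  # strong response wherever the edge falls
```

A horizontal edge, by contrast, leaves this vertically tuned unit silent, which is the selectivity the feature-detector reading of the theory emphasises.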
The human brain is capable of perceiving and interpreting information, or stimuli, received through the sense organs (i.e., eyes, ears, nose, mouth, and skin) (Weiten, 1998). This ability to perceive and interpret stimuli allows human beings to make meaningful sense of the world and environment around them. Although the human being is able to perceive and interpret stimuli through all of the sense organs, stimuli are most often interpreted through the visual (eyes) and auditory (ears) sense organs (Anderson, 2009). For the purpose of this paper, the visual information process will be examined.
The system consists of the eyes, where the information is collected, the lateral geniculate nucleus, and the visual cortex. The visual cortex can be subdivided into the primary visual cortex (the striate cortex) and the surrounding extrastriate areas. This sensory system is located at the back of the brain, with parts in both hemispheres. Recent studies have yielded enough information to support the two-stream hypothesis, which describes a ventral and a dorsal stream. The ventral stream begins at the primary visual cortex and runs to the inferior temporal cortex. The main functional responsibilities of this area include the identification of objects and the emergence of long-term visual memory, the origins of which are placed in this area (Rauschecker and Scott 722). The dorsal stream also begins at the primary visual cortex and ends at the posterior parietal cortex. It is mainly responsible for the control of body parts required to manipulate an object. Research shows that these "what" and "where" systems are not directly related, and damage to one affects the other only to a certain extent. This can be explained by the fact that input is transformed differently for action and for perception.
"Perception is not determined simply by stimulus patterns. Rather, it is a dynamic searching for the best interpretation of the available data … which involves going beyond the immediately given evidence of the senses" (Gregory, 1966).
4. You are shown a picture of an elephant. Explain how that stimulus is processed from the retina to the visual cortex of the brain.
(1) The first question concerns the causal or functional role of phenomenal qualities. Under the assumption that seeing is based on cortical information processing, the question arises whether the phenomenal qualities of visual perceptions have a function with regard to this processing, in the sense that the intentional content of visual perceptions depends not only on their intentional but also on their phenomenal qualities. Is it true, as Frank Jackson and Steven Pinker, among other authors, claim, that phenomenal qualities are mere epiphenomena, with no function in information processing? (1)
The primate visual system is usually separated into two partially independent pathways: the dorsal pathway subserves mostly motion perception, while the ventral one subserves the recognition of object features. The primary visual cortex (V1) receives most of its retinal input through the lateral geniculate nucleus (LGN). Anatomical and functional segregation of visual perception starts at the level of the retina, where parvocellular (P) ganglion cells have small receptive fields and a sustained, colour-sensitive synaptic response to light, whereas magnocellular (M) ganglion cells have larger receptive fields and a faster-adapting, achromatic response to light [Livingston et al., 1992]. These two cell types project to layers 3–6 and 1–2 of the LGN, respectively, which in turn send most of their outputs to layers 4Cβ and 4Cα of V1, forming what are known as the P and M pathways [Refs].
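The sustained versus fast-adapting responses described above can be illustrated with a toy temporal model. The adaptation rule below is an assumption chosen purely for illustration (a leaky running average), not a fitted physiological model.

```python
def p_cell(stimulus, gain=1.0):
    """Parvocellular-like unit: sustained response tracks the
    stimulus for as long as it is present."""
    return [gain * s for s in stimulus]

def m_cell(stimulus, adapt=0.6):
    """Magnocellular-like unit: fast-adapting response, modeled here
    (an illustrative assumption) as the rectified difference between
    the input and a running adaptation level."""
    level, out = 0.0, []
    for s in stimulus:
        out.append(max(0.0, s - level))   # respond to what is new
        level += adapt * (s - level)      # adapt toward the current input
    return out

step = [0, 0, 1, 1, 1, 1, 0, 0]   # light turns on, stays on, turns off
print(p_cell(step))  # sustained: follows the step while the light is on
print(m_cell(step))  # transient: largest at onset, decays as the light stays on
```

The P-like unit keeps responding for the duration of the stimulus, while the M-like unit's response peaks at the change and fades, matching the sustained/transient contrast in the text.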
Fig. __ Feed-forward projections from the eyes to the brain and topographic mapping. In each eye, the visual field to the left and right of the fovea (the dividing line runs right through the fovea!) projects to different cortical hemispheres: fibres from the temporal hemiretina project to the ipsilateral visual cortex, while fibres from the nasal hemiretina cross to the contralateral cortex (hemifield crossing in the optic chiasm). The first synapse of the retinal ganglion cells is in the lateral geniculate nucleus (LGN), where information from the left (L) and right (R) eye remains strictly separated. The LGN consists of six layers: layers 1 and 2 are primarily occupied by the magnocellular pathway, and layers 3–6 by the parvocellular pathway. Information from both eyes first comes together in the primary visual cortex.
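The routing rules in this figure caption (hemifield crossing and the LGN layer assignments) can be written out as a small sketch. The function names are hypothetical; the mapping itself follows the caption.

```python
def cortical_hemisphere(visual_field_x):
    """Which cortical hemisphere represents a point in the visual field.
    Each hemifield (relative to fixation at x = 0) is represented in the
    OPPOSITE hemisphere, because the nasal hemiretina's fibres cross at
    the optic chiasm. Negative x = left visual field."""
    if visual_field_x == 0:
        return "foveal (split between hemispheres)"
    return "right hemisphere" if visual_field_x < 0 else "left hemisphere"

def lgn_pathway(layer):
    """LGN layers 1-2 are magnocellular, layers 3-6 parvocellular."""
    if layer not in range(1, 7):
        raise ValueError("the LGN has six layers")
    return "magnocellular" if layer <= 2 else "parvocellular"

print(cortical_hemisphere(-5))  # left visual field -> right hemisphere
print(lgn_pathway(4))           # layer 4 -> parvocellular
```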
After investigating spatial cognition and the construction of cognitive maps in my previous paper, "Where Am I Going? Where Have I Been: Spatial Cognition and Navigation", and growing in my comprehension of the more complex elements of the nervous system, the development of an informed discussion of human perception has become possible. The formation of cognitive maps, which serve as internal representations of the world, is dependent upon the human capacities for vision and visual perception (1). Objects introduced into the field of vision are translated into electrical messages, which activate the neurons of the retina. The resultant retinal message is organized into several forms of sensation and is transmitted onward for further processing in the brain.
According to current research, there are about 800,000 ganglion cells in the human optic nerve (J. R. Anderson, 2009, p. 35). The ganglion cells are where the first encoding of the visual information happens. Encoding is the process of recognizing information and changing it into something one’s brain can understand and store. Each ganglion cell is dedicated to encoding information from a specific part of the retina. The optic nerve then carries this information toward the visual cortex, passing through two subcortical structures (structures below the cortex): the lateral geniculate nucleus and the superior colliculus. The lateral geniculate nucleus is involved in processing details and recognizing objects, while the superior colliculus is involved in locating objects in space. This division of labor is called the “what-where” distinction, and it continues as the information is further processed: the “what” information travels to the temporal cortex, and the “where” information travels to the parietal regions of the brain.
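The idea that each ganglion cell encodes one specific patch of the retina is often described with a centre-surround receptive field (a standard textbook model, added here as an illustration rather than something this passage spells out). The sketch below shows why such a cell signals local contrast rather than raw light level.

```python
import numpy as np

def on_center_response(image, r, c):
    """On-centre ganglion cell at retinal position (r, c): excited by
    light in a small centre pixel, inhibited by the average light in
    the surrounding ring (centre-surround encoding)."""
    center = image[r, c]
    surround = (image[r - 1:r + 2, c - 1:c + 2].sum() - center) / 8.0
    return center - surround

uniform = np.ones((5, 5))                 # evenly lit retina patch
spot = np.zeros((5, 5)); spot[2, 2] = 1.0 # a single bright spot

print(on_center_response(uniform, 2, 2))  # 0.0: uniform light carries no local contrast
print(on_center_response(spot, 2, 2))     # 1.0: a local spot drives the cell strongly
```

This is one sense in which "encoding" happens already in the retina: the cell's output is a transformed, contrast-based signal, not a copy of the stimulus.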
Early research by Hubel and Wiesel (1959) on the visual system of cats and kittens suggested that the adult visual system is unable to ‘recover’ after damage. They proposed a critical period for normal nervous system development, during the first few months of life, in which the brain is still able to change, after which it becomes hard-wired (Hubel & Wiesel, 1959).
The magnocellular pathway carries information from the M ganglion cells at rapid speed along the dorsal stream to the parietal lobe, helping us understand motion, spatial relationships, and contrast. The parvocellular pathway carries information from the P ganglion cells at slower speed along the ventral stream to the temporal lobe, helping us process fine details such as the color and form of an object. The parvocellular pathway is thought to be our primary source for recognition and identification, but there is speculation that its allocentric frame of reference can also be used in a more egocentric way (i.e., that the parvocellular pathway is able to elicit an autonomic response like the magnocellular pathway). This research expands on these theories by studying the role of color vision in autonomic attention responses. The experiment attempts to study the relationship between the magnocellular and parvocellular pathways through color cues and their effects in capturing attention and controlling visual behavior (e.g., moving the eyes to locate a cued target).
After some careful reading, it appears that feature detectors are simply individual neurons, or groups of neurons, in the human brain that code for perceptually significant stimuli. Feature detectors are located in the visual cortex, although Barlow’s idea is that the retina itself could also act as a feature detector. The visual cortex is located in the occipital lobe, the most posterior part of the brain and one of the four major lobes of the cerebral cortex; it cannot be seen from outside the head.
Humphreys and Bruce (1989) proposed a model of object recognition that fits a wider context of cognition. According to them, the recognition of objects occurs in a series of stages. First, sensory input is generated, leading to perceptual classification, where the information is compared with previously stored descriptions of objects. Then, the object is recognized and can be semantically classified and subsequently named. This approach is, however, over-simplified. Other theories, such as Marr and Nishihara’s and Biederman’s, attempt to describe these stages in more detail.
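The staged model just described can be sketched as a toy pipeline. Everything here is illustrative: the feature tuples, the stored "descriptions", and the semantic lookup are hypothetical stand-ins for whatever representations the stages actually use.

```python
from typing import Optional

# Toy stand-ins for stored object descriptions and semantic knowledge.
STORED_DESCRIPTIONS = {("four legs", "trunk", "large"): "elephant"}
SEMANTICS = {"elephant": "animal"}

def recognize(features: tuple) -> Optional[dict]:
    """Humphreys & Bruce's stages as a toy pipeline:
    sensory input -> perceptual classification (match against stored
    descriptions) -> semantic classification -> naming."""
    name = STORED_DESCRIPTIONS.get(features)   # perceptual classification
    if name is None:
        return None                            # no stored description matched
    return {"name": name,                      # naming
            "category": SEMANTICS.get(name)}   # semantic classification

print(recognize(("four legs", "trunk", "large")))
```

The strict sequencing here is exactly what makes the model over-simplified: real recognition likely allows the stages to interact rather than run one after another.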
Normal vision occurs through a coordinated synthesis of the two retinal images into a single brain image. If, however, one of the eyes does not transmit a coordinated or useful image, the brain may choose to ignore that image when conducting its synthesis. The region of the