


Application of the Multi-Channel Gradient Model to the Footsteps Illusion

By Will Koning

Supervisors:

Alan Johnston, Peter McOwan and Andy Anderson

Second Case Study Essay

4964 Words

CoMPLEX

Centre for Mathematics and Physics in the Life Sciences and EXperimental Biology

University College London

Wolfson House, 4 Stephenson Way, London, NW1 2HE



Contents

1 Abstract
2 Introduction
3 Motion Detection
3.1 Vision
3.2 How We Perceive Motion
3.3 Motion Detection Models
4 Multi-Channel Gradient Model
5 Footsteps Illusion
5.1 Illusion
5.2 Anstis Conclusions
6 Method
7 Results
8 Conclusions
9 Future Work
10 Summary
11 References
12 Appendix 1, MATLAB code for stimulus

1 Abstract

While the initial processing of the visual system is well understood, the computational processing that occurs subsequently, to infer properties such as motion, is debated. Computational models are applied to different motion paradigms to see whether they can predict motion and whether they display the same flaws in predicting motion as our own visual system. This essay reports on the predictions the Multi-Channel Gradient Model (McGM) provides when tested with the Footsteps Illusion (Anstis, 2001. Perception. 30:785-794). The Footsteps Illusion is due to contrast differences, but an intrinsic property of the McGM is that it normalizes the effect of contrast. It was therefore surprising to find that the McGM actually predicts the Footsteps Illusion. At very low contrast this is because contrast thresholds are not reached, but the illusion occurs at higher contrasts as well; I propose this is due to a ‘faulty’ normalization process arising from differential treatment of contrast in the numerator and denominator within the McGM.

2 Introduction

Our visual system can detect motion, but we do not actually know how it does so. Somehow the eyes and brain convert the light reaching the retina into a description of the world around us. In this essay, I give a brief background to current ideas about motion perception. We have a good understanding of the processes that occur initially, but debate surrounds how this information is computationally processed, and various motion paradigms have been developed to test different computational models. I will process one motion paradigm (the Footsteps Illusion) with a particular model of motion detection (the Multi-Channel Gradient Model; McGM) to see whether it can detect motion and whether it detects motion incorrectly, as we do when we look at the illusion. I shall then try to explain why this model makes the predictions of velocity that it does.

3 Motion Detection

3.1 Vision

Light from a 3D world reaches the eye, where a 2D array of photoreceptors processes it. There are 130 million photoreceptors in the primate retina but only about one million ganglion cells in the optic nerve that carries visual information to the brain for further processing, so cells in the eye must combine their outputs, beginning the processing of visual information (McOwan and Johnston, 1995). Some cells detect the wavelength of light (colour) while others combine to form spatial and temporal filters.

3.2 How We Perceive Motion

We can detect motion in a variety of ways. Motion can be inferred from displacement (e.g. the moon was over there before) or detected by cells that register a translation in space and time. All mammalian motion models start with low-level space-time filters and differ in the way these are combined (Johnston et al., 1992). To improve understanding, the stimuli that provide information about motion have been classified: first-order motion involves a translation of luminance, while second-order motion involves translation of a second-order image property (e.g. contrast or texture).

Reichardt detectors are paired cells linked by a delay: the detector fires only if one photoreceptor is stimulated by a moving object and then, after the programmed delay, the other photoreceptor is stimulated too (Bruce et al., 2003). They fire if, and only if, motion is in the direction the particular detector is tuned to (set by the orientation of the two cells) and at the right speed (set by the length of the delay). A single photoreceptor cannot detect motion, as an increase in light followed by a decrease could be caused either by a spot of light moving across its receptive field or by a change in lighting. Correlation, Energy and Gradient models have been proposed to solve this problem.
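A minimal sketch may make this delay-and-compare scheme concrete. The following MATLAB code is my own illustration rather than an implementation from any of the sources cited here; the drifting sinusoid, sensor spacing and delay are invented for the example. Each sensor’s signal is delayed and multiplied with its neighbour’s undelayed signal, and the two pairings are subtracted to give an opponent output.

dt = 0.01; t = 0:dt:10;          % time axis (seconds)
f  = 1;  dx = 0.25;              % stimulus spatial frequency; sensor spacing
D  = 10;                         % delay in samples: tuned speed = dx/(D*dt) = 2.5
for v = [2.5 -2.5]               % rightward, then leftward stimulus motion
    L = sin(2*pi*f*(0  - v*t));  % luminance seen by the left sensor
    R = sin(2*pi*f*(dx - v*t));  % luminance seen by the right sensor
    Ld = [zeros(1,D), L(1:end-D)];   % delayed left signal
    Rd = [zeros(1,D), R(1:end-D)];   % delayed right signal
    out = mean(Ld.*R - Rd.*L);       % opponent correlation over the trial
    fprintf('stimulus speed %+.1f -> detector output %+.2f\n', v, out)
end

The output is positive for rightward motion at the tuned speed (the spacing divided by the delay) and negative for the same motion leftward, which is what makes the detector direction- and speed-tuned.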

3.3 Motion Detection Models

The correlational approach combines detectors separated in both space and time but only describes the elementary processes occurring and does not describe how the output from these motion detectors is combined to compute velocity (Bruce et al., 2003). The motion energy approach constructs space-time filter-kernels (see Figure 1) and the outputs are squared and added (Bruce et al., 2003). Gradient models also construct space-time filter-kernels but develop this further by looking at gradients of intensity in space and time (Bruce et al., 2003).

The models may appear similar but are conceptually quite different and vary in details and implementation (Johnston and Clifford, 1995a; Johnston et al., 2001). However, they can become almost identical if the initial filters are chosen specifically for this purpose (Benton, 2004).

[pic]

Figure 1. Filters responsive to movement in space and time.

The basic spatial filter shown in Figure 1 calculates the local gradient of the intensity profile (dI/dx; the first derivative) and would produce the strongest output for a vertical edge stepping up. Edges can occur at any orientation and any spatial scale, so either many differently orientated filters or more complicated filters (which can be made by combining simple filters) are needed. Gradient models calculate velocity as the ratio of a temporal derivative over a spatial derivative. Consequently, they are ill-conditioned when the denominator, the spatial derivative, approaches or reaches zero.
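In symbols (the standard gradient-scheme argument, not specific to any one implementation): if a pattern translates rigidly at velocity v, then I(x,t) = I(x − vt), and differentiating with respect to time gives

\[ \frac{\partial I}{\partial t} = -v\,\frac{\partial I}{\partial x} \quad\Rightarrow\quad v = -\,\frac{\partial I/\partial t}{\partial I/\partial x}, \]

which is undefined wherever the spatial derivative in the denominator vanishes, for example in blank regions of the image.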

4 Multi-Channel Gradient Model

The Multi-Channel Gradient Model (McGM) is a gradient model: it calculates velocity as a change in space over a change in time (dx/dt), recovered from the image as the ratio of temporal to spatial intensity derivatives. The model is conditioned by taking sums of many higher-order temporal derivatives in the numerator and dividing them by sums of many higher-order spatial derivatives in the denominator, so that the denominator is unlikely to approach zero. The model is consistent with current knowledge of the eye and of the parts of the brain involved in processing vision (Hess and Snowden, 1992), and it combines raw and processed information simply, with minimal processing (Johnston et al., 1992; Johnston and Clifford, 1995a).
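To illustrate why this conditioning matters, here is a toy MATLAB sketch of my own; it is emphatically not the McGM, which pools many orders of derivative across many orientation channels, but a first-derivative scheme with simple least-squares pooling over space makes the same point. With a little sensor noise, the point-wise quotient is wild wherever the spatial gradient is near zero, while the pooled quotient keeps its denominator away from zero.

x  = linspace(-5, 5, 512);
I1 = exp(-x.^2) + 0.001*randn(1, 512);          % frame at t: a noisy bump
I2 = exp(-(x - 0.05).^2) + 0.001*randn(1, 512); % frame at t+1: bump shifted right
Ix = gradient((I1 + I2)/2, x);                  % spatial derivative
It = I2 - I1;                                   % temporal derivative
vPoint  = -It ./ Ix;                            % point-wise quotient: wild in the flat tails
vPooled = -sum(It .* Ix) / sum(Ix.^2);          % pooled quotient: stable, close to 0.05
fprintf('max point-wise |v| = %.1f; pooled v = %.3f\n', max(abs(vPoint)), vPooled)

Summing many measurements before dividing plays the same conditioning role here that combining many derivative channels plays in the full McGM.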

The model assumes that the stimulus parameters brightness, contrast and wavelength (colour) have separable, multiplicative influences on neuronal firing. The model removes these parameters through a quotient operation, except when threshold effects differ between the numerator and denominator, so that the model is selective to motion only (Johnston et al., 1992; Johnston and Clifford, 1995a; Johnston and Clifford, 1995b; McOwan and Johnston, 1995). The quotient is taken to avoid zero or near-zero denominators; the normalization of contrast and wavelength is an added benefit (Johnston and Clifford, 1995b). The model is ‘multi-channel’ because it takes multiple channels of higher derivatives. It calculates truncated Taylor series (truncated power series of an infinitely differentiable function), which provide representations of local structure at single points.
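Since this essay turns on whether the quotient really cancels contrast, a toy demonstration may help. Again this is my own sketch using the simplified pooled first-derivative quotient above, not McGM code: scaling the stimulus contrast by a factor c scales every spatial and temporal derivative by c, so c cancels between numerator and denominator and the velocity estimate is unchanged.

x  = linspace(0, 2*pi, 256);
I1 = 0.5 + 0.4*sin(x);           % frame at time t
I2 = 0.5 + 0.4*sin(x - 0.05);    % frame at t+1: pattern shifted right by 0.05
for c = [1 0.5 0.1]              % three multiplicative contrast levels
    J1 = 0.5 + c*(I1 - 0.5);     % rescale contrast about the mean luminance
    J2 = 0.5 + c*(I2 - 0.5);
    Ix = gradient((J1 + J2)/2, x);        % spatial derivative (scales with c)
    It = J2 - J1;                         % temporal derivative (scales with c)
    v  = -sum(It .* Ix) / sum(Ix.^2);     % the factor c^2 cancels top and bottom
    fprintf('contrast %.1f -> v = %.4f\n', c, v)
end

In a noiseless sketch like this the cancellation is exact; the question investigated below is whether thresholding and unequal treatment of the numerator and denominator in the McGM break this exact cancellation and let contrast leak into the velocity estimate.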

The McGM can recover first-order motion, second-order motion and even perceived velocity from motion illusions (Johnston et al., 1992; Johnston and Clifford, 1995a; McOwan and Johnston, 1995; Johnston et al., 1999). Commendably, the McGM explains a number of motion paradigms with a single motion mechanism based on psychophysical data (Hess and Snowden, 1992; Johnston et al., 1992; Johnston and Clifford, 1995a; McOwan and Johnston, 1995; Johnston et al., 1999).

5 Footsteps Illusion

5.1 Illusion

Because the way we see is not perfect, systematic errors can be used to elucidate how representations of image values are processed. Illusions, which by definition create systematic errors in perception, are therefore good tools for understanding the processes involved.

Stuart Anstis first described the Footsteps and Inchworm Illusions in 2001. In the Footsteps Illusion, a light grey bar and a dark grey bar, vertically aligned, move horizontally across a vertically striped black-and-white background (see Figure 2). The bars move at constant velocity but appear to slow down when their leading and trailing edges lie on stripes against which they have low contrast, and to speed up when the edges lie on stripes against which they have high contrast. Consequently, the light and dark bars appear to slow down and speed up out of phase with each other, in a manner similar to feet walking across the stripes. [The Inchworm Illusion occurs when the leading and trailing edges of a bar lie on different coloured stripes, so that the bar appears to change in length. I have focused on the Footsteps Illusion, in which the bars are one full grating period long, so their leading and trailing edges always lie on the same coloured stripes.]

[pic]

Figure 2. The Footsteps Illusion stimulus.

A version of the Footsteps Illusion can be viewed on Stuart Anstis’s webpage at:

5.2 Anstis Conclusions

Anstis (2001) concluded that the illusion is due to the instantaneous contrast of moving edges against their background. When the contrast of the leading edge is high, the leading edge is more conspicuous, which translates into the motion of that bar being more conspicuous. He concluded that the difference in velocity depends on contrast rather than on luminance or polarity, citing a positive correlation between contrast and apparent speed (Thompson, 1982; and many other publications reviewed in Anstis, 2001). Anstis (2004) further investigated the Footsteps Illusion by trying different variations and identified that it is the leading and trailing edges that are important in creating the illusion, and that the illusion reflects perceived changes in motion rather than changes in position from which motion is inferred. Anstis (2003) concluded that the Footsteps Illusion is compatible with models of motion that use velocity-tuned neural units. He suggests that apparent increases in speed with contrast arise because fast-tuned detectors respond more strongly to increases in contrast than slow-tuned detectors; that is, fast- and slow-tuned neural units do not compensate correctly for changes in contrast. Anstis (2003) also claims that moving objects appear to slow down at low contrasts and gives the Footsteps Illusion as an example.

6 Method

The aim of my research was to apply the McGM to the Footsteps Illusion and investigate its predictions. I wrote a program in MATLAB (Version 7.0.1.15 [R14]) that creates the Footsteps Illusion stimulus and exports each frame as a bitmap image (Appendix 1). The program also exports the stimulus as a movie. While I endeavoured to recreate the stimulus Anstis (2001) used, I included five bars of varying tones in each sequence of frames to be analysed by the McGM. This allows me to investigate how the McGM predictions vary with contrast while reducing the processing required. The movie allowed me to confirm that pairs of bars within my sequence did indeed display the illusion. The Red, Green, Blue (RGB) values for the shades of grey I used were:

75% luminance grey (R:192, G:192, B:192)

25% luminance grey (R:64, G:64, B:64)

50% luminance grey (mid-tone; R:128, G:128, B:128)

almost white (R:255[248], G:251[248], B:240[248])

almost black (R:32[42], G:32[42], B:64[42])

The McGM did not always read the colours exactly as MATLAB exported them, so where the values differ I have put the McGM values in square brackets. Anstis (2001) used luminance values and percentages of white, but I am relying on the calibration of the RGB system to produce my ‘percentage luminance’ values. Anstis (2001) actually ran one bar at a time in his experiments, and only used two bars to demonstrate the effect. As long as the bars are spatially separated enough not to interfere in the filtering process, it does not matter how many are used at once. The stimulus had a cycle length of 21 frames (an odd number because the bands spend one extra frame with their leading and trailing edges on a white stripe, due to a scaling error in MATLAB that made the white stripes wider than the black stripes despite their being coded to the same width).
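For reference, the nominal contrasts of these greys against the white stripes can be computed directly. This is my own check, assuming luminance is simply proportional to the 8-bit grey value (as noted above, I am relying on the RGB calibration; real monitors are gamma nonlinear, so these are nominal figures only). With black coded as 0, the Michelson contrast against the black stripes is 1 for every bar, so only the contrast against white separates the bars under this crude assumption.

bars  = [192 64 128 248 42];     % grey values as read by the McGM
names = {'75% grey','25% grey','mid-tone','almost-white','almost-black'};
for i = 1:numel(bars)
    c = (255 - bars(i)) / (255 + bars(i));   % Michelson contrast against white
    fprintf('%-13s contrast against white: %.3f\n', names{i}, c)
end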

I used the McGM (IPLMcGM ©UCL 2002, version 1.2, compiled 27/02/2003) to process the exported bitmap images. I exported the stimulus generated in MATLAB at 512 by 512 pixels but ran it through the model at 256x256 pixels and 128x128 pixels by using ‘subsample factors’ of ‘2’ and ‘4’. I used the default model parameters (X Order: 5; Y Order: 2; T Order: 2; Orientation Cols: 24; Sigma: 1.5; Alpha: 10; Tau: 0.25; Integration Size: 11), which are based on the known biology of the eye and brain (Johnston et al., 1992; Johnston and Clifford, 1995a). I collected the optic flow results for three different runs: a 256x256 pixel run, a noise-reduced 256x256 pixel run, and a 128x128 pixel run. I reduced the noise for the second run by changing the ‘McGM Tweakables’ (turning on ‘Mask Threshold’ and ‘Quotient Blurring’).

My aim was to test whether the McGM can account for the illusory perception of acceleration and deceleration in the Footsteps stimulus. I replicated the Anstis stimulus but also tried mid-grey, lighter and darker rectangles and a smaller stimulus. The smaller stimulus simulates one aspect of the change from foveal to peripheral vision: the larger cells in the periphery reduce resolution (though this is only one of the many changes that occur in moving from foveal to peripheral vision).

Figure 3 shows a representation of the stimulus: a one-dimensional slice taken horizontally through the middle of the stimulus, plotted for each time step. The final frame of the image sequence is overlaid to show band placement.

[pic]

Figure 3. Space-time one-dimensional slice through the image sequence (the I(x,t) plane, showing the traverse of the bands). The image takes advantage of the static nature of the stripes to overlay the final frame of the stimulus; a horizontal bar across the width of the stimulus shows where the slice was taken for the space-time image. The colours, in descending placement, are 25% luminance grey, 75% luminance grey, mid-tone grey, almost-white grey and almost-black grey. (These colours neither show well in Word nor print well, but can be verified in Photoshop.)

7 Results

When noise reduction is applied, the McGM predicts the perceived Footsteps Illusion from the stimulus: the almost-white band disappears when its leading and trailing edges are at low contrast against the white stripes (see Figure 4). The model also shows slight differences in where it calculates the leading edges to be for bars of different contrasts.

[pic]

Figure 4. Four frames of optic flow output of the McGM from the analysis with noise removal. The original output has been scaled down to show four key frames of the 21 frames that make up a full cycle (frames 41, 45, 51 and 55, ordered left to right). The top row shows the stimulus after the initial filtering. The second row shows the perceived direction of every pixel of the stimulus, coded by the matching colour of the colour wheel on the edge of each of the four squares (e.g. red is heading horizontally right – towards three o’clock – as the bars actually moved in the stimulus; black is where no motion is detected). In the bottom row, each pixel’s predicted speed, in the direction shown in the second row, is displayed in greyscale, where white is the maximum and black is stationary.

Note that in Figure 4, in optic flow frames 41 and 51 (1st and 3rd), the bars whose leading and trailing edges are at low contrast appear a pixel behind the mid-tone band and the bands whose leading and trailing edges are at high contrast. This is in keeping with the illusion, but it is a very weak effect. Motion is detected at and near the edges of each band and is undetected in the middle of the band.

[pic]

Figure 5. A single frame (51) of optic flow output of the McGM from the analysis without noise removal, at output size. This is the same frame as the one in which the almost-white band disappears in the noise-reduced analysis of Figure 4, but from a different analysis. Note that the top-left image is incorrect, as it is out of phase with the others; this is a problem with the model’s output and needs correction. The other three images display the same information as in Figure 4.

The analysis without noise reduction did not predict the perceived motion very well. The output is very noisy, but a horizontal core along the middle of each band in the direction graph does show movement in the correct direction for all colours, although notably less clearly for the almost-white bar. A direction of movement is assigned to the entire image, including the stationary stripes, but the velocity graph shows the background as not moving except at its perimeter.

[pic]

Figure 6. A single frame (42) of optic flow output of the McGM from the analysis without noise removal, using the smaller input, displayed at output size. Note again that the top-left image is incorrect. The other three are as in Figures 4 and 5.

The output displayed in Figure 6 shows that when the stimulus is smaller, the dark bands are also confused at low contrast. At the leading edges of the 2nd and 5th bands, the model predicts that the leading edge is moving to the left (green front). The bands do not disappear, as the almost-white band did in the noise-reduced analysis, but they are perceived to be heading in the wrong direction, which could also lead to the Footsteps Illusion. In the next two frames the other bands display the same phenomenon, in order of increasing contrast. This image is optic flow frame 42, one later in the sequence than the very first image in Figure 4 (but analysed as smaller images without noise reduction). In the next frame (43), the green bands of the low-contrast bars move to the left, and the mid-tone grey and the 25% luminance grey (top band) develop a green band at the leading edge. In the frame after that (44), all the green bands move to the left and the almost-white band develops a green band at the leading edge.

The Footsteps Illusion appears stronger when viewing the stimulus at a smaller size (personal observation). Cell size, sampling and focus all change in moving from foveal to peripheral vision, allowing plenty of scope for complexity. Anstis (2001) noted that the illusion was stronger when optically blurred as well as when seen in peripheral vision; viewing the stimulus in peripheral vision would blur it, and reduced size also increases the effect of the illusion. Anstis (2003) claims that motion perception is more contrast dependent in peripheral than in foveal vision, due to noise-removing thresholds, which would increase the strength of the illusion.

The almost-white band, which is almost as luminant as white, behaved differently from the other tones of grey (although, because of difficulties producing colours in MATLAB, it has both the highest and the lowest contrast, depending on whether it is on black or on white). It does not appear to move while its leading and trailing edges are on white; it can only be seen to move on the black. The McGM predicts this behaviour: the almost-white band is predicted to stop moving entirely for one frame when its leading and trailing edges are both on white.

The Footsteps Illusion is due to low-contrast ‘feet’ dragging rather than high-contrast ‘feet’ stepping forward: the mid-tone grey bar appears to move fairly steadily and the ‘high-contrast’ bars never appear to overtake it – the bars may appear to leap forward at high contrast, but only ever to where they actually are. There is no difference in contrast between ‘white and mid-grey’ and ‘black and mid-grey’ (equal and opposite Weber contrasts; Anstis, 2001).

8 Conclusions

The motion is a first-order stimulus: a simple translation of bars that differ only in contrast. However, the contrast of these bars is a second-order property, and it produces an incorrect perception of velocity.

The McGM predicts the illusion, but not strongly, and apart from the cases where movement stops entirely at contrasts below a threshold level, it is surprising that it predicts the illusion at all. The effects of contrast and wavelength are reportedly removed through a quotient operation (except when threshold effects differ between the numerator and denominator), so that the model is selective to motion only (Johnston et al., 1992; Johnston and Clifford, 1995a; McOwan and Johnston, 1995). While these stimulus parameters are independent of velocity, and should be removed for constancy in motion perception, this illusion and a variation of it demonstrate that contrast (Anstis, 2001) and wavelength (Pretto and Chatziastros, 2005) are involved in motion perception.

When the almost-white band disappears in frame 51 of Figure 4, the McGM correctly demonstrates the effect of contrast falling below a threshold. However, the 25% and 75% luminance grey bands still display the Footsteps Illusion. Why do they, and why does the McGM predict, albeit weakly, differing effects for these bars that are consistent with the illusion? The illusion is due to contrast, so the contrast normalization, which arose as a byproduct of the quotient, must be faulty: the McGM quotient operation does not totally remove the effect of contrast. Some parts of the model have higher orders of temporal differentiation in the numerator than in the denominator (Johnston and Clifford, 1995a), which could allow an effect of contrast to drop out of the numerator yet remain in the denominator. Also, at lower temporal frequencies, high bandpass filters may gate products resulting from low-contrast stimuli in the numerator before the denominator goes to zero, resulting in an underestimation of speed (Johnston et al., 1999). The Gaussian filter may ‘blur’ or ‘extend’ edges outwards, and the smaller the contrast, the closer to the edge the response may drop below a threshold; this would allow detection of the stimulus but also a contrast-dependent effect, if the filters used to calculate spatial and temporal derivatives differ in their treatment of contrast. When the stimulus is smaller, this putative contrast-dependent edge extension may be scale independent and therefore relatively larger at the smaller size.

Contrast above a detectable level should be irrelevant to any computation of motion, but this illusion clearly shows that motion perception is sensitive to contrast: detectable contrast (25%) still produces an inaccurate perception of velocity. Contrast perception is not constant with regard to polarity, so thresholds may differ for light bars on white and dark bars on black despite the contrasts being the same (Benton and Johnston, 1999). Note that the two low-contrast examples (the bottom two bars in the stimulus, when their leading and trailing edges are on their respective low-contrast stripes) do not have equal contrasts, as I could not program this in MATLAB.

Apparent speed is not totally contrast dependent (Pretto and Chatziastros, 2005). In a variation of the Footsteps Illusion, Pretto and Chatziastros (2005) tested isoluminant bands and bars of different colours (no contrast variation in the pattern) and the illusion was still evident. It is likely that both contrast and wavelength influence motion through the same mechanism. Without contrast normalization, correlational and energy models are contrast sensitive, so as this stimulus is contrast dependent, models with an absent or faulty contrast-normalizing step could explain it. Standard correlational and energy models cannot detect second-order motion (Clifford and Vaina, 1999), although with their contrast normalization step removed they might detect this illusion, which is apparent due to second-order motion. The effect of motion-sensitive neurons responding more vigorously to higher contrast (in macaques; Thiele et al., 2000) would be amplified in any model that includes a squaring stage, increasing the effect of contrast further.

Anstis (2001, 2004) claims the high-contrast bands jump forward, but as they never appear to overtake the mid-grey band they must instead lag behind (which he also claims). Note, however, the green bands in Figure 6 and the discussion in the results of the two frames following it, where, sequentially with increasing contrast, the leading edge of each band appears to move in the opposite direction to the actual motion. That is, the leading edge appears to move backwards temporarily, starting first at low contrast and then moving through mid-grey to high contrast, so according to the McGM it is possible that mid-grey could appear behind a high-contrast bar. I personally only perceive the bars to be in line with or behind the mid-grey when I watch the stimulus. Blurring increases the illusory effect (Anstis, 2001), yet blurring would reduce the instantaneous contrast and thus reduce any effect of high-contrast bands ‘jumping forward’, providing further evidence that the illusion is due to apparent slowing or stopping of bands whose vertical edges lie on low-contrast stripes. When the contrast at the leading and trailing edges is high, the McGM shows the direction and velocity of motion clearly, accurately and with little noise, predicting that we see what is happening accurately; the illusion is caused when the contrast at the leading and trailing edges is low. This is analogous to dragging the feet rather than swinging them forward: the illusion is due to masking of the movement when edge contrast is low rather than conspicuous movement when edge contrast is high.

I disagree with Anstis’ (2003) suggestion that apparent increases in speed arise because fast-tuned neural units respond more rapidly to increases in contrast than slow-tuned ones, as in this illusion low contrast appears to mask motion while high contrast ‘shows it as it is’. His argument could be reversed to say that slow-tuned neural units respond more slowly than their fast-tuned counterparts to decreases in contrast. Johnston et al. (1999) found that low-contrast stimuli can appear to move more slowly than high-contrast stimuli at low speeds (0.5°s-1, 0.75Hz), while at higher speeds (2-4°s-1, 3-6Hz) velocity perception is remarkably unaffected by contrasts above 0.05 (see also Thompson, 1982, and McKee et al., 1986). However, this is not the case in the Footsteps Illusion, as the bars move at 2.25°s-1 (3.375Hz) and velocity perception is affected by contrasts above 0.05 (Anstis, 2001).
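For reference, the quoted speed and temporal-frequency pairings are mutually consistent if the grating has a spatial frequency of about 1.5 cycles per degree (my inference from the numbers quoted above, not a value stated explicitly in this essay), since temporal frequency is speed multiplied by spatial frequency:

\[ f_t = v\,f_s, \qquad 2.25\,^{\circ}\mathrm{s}^{-1} \times 1.5\ \mathrm{cycles\,deg}^{-1} = 3.375\ \mathrm{Hz}, \]

and likewise 0.5 x 1.5 = 0.75 Hz.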

9 Future Work

It would be interesting to try the Footsteps Illusion at different speeds after calibrating the temporal component of the frames with the temporal component of the model. Speeds of 0.5°s-1 (slow; 0.75Hz), 5.3°s-1 (medium; 8Hz) and 8°s-1 (fast; 12Hz) would be useful for investigating whether velocity modulates motion salience at low contrast. Actual luminance values and Weber fractions should be computed following the original study (Anstis, 2001). Noise removal improved the predictions of the model, so it would be interesting to vary the noise removal parameters and to repeat the analysis at the smaller size with noise reduction on, to see whether the almost-black band disappears (as the almost-white band did). In the analyses above, the model used roughly as many frames to compute velocity as there are in a complete cycle (both about 20), which could produce sampling effects; it would therefore be good to export the stimulus at the same bitmap size but covering only 1.5 periods of the cycle, rather than 6, so that the number of frames per cycle greatly exceeds the number the model uses to calculate motion. It would be interesting to investigate the Inchworm Illusion by changing the stimulus (increasing the length of the moving bars by 50%) so that the leading and trailing edges fall on stripes of different contrasts. Graphing the sum and average of rightward velocities for each tone of grey across a period of the stimulus (calculating the horizontal component of each angle/velocity vector from the numerical output data, as sketched below) might show oscillatory patterns. Finally, the output of the model needs to be corrected so that the actual frame being processed is exported (the top-left frame in Figures 5 and 6).
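A sketch of the proposed velocity-graphing analysis follows. The file names, output layout and bar regions here are hypothetical; the McGM’s actual numerical output format would need to be checked before use. For each frame, the horizontal component of every velocity vector within each bar’s region is averaged, and the averages are plotted across one cycle.

nFrames = 21;                                       % one stimulus cycle
rows = {60:70, 80:90, 110:120, 140:150, 160:170};   % hypothetical rows covering each bar
meanVx = zeros(numel(rows), nFrames);
for k = 1:nFrames
    S = load(sprintf('speed%03d.txt', k));   % per-pixel speed (hypothetical file)
    A = load(sprintf('angle%03d.txt', k));   % per-pixel direction in radians (hypothetical)
    Vx = S .* cos(A);                        % horizontal (rightward) component
    for b = 1:numel(rows)
        roi = Vx(rows{b}, :);                % pixels covering bar b
        meanVx(b, k) = mean(roi(:));
    end
end
plot(1:nFrames, meanVx')                     % one trace per tone of grey
xlabel('frame'), ylabel('mean rightward velocity')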

10 Summary

We can detect motion using collections of cells that register changes in luminance and are linked in space and time. When the signals these cells send to the brain are processed, the brain can calculate the speed and direction of movement of objects in the visual field. There is debate about what is happening computationally, but the McGM provides a single mechanism, based on psychophysical data, that explains most motion paradigms.

The McGM approximates the computational processes that go on in the brain to predict motion from individual cells firing in response to luminance. It creates ‘filter kernels’ that average the images in space and time and calculates multiple spatial and temporal derivatives. It sums temporal derivatives and divides them by summed spatial derivatives to form a ratio giving velocity (dx/dt).

I coded a MATLAB program to replicate the Footsteps Illusion and export it as a sequence of bitmap images, which I then processed using the McGM. In the Footsteps Illusion everything moves at the same speed, but we perceive that things do not because they differ in contrast at their leading and trailing edges. The McGM normalizes the contrast information by dividing a sum of temporal derivatives by a sum of spatial derivatives. This is a good thing to do for most visual stimuli, as contrast does not affect velocity; in this illusion, however, contrast affects perceived velocity. Despite the bands differing only in contrast, the model produces different outputs for them, so it does not totally normalize contrast.

In one case, the model predicted a band stopping because its edges were at a contrast with the background below a threshold, which fits our perception of the Footsteps Illusion very well. However, the model does not explain the Footsteps Illusion well where the illusion is evident at higher contrasts, and it would probably also falter when applied to the isoluminant version of the illusion.

The visual system cannot consistently provide an accurate measure of speed as it is sensitive to contrast. The basic input for all models should be more than just brightness values as wavelength is shown to be important. We need to rethink the importance of supposedly ‘irrelevant’ stimuli to our perception of motion.

11 References

Anstis, S., 2001. Footsteps and inchworms: illusions show that contrast modulates motion salience. Perception. 30:785-794.

Anstis, S., 2003. Moving objects appear to slow down at low contrasts. Neural Networks. 16:933-938.

Anstis, S., 2004. Factors affecting footsteps: contrast can change the apparent speed, amplitude and direction of motion. Vision Research. 44:2171-2178.

Benton, C.P., 2004. A role for contrast-normalisation in second-order motion perception. Vision Research. 44:91-98.

Benton, C.P., Johnston, A., 1999. Contrast inconstancy across changes in polarity. Vision Research. 39:4076-4084.

Bruce, V., Green, P.R., Georgeson, M.A., 2003. Visual perception: physiology, psychology and ecology. 4th Edition. Psychology Press, Hove, U.K.

Clifford, C.W.G., Vaina, L.M., 1999. A computational model of selective deficits in first and second-order motion processing. Vision Research. 39:113-130.

Hess, R.F., Snowden, R.J., 1992. Temporal properties of human visual filters: number, shapes and spatial covariation. Vision Research. 32:47-60.

Johnston, A., Benton, C.P., Morgan, M.J., 1999. Concurrent measurement of perceived speed and speed discrimination threshold using the method of single stimuli. Vision Research. 39:3849-3854.

Johnston, A., Clifford, C.W.G., 1995a. A unified account of three apparent motion illusions. Vision Research. 35:1109-1123.

Johnston, A., Clifford, C.W.G., 1995b. Perceived motion of contrast-modulated gratings: predictions of the multi-channel gradient model and the role of full-wave rectification. Vision Research. 35:1771-1783.

Johnston, A., Clifford, C.W.G., Benton, C.P., McOwan, P.W., 2001. Why correlation, energy and gradient motion models are not equivalent [Abstract]. Journal of Vision. 1:240a. doi:10.1167/1.3.240.

Johnston, A., McOwan, P.W., Buxton, H., 1992. A computational model of the analysis of some first-order and second-order motion patterns by simple and complex cells. Proceedings: Biological Sciences. 250:297-306.

McKee, S., Silverman, G., Nakayama, K., 1986. Precise velocity discrimination despite random variation in temporal frequency. Vision Research. 26:609-620.

McOwan, P.W., Johnston, A., 1995. The algorithms of natural vision: the multi-channel gradient model. First IEE/IEEE International Conference on genetic algorithms and applications. Sheffield. Conference publication 414:319-324.

Pretto, P., Chatziastros, A., 2005. Apparent speed in the footstep illusion is not totally contrast dependent [Abstract]. TWK: 8th Tübingen Perception Conference.

Thiele, A., Dobkins, K.R., Albright, T.D., 2000. Neural correlates of contrast detection at threshold. Neuron. 26:715-724.

Thompson, P., 1982. Perceived rate of movement depends upon contrast. Vision Research. 22:377-380.

12 Appendix 1, MATLAB code for stimulus

% Stimulus generator for the Footsteps Illusion.
% Draws a static vertical black-and-white grating and five horizontally
% moving grey bars, saves each frame as a bitmap and collects the
% frames into a movie.

axis([0 10 0 10])
set(gcf,'InvertHardCopy','off')
colormap(gray)
grid off
axis off

nframes = 128;                        % number of frames to generate
a = 1.5;                              % initial x-position of the bars
greys = [0.75 0.25 0.5 0.9 0.1];      % bar shades: 75%, 25%, mid-tone, almost-white, almost-black
ybar  = [7 6 4.5 3 2];                % y-position of each bar

for k = 1:nframes
    % Redraw the static grating every frame; this covers the bars'
    % previous positions, so no explicit erasing is needed.
    for x = 0:9
        rectangle('Position',[x 0 0.5 10],'erasemode','normal','edgecolor','k','facecolor','k')
        rectangle('Position',[x+0.5 0 0.5 10],'erasemode','normal','edgecolor','w','facecolor','w')
    end
    a = a + 0.05;                     % advance all bars by a constant step
    for i = 1:numel(greys)
        c = greys(i)*[1 1 1];         % grey level as an RGB triple
        rectangle('Position',[a ybar(i) 1 0.5],'erasemode','normal','edgecolor',c,'facecolor',c)
    end
    saveas(gcf,sprintf('img%d',1000+k),'bmp')   % export the frame as a bitmap
    M(k) = getframe;                  % collect the frame for the movie
end
