In Homer’s Odyssey, the ancient Greek king Ulysses lost his way at sea. Among his many supernatural encounters was the Cyclops Polyphemos, a feared warrior, cannibal, and giant demigod with one important weakness: he had only one eye, in the center of his forehead. Ulysses stabbed that eye to blind Polyphemos and escaped by hiding among the Cyclops’ sheep. But there is another myth that, if true, would have affected Polyphemos severely: the myth that the one-eyed cannot estimate depth.
This article appeared in the Spring 2014 issue of Current Exchange Magazine.
Common knowledge holds that seeing ‘in 3D’ requires good sight in both eyes, or stereovision. The mechanism is based on both eyes focusing on the same object. Because each eye looks at the object from a slightly different angle, the two images differ from each other.
Based on these differences, the brain can estimate distances. The greater the difference, the better stereovision works. This has two important consequences: stereovision works better when the object is close and when the eyes are further apart. Thus, stereovision only works within a limited range that depends on the distance between the two eyes – just a few meters for people, centimeters for small birds.
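The geometry behind this can be sketched in a few lines of Python. The disparity between the two eyes’ lines of sight falls off with distance; the eye separation of 6.5 cm below is an illustrative round number, and the small-angle approximation is a simplification.

```python
import math

def disparity_deg(baseline_m, distance_m):
    """Angular difference between the two eyes' lines of sight to a point
    straight ahead (small-angle approximation: baseline / distance)."""
    return math.degrees(baseline_m / distance_m)

# Illustrative human eye separation of about 6.5 cm.
for d_m in [0.5, 2.0, 10.0, 50.0]:
    print(f"object at {d_m:5.1f} m -> disparity {disparity_deg(0.065, d_m):6.3f} deg")
```

Doubling the distance halves the disparity, which is why the useful range of stereovision is so short: beyond a few meters the angular difference becomes too small for the brain to resolve.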
Not every animal – or human – has two eyes that are far apart. Small animals like bees, and even small birds, are almost ‘one-eyed’ for the purposes of stereovision: their eyes are so close together that stereovision works only at very short range. Still, they are capable of virtuosic flight maneuvers. Furthermore, when it comes to managing everyday life, one-eyed people are usually far less impaired than the depth-perception myth would have us believe.
Finally, most of us seem to share the notion that stereovision-based 3D cinema technology doesn’t really add much depth to a movie (in the literal, perceptual sense) – we immerse ourselves just as easily in conventional, ‘two-dimensional’ movies.
The question arises: what other mechanisms allow us to perceive three dimensions? There are several mechanisms for monocular depth perception – depth perception with only one eye. All of them are based on the design of the eye itself and on the consequences of basic optical geometry. In this article I introduce two of them that are closely related and affect our everyday life: perspective and motion parallax.
Perspective means that objects far away create a smaller image on the retina than objects close to you. This is because the eye only admits light rays that pass through the pupil, so a single eye covers a cone-shaped space in front of it. While the eye sees a small area at close distance and a large area at far distance, the total image stays the same size. In other words, distant objects must produce smaller images than close objects. The angle that an object’s contours subtend at the pupil shrinks the further away the object is, and so does its image on the retina.
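This inverse relationship between distance and image size can be illustrated with a simple pinhole-camera model of the eye. The 17 mm focal length below is an often-quoted approximation for the human eye, used here purely as an illustrative assumption.

```python
def retinal_image_mm(object_size_m, distance_m, focal_length_mm=17.0):
    """Pinhole-camera sketch of the eye: by similar triangles, the image
    size scales as object size / distance. The 17 mm focal length is an
    approximate, illustrative value for the human eye."""
    return focal_length_mm * object_size_m / distance_m

# A 1.8 m tall person seen at increasing distances:
for d_m in [2.0, 4.0, 8.0, 16.0]:
    print(f"{d_m:4.0f} m -> image {retinal_image_mm(1.8, d_m):.2f} mm on the retina")
```

Each doubling of the distance halves the image on the retina – the same similar-triangles geometry that painters exploit when constructing perspective.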
The brain is trained to take the effects of perspective into account and uses cues from a scene to estimate distances. Painters exploit this effect by including a vanishing point in their realistic art. All lines that are supposed to point away from the viewer, ‘into the image’, converge on this vanishing point, creating the illusion of distance. It also helps the painter estimate how large an object needs to be to appear at the correct distance.
Some artists play with perspective to create astonishing illusions. For example, the actors playing hobbits in The Lord of the Rings movies were often simply placed further back than the other actors. By carefully hiding the depth cues that would give their real positions away, the filmmakers created the illusion of very small people.
Motion parallax is essentially ‘perspective in motion’. As in stereovision, the brain compares different images. However, the images are not acquired simultaneously but in sequence, while the observer moves around. The same principles that apply to perspective also apply here, with some very interesting effects.
The images of objects at different distances do not change at the same rate. When you approach an object, its image looms larger; when you move away, the image shrinks. Also, images of distant objects move more slowly across the retina than those of close objects, because the close environment is represented on your retina at a larger scale than the distant environment.
A change in eye position thus leads to a large change in the position of close objects on the retina but only a small change for far objects.
You can observe motion parallax by looking sideways out of a moving car: objects at the side of the road, like signs, move across your field of vision much more quickly than objects in the distance. Or hold up a finger and move your head from side to side to see how the finger changes position relative to the background.
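The car example can be put in numbers with the same small-angle sketch as before: at the moment you pass abeam of a stationary object, its image sweeps across the retina at roughly your speed divided by its distance. The speeds and distances below are illustrative.

```python
import math

def image_speed_deg_per_s(observer_speed_mps, distance_m):
    """Angular speed of a stationary object's image at the moment the
    observer passes abeam of it: speed / distance (small-angle sketch)."""
    return math.degrees(observer_speed_mps / distance_m)

# Driving at about 30 m/s (roughly 108 km/h, illustrative):
near = image_speed_deg_per_s(30.0, 5.0)    # roadside sign 5 m away
far = image_speed_deg_per_s(30.0, 500.0)   # hill 500 m away
print(f"sign: {near:.0f} deg/s, hill: {far:.2f} deg/s")
```

The sign a hundred times closer sweeps a hundred times faster across the retina than the distant hill – exactly the distance cue motion parallax provides.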
Neuroscientists studying the natural behavior of animals – including myself and my collaborators at Bielefeld University in Germany during my PhD training – have found that animals control their movements so that they can extract depth cues from visual motion more easily. They also make specific use of visual motion for navigation, and we can learn from these animals how to solve our own navigational problems. For example, three months ago, Ig Nobel prize laureate and neuroethologist Dr. Emily Baird and her co-authors published a general mechanism for landing aircraft, based on their studies of honey bees landing on flat, vertical surfaces. The mechanism relies purely on visual motion, and could in principle be applied by anybody and anything that flies and lands, on any surface.
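The core idea reported by Baird and colleagues is that the bees hold the rate of image expansion roughly constant as they approach. The toy simulation below is my own simplified illustration of why that alone yields a gentle landing – not the authors’ published model, and all parameter values are invented: if the expansion rate (speed divided by remaining distance) is held fixed, the commanded speed automatically shrinks in proportion to the remaining distance.

```python
def simulate_landing(d0_m=10.0, expansion_rate_hz=0.5, dt_s=0.01, duration_s=10.0):
    """Toy controller: set the approach speed so that speed / distance
    (the relative rate of image expansion) stays constant."""
    d = d0_m
    trajectory = []
    for _ in range(int(duration_s / dt_s)):
        v = expansion_rate_hz * d   # speed commanded from the expansion rate
        d = max(d - v * dt_s, 0.0)
        trajectory.append((d, v))
    return trajectory

traj = simulate_landing()
# Speed shrinks in proportion to the remaining distance, so the approach
# decelerates smoothly toward zero at touchdown -- with no need to
# measure absolute distance or speed at all.
```

The appeal of such a controller is that the expansion rate can be read directly off the retina (or a camera), which is why it generalizes from bees to anything that flies and lands.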