
Thursday, February 19, 2009

What Are You Looking At?

As designers, we spend a lot of time crafting our images or graphics, but how much do we really know about how people look at them?

Greg Edwards uses eye tracking technology to understand how our eyes move over computer screens.

He helped create the Advanced Eye Interpretation Project at Stanford University, and is the CEO and founder of Eyetools, Inc. in San Francisco. Most of the work that Dr. Edwards currently does at Eyetools is to help clients understand how to make their websites communicate more effectively through a better understanding of viewer behavior.

The eye tracking tools have come a long way since the first pioneering work decades ago (see GJ post on the 1967 Yarbus eye tracking studies).

A typical basic hardware setup (this example from the lab at University of California San Diego) includes a non-invasive head-mounted system.


With eye tracking technology, scientists can carefully follow the saccades (jumps) and fixations of subjects’ eyes as they review text and images on a computer screen. This graphic, sometimes called a scanpath or a gaze trace, shows the sequence and position of an individual’s center of attention.
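For readers curious about the mechanics, raw gaze samples are commonly split into fixations and saccades with a dispersion threshold (the I-DT algorithm). This Python sketch is illustrative only; the threshold values and the synthetic trace are invented, not taken from any lab's actual pipeline:

```python
import numpy as np

def detect_fixations(gaze, dispersion_px=30, min_samples=5):
    """Split a gaze trace into fixations and saccades with a dispersion
    threshold (the classic I-DT algorithm). `gaze` is an (N, 2) array of
    x, y screen positions sampled at a fixed rate. A window of samples
    counts as one fixation while its bounding box stays within
    `dispersion_px`; the leaps between fixations are the saccades."""
    fixations = []
    start, n = 0, len(gaze)
    while start + min_samples <= n:
        end = start + min_samples
        window = gaze[start:end]
        if np.ptp(window[:, 0]) + np.ptp(window[:, 1]) <= dispersion_px:
            # Grow the window until the dispersion threshold is exceeded.
            while end < n:
                w = gaze[start:end + 1]
                if np.ptp(w[:, 0]) + np.ptp(w[:, 1]) > dispersion_px:
                    break
                end += 1
            fixations.append((start, end, gaze[start:end].mean(axis=0)))
            start = end
        else:
            start += 1  # still mid-saccade; slide one sample forward
    return fixations

# Synthetic trace: two steady fixations joined by one large saccade.
trace = np.array([[100, 100]] * 10 + [[300, 250]] * 10, dtype=float)
trace += np.random.default_rng(0).normal(0, 2, trace.shape)  # sensor noise
fixes = detect_fixations(trace)
print(len(fixes))  # 2 fixations, centered near (100, 100) and (300, 250)
```

Connecting the fixation centroids in order, with circle sizes scaled to each fixation's duration, reproduces the kind of scanpath diagram shown here.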


Scientists can also record the gaze behavior of a large group of people to find out what part of the design attracts the eyes the most, creating what’s known as a heatmap. The areas receiving the most attention are indicated in red and yellow. Areas receiving less attention are mapped in blue or black.
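The computation behind such a map can be sketched in a few lines: each fixation, pooled across all viewers, deposits a Gaussian blob weighted by how long the gaze rested there, and the summed surface is what gets rendered in red-through-blue colors. The coordinates and durations below are made up for illustration:

```python
import numpy as np

def fixation_heatmap(fixations, width, height, sigma=40):
    """Aggregate many viewers' fixations into an attention heatmap.

    fixations: iterable of (x, y, duration_s) tuples pooled across viewers.
    Each fixation deposits a Gaussian blob weighted by dwell time; high
    values correspond to the red/yellow areas of a rendered heatmap."""
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width))
    for x, y, dur in fixations:
        heat += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2)
                             / (2 * sigma ** 2))
    return heat / heat.max()  # normalize to [0, 1]

# Pooled fixations from several hypothetical viewers: most rest on a
# face at (120, 80); one glances at a sign at (260, 40).
data = [(120, 80, 0.4), (118, 82, 0.3), (122, 78, 0.5), (260, 40, 0.2)]
heat = fixation_heatmap(data, width=320, height=200)
peak = np.unravel_index(heat.argmax(), heat.shape)
print(peak)  # hottest spot lands near the face: (y, x) close to (80, 120)
```

The choice of `sigma` matters: it stands in for the roughly two-degree spread of foveal vision, so a blob covers about what the viewer actually resolved sharply at each pause.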

The technology can also record the activity of the hands on the keyboard and mouse and correlate it with the gaze data.

Dr. Edwards and his team at Stanford were able to use this information to infer the mental state of the computer user. They called their technology “the eye interpretation engine.”

You can make basic inferences about mental states from this data. There’s a clear difference between “reading,” “scanning,” and “searching,” for example. Another discovery is that people look at banner ads even though they don't click on them.

As Dr. Edwards puts it, “the eye interpretation engine parses eye-position data into higher-level patterns that can then be used to infer a user's mental state or behavior.”
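In that spirit, here is a toy version of such a parser. Reading has a distinctive signature: runs of short rightward saccades along a stable baseline, broken by long leftward return sweeps at line ends. The thresholds and the two-way "reading vs. scanning" split below are my own illustrative assumptions, not the actual Eyetools engine:

```python
import numpy as np

def classify_behavior(fixation_centroids, line_height=25):
    """Rough heuristic in the spirit of an 'eye interpretation engine'.

    Reading shows up as short rightward saccades on a nearly constant
    baseline, plus long leftward return sweeps to start the next line.
    Scanning or searching shows larger jumps in mixed directions.
    All threshold values here are invented for illustration."""
    moves = np.diff(np.asarray(fixation_centroids, dtype=float), axis=0)
    dx, dy = moves[:, 0], moves[:, 1]
    # Reading-like saccade: a modest hop rightward that stays on the
    # current line, or a big sweep back to the left margin.
    reading_like = ((dx > 0) & (dx < 120) & (np.abs(dy) < line_height)) \
                   | (dx < -200)
    return "reading" if reading_like.mean() > 0.75 else "scanning"

# Fixations marching across two lines of text...
reading_trace = [(40 + 60 * i, 100) for i in range(6)] + \
                [(40 + 60 * i, 130) for i in range(6)]
# ...versus fixations leaping around a picture.
scanning_trace = [(50, 50), (400, 300), (120, 260), (380, 60), (200, 180)]
print(classify_behavior(reading_trace))   # reading
print(classify_behavior(scanning_trace))  # scanning
```

A real engine would distinguish many more states (searching, examining, spacing out) and work on timing as well as geometry, but the principle is the same: the pattern of the movements, not any single fixation, carries the behavioral signal.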

I asked Dr. Edwards whether we can tell from scanpath data if a subject is just looking at the style of a type font rather than reading the text. He replied:

“We can tell if a graphic designer is looking at the style of a type font or reading because the behavior changes -- looking at the font keeps the eyes localized in areas longer than would be natural as they examine the font, or the eye movement wouldn't be consistent since they would be looking at features of the font as the driving factor rather than the text itself. Now, could someone purposefully fool this to behave as if they were reading while they were actually examining the font? Yes, if one consciously did that. Would it occur naturally? No.”


I also wondered if it is possible to develop higher levels of inference about the cognition behind eye behavior, to know not merely where someone is looking, but what they’re thinking when they’re looking at it.

For example, you might look at this woman’s red jacket and think that it doesn’t fit her right, and I might look at the same red coat and wonder where she bought it.

At the present time, Dr. Edwards told me, we cannot make such conclusions from the data. The purpose of his original patent work was not to determine what people were thinking, but to determine their mental state and current behavior—are they searching, examining, spacing out—which is different from thinking.

“You can see someone initially checking out the lay of the land of an unfamiliar scene, and you can see when they narrow in to focus on particular areas -- these are behavioral shifts that often happen very quickly and unconsciously -- people are not often able to accurate self report these. You can tell these with the scanpath data. You can't tell how they feel without some other means.”

It seems to me that this would be a very interesting area for future research, especially if eye tracking and keyboard/mouse data were combined with functional MRI (fMRI) data, which shows where activity is localized within the brain in real time.
------------
For more on fMRI data, check out the previous GurneyJourney post on Neuroaesthetics
Eye Interpretation Project, link.
Wikipedia article on eye tracking, link.
Eyetools blog, link.
Thanks to Dr. Edwards.

Tuesday, July 17, 2018

What does eyetracking tell us about the rules of composition?

Eyetracking heat map of The Last Supper by Leonardo da Vinci   
Artist and blog reader Eric Wilkerson asks:

     "I had a discussion with another illustrator over composition recently. Specifically about the usage of directional lines and shapes to lead the eye to the focal point of the painting or cinematic frame in a movie.
     "I know you refer to it as spokewheeling and shapewelding. I learned all this back in college and it was drilled into us based on the old Loomis books.
     "Anyway, my friend says that all of that is nonsense due to eye tracking and that it doesn't matter where the lines are going because the brain is going to look for a face or random points of interest every time.
     "So do you think eye tracking negates spokewheeling etc or is it all a combination of elements to lead the viewer through a composition?
     "I'm firmly in the camp that it doesn't. I've been studying the work of some famous cinematographers lately and they compose whole frames through use of strong light, shadow, color and directional shapes to lead the viewer.

     "I don't know....So I'm writing you. Hope you can settle this for me or at least offer some insight."

Eye tracking scanpath  by A.L. Yarbus
on Repin's painting "They Did Not Expect Him"
Hi, Eric,
That's a fascinating question, and I'm glad you asked it. Here's the short answer: I believe that scientific insights from eyetracking challenge a lot of the art-school dogma about how we look at pictures. But don't throw out the compositional toolkit just yet. Many of those compositional devices are probably still valid.

Eye tracking heatmap in a bar. Viewers apparently want to know
what brands of beer are on tap  
You and your friend are both right. Your friend is right that faces (or other psychologically important objects) will attract the most attention wherever you place them in the design. Eye tracking proves that. It also shows that the way each viewer explores the picture is highly individual. No two viewers will experience the picture in the same way.

Venice by Turner. I'd love to see an eyetracking heatmap of this painting. I believe
 that I'm most attracted to the light buildings on the light background,
not to the areas of highest contrast. But maybe I'm misreporting my experience,
and maybe I look at this painting differently than others do.

The scanpath (the track of eye movements over time) of a given viewer depends to a great extent on what psychological or narrative expectations he or she brings to the interpretation of the image. Contrary to many dogmatic assertions that we learned in art school, the eye's path through the picture does not really follow passively along the directional lines. Instead it jumps around in unpredictable jagged leaps all over the picture. While we customarily speak about "leading the eye" or "forcing the viewer" or "directing the attention" by means of leading lines, we have to remember that the eyes are not driven in a deterministic way, like a train on a track.

Eyes are active extensions of a hungry brain.

Does this mean that those traditional compositional devices have no effect on our experience of the picture?

No, and here is where I think you are also right. I believe that most of the classical design devices (including  spokewheeling, chromatic accents, edge control, value organization, etc.) can influence the way we perceive a composition. When used intelligently, they can help the average viewer decode what's important in a picture, and they accentuate the viewer's satisfaction in having their attention anchored to the centers of interest as they further explore subordinate areas.
Yarbus's data originally published
in "Eye movements and vision" (read more)
.

But it's difficult to know exactly how we're influenced by such devices. I suspect that we perceive them by means of our peripheral vision, even if we don't perceive them directly with our center of vision.

For example, let's look at the two paintings in this post. In "The Last Supper," Leonardo's placement of the vanishing point behind Christ's head seems to reinforce our focus on that important center of interest.

But in the case of "They Did Not Expect Him," Repin doesn't place the vanishing point behind any of the major heads, but that doesn't seem to compromise the ability of viewers to find what is important in his painting.

Yarbus showed that people looked at the Repin painting many different ways (right) depending on what question they were prompted with first.

Viewers are perhaps more influenced by leading questions than leading lines.

Science is beginning to reveal that visual processing of any image—but especially a realistic, narrative image—involves many areas of the brain. How we look at a picture appears to be affected by several interrelated factors, such as lines, tones, lighting, color, psychology, title, and caption. The leading lines and the shapes are just two of those elements.

My advice
Science can help us bayonet sacred cows, but it can't guide us very much in designing pictures. How we look at artwork is a topic that is still mostly unexplored by cognitive scientists using modern technology. Until more studies are carried out, we can't fully understand the logic behind pictorial design. My advice is to be skeptical when you hear any dogmatic assertions about composition. Instead, follow your instincts. Don't concern yourself with following compositional "rules," and don't bother with making your pictures pleasing or harmonious. Instead just work to make your picture interesting. Figure out what you want to say and say it emphatically.

If a graduate student in neurobiology is reading this and wants to devise some experiments, please contact me! I'll volunteer some of my paintings as guinea pigs.
----
More info
Previous posts:
Spokewheeling 
Shapewelding
Eyetracking and Composition (series)
Books: Vision and Art (Updated and Expanded Edition)
Imaginative Realism: How to Paint What Doesn't Exist

Wednesday, December 21, 2011

Dog Cam

You get up out of your chair, and your dog is watching you. Are you going to feed him or take him for a walk? He studies your every move.

What is he looking at? Is he watching your hands to see if you’re reaching for the leash, or your eyes to see where you’re looking? Does he look over at his dog bowl or at the door?


And what happens if you take him for a walk and he sees a female dog? Does he look at her the same way we would?

According to this week’s Science magazine, a team of U.S., Dutch, and Belgian researchers has developed an eye tracking device called a “DogCam” to see what a dog actually looks at when it studies the subtle cues of its owner and its surroundings.

Graduate student Alejandra Rossi at Indiana University in Bloomington says the wireless device uses three cameras: one to capture the dog’s eye view of the world, and two others to track where each eye is looking within that visual field.


Other scientists are conducting eye tracking studies to try to understand how the visual behavior of chimpanzees differs from that of humans. One early observation is that humans tend to look more at faces, while chimps look more at other parts of the body.

So if we know where dogs or apes are looking, can we tell what they're thinking? Not yet, unless we can add further lines of evidence, such as a simultaneous brain scan. The ability to infer cognitive states in non-human animals based on eye tracking data alone is still a rather uncertain prospect. But I'd love to take a dog with an Eye Cam into an art museum...

READ MORE:
Courtesy of Science magazine and Indiana University’s cognitive science program.
Chimp vs. Human scanpaths courtesy Kyoto University
Eye tracking studies in comparative cognitive science

Monday, April 3, 2017

Eye tracking a pianist


(Link to YouTube) Expert pianist Daniel Beliavsky and his student Charlotte Bennett analyze eye-tracking footage taken while they perform, both from memory and from sheet music.

The more experienced pianist knows where the keys are and where his hands are, but he's thinking and looking ahead of the notes he's playing, and his gaze position is generally more stable.

I'd love to see what this technology, called Tobii Pro Spectrum, could tell us about how a visual artist sees the world.

As wonderful as it is, however, the limitation of such an eye-scanning device is that it can only track the center of the gaze. It can't account for peripheral vision. Without much change in gaze direction, experienced artists are able to widen their peripheral attention to see overall relationships or to focus the attention on small details.

Learning when and how to do that is one of the things art students must master.
----
Previous posts on eye tracking
Via BoingBoing

Monday, May 23, 2011

Automated Selectivity

One of the reasons we like to look at paintings is that reality is filtered through someone's brain. Painters select the important elements out of the infinite detail that meets our eyes.


Here, Al Parker chooses to show us detail in the hands and face. He sinks everything else into a flat tone.


John Singer Sargent could have detailed every paving stone and roof tile in this Venice street scene. Instead he softened and simplified the background areas and put the focus on the faces.

This selectivity isn’t arbitrary. The detailed areas correspond with the parts of the picture that we want to look at anyway.  As we’ve seen in previous posts, eye tracking studies have demonstrated the cognitive basis of selective attention. Viewers’ eyes consistently go to areas of a picture with the greatest psychological salience: things like faces, hands, and signs. We’re hard-wired for it.

What happens if you combine eye tracking data with computer graphics algorithms to automate the process of selective omission? Would the result look “expressive” or “artistic?”


(Click to enlarge) In his doctoral thesis for Rutgers University, Anthony Santella did just that. The photographs on the left include a set of overlapping circles showing where most people spent their time looking in each image. The larger the circle, the longer the concentration on those areas.

Santella combined that data with a rendering algorithm which simplified other areas of the image. In the top image, note the flattening of the far figures and the arches above them. In the bottom image of the woman, note how the wrinkles in the drapes and the textures in the sweater are rendered with flat tones. But her eyes, nose and mouth are still detailed.
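The core move can be sketched compactly: build an "interest mask" from the fixation data, then blend the sharp source photo into a flattened copy wherever the mask is weak. This is a minimal stand-in for the idea in Santella's thesis, not his actual algorithm; the blur, the mask parameters, and the test data are all my own simplifications:

```python
import numpy as np

def box_blur(img, k=5):
    """Crude separable box blur (NumPy only), standing in for the more
    sophisticated simplification operators in the thesis. Edges wrap
    around via np.roll, which is fine for a sketch."""
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for shift in range(-k, k + 1):
            acc += np.roll(out, shift, axis=axis)
        out = acc / (2 * k + 1)
    return out

def selective_render(img, fixations, sigma=30):
    """Keep detail where viewers looked; flatten everything else.

    fixations: (x, y, duration) tuples from eye-tracking data. A Gaussian
    interest mask built from them blends the sharp source into a blurred
    copy, so unattended regions collapse toward flat tone."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w))
    for x, y, dur in fixations:
        blob = dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2)
                            / (2 * sigma ** 2))
        mask = np.maximum(mask, blob)
    mask = np.clip(mask / mask.max(), 0, 1)
    return mask * img + (1 - mask) * box_blur(img)

# Noisy test image: texture survives near the fixated point (60, 40)
# and is smoothed away far from it.
rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (120, 160))
out = selective_render(img, fixations=[(60, 40, 1.0)])
```

Swapping the blend target (a blurred copy here) for line extraction or color quantization gives the different rendering styles described below, all driven by the same fixation mask.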


The rendering algorithms can be designed to interpret the source photo in terms of line and color. Or the shapes can be modulated in size according to the interest factor. Note how the outlying areas of each rendering are simplified.

Whichever rendering style one desires, the output image has a sense of psychological relevance, more so than rendering algorithms based merely on abstract principles such as edge detection. As a result these computer-modified photographs have a sense of something approaching true human artistry.

The results of this interaction between eye-tracking data and computer rendering algorithms suggest a heretical thought: What we think of as a rare gift of expressive artistic judgment is really something fairly simple and logical, something you can teach a machine to do.

"THE ART OF SEEING: VISUAL PERCEPTION IN
DESIGN AND EVALUATION OF
NON-PHOTOREALISTIC RENDERING"
by Anthony Santella
www.research.rutgers.edu/~asantell/thesis.pdf
Final two sets of images are courtesy this online graduate thesis. 

Previous related posts on GurneyJourney:
Abstraction Generator
Automated Painting
Al Parker at the Rockwell
The Eyes Have It
Stroke Module
Eyetracking and Composition, part 1
Eyetracking and Composition, part 2
Eyetracking and Composition part 3
Introduction to eyetracking, link.
How perception of faces is coded differently, link.

Monday, December 26, 2016

Do cultural factors influence how we look at faces?

According to experimental findings reported on LiveScience, the way we look at faces is not entirely hard-wired, and may be influenced by cultural factors, which vary between the east and west.

One study suggests that when reading an expression of a person in a group photo, Westerners zero in on the individual, while East Asians pay more attention to reactions of the other members of the group. Lead researcher Takahiko Masuda, a psychology professor at the University of Alberta, says "East Asians seem to have a more holistic pattern of attention, perceiving people in terms of the relationships to others," while "People raised in the North American tradition often find it easy to isolate a person from [their] surroundings."

In another study, illustrated above, cognitive neuroscientists compared the eye tracking data of Western and East Asian observers who looked at faces on a computer screen. The results suggest that "Westerners tend to look at specific features on an individual's face such as the eyes and mouth whereas East Asian observers tend to focus on the nose or the centre of the face which allows a more general view of all the features."

The author of the article, Charles Q. Choi, suggested the conclusion that "Westerners often concentrate on individual details, while East Asians tend to focus on how details relate to each other."

I'm a little skeptical about these findings, partly because there may be factors other than cultural ones that greatly affect how we look at faces, such as professional training and media exposure. For example, the way artists look at things, based on our training and inclinations, may supersede East/West cultural predispositions (see previous blog post on that topic). Also, I don't think eye tracking data alone can be used accurately to assess the degree to which people look at things holistically, since eye tracking can only record the path of the fovea, or center of vision. We need experimental data that can indicate to what degree the attention is focused on one spot versus a wider view.

Source articles: 
Culture Affects How We Read Faces
Face Recognition Varies by Culture

Tuesday, May 22, 2018

How do we look at architecture?

Where do we put our attention when we look at a building? 



Here's a photograph of a Civil-War-era field hospital with an eye-tracking heat map overlaid. It shows that observers pay the most attention (red and yellow areas) to direct human presence.

There's a figure standing in the doorway, and a group of other figures to the left. The interest in the upper windows appears to be a search strategy for finding other people, or at least for learning about human presence indirectly. No one looks at the ground, the trees, or the chimney.


What if no people appear in the photograph? How do we respond to the purely abstract elements of architecture on their own terms? Here are two photos of a building, one with the side windows removed by Photoshop.

Researchers Ann Sussman and Janice Ward have discovered from such studies that "People ignore blank facades. People don’t tend to look at big blank things, or featureless facades, or architecture with four-sides of repetitive glass."


They also observed that "buildings with punched windows or symmetrical areas of high contrast perennially caught the eye, and those without, did not."
-----
Eye tracking of Civil War photos
Here's What You Can Learn About Architecture from Tracking People's Eye Movements

Friday, September 18, 2009

Eye Tracking and Composition, Part 1

When a viewer looks at a painting, how does the eye travel? Does it move in a circular pathway? Does it follow contours? Does it go to the grid lines of a golden section? Is it attracted to areas of maximum contrast? Is it possible to design a picture so that it controls the eye?

Eye-tracking scanpath studies show how individual viewers actually explore an image. This information can be valuable for us as artists, because it allows us to test our assumptions about how the design of a picture influences the way people perceive it.

TRADITIONAL VIEWS
Most books on composition seem fairly sure about how people’s eyes move around in pictures. Henry R. Poore’s influential book Pictorial Composition (1903) presents the notion that the eye moves in a flowing, circular way through a design.

“One’s vision involuntarily makes a circuit of the items presented,” Poore claims, “starting at the most interesting and widening its review toward the circumference, as ring follows ring when a stone is thrown into water.”


In his book Composing Pictures (1970), Donald W. Graham argues that the artist “must find graphic controls so strong that they will force most of his audience to see the elements of his picture in the order he has planned.”

I was curious to find out whether these claims had any basis in fact, and I really wanted to try a study using my own artwork. So I approached Greg Edwards, president and CEO of Eyetools, Inc. (Left to right: Larry Kresek of RMCAD, Greg Edwards, and me).

Scientists at Eyetools use the latest technology to record how a viewer’s gaze actually travels over a picture. Sensitive instruments track the pathway of the center of vision, or fovea. The eye movements are input into a computer, which then outputs a map called a scanpath, superimposed over the image itself.

Here's the first painting we'll take a look at: Marketplace of Ideas, from my recent book called Dinotopia: Journey to Chandara. The painting is approximately 12 by 18 inches, roughly a golden rectangle. When I designed the painting, I placed the main vertical column near one of the key grid lines of the golden section. I was curious to see if the placement of that column drew any particular attention.

Tomorrow we’ll see what happens when we try this image with a series of test subjects.
-------
(Note: This material is adapted from Imaginative Realism: How to Paint What Doesn’t Exist, published by Andrews McMeel, ©James Gurney 2009.)

Related posts on GurneyJourney:
Eyetracking and Composition, part 1
Eyetracking and Composition, part 2
Eyetracking and Composition part 3
Introduction to eyetracking, link.
How perception of faces is coded differently, link.

Saturday, September 19, 2015

Eye tracking the stairway illusion


When I painted this Dinotopia image I wanted to do my own spin on the famous "infinite stairway" optical illusion invented by Lionel Penrose and M.C. Escher.

If you walk around the stairs clockwise, you proceed infinitely downstairs, and if you walk counterclockwise, you go upstairs forever without gaining in altitude.

"Scholar's Stairway," Oil on board, 12 x18 inches.
The way I painted it, the illusion is fairly subtle, and I wondered if other people even noticed the illusion, and if so, whether their eyes moved systematically around the stairs.

To find out, I asked vision scientist Greg Edwards, president of Eyetools, Inc., to run some eye tracking tests using this image as the subject.

Dr. Edwards had fifteen subjects look at my pictures on a computer screen for fifteen seconds each while a sensor tracked their eye movements in real time. Below is the eye track of one subject's experience. The colored line shows the pathway of the eyes, beginning randomly at the green circle. The numbers in the black squares show where the eye traveled at each second of the fifteen-second session.

One can’t know for sure without a follow-up interview, but evidently this particular observer didn’t notice the optical illusion.


The second image shows the "heatmap," which aggregates data from all fifteen observers. The red and orange blobs are the areas of the image that received nearly 100% of people's attention. The rider on the brachiosaur took attention away from the central illusion. The dark blue and black areas received almost no attention.

What can we conclude from the heatmap image? Viewers definitely looked at the figures, wherever I placed them. Beyond that, we can't say much because we didn't design a very thorough experiment. I would love to work with a larger sample size, to gather follow-up interview data, and ideally to collect a simultaneous fMRI data set to see if we could correlate cognitive behavior with eye movement. That way we could understand better what happens when people "get" the illusion. If there's any vision scientist who has the equipment and wants to try an experiment like this, please contact me.

This original painting is in the "Art of James Gurney" exhibition at UARTS museum in Philadelphia through November 16.
-------
Previous posts about my stairway painting:
Credit to Mr. Penrose
Using a Perspective Grid

Tuesday, October 21, 2014

Ticking Clocks and Tracking Eyes

I'm excited to be visiting Texas A&M. I did a couple of radio interviews in the morning, and then painted this 45-minute gouache sketch of the old clock in downtown Bryan. I used four colors: white, ultra blue, burnt sienna, and cad yellow.

I had lunch with professors Ann McNamara of Texas A&M and Donald House of Clemson University, both of whom share my fascination with eye tracking as it relates to artists.


I was thrilled to have a chance to try out the eye tracking tech setup at the Visualization Lab. Here, graduate student Laura Murphy is calibrating the system. She's checking alignment points on stereo images of my face as I look at a test screen.

Below the computer monitor are the two infrared sensors of the FaceLab 5 system. The sensors track both the exact direction of my eyes and the direction of my head so that the system can record exactly where I'm looking within the display monitor. 

The monitor has a photo of grocery store shelves crowded with products and overlaid info tags that pop up in response to where I'm looking, part of an augmented reality experiment they presented at Siggraph this year.
---
I'll be spending time with students of the Department of Visualization in their classes today and tomorrow, and I'll give a free digital slide lecture about picturemaking and worldbuilding in Dinotopia in the Geren Auditorium in the Langford Architecture Center, Building B, Thursday at 7 p.m.
-----
Previously on GurneyJourney:
Eyetracking and Composition, part 1
Eyetracking and Composition, part 2
Eyetracking and Composition part 3

Saturday, September 19, 2009

Eye Tracking and Composition, Part 2

Below is a scanpath image of the artwork that we saw in yesterday’s post. The chart represents the behavior of an individual who, with no prompting, looked at the artwork for a sixteen-second period on a computer screen.

The computer recorded a series of circles, indicating where the eye paused momentarily, connected by a thin blue line.

The scanpath reveals that the eye darts unpredictably in straight jagged leaps known as saccades. Saccades occur between three and five times per second, alternating with brief periods of rest called fixations.

The white glow around each circle represents the subject’s peripheral vision. (The heavier blue shows a running average of the center of attention and the orange line is an attempt by the computer to detect reading behavior. Those lines are not important for the study of artwork.)

The numbered black boxes are time markers, indicating the position of the eye at each passing second. The session begins at the green dot and ends at the red dot, the last point of rest before the image disappeared. By following the blue line second by second, you can precisely reconstruct the viewer’s experience.

The test subject’s eye enters the composition at the top center and zigzags down to the figures at left center. This happens within the first second. In the next three seconds it swoops to the right, leaps upward to glance at the upper right corner, and then moves across the center of the picture in large strokes, pausing briefly to look at the near and far buildings.

For the remaining ten seconds the subject’s gaze slides back and forth in smaller saccades, examining the people in the scene.

According to Greg Edwards, President and CEO of Eyetools, “During the first 3 1/2 seconds, this particular person was getting the lay of the land. How long people take to get this initial overview will depend on each picture. They’re trying to understand the basic structure or the context of the picture.”

After that, they usually settle into finer eye movements. “If they make a big movement,” he said, “they’re typically searching for context. If they make a smaller movement, they’re looking for detail.”

The second person’s scanpath (above) both resembles and differs from the first one. The eye also makes large orienting moves initially, taking in the far vista and the full array of people below. But this scanpath shifts between large and small movements throughout the session and spends more of the time looking at the distant vista and the surrounding architecture.


It might be hard to make out these diagrams in small Web illustrations. For the sake of clarity, this video reconstructs the sequence of saccades over approximately the same overall duration, though it doesn't accurately represent the relative duration of each fixation.

Tomorrow we’ll see what we can learn from crunching together data from a lot of different observers, and I'll suggest some preliminary conclusions.
---
Thanks, Greg! Link to Greg Edwards's Eyetools blog and Eyetools website.

Related posts on GurneyJourney:
Eyetracking and Composition, part 1
Eyetracking and Composition, part 2
Eyetracking and Composition part 3
Introduction to eyetracking, link.
How perception of faces is coded differently, link.

Sunday, September 20, 2009

Eye Tracking and Composition, Part 3

(Note: This is the third and final part of a series of posts adapted from Imaginative Realism, Andrews McMeel, October, 2009). Please follow these links to the earlier posts, Part 1 and Part 2.)

By adding together the eye movement data from a group of test subjects, we can learn where most people look in a given picture.

To create the image below, the eye-tracking technology recorded the scanpath data of sixteen different subjects and compiled the information into composite images, called heatmaps. The red and orange colors show where 80-100% of the subjects halted their gaze. The bluer or darker areas show where hardly anyone looked.
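Note that this kind of map pools binary per-subject coverage rather than raw dwell time: what matters is what fraction of the sixteen viewers halted their gaze in each region. A minimal sketch of that computation (the coordinates, radius, and sixteen-subject setup are invented for illustration):

```python
import numpy as np

def subject_coverage(subject_fixations, width, height, radius=25):
    """Fraction-of-viewers map of the kind described above.

    subject_fixations: one list of (x, y) fixation points per subject.
    Each subject contributes a binary 'looked here' map (a disc of
    `radius` px around each fixation); averaging the binary maps gives,
    for every pixel, the fraction of subjects who halted their gaze
    there -- 0.8 to 1.0 would render as red/orange, near 0 as dark."""
    ys, xs = np.mgrid[0:height, 0:width]
    maps = []
    for fixes in subject_fixations:
        looked = np.zeros((height, width), dtype=bool)
        for x, y in fixes:
            looked |= (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
        maps.append(looked)
    return np.mean(maps, axis=0)

# Sixteen hypothetical subjects: all fixate the figures at (80, 60);
# only a quarter of them glance at the column at (200, 90).
subjects = [[(80, 60)] for _ in range(16)]
for s in subjects[:4]:
    s.append((200, 90))
cov = subject_coverage(subjects, width=256, height=128)
print(cov[60, 80], cov[90, 200])  # 1.0 at the figures, 0.25 at the column
```

Because each subject counts only once per region no matter how long they stared, this map answers "how many people looked here?" rather than "how long did people look here?", which is the right question for the 80-100% bands described above.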

Here’s the heatmap for the painting Marketplace of Ideas, which we discussed in the last two posts.

It turns out that there was very little interest in either of the main vertical columns. Instead, the red splotches reveal a concentration of interest in the figures. There were secondary interest areas in the far buildings and the sign in the upper right.

The interest in people, especially faces, appears to reflect a hardwired instinct to understand our fellow humans.

In the heatmap for Chasing Shadows, which shows a group of children running along a beach with a Brachiosaurus, there’s a strong focal point around the dinosaur's front feet and the nearby running children.

There are secondary points of interest at the dinosaur’s head and the leading child. Note how the action of the walking pose was read without directly looking at the rear leg.

Other spots of interest congregate around the dinosaur’s tail, the base and the top of the tree, and the vanishing point along the beach.

Hardly anyone looked directly at the sky, the upper palm fronds, or the middle section of the palm trunk. But these areas were presumably perceived in the halo of peripheral vision around the center point of vision.

Have a look at this painting, and be aware of where your eyes travel.


The heatmap for the painting Camouflage (click to enlarge) shows that everyone noticed the dinosaur’s face. They also spotted the hidden man and the small pink dinosaur.

According to the timing data, these three faces drew almost everyone’s attention within the first five seconds. The dinosaur's face was statistically the first thing most people looked at, followed quickly by the hiding man. Below is one subject's scanpath, with the black numbers counting off seconds.

I was surprised that the two patches of lichen on the tree above the man scored near 100% attention. Evidently viewers noticed these strange shapes in their peripheral vision and checked them to make sure they weren’t important, or somehow a threat to the man. From a narrative standpoint, I suppose they were a bit of a red herring, distracting with no payoff.

The sunken log and the detailed patch of leaves in the lower left drew 60% of the viewers, perhaps because those were likely places for other dangers to hide.

Just because an element has sharp detail or strong tonal contrasts, it doesn’t necessarily attract the eye. The dark branches behind the dinosaur’s head drew almost no attention because they fit into the natural schema of a forest scene. Apparently the viewers developed a search strategy based on the threatening situation of a hungry dinosaur looking for a bite to eat.

PRELIMINARY CONCLUSIONS
These experiments force us to question a few of our cherished notions about composition and picture-gazing.

1. The eye does not flow in smooth curves or circles, nor does it follow contours. It leaps from one point of interest to another. Curving lines or other devices may be "felt" in some way peripherally, but the eye doesn't move along them.

2. Placing an element on a golden section grid line doesn’t automatically attract attention. If an attention-getting element such as a face is placed in the scene, it will gather attention wherever you place it.

3. Two people don’t scan the same picture along the same route. But they do behave according to an overall strategy that alternates between establishing context and studying detail.

4. The viewer is not a passive player continuously controlled by a composition. Each person confronts an image actively, driven by a combination of conscious and unconscious impulses, which are influenced, but not determined, by the design of the picture.

5. The unconscious impulses seem to include the establishment of hierarchies of interest based on normal expectations or schema of a scene. For example, highly contrasting patterns of foliage or branches will not directly draw the gaze unless they are perceived as anomalous in the peripheral vision.

6. As pictorial designers we shouldn’t think in abstract terms alone. Abstract design elements do play a role in influencing where viewers look in a picture, but in pictures that include people or animals or a suggestion of a story, the human and narrative elements are what direct our exploration of a picture.

As Dr. Edwards succinctly puts it, “abstract design gets trumped by human stories.” The job of the artist, then, in composing pictures about people is to use abstract tools to reinforce the viewer’s natural desire to seek out a face and a story.

--------------
Related posts on GurneyJourney:
Eyetracking and Composition, part 1
Eyetracking and Composition, part 2
Eyetracking and Composition, part 3
Introduction to eyetracking, link.
How perception of faces is coded differently, link.

All the paintings are from Dinotopia: Journey to Chandara.

Many thanks to the team at Eyetools, Inc. for their assistance.

Thursday, July 7, 2011

Video Eye Tracking

In some previous posts, we’ve looked at how eyetracking technology tells us something about how people look at paintings. But what about movies? How does the element of motion influence the attention of the viewer?


Eye Movements during a segment on Chilli Plasters from TheDIEMProject on Vimeo.

(Feed readers may not get the video, so link to it here)

Scientists at the DIEM Project (Dynamic Images and Eye Movement) have shown snippets from films to viewers and tracked the movements of their eyes. In the case of this clip, 48 viewers participated, so the sample size is quite large. In addition to little ovals showing where individuals glanced, the video is overlaid with a “heatmap” which compiles viewer data to show where the vast majority of viewers were looking at a given moment.

Here are some of my observations:
1. In scenes with an even overall visual texture (such as at 1:02), the center of gaze goes to a default position in the middle of the screen.
2. People seem to anchor their gaze on the nose of the face, perhaps “reading” the rest of the face in peripheral vision from that position.
3. Viewers tend to look at the person who is speaking (not surprisingly). Getting them to look at a listener in a dramatic film is a collaboration of acting, directing, and editing.
4. When one scene cuts to another, the eye hangs in its last focal point for a bit, so editors who place the focus for the next frame in the same location will do the viewers a favor.
5. Viewers are highly goal-driven in the way they look at movie scenes. They scan for meaning.
6. Anomalies attract attention, like the goop stuck on the side of the pot at 32 seconds.
7. In fast cutting, the eye reverts to the default center (1:14-1:18)...
8. ...Which suggests that most visual information in fast-cut action scenes in movies is processed from peripheral, not foveal information. So why bother with detailed VFX, other than to give eye-candy to DVD stop-frame hounds?
9. What the heck are chili plasters?


LINKAGE
More at the DIEM project.
Related previous posts on GurneyJourney:
Eyetracking and Composition, part 1
Eyetracking and Composition, part 2
Eyetracking and Composition, part 3
Introduction to eyetracking, link.
How perception of faces is coded differently, link.
Eyetracking analysis of a scene from "There Will Be Blood"
University of Edinburgh, Visual Cognition Lab, Copyright 2009

Wednesday, January 9, 2008

Eye Magnets



Have a look at this painting of bears in a forest by Ivan Shishkin. As you look at the composition, take note of where your eyes travel.


Do the same thing with this one by Thomas Moran. What did you notice first? What parts of the picture did you just glance at, and where do your eyes linger the longest?


Here’s one by Turner. There are a lot of things to look at here. Allow your eyes to peruse it casually, but try to be aware of what they just glance at and where they spend the most time.

Here’s one by David Roberts. Where do you look first? How do your eyes explore the scene?

OK, one last picture. You saw this in an earlier post. Look at it again, and try to be aware of how your eyes track around the picture.

I asked you to play this game in order to pose a couple of fundamental questions: Does everyone look at pictures in the same way? And do we really understand how pictorial design influences the movements of our eyes?

Scientists have designed experiments to explore these questions. In 1967, Russian psychologist Alfred Yarbus developed sensitive instruments to track the involuntary jumping movement of the eyes, called “saccades.”


Here’s a map, or “scanpath,” of the movement of one person’s center of vision, or fovea, as it scans the bears in the forest. The eyes clearly fixate on the bears, but they also circulate generally around the perimeter of the picture.
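The fixation/saccade distinction that a scanpath records can be made concrete. Here's a minimal sketch of the standard dispersion-threshold idea used in modern eye-tracking software (not Yarbus's actual apparatus, which was mechanical): consecutive gaze samples that stay within a small spatial window are grouped into one fixation, and the jumps between groups are the saccades. The thresholds, function name, and sample format are illustrative assumptions:

```python
def fixations(samples, max_dispersion=20, min_count=5):
    """Group consecutive (x, y) gaze samples into fixations.

    A run of samples whose spread stays under max_dispersion (pixels)
    and lasts at least min_count samples counts as one fixation; the
    gaps between fixations are the saccades.
    """
    result, start = [], 0
    while start < len(samples):
        end = start
        while end + 1 < len(samples):
            window = samples[start:end + 2]      # try adding the next sample
            xs = [p[0] for p in window]
            ys = [p[1] for p in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break                            # spread too wide: a saccade
            end += 1
        if end - start + 1 >= min_count:
            window = samples[start:end + 1]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            result.append((cx, cy))              # fixation centroid
            start = end + 1
        else:
            start += 1                           # too brief; skip this sample
    return result

# Steady gaze near (100, 100), a saccade, then steady gaze near (300, 200).
trace = [(100, 100), (101, 99), (99, 101), (100, 102), (101, 100),
         (300, 200), (301, 199), (299, 201), (300, 202), (301, 200)]
print(fixations(trace))   # two fixation centroids
```

Plotting the centroids in order, joined by lines for the saccades, reproduces a scanpath like the one shown here.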


Yarbus showed his subjects the Repin painting “They Did Not Expect Him.” The scene shows a prisoner returning to his family after a long exile. Yarbus asked his subjects a series of different leading questions, like how old the people were, or how rich they were, or how long the man was away. He found that the charts of eye movements differed wildly each time, and that the scanpaths varied from person to person.

These scanpath studies led to a number of conclusions—and questions—for us as artists:

1. Different people don’t look at the same picture in the same way. And a single person will look at a given picture differently depending on what questions they bring to the image. This has profound implications for curators writing museum tags and comic artists writing word balloons.

2. Pictures do not “control” the eye. The viewer’s thought process plays a huge role in how their eyes travel through a composition.

3. Standard compositional theory assumes that our eyes follow contours. That doesn’t seem to happen at all. They never follow along the curve of the woman’s back, for example; they just jump from face to face. Of course we do perceive lines of action and flowing contours, but our eyes don’t actually follow along them.

I also wondered if there is any basis to the assumption in standard compositional theory that the eye is attracted to areas of strongest contrast. That’s why I showed you the Turner and the Roberts and the Moran. I noticed when I looked at those pictures that my attention was sometimes attracted to the edges with the least contrast.


In the Turner, for example, I found myself looking at the light-colored tower (1) more than the black gondola (2), which had much more contrast. Was that true for you, too?

My hunch is that the areas of strong contrast are somehow felt or registered by the peripheral vision, but that the eye’s center of vision quickly moves to other tasks, in this case to sorting out close contrasts.

To my knowledge, there hasn't been much scientific study at all on the subject of what's going on in our peripheral vision when we're decoding an image.

In any case, when it comes to how we look at pictures, there is more than just abstract design theory going on. Regardless of how the picture is designed in abstract terms, we seem to be involuntarily attracted to sorting out the human stories.

I hope you’ll share your own experience of looking at these pictures in the comment section. For more information on the science of eye tracking, check out this link:

Tomorrow: Stretching a Face

Monday, January 27, 2020

Where We Look at Madame X

Where do people spend the most time looking at Sargent's Madame X? We study the face and hands as expected, but we also look at her chin, neck, and striking décolletage.

Left: John Singer Sargent, Madame Gautreau
Right: Eyetracking heatmap
Dan Hill, the vision scientist who did this study, says: "The mind's eye can go anywhere. In reality, faces command attention. What gets noticed first, typically? The answer is faces and what's in the vicinity, namely people's heads."

Thomas Gainsborough, Mr. and Mrs. Andrews

Hill says, "Faces matter. After twenty-plus years of conducting market-research studies, I can tell you most definitively that nothing changes the underlying pattern. If there's a face involved, as much as seventy percent or more of all the gaze activity goes to the face(s) present."
------
Previously on the blog: 
Eye Tracking and Composition
Men, Women, and Eye Tracking
Images from the book "First Blush: People's Intuitive Reactions to Famous Art" by Dan Hill

Tuesday, April 9, 2019

Eye tracking and film

Where do our eyes look when we watch a movie? Eye-tracking technology can record in real time how the observer's eyes land on important features, and then jump to new points of interest.

Image from Pixar. Overlay by Avni Pepe
Scientists can take the data from the visual experience of many individuals and compile it into heat maps showing where most observers actually look. That information can suggest the cognitive activity driving the eye movements, and it can help editors understand what information people are taking away from the moving pictures.

Researchers used this technology on three films: the war movie "Saving Private Ryan" and two Pixar movies, "Up" and "Monsters, Inc." 

The researchers' concluded that the eyes and the mouth were especially important for delivering narrative information:
1. "Hot spots emerge around the character’s mouths, as if our viewers are conditioned to look for identification through the way a film’s central characters 'speak', and are searching for narrative clarification through dialogue exchanges that actually never emerge."

2. "Objects and motifs that actively move the story along and define character mood are picked out of the mise-en-scène and gazed at even when the scene is fluid and action is taking place across the filmic space."

3. "Our eyes seem to be active in finding emotive objects even when they are found in a busy scene. There is a caveat here of course: the 'extra' attention to significant objects may well have been because the scenes were free of dialogue." 
4. "When the sound is off, viewers actually migrate slightly away from the character’s mouths to focus on their eyes, and to objects that are pregnant with narrative information." 
5. "The viewer is searching for non-verbal signs to confirm what might be taking place and because the eyes, culturally speaking, are where “truth” is to be revealed."
-----
Read the whole article at The Conversation

Thursday, March 15, 2012

Men, Women, and Eyetracking

Have a look at the photos below for five seconds or so. We'll come back to them later.


Scientists have used eyetracking technology to see where people look in a photo. One question they have asked is whether men and women look at other people in the same way.

In one experiment, groups of men and women were asked to look at the picture of baseball player George Brett. 

The eyetracking heatmap shows that both men and women spent time looking at the head, but men also looked at the crotch. This isn't necessarily a sign of sexual attraction. They could be sizing up the competition or identifying with him.

According to Nielsen and Coyne, men also tend to look more at private parts of animals when shown American Kennel Club photos.

Here are the results of thirty men and thirty women looking without prompting at that first pair of photos.

The company Think Eye Tracking observes from the results:

1. Men check out other men, especially their "assets."
2. Women check out his wedding ring.
3. Men don't seem to care about the woman's marital status, but they look at her face, breasts, and stomach.
4. When asked to self-report where they looked, people tend not to be honest, or they're simply not consciously aware of it.
 -------------
Read more:
Bathing Suit Photo Study (Think Eye Tracking)
Online Journalism Review
Studio Moh
Related GurneyJourney posts
Do Artists See Differently?
Dog cam: Where do dogs and chimps look?