Where Do We Look While We Act?

Authors: Niklas Hypki

Whenever we enter a new environment, we intuitively begin to explore it with eye movements (Rothkopf, Ballard, and Hayhoe 2016; Michael F. Land, Mennie, and Rusted 1999; Hayhoe et al. 2003). But how does our visual system know where to look next? In many everyday situations, we use our eyes to facilitate the action we are performing, such as when watching birds, picking cherries, exploring a particular object in a virtual world or contemplating a work of art in a museum. Seemingly without effort, our gaze finds its way from one object to the next, allowing us to perceive and interact with our surroundings. Even before we have seen the environment in which we are acting, we can direct our eyes according to tasks and strategies we have decided on, incorporating previously accumulated knowledge and our current expectations.

This chapter provides an overview of where we look when performing various actions and explains how we plan and monitor our actions with the help of precisely timed eye movements. In particular, the chapter covers where we look when walking, where we look when overcoming obstacles, how we orient ourselves toward a destination when walking, and how walking influences our perception.

Task-dependent eye and head movements have been systematically observed and analysed in a range of everyday activities (Schütz, Braun, and Gegenfurtner 2011; Adhanom et al. 2020; Michael F. Land and Tatler 2009).

In many activities, such as making tea or reading, we look at places and objects that are related to our actions (Michael F. Land, Mennie, and Rusted 1999). When reading, the amplitude of saccades and the duration of fixations adapt to the structure of the words and sentences we are currently looking at (Schütz, Braun, and Gegenfurtner 2011).

When we are looking for something, we tend to make slightly longer saccades than when we are just glancing at a picture (Benjamin W. Tatler, Baddeley, and Vincent 2006).

By examining eye movements during different actions, it becomes clear that the frequency of different types of eye movements changes and that our fixations and actions are precisely coordinated in time (Benjamin W. Tatler et al. 2011). This can, for example, be observed when reading sheet music (Furneaux and Land 1999), driving (M. F. Land and Lee 1994) or motor racing (Michael F. Land and Tatler 2001).

In addition to saccade amplitude and fixation position, an ongoing task can also influence other types of eye movements (Karson et al. 1981; Doughty 2001): For example, the blink rate is lowest when reading (7–10 blinks/minute), but increases slightly when we are in a relaxed state in a silent room (12–16 blinks/minute). This rate is also known as the spontaneous eye-blink rate. During conversation or when memorizing while listening, the blink rate increases to more than 24 blinks/minute. During repetitive tasks, fixational eye movements such as drifts increase (Friedman and Komogortsev 2025; Di Stasi et al. 2013).

Interestingly, a number of examples show that the top-down appeal and engagement of visual stimuli can also affect our gaze. For example, the blink rate can decrease when watching repetitive actions or when a person leaves a scene (Andreu-Sánchez et al. 2021). Even less obvious characteristics of a film clip, such as the length of the shots, have an effect: shorter shots slightly reduce our blink rate (Andreu-Sánchez et al. 2017). Thus, the blink rate can be used to measure changes in engagement over periods of seconds to a minute during free viewing (Ranti et al. 2020). Similarly, our saccade amplitude, fixation duration and eye movement variability change when we watch different types of outdoor scene videos with varying styles of cutting or effects such as stop-motion (Dorr et al. 2010).
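Such blink-based engagement measures are straightforward to derive from blink timestamps. The following minimal Python sketch illustrates the idea of tracking blink rate in sliding windows; the window parameters and the example data are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

def blink_rate_per_window(blink_times, duration, window=60.0, step=10.0):
    """Blink rate (blinks/minute) in sliding windows over a recording.

    blink_times: sorted array of blink onsets in seconds.
    duration:    total recording length in seconds.
    """
    blink_times = np.asarray(blink_times)
    starts = np.arange(0.0, max(duration - window, 0.0) + step, step)
    rates = []
    for t0 in starts:
        n = np.count_nonzero((blink_times >= t0) & (blink_times < t0 + window))
        rates.append(n * 60.0 / window)  # scale the count to blinks per minute
    return starts, np.array(rates)

# Hypothetical recording with one blink every 4 s (i.e. 15 blinks/minute):
starts, rates = blink_rate_per_window(np.arange(0.0, 300.0, 4.0), duration=300.0)
print(rates[:3])  # ~15 blinks/min, closer to the resting than the reading band
```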

Task-Dependent Gaze Patterns

To understand where we look while we act, we need to understand how different tasks influence our eye movements. At the same time, the currently available visual input is important, since it defines most of our fixation targets. This interaction between top-down and bottom-up factors influencing our gaze was first systematically described by Yarbus (1967), who showed that the patterns of eye movements are similar when the same painting is viewed by different people, even if there is a gap of one or two days between viewings.

This underlined the important influence that the visual input has on our eye movements. However, although the patterns were similar, they were not identical, with the eye movement patterns within a person being more similar than those between different viewers. Yarbus then asked one person to look at the same painting seven times while giving various viewing instructions, such as making a judgement about the depicted scene, remembering aspects of the image or simply looking at the painting freely, whereupon the eye movements changed significantly (Duchowski 2007; Benjamin W. Tatler et al. 2010). This showed that the instruction influenced eye movements substantially.

Later, Noton and Stark (1971a) expanded these results by showing that subjects without an explicit task also fixate similar regions of interest when confronted with identical visual stimuli. Just like Yarbus (1967), they observed small differences between viewers and noted that the order of fixations within the regions that all participants looked at was not stable. For example, when inspecting a square, participants usually fixated its corners. The order of the corners, however, varied from one observer to the other and even between successive observations by the same person (Duchowski 2007; Noton and Stark 1971a, 1971b).

These results show that our eye movements vary depending on the ongoing action. During an action, our saccade and fixation patterns seem to follow common underlying principles that help us solve the ongoing task while still perceiving what is around us. In the phase before a task begins or an instruction is given, we seem to use our gaze mainly to explore our surroundings; the distribution of fixations between task-relevant and irrelevant objects is then roughly equal. Shortly before and during an interaction, we mostly fixate on locations relevant to the task (Hayhoe et al. 2003; Rothkopf, Ballard, and Hayhoe 2016).

Task related eye movements can be classified into different types. Michael F. Land, Mennie, and Rusted (1999) proposed a categorisation based on monitoring functions. They distinguished between eye movements that serve to locate an object that will be used later in the process, eye movements that monitor how we move our hand or an object in our hand to a new location, eye movements related to the approach of one object to another, and eye movements related to checking a condition or variable related to the ongoing action.

While this classification is well suited to tasks that involve interactions with one or more objects, Foulsham (2015) developed a classification of fixations that is applicable to an even broader range of situations: he divides fixations during actions into three categories depending on how they relate temporally to the task at hand.
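To make the temporal logic of this classification concrete, the following sketch labels fixations by their timing relative to a single action. The function and its window thresholds are hypothetical, loosely based on the look-ahead (up to roughly 3 s) and just-in-time (roughly 0.5 to 1 s) intervals discussed in the following subsections.

```python
from enum import Enum

class FixationRole(Enum):
    PLANNING = "planning"          # look-ahead fixations well before the action
    JUST_IN_TIME = "just_in_time"  # shortly before action onset
    MONITORING = "monitoring"      # during the ongoing action

def label_fixation(fix_onset, action_onset, action_offset,
                   jit_window=1.0, lookahead_window=3.0):
    """Label a fixation by its temporal relation to one action (times in s).

    The window sizes are illustrative, not empirically fitted values.
    """
    if action_onset <= fix_onset <= action_offset:
        return FixationRole.MONITORING
    lead = action_onset - fix_onset
    if 0 < lead <= jit_window:
        return FixationRole.JUST_IN_TIME
    if jit_window < lead <= lookahead_window:
        return FixationRole.PLANNING
    return None  # unrelated to this particular action

print(label_fixation(2.0, 4.5, 6.0))  # FixationRole.PLANNING (2.5 s lead)
```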

Planning

The first category consists of fixations that are used to plan actions. These anticipatory fixations are part of task-dependent strategies and can be observed relatively early on, as they are often unrelated to the action currently being performed. In a study in which participants had to solve a modelling task, look-ahead fixations occurred before 20% of all grasping movements and took place up to 3 seconds before the actual movement (Mennie, Hayhoe, and Sullivan 2007). Similar fixations can also be observed when making binary decisions, such as deciding which path to follow at a junction. Usually, these precede the actual decision by only a few hundred milliseconds (Wiener, De Condappa, and Holscher 2011).

In addition, there are less common anticipatory fixations, which are not related to planning the next action but can be used to memorise the position of objects for later use (Foulsham 2015). In an experiment in which eye movements were compared between different tasks in the same environment, Jeff B. Pelz and Canosa (2001) found that anticipatory fixations were typically directed at task-relevant objects.

Just-in-Time Fixations

The second category mostly includes fixations that occur when we interact with objects in our environment. These frequently occurring fixations clearly relate to the upcoming action and are called just-in-time fixations (Ballard et al. 1992; Hayhoe, Bensinger, and Ballard 1998; Foulsham 2015). In an experiment by Michael F. Land, Mennie, and Rusted (1999), gaze moved to the next object an average of 0.6 seconds before the participants had finished manipulating the previous object. A similar temporal pattern has been observed for grasping, where we fixate objects approximately 0.5 to 1 s before reaching out for them.

Brouwer, Franz, and Gegenfurtner (2009) described the time course of eye movements during such a grasping task in more detail: The first eye fixation is normally directed towards the centre of gravity of the target object. When we simply look at an object, our gaze normally rests close to this position. Sometimes, a second saccade slightly corrects a possible undershoot of the initial landing position. Depending on the next intended action, the point of fixation may also be at a different location. For example, if we grasp an object, our second fixation is focused on regions that contain relevant information for the grasping positions of our index finger and the thumb.

Thus, by looking ahead and fixating task-relevant landmarks during interaction, our eyes support our hand movements (Johansson et al. 2001). However, when we try to walk past an obstacle, we look at its outer edges rather than its centre (Rothkopf, Ballard, and Hayhoe 2016).

This means that the additional visual information is gained just in time to carry out the next step (Ballard, Hayhoe, and Pelz 1995). Thus, when putting a cup somewhere, we fixate empty areas, such as the place on the table where we want to put the cup down (Michael F. Land, Mennie, and Rusted 1999; Hayhoe 2000; Schütz, Braun, and Gegenfurtner 2011). Interestingly, this pattern of fixating empty space between two potential future targets has also been observed in some visual search tasks (Najemnik and Geisler 2008; Findlay 1997). This suggests that during a search, some just-in-time preparation is also beneficial.

In addition to the fixation position, the timing of these task-related eye movements also appears to adjust depending on the task. For example, saccades towards anticipatory fixations during interaction seem to be faster and shorter than saccades used to examine visual objects (Epelboim et al. 1997, 1995). Such shortening of fixation time and acceleration of saccades can also be observed in other tasks, for example when an eye movement is performed as part of a pre-planned sequence of fixations in the same direction (Carpenter 2001).

Moreover, on the basis of an analysis of the eye movement strategies of cricket players, M. F. Land and McLeod (2000) argue that the correct temporal placement of the eyes is more crucial for the successful execution of behaviours than fixating accurately, and that skilled performance depends as much on the correct temporal as on the correct spatial allocation of gaze.

Furthermore, Hacques, Komar, and Dicks (2022) showed that the complexity of climbers’ gaze paths decreased when they trained a particular route for several weeks. In addition, the ratio of observational to anticipatory eye movements adjusted depending on the training.

Another function of short-term look-ahead fixations is likely to relieve our visual working memory: Ballard et al. (1992) conducted an experiment in which participants were instructed to copy an arrangement of coloured blocks on a computer. Instead of storing and recalling the original structure in visual memory, participants constantly looked back and forth between the original figure and their copy (Ballard, Hayhoe, and Pelz 1995; Ballard et al. 1992). Thus, by continuously using eye movements as a method of information seeking, we seem to reduce the load on our visual memory.

Monitoring

Finally, fixations can be related to our ongoing behaviour. These fixations help us to quickly adjust our actions based on visual information if necessary.

When we cut bread, for example, we first fixate the point of contact with the knife and then move our gaze along the cut directly in front of the knife (Hayhoe et al. 2003).

Similarly, we can adjust our pouring speed while filling a cup if we keep a close eye on the cup during this action (Michael F. Land, Mennie, and Rusted 1999).

Our eye and head movements during an action are closely linked to our behaviour and are often necessary to complete a task successfully. Most of our eye movements are associated with monitoring, anticipating and planning our actions. This means that, leading up to and during a task in which we interact with our environment, we are constantly gathering useful and necessary visual information with our eyes and thus adapting our actions to the requirements of different tasks. This also helps us to perform more precise hand movements. However, it also means that task-related gaze patterns do not always occur in fixation blocks that relate to only one subtask. Instead, a time series of eye movements often appears chaotic and complex at first glance, as successive fixations can relate to the current, the next and then again to the current subtask. Therefore, analysing eye movements in relation to the associated actions can be a helpful method for untangling the chaos.

Interestingly, describing gaze data according to this principle can shed light on how far ahead we typically plan and what information we use in different types of tasks. This means that task-related eye movements can give us a better understanding of how we act and could ultimately allow us to make predictions about future behaviour given a predefined task in a known environment.

Gaze During Locomotion

Natural locomotion allows us to constantly shift our field of view (FOV) to perceive relevant visual information around us. In many situations where we want to interact with an object in our environment, we walk towards it. This is because walking can be easily integrated into many of our daily tasks, such as when we go to the kitchen, find our favourite cup, fill it with water and then go to the living room to sit down and drink. Of course, we also move our head and eyes during these subtasks. Gaze-tracking recordings during such natural tasks may appear chaotic at first glance. However, many of the eye and head movements follow systematic patterns and can be attributed either to interactions with objects or to the planning of walking. To better understand and recognise these patterns, various typical gaze behaviours during walking are described in the following section. The next few pages also discuss how walking itself can influence our perception.

Looking Ahead

Like many other actions, walking benefits from visual information that we collect on the fly. However, the walking movements themselves make it difficult to capture this information directly. During walking, the head oscillates at approximately 2 Hz vertically and 1 Hz horizontally (S. T. Moore et al. 1999; Imai et al. 2001; Steven T. Moore et al. 2001). Eye movements therefore compensate for the head motion to stabilise our gaze (Steven T. Moore et al. 2001). In addition, our eyes compensate for our forward progress as we walk, allowing us to keep a fixated object more stable on the retina even as we approach it (Aftab E. Patla and Vickers 2003). At the same time, we align our head to a point approximately 0.8 m in front of us (S. T. Moore et al. 1999; Hirasaki et al. 1999). In walking experiments on flat terrain, we usually direct slightly more than half of our fixations toward the ground (Jeff B. Pelz and Rothkopf 2007; Jonathan Samir Matthis, Yates, and Hayhoe 2018; Aftab E. Patla and Vickers 2003).
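As a simple illustration of this compensation, the following sketch simulates a sinusoidal vertical head oscillation and the counter-rotation of the eye required to keep gaze on a fixed point on the ground; all numerical values are assumptions for illustration, not measured data.

```python
import numpy as np

fs = 200.0                                       # sampling rate (Hz), assumed
t = np.arange(0.0, 2.0, 1.0 / fs)                # two seconds of walking
head_pitch = 2.0 * np.sin(2 * np.pi * 2.0 * t)   # ~2 Hz vertical head bob, +-2 deg

target_direction = -10.0                         # fixed ground point ahead (deg)
# Perfect compensation: the eye counter-rotates in the head against the bob
eye_in_head = target_direction - head_pitch
gaze_in_world = head_pitch + eye_in_head         # head + eye = gaze in the world

assert np.allclose(gaze_in_world, target_direction)  # gaze stays on the target
# head oscillates ~4 deg peak-to-peak; gaze remains (numerically) constant
print(np.ptp(head_pitch), np.ptp(gaze_in_world))
```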

The targets of these travel-gaze fixations are usually in locations that we reach between 0.8 and 1.5 s later, two to four steps ahead (Jonathan Samir Matthis, Yates, and Hayhoe 2018; Aftab E. Patla and Vickers 2003; Mark A. Hollands et al. 1995; Jonathan S. Matthis and Fajen 2014). Most of these fixations are shorter than 0.6 s (Aftab E. Patla and Vickers 2003) and tend to be mainly directed at locations that are relevant to the ongoing task (Mark A. Hollands, Patla, and Vickers 2002; Rothkopf, Ballard, and Hayhoe 2016; Marigold and Patla 2007).
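These numbers are mutually consistent. Assuming a typical preferred walking speed of about 1.4 m/s and a step length of about 0.7 m (illustrative values, not taken from the cited studies), fixated locations reached 0.8 to 1.5 s later lie at a distance of

$$d = v \cdot t \approx 1.4\,\mathrm{m/s} \times (0.8\ \text{to}\ 1.5\,\mathrm{s}) \approx 1.1\ \text{to}\ 2.1\,\mathrm{m},$$

which at a step length of 0.7 m corresponds to roughly two to three steps, in line with the reported two to four steps ahead.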

In general, these fixations appear to serve to inspect specific locations in the terrain, while saccades direct the gaze back towards future waypoints further ahead (Hart and Einhäuser 2012).

Brenner, Ghiani, et al. (2024) found that we apparently look less far ahead when running than when walking. They noticed that during running, we do not adjust our gaze distance to the running speed. In other words, regardless of speed and step length, we look a certain distance ahead, and when we move faster while taking larger steps, we reach the place we are looking at earlier and in fewer steps. This also means that we usually do not look at the ground to place our feet. Instead, we seem to rely on the passive mechanical response of our body to remain stable (Dhawale and Venkadesan 2023). However, when running with others, we intuitively adapt our gaze behaviour. Brenner, Janssen, et al. (2024) found that when running in groups, the average time spent looking at the path in front increases by about 10%, presumably to make sure we do not trip over other runners’ feet. At the same time, we are also able to adapt our gait to the demands of vision to a certain extent. For example, Mulavara and Bloomberg (2003) found that participants in a walking and reading task were able to extend the double support phase of the gait cycle by 10% without changing step length or step duration to stabilise their upper body and thus their gaze. The more stable a target is in relation to the surroundings along the axes perpendicular to the direction of movement, the faster and more precise the gaze fixation (Manakhov et al. 2024). In contrast, visual objects that are attached to the head are perceived less accurately during walking (Borg, Casanova, and Bootsma 2015; Genç et al. 2016).

Overcoming Obstacles & Rough Terrain

If no obstacle is in our way, we mostly look at the path ahead of us and occasionally at our walking target (Aftab E. Patla and Vickers 2003). In contrast to reaching movements, we use fewer fixations to monitor exactly how we place our feet.

Thus, in a study using mobile eye tracking, Aftab E. Patla and Vickers (1997) found that when an obstacle is in our way, we tend to look at it before we reach it, but not while we step over it. When preparing to avoid an obstacle, we direct our gaze mainly to its outer edges (Rothkopf, Ballard, and Hayhoe 2016). If we are confronted with stairs, our gaze patterns also adapt. For example, the number of saccades when walking down stairs is clearly increased compared to walking on a descending surface (Hart and Einhäuser 2012). Moreover, our gaze distance seems to increase slightly when a walking task becomes more difficult and risky. For example, we look further ahead when descending stairs than when ascending them (Ghiani et al. 2023). During stair walking, we tend to fixate targets about three steps ahead (Zietz and Hollands 2009), and our gaze often skips the first step. In addition, Marigold and Patla (2007) noted that, when we change from one surface to another, a particularly large number of fixations are directed towards this transition. They concluded that in this way, fixations maximise the amount of available information to enable safe foot placement.

Interestingly, there is some evidence to suggest that peripheral vision, rather than foveal vision, is important for passing obstacles. This fits with previous observations in athletes: without central vision, slalom skiers were easily able to complete a 150 m course, javelin throwers hurled their javelins far and figure skaters performed clean spiral patterns on the ice (Franchak and Adolph 2010; Graybiel, Jokl, and Trapp 1955). Even patients with central vision loss navigate through their environment without major impairment (Hassan et al. 2002). Without peripheral vision, however, patients have great difficulty finding their way in the world and often stumble over obstacles (Geruschat, Turano, and Stahl 1998). Skiers without peripheral vision go off course, javelin throws become shorter and figure skating patterns become unpredictable (Franchak and Adolph 2010; Graybiel, Jokl, and Trapp 1955). The results of a field test with mobile eye trackers, in which participants ran through an obstacle course while searching for stickers, point in a similar direction: in 41% of the obstacle encounters of children and 68% of those of adults, participants controlled their locomotion adaptively without ever fixating the obstacle (Franchak and Adolph 2010).

On uneven terrain, where different surfaces alternate and we have to overcome gaps and small differences in height, we also adjust our walking. For example, we increase stride variability and the height of the swing foot (Kowalsky, Rebula, and Ojeda 2021). Complex terrain also seems to influence our sensorimotor decision-making and path planning based on depth information, since we consistently tend to choose indirect routes to reach flatter paths (Muller et al. 2024). We typically reduce our speed and simultaneously lower our gaze so that our fixations are closer to us (Jonathan Samir Matthis, Yates, and Hayhoe 2018; Hart and Einhäuser 2012; Thomas et al. 2020a). These closer fixations could also serve to gather more information about where we set foot. When comparing gaze patterns across different terrains, Jonathan Samir Matthis, Yates, and Hayhoe (2018) found that in medium and rough terrain, gaze was almost exclusively focused on the upcoming path and was closely associated with the upcoming footfalls. In rough terrain especially, gaze was distributed more evenly between the second and third upcoming footholds.

This finding fits well with a previous study in which Jonathan S. Matthis and Fajen (2014) found that if participants could not see the terrain within 2.5 steps ahead, walking speed was reduced and the likelihood of colliding with some of the objects increased. They explain this finding in terms of the different phases of walking. If we planned only one step ahead, visual information about an obstacle would be perceived while we are already in the single-support phase of the opposite leg. In this situation, the foot position and the alignment of the centre of mass are already partially predetermined, since the toes of the previous step have already pushed off from the ground. At this point, only less efficient adjustments of the foot landing position during the flight phase of the step are possible. However, if visual information about the location of obstacles is available before the start of the single-support phase (at least two full stride lengths before the obstacle), walkers can adapt the initial velocity of their centre of mass and the location of the touchdown foot to available footfalls by applying an appropriate push-off force. Similarly, Aftab E. Patla (1998) found that it is sufficient to have the necessary information available only in a critical time window before overcoming an obstacle: walking movements are not compromised if visual information about an obstacle is withheld while the obstacle is being overcome or up to two steps beforehand.

In a study in which participants had to step on visual targets that could be hidden or visible, Jonathan Samir Matthis, Barton, and Fajen (2017) found that the latest possible time at which participants needed the visual information was 1.5 steps before reaching the target. If visual information was available just in time, participants were able to walk over the visual markers fairly accurately. Interestingly, they also found that visual information that was only available further in advance, but was no longer visible 1.5 steps before reaching the target, led to less accurate foot placements. This suggests that we process visual information for walking, similar to when grasping an object, just before the actual movement.
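Assuming a typical cadence of about two steps per second (an illustrative value, not taken from the cited studies), this critical window corresponds to

$$t_{\text{critical}} \approx \frac{1.5\ \text{steps}}{2\ \text{steps/s}} \approx 0.75\,\mathrm{s},$$

which is on the same order as the 0.5 to 1 s lead of just-in-time fixations during grasping described earlier.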

When comparing the accuracy of foot placement with one or both eyes, Bonnen et al. (2021) concluded that, although it was possible to place the feet fairly accurately in both conditions, we probably use depth information from both eyes for our walking plans. Furthermore, under certain conditions of their experiment, artificially restricting vision to one eye led to slightly more frequent fixations on nearby footholds.

Interestingly, despite the different viewing behaviours in the different environments and despite the varying difficulty of the terrain, participants maintained a constant look-ahead time of approximately 1.5 seconds under all terrain conditions and appeared to use their eye movements in such a way that they knew what would happen in the next 1.5 to 2 seconds (Jonathan Samir Matthis, Yates, and Hayhoe 2018). This could explain why we make specific, anticipatory walking-speed adjustments on difficult terrain: the slower speeds may represent the maximum speed at which we are able to process the information needed to support locomotion in the face of the higher uncertainty of complex terrain. This would also fit with the results of a walking study by Darici and Kuo (2023), in which variations in walking speed were reproducible over several repetitions of the same trail and began about 6 to 8 steps before a terrain feature was reached, with closer features weighted more strongly. Usually, these speed adjustments are accompanied by a downward movement of the gaze, which consists of both head and eye movements (Thomas et al. 2020b).
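Such a constant look-ahead time is easy to check in gaze recordings: dividing the distance to the gaze–ground intersection by the current walking speed should yield a roughly constant value across terrains. The following sketch illustrates this with hypothetical numbers chosen to reproduce the reported constancy; none of them are measured values.

```python
def look_ahead_time(gaze_distance_m, walking_speed_ms):
    """Time until the walker reaches the currently fixated ground location."""
    return gaze_distance_m / walking_speed_ms

# Hypothetical gaze distances and speeds: rough terrain means slower walking
# and proportionally closer fixations, keeping look-ahead time near 1.5 s.
for terrain, d, v in [("flat", 2.1, 1.4), ("medium", 1.65, 1.1), ("rough", 1.2, 0.8)]:
    print(f"{terrain:6s} {look_ahead_time(d, v):.2f} s")
```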

Gaze During Turning

If we want to change direction while already moving, we can also continuously align ourselves with the new target by walking in a curve. When we go around curves, our head and eyes normally point to a location about 1 s ahead of us in the direction of the apex of the curve (Grasso et al. 1998). Initially, our eyes move in saccades in the direction of the rotation, interrupted by slower movements that compensate for the simultaneous head movement (Imai et al. 2001). The gaze thus guides the rotation of the head, so that the eyes are initially directed further in the direction of the rotation than the head. Once the rotation is complete, the eye position relative to the head returns to zero (Imai et al. 2001). Interestingly, eye movements towards the apex do not appear to be based on our visual input, as they occur in both light and darkness (Grasso et al. 1998). Grasso et al. (1998) therefore argue that these eye movements are instead a necessary part of our behavioural repertoire that prepares a stable frame of reference.

When we decide to start walking in a certain direction, we usually orient ourselves towards the target first, rather than approaching it in a curve. Such turns also occur when we suddenly change our walking target. Mark A. Hollands, Patla, and Vickers (2002) found that as soon as participants perform such turns, they make a synchronised eye and head movement towards the new target. They argued that the close temporal relationship between the onset of eye, head and turning movements suggests that they are all generated as part of a single reorientation process.

This idea is also supported by another experiment in which the head was immobilised during guided turning, resulting in a visible change in the timing of the body’s realignment (Mark A. Hollands, Sorensen, and Patla 2001). Interestingly, participants typically spent less than a third of their time looking at aspects of their future route before turning. While turning, however, they focused their gaze strictly on the updated destination of their walking route until the head movement, which is slower than a saccade, had reached the new walking direction.

Thus, Mark A. Hollands, Patla, and Vickers (2002) concluded that visual information about a new target is used to orient the body towards the new direction in stages: first, a gaze-centred frame of reference is created by completing the saccade; then a head-centred frame of reference is established once the head movement is complete; and finally the body is realigned (Mark A. Hollands, Sorensen, and Patla 2001).

A similar study by Mark A. Hollands, Ziavra, and Bronstein (2004) observed the orientation behaviour of standing participants who were asked to align themselves with a visual target and then walk towards it. In this experiment, it took between 0.3 and 0.4 s to make the first eye movement after the target became visible. After about 0.6 s, the head and upper body followed, and after 1.2 s, the feet also moved. This order was independent of how far the participants had to turn and also remained the same for turns to targets that were not initially visible in the FOV. The way in which the foot moved to align with the target was similar to the eye movement that had been directed at the target about 1 s earlier. Interestingly, larger turns led to a slightly increased delay of about 0.1 s in the onset of the initial eye movement. Moreover, Mark A. Hollands, Ziavra, and Bronstein (2004) found a positive correlation between the latencies of eye and foot movements. As an explanation, they suggest that the long latency of the saccades indicates the presence of underlying coordination networks. This would mean that the central nervous system either delays the onset of the eye movement until it is ready to initiate a coordinated whole-body movement, or that, when an eye movement is part of a coordinated whole-body movement, saccade programming incorporates additional information about other body segments, which delays the onset of the movement.

Walking Alters Eye Movements & Perception

In addition to the eye movements that help us orient ourselves, there is also some experimental evidence suggesting that walking itself influences some of our gaze movements. For example, Cao, Chen, and Haendel (2020) found that blinks and saccades during walking occur preferentially during the phase in which both feet are on the ground (the double-support phase). They found that increasing walking speed went along with an increased blink rate. Interestingly, the blink rate increased regardless of whether the task was performed in the dark or in the light, suggesting that the change in blink rate was related to the walking rhythm rather than to the visual input. This was not the case for saccade frequency, which increased only in light, fitting well with the idea that saccades during walking are mainly used to focus on the next waypoint. Barnes, Davidson, and Alais (2025) observed that the probability of eye movements matched the rhythm of the steps, with saccade probability peaking approximately in the swing phase of each step, just after the midpoint of stance. At the same time, EEG power increased during the swing phase and decreased during the approach to heel strike; this was mainly observed in the theta and alpha bands, producing an oscillation that also corresponded to the stride frequency of about 2 Hz. This effect was stronger when walking at natural speed than at slow speed.
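Phase-locking of this kind can be quantified by expressing each saccade onset as a phase within the step cycle. The following sketch is a minimal version of such an analysis; the event arrays, the beta-distributed saccade phases and the bin count are assumptions used only to produce an example with a late-phase peak.

```python
import numpy as np

def saccade_phase_histogram(saccade_times, heel_strikes, n_bins=12):
    """Probability of saccade onsets as a function of step-cycle phase.

    Phase 0 = one heel strike, phase 1 = the next heel strike.
    Both inputs are sorted arrays of event times in seconds.
    """
    phases = []
    for s in saccade_times:
        i = np.searchsorted(heel_strikes, s) - 1          # cycle containing s
        if 0 <= i < len(heel_strikes) - 1:
            t0, t1 = heel_strikes[i], heel_strikes[i + 1]
            phases.append((s - t0) / (t1 - t0))
    counts, edges = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return counts / max(len(phases), 1), edges

# Synthetic data: heel strikes at ~1.8 Hz, saccades biased towards late phases
rng = np.random.default_rng(1)
strikes = np.arange(0.0, 60.0, 0.55)
saccades = strikes[:-1] + 0.55 * rng.beta(4.0, 2.0, size=len(strikes) - 1)
probs, _ = saccade_phase_histogram(saccades, strikes)
print(probs.round(2))  # probability mass concentrated after mid-stance
```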

Some walking studies with simultaneous EEG measurements provide evidence of coordination between saccades, walking behaviour and the associated cortical activity. This suggests that locomotion might influence not only our gaze movements but also visual perception. For example, Benjamin et al. (2018) found that walking leads to an increased masking effect around low-contrast stimuli in a contrast recognition task. Cao and Händel (2019) showed that walking can alter contrast perception in the periphery. At the same time, they observed alpha oscillations, which indicate that cortical activity changes depending on walking behaviour. Generally, alpha power is considered a valuable marker for the inhibition of sensory processing and has been shown to correlate negatively with neural firing rates in monkeys (Haegens et al. 2011). In a follow-up study, Cao, Chen, and Haendel (2020) showed that walking led to a decrease in alpha activity in the occipital cortex in both light and darkness. Moreover, they found that alpha activity was lower in the swing phase than in the double-support phase.

Finally, two studies have shown that performance in tasks carried out during walking varies with walking speed and phase: Davidson et al. (2023) found that the performance of tracking a virtual object with a controller improved when walking at a natural pace as opposed to walking slowly. Davidson, Verstraten, and Alais (2024) subsequently found phasic modulations and optimal periods of sensorimotor precision across the step cycle: when participants had to respond to visual targets of varying contrast, their detection rate fluctuated at approximately 2 cycles per step, and accuracy, reaction time and response probability likewise showed clear oscillations that were systematically linked to the phase of each step.

Our eye movements play an essential role in locomotion. We use our gaze to plan our future path, avoid obstacles, navigate difficult terrain and change direction. This allows us to adjust our steps at short notice and react to other people walking near us. However, we usually do not monitor each of our steps closely. Instead, we make many task-relevant fixations on prominent objects in our FOV, mostly related to just-in-time information needed for the next steps and to longer-term planning. Especially when avoiding obstacles, we also seem to rely on information from the periphery. It also appears that our walking rhythm is closely linked to the rhythm of our eye movements and to other processes in our brain relevant to visual perception. As a result, eye movements during walking serve several purposes and are influenced not only by our visual input but also by our own movement patterns. This usually makes it difficult to analyse eye movement data recorded during long sequences of natural tasks that include locomotion.

With head-mounted displays (HMDs) equipped with eye-tracking sensors, it is possible to record walking and eye movement data simultaneously while maintaining complete control over the visual input. This could make it possible to untangle the multi-causal stream of eye movements and gain a better understanding of our behaviour. Such a setup might enable the extraction of the navigation information contained in our eye movements, as these are an essential part of planning walking behaviour. Given the large amount of movement data that can be measured during a walking task, this could even be automated using machine learning algorithms. It might also be possible to use this information as an early indicator of imminent or just-initiated behaviour. In particular, fixations associated with future waypoints and changes in direction should precede our actions by several steps. To further investigate this approach, we conducted study [II], in which we recorded eye and walking movements during various tasks in order to predict future waypoints from these data.
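As a minimal illustration of such an automated analysis (a hypothetical sketch, not the method used in study [II]), the following code trains a logistic regression on synthetic gaze features to predict an upcoming turn direction; the feature choices, the synthetic data and the model are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def gaze_features(horizontal_gaze_deg):
    """Summarise a pre-turn fixation sequence of horizontal gaze angles."""
    a = np.asarray(horizontal_gaze_deg)
    return [a.mean(), a.std(), a[-1] - a[0]]  # mean bias, variability, net drift

# Synthetic trials: mean gaze angle is biased towards the upcoming turn side
rng = np.random.default_rng(0)
labels = np.array([0, 1] * 50)                       # 0 = left turn, 1 = right
X = [gaze_features(rng.normal(5.0 if y else -5.0, 3.0, size=20)) for y in labels]

clf = LogisticRegression().fit(X, labels)
print(clf.score(X, labels))  # in-sample sanity check only, not an evaluation
```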

References

Adhanom, Isayas B., Samantha C. Lee, Eelke Folmer, and Paul MacNeilage. 2020. “GazeMetrics: An Open-Source Tool for Measuring the Data Quality of HMD-Based Eye Trackers.” In ACM Symposium on Eye Tracking Research and Applications, 1–5. ETRA ’20 Short Papers. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3379156.3391374.
Andreu-Sánchez, Celia, Miguel Ángel Martín-Pascual, Agnès Gruart, and José María Delgado-García. 2017. “Eyeblink Rate Watching Classical Hollywood and Post-Classical MTV Editing Styles, in Media and Non-Media Professionals.” Scientific Reports 7 (1): 43267. https://doi.org/10.1038/srep43267.
Andreu-Sánchez, Celia, Miguel Ángel Martín-Pascual, Agnès Gruart, and José María Delgado-García. 2021. “Viewers Change Eye-Blink Rate by Predicting Narrative Content.” Brain Sciences 11 (4). https://doi.org/10.3390/brainsci11040422.
Ballard, Dana H., Mary M. Hayhoe, Feng Li, Steven D. Whitehead, J.p. Frisby, J. G. Taylor, R. B. Fisher, et al. 1992. “Hand-Eye Coordination During Sequential Tasks.” Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 337 (1281): 331–39. https://doi.org/10.1098/rstb.1992.0111.
Ballard, Dana H., Mary M. Hayhoe, and Jeff B. Pelz. 1995. “Memory Representations in Natural Tasks.” Journal of Cognitive Neuroscience 7 (1): 66–80. https://doi.org/10.1162/jocn.1995.7.1.66.
Barnes, Lydia, Matthew J. Davidson, and David Alais. 2025. “The Speed and Phase of Locomotion Dictate Saccade Probability and Simultaneous Low-Frequency Power Spectra.” Attention, Perception, & Psychophysics 87 (1): 245–60. https://doi.org/10.3758/s13414-024-02932-4.
Benjamin, Alex V., Kirstie Wailes-Newson, Anna Ma-Wyatt, Daniel H. Baker, and Alex R. Wade. 2018. “The Effect of Locomotion on Early Visual Contrast Processing in Humans.” Journal of Neuroscience 38 (12): 3050–59. https://doi.org/10.1523/JNEUROSCI.1428-17.2017.
Bonnen, Kathryn, Jonathan S. Matthis, Agostino Gibaldi, Martin S. Banks, Dennis M. Levi, and Mary Hayhoe. 2021. “Binocular Vision and the Control of Foot Placement During Walking in Natural Terrain.” Scientific Reports 11 (1): 20881. https://doi.org/10.1038/s41598-021-99846-0.
Borg, Olivier, Remy Casanova, and Reinoud J. Bootsma. 2015. “Reading from a Head-Fixed Display During Walking: Adverse Effects of Gaze Stabilization Mechanisms.” PLOS ONE 10 (6): 1–14. https://doi.org/10.1371/journal.pone.0129902.
Brenner, Eli, Andrea Ghiani, David Mann, and Jeroen BJ Smeets. 2024. “Where Do People Look When They Walk or Run at Different Speeds?” Journal of Vision 24 (10): 271. https://doi.org/10.1167/jov.24.10.271.
Brenner, Eli, Marit Janssen, Nadia de Wit, Jeroen B. J. Smeets, David L. Mann, and Andrea Ghiani. 2024. “Running Together Influences Where You Look.” Perception 53 (5-6): 397–400. https://doi.org/10.1177/03010066241235112.
Brouwer, Anne-Marie, Volker H. Franz, and Karl R. Gegenfurtner. 2009. “Differences in Fixations Between Grasping and Viewing Objects.” Journal of Vision 9 (1): 18–18. https://doi.org/10.1167/9.1.18.
Cao, Liyu, Xinyu Chen, and Barbara F. Haendel. 2020. “Overground Walking Decreases Alpha Activity and Entrains Eye Movements in Humans.” Frontiers in Human Neuroscience, December. https://www.proquest.com/scholarly-journals/overground-walking-decreases-alpha-activity/docview/2471936104/se-2.
Cao, Liyu, and Barbara Händel. 2019. “Walking Enhances Peripheral Visual Processing in Humans.” PLOS Biology 17 (10): 1–23. https://doi.org/10.1371/journal.pbio.3000511.
Carpenter, R. H. S. 2001. “Express Saccades: Is Bimodality a Result of the Order of Stimulus Presentation?” Vision Research 41 (9): 1145–51. https://doi.org/10.1016/S0042-6989(01)00007-4.
Graybiel, Ashton, Ernst Jokl, and Claude Trapp. 1955. “Notes: Russian Studies of Vision in Relation to Physical Activity and Sports.” Research Quarterly. American Association for Health, Physical Education and Recreation 26 (4): 480–85. https://doi.org/10.1080/10671188.1955.10612840.
Darici, Osman, and Arthur D. Kuo. 2023. “Humans Plan for the Near Future to Walk Economically on Uneven Terrain.” Proceedings of the National Academy of Sciences 120 (19): e2211405120. https://doi.org/10.1073/pnas.2211405120.
Davidson, Matthew J., Robert Tobin Keys, Brian Szekely, Paul MacNeilage, Frans Verstraten, and David Alais. 2023. “Continuous Peripersonal Tracking Accuracy Is Limited by the Speed and Phase of Locomotion.” Scientific Reports 13 (1): 14864. https://doi.org/10.1038/s41598-023-40655-y.
Davidson, Matthew J., Frans A. J. Verstraten, and David Alais. 2024. “Walking Modulates Visual Detection Performance According to Stride Cycle Phase.” Nature Communications 15 (1): 2027. https://doi.org/10.1038/s41467-024-45780-4.
Dhawale, Nihav, and Madhusudhan Venkadesan. 2023. “How Human Runners Regulate Footsteps on Uneven Terrain.” Edited by Monica A Daley, Tirin Moore, and Andrew A Biewener. eLife 12 (February): e67177. https://doi.org/10.7554/eLife.67177.
Di Stasi, Leandro L., Michael B. McCamy, Andrés Catena, Stephen L. Macknik, José J. Cañas, and Susana Martinez-Conde. 2013. “Microsaccade and Drift Dynamics Reflect Mental Fatigue.” European Journal of Neuroscience 38 (3): 2389–98. https://doi.org/10.1111/ejn.12248.
Dorr, Michael, Thomas Martinetz, Karl R. Gegenfurtner, and Erhardt Barth. 2010. “Variability of Eye Movements When Viewing Dynamic Natural Scenes.” Journal of Vision 10 (10): 28–28. https://doi.org/10.1167/10.10.28.
Doughty, Michael J. 2001. “Consideration of Three Types of Spontaneous Eyeblink Activity in Normal Humans: During Reading and Video Display Terminal Use, in Primary Gaze, and While in Conversation.” Optometry and Vision Science 78 (10). https://journals.lww.com/optvissci/fulltext/2001/10000/consideration_of_three_types_of_spontaneous.11.aspx.
Duchowski, A. T. 2007. Eye Tracking Methodology. Springer London. https://doi.org/10.1007/978-1-84628-609-4.
Epelboim, Julie, Robert M. Steinman, Eileen Kowler, Mark Edwards, Zygmunt Pizlo, Casper J. Erkelens, and Han Collewijn. 1995. “The Function of Visual Search and Memory in Sequential Looking Tasks.” Vision Research 35 (23): 3401–22. https://doi.org/10.1016/0042-6989(95)00080-X.
Epelboim, Julie, Robert M. Steinman, Eileen Kowler, Zygmunt Pizlo, Casper J. Erkelens, and Han Collewijn. 1997. “Gaze-Shift Dynamics in Two Kinds of Sequential Looking Tasks.” Vision Research 37 (18): 2597–607. https://doi.org/10.1016/S0042-6989(97)00075-8.
Findlay, John M. 1997. “Saccade Target Selection During Visual Search.” Vision Research 37 (5): 617–31. https://doi.org/10.1016/S0042-6989(96)00218-0.
Foulsham, T. 2015. “Eye Movements and Their Functions in Everyday Tasks.” Eye 29 (2): 196–99. https://doi.org/10.1038/eye.2014.275.
Franchak, John M., and Karen E. Adolph. 2010. “Visually Guided Navigation: Head-Mounted Eye-Tracking of Natural Locomotion in Children and Adults.” Vision Research 50 (24): 2766–74. https://doi.org/10.1016/j.visres.2010.09.024.
Friedman, Lee, and Oleg V. Komogortsev. 2025. “Fixation Drift Increases as a Function of Time-on-Task in a Brief Saccade Tracking Study.” PLOS ONE 20 (6): 1–17. https://doi.org/10.1371/journal.pone.0310619.
Furneaux, S, and Michael F Land. 1999. “The Effects of Skill on the Eye-Hand Span During Musical Sight-Reading.” Proceedings. Biological Sciences 266 (1436): 2435–40. https://doi.org/10.1098/rspb.1999.0943.
Genç, Çağlar, Shoaib Soomro, Yalçın Duyan, Selim Ölçer, Fuat Balcı, Hakan Ürey, and Oğuzhan Özcan. 2016. “Head Mounted Projection Display & Visual Attention: Visual Attentional Processing of Head Referenced Static and Dynamic Displays While in Motion and Standing.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 1538–47. CHI ’16. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2858036.2858449.
Geruschat, Duane R., Kathleen A. Turano, and Julie W. Stahl. 1998. “Traditional Measures of Mobility Performance and Retinitis Pigmentosa.” Optometry and Vision Science 75 (7). https://doi.org/10.1097/00006324-199807000-00022.
Ghiani, Andrea, Liz R. Van Hout, Joost G. Driessen, and Eli Brenner. 2023. “Where Do People Look When Walking up and down Familiar Staircases?” Journal of Vision 23 (1): 7–7. https://doi.org/10.1167/jov.23.1.7.
Grasso, Renato, Pascal Prévost, Yuri P Ivanenko, and Alain Berthoz. 1998. “Eye-Head Coordination for the Steering of Locomotion in Humans: An Anticipatory Synergy.” Neuroscience Letters 253 (2): 115–18. https://doi.org/10.1016/S0304-3940(98)00625-9.
Hacques, Guillaume, John Komar, and Matt Dicks. 2022. “Visual Control During Climbing: Variability in Practice Fosters a Proactive Gaze Pattern.” PLOS ONE 17 (6): 1–23. https://doi.org/10.1371/journal.pone.0269794.
Haegens, Saskia, Verónica Nácher, Rogelio Luna, Ranulfo Romo, and Ole Jensen. 2011. “Alpha-Oscillations in the Monkey Sensorimotor Network Influence Discrimination Performance by Rhythmical Inhibition of Neuronal Spiking.” Proceedings of the National Academy of Sciences 108 (48): 19377–82. https://doi.org/10.1073/pnas.1117190108.
Hart, Bernard Marius ’t, and Wolfgang Einhäuser. 2012. “Mind the Step: Complementary Effects of an Implicit Task on Eye and Head Movements in Real-Life Gaze Allocation.” Experimental Brain Research 223 (2): 233–49. https://doi.org/10.1007/s00221-012-3254-x.
Hassan, Shirin E., Jan E. Lovie-Kitchin, and Russell L. Woods. 2002. “Vision and Mobility Performance of Subjects with Age-Related Macular Degeneration.” Optometry and Vision Science 79 (11). https://doi.org/10.1097/00006324-200211000-00007.
Hayhoe, Mary M. 2000. “Vision Using Routines: A Functional Account of Vision.” Visual Cognition 7 (1-3): 43–64. https://doi.org/10.1080/135062800394676.
Hayhoe, Mary M., David G. Bensinger, and Dana H. Ballard. 1998. “Task Constraints in Visual Working Memory.” Vision Research 38 (1): 125–37. https://doi.org/10.1016/S0042-6989(97)00116-8.
Hayhoe, Mary M., Anurag Shrivastava, Ryan Mruczek, and Jeff B. Pelz. 2003. “Visual Memory and Motor Planning in a Natural Task.” Journal of Vision 3 (1): 6–6. https://doi.org/10.1167/3.1.6.
Hirasaki, Eishi, Steven T. Moore, T. Raphan, and Bernard Cohen. 1999. “Effects of Walking Velocity on Vertical Head and Body Movements During Locomotion.” Experimental Brain Research 127 (2): 117–30. https://doi.org/10.1007/s002210050781.
Hollands, Mark A, Dilwyn E Marple-Horvat, Sebastian Henkes, and Andrew K Rowan. 1995. “Human Eye Movements During Visually Guided Stepping.” Journal of Motor Behavior 27 (2): 155–63. https://doi.org/10.1080/00222895.1995.9941707.
Hollands, Mark A, Aftab E Patla, and Joan N Vickers. 2002. “‘Look Where You’re Going!’: Gaze Behaviour Associated with Maintaining and Changing the Direction of Locomotion.” Experimental Brain Research 143 (2): 221–30. https://doi.org/10.1007/s00221-001-0983-7.
Hollands, Mark A., K. Sorensen, and A. Patla. 2001. “Effects of Head Immobilization on the Coordination and Control of Head and Body Reorientation and Translation During Steering.” Experimental Brain Research 140 (2): 223–33. https://doi.org/10.1007/s002210100811.
Hollands, Mark A., Nausica V. Ziavra, and Adolfo M. Bronstein. 2004. “A New Paradigm to Investigate the Roles of Head and Eye Movements in the Coordination of Whole-Body Movements.” Experimental Brain Research 154 (2): 261–66. https://doi.org/10.1007/s00221-003-1718-8.
Imai, Takao, Steven T. Moore, Theodore Raphan, and Bernard Cohen. 2001. “Interaction of the Body, Head, and Eyes During Walking and Turning.” Experimental Brain Research 136 (1): 1–18. https://doi.org/10.1007/s002210000533.
Johansson, Roland S., Göran Westling, Anders Bäckström, and J. Randall Flanagan. 2001. “Eyehand Coordination in Object Manipulation.” Journal of Neuroscience 21 (17): 6917–32. https://doi.org/10.1523/JNEUROSCI.21-17-06917.2001.
Karson, Craig N., Karen Faith Berman, Edward F. Donnelly, Wallace B. Mendelson, Joel E. Kleinman, and Richard Jed Wyatt. 1981. “Speaking, Thinking, and Blinking.” Psychiatry Research 5 (3): 243–46. https://doi.org/10.1016/0165-1781(81)90070-6.
Kowalsky, Daniel B., John R. Rebula, and Lauro V. Ojeda. 2021. “Human Walking in the Real World: Interactions Between Terrain Type, Gait Parameters, and Energy Expenditure.” PLOS ONE 16 (1): 1–14. https://doi.org/10.1371/journal.pone.0228682.
Land, M F, and P McLeod. 2000. “From Eye Movements to Actions: How Batsmen Hit the Ball.” Nature Neuroscience 3 (12): 1340–45. https://doi.org/10.1038/81887.
Land, M. F., and D. N. Lee. 1994. “Where We Look When We Steer.” Nature 369 (6483): 742–44. https://doi.org/10.1038/369742a0.
Land, Michael F., N. Mennie, and J. Rusted. 1999. “The Roles of Vision and Eye Movements in the Control of Activities of Daily Living.” Perception 28 (11): 1311–28. https://doi.org/10.1068/p2935.
Land, Michael F., and Benjamin W. Tatler. 2009. Looking and Acting: Vision and Eye Movements in Natural Behaviour. Looking and Acting: Vision and Eye Movements in Natural Behaviour. New York, NY, US: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198570943.001.0001.
Land, Michael F, and Benjamin W Tatler. 2001. “Steering with the Head: The Visual Strategy of a Racing Driver.” Current Biology 11 (15): 1215–20. https://doi.org/10.1016/S0960-9822(01)00351-7.
Manakhov, Pavel, Ludwig Sidenmark, Ken Pfeuffer, and Hans Gellersen. 2024. “Gaze on the Go: Effect of Spatial Reference Frame on Visual Target Acquisition During Physical Locomotion in Extended Reality.” In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. CHI ’24. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3613904.3642915.
Marigold, D. S., and A. E. Patla. 2007. “Gaze Fixation Patterns for Negotiating Complex Ground Terrain.” Neuroscience 144 (1): 302–13. https://doi.org/10.1016/j.neuroscience.2006.09.006.
Matthis, Jonathan Samir, Sean L. Barton, and Brett R. Fajen. 2017. “The Critical Phase for Visual Control of Human Walking over Complex Terrain.” Proceedings of the National Academy of Sciences 114 (32): E6720–29. https://doi.org/10.1073/pnas.1611699114.
Matthis, Jonathan Samir, Jacob L. Yates, and Mary M. Hayhoe. 2018. “Gaze and the Control of Foot Placement When Walking in Natural Terrain.” Current Biology 28 (8): 1224–1233.e5. https://doi.org/10.1016/j.cub.2018.03.008.
Matthis, Jonathan S, and Brett R Fajen. 2014. “Visual Control of Foot Placement When Walking over Complex Terrain.” Journal of Experimental Psychology. Human Perception and Performance 40 (1): 106–15. https://doi.org/10.1037/a0033101.
Mennie, Neil, Mary Hayhoe, and Brian Sullivan. 2007. “Look-Ahead Fixations: Anticipatory Eye Movements in Natural Tasks.” Experimental Brain Research 179 (3): 427–42. https://doi.org/10.1007/s00221-006-0804-0.
Moore, S. T., E. Hirasaki, B. Cohen, and T. Raphan. 1999. “Effect of Viewing Distance on the Generation of Vertical Eye Movements During Locomotion.” Experimental Brain Research 129 (3): 347–61. https://doi.org/10.1007/s002210050903.
Moore, Steven T., Eishi Hirasaki, Theodore Raphan, and Bernard Cohen. 2001. “The Human Vestibulo-Ocular Reflex During Linear Locomotion.” Annals of the New York Academy of Sciences 942 (1): 139–47. https://doi.org/10.1111/j.1749-6632.2001.tb03741.x.
Mulavara, Ajitkumar P., and Jacob J. Bloomberg. 2003. “Identifying Head-Trunk and Lower Limb Contributions to Gaze Stabilization During Locomotion.” Journal of Vestibular Research 12 (5-6): 255–69. https://doi.org/10.3233/VES-2003-125-606.
Muller, Karl S, Kathryn Bonnen, Stephanie M Shields, Daniel P Panfili, Jonathan Matthis, and Mary M Hayhoe. 2024. “Analysis of Foothold Selection During Locomotion Using Terrain Reconstruction.” Edited by Miriam Spering and Tirin Moore. eLife 12 (December): RP91243. https://doi.org/10.7554/eLife.91243.
Najemnik, Jiri, and Wilson S. Geisler. 2008. “Eye Movement Statistics in Humans Are Consistent with an Optimal Search Strategy.” Journal of Vision 8 (3): 4–4. https://doi.org/10.1167/8.3.4.
Noton, David, and Lawrence Stark. 1971a. “Scanpaths in Eye Movements During Pattern Perception.” Science 171 (3968): 308–11. https://doi.org/10.1126/science.171.3968.308.
Noton, David, and Lawrence Stark. 1971b. “Scanpaths in Saccadic Eye Movements While Viewing and Recognizing Patterns.” Vision Research 11 (9): 929–IN8. https://doi.org/10.1016/0042-6989(71)90213-6.
Patla, Aftab E. 1998. “How Is Human Gait Controlled by Vision.” Ecological Psychology 10 (3-4): 287–302. https://doi.org/10.1080/10407413.1998.9652686.
Patla, Aftab E., and Joan N. Vickers. 2003. “How Far Ahead Do We Look When Required to Step on Specific Locations in the Travel Path During Locomotion?” Experimental Brain Research 148 (1): 133–38. https://doi.org/10.1007/s00221-002-1246-y.
Patla, Aftab E, and Joan N Vickers. 1997. “Where and When Do We Look as We Approach and Step over an Obstacle in the Travel Path?” Neuroreport 8 (17): 3661–65. https://doi.org/10.1097/00001756-199712010-00002.
Pelz, Jeff B, and Roxanne Canosa. 2001. “Oculomotor Behavior and Perceptual Strategies in Complex Tasks.” Vision Research 41 (25): 3587–96. https://doi.org/10.1016/S0042-6989(01)00245-0.
Pelz, Jeff B., and Constantin Rothkopf. 2007. “Oculomotor Behavior in Natural and Man-Made Environments.” In Eye Movements, edited by Roger P. G. Van Gompel, Martin H. Fischer, Wayne S. Murray, and Robin L. Hill, 661–76. Oxford: Elsevier. https://doi.org/10.1016/B978-008044980-7/50033-1.
Ranti, Carolyn, Warren Jones, Ami Klin, and Sarah Shultz. 2020. “Blink Rate Patterns Provide a Reliable Measure of Individual Engagement with Scene Content.” Scientific Reports 10 (1): 8267. https://doi.org/10.1038/s41598-020-64999-x.
Rothkopf, Constantin A., Dana H. Ballard, and Mary M. Hayhoe. 2016. “Task and Context Determine Where You Look.” Journal of Vision 7 (14): 16–16. https://doi.org/10.1167/7.14.16.
Schütz, Alexander C., Doris I. Braun, and Karl R. Gegenfurtner. 2011. “Eye Movements and Perception: A Selective Review.” Journal of Vision 11 (5): 9–9. https://doi.org/10.1167/11.5.9.
Tatler, Benjamin W., Roland J. Baddeley, and Benjamin T. Vincent. 2006. “The Long and the Short of It: Spatial Statistics at Fixation Vary with Saccade Amplitude and Task.” Vision Research 46 (12): 1857–62. https://doi.org/10.1016/j.visres.2005.12.005.
Tatler, Benjamin W., Mary M. Hayhoe, Michael F. Land, and Dana H. Ballard. 2011. “Eye Guidance in Natural Vision: Reinterpreting Salience.” Journal of Vision 11 (5): 5–5. https://doi.org/10.1167/11.5.5.
Tatler, Benjamin W, Nicholas J Wade, Hoi Kwan, John M Findlay, and Boris M Velichkovsky. 2010. “Yarbus, Eye Movements, and Vision.” I-Perception 1 (1): 7–27. https://doi.org/10.1068/i0382.
Thomas, Nicholas D. A., James D. Gardiner, Robin H. Crompton, and Rebecca Lawson. 2020a. “Physical and Perceptual Measures of Walking Surface Complexity Strongly Predict Gait and Gaze Behaviour.” Human Movement Science 71: 102615. https://doi.org/10.1016/j.humov.2020.102615.
Thomas, Nicholas D. A., James D. Gardiner, Robin H. Crompton, and Rebecca Lawson. 2020b. “Look Out: An Exploratory Study Assessing How Gaze (Eye Angle and Head Angle) and Gait Speed Are Influenced by Surface Complexity.” PeerJ 8 (April): e8838. https://doi.org/10.7717/peerj.8838.
Wiener, Jan, Olivier De Condappa, and Christoph Holscher. 2011. “Do You Have to Look Where You Go? Gaze Behaviour During Spatial Decision Making.” Proceedings of the Annual Meeting of the Cognitive Science Society 33. https://escholarship.org/uc/item/9n91h72n.
Yarbus, Alfred L. 1967. “Saccadic Eye Movements.” In Eye Movements and Vision, 129–46. Boston, MA: Springer US. https://doi.org/10.1007/978-1-4899-5379-7_5.
Zietz, Doerte, and Mark A Hollands. 2009. “Gaze Behavior of Young and Older Adults During Stair Walking.” Journal of Motor Behavior 41 (4): 357–66. https://doi.org/10.3200/JMBR.41.4.357-366.