Research Article
Designs and Algorithms to Map Eye Tracking Data
with Dynamic Multielement Moving Objects
Ziho Kang,1 Saptarshi Mandal,1 Jerry Crutchfield,2 Angel Millan,2 and Sarah N. McClung3

1 School of Industrial and Systems Engineering, University of Oklahoma, 202 West Boyd Street, Norman, OK 73019, USA
2 Aerospace Human Factors Research Division, Civil Aerospace Medical Institute AAM-520, Federal Aviation Administration, P.O. Box 25082, Oklahoma City, OK 73125, USA
3 School of Electrical and Computer Engineering, University of Oklahoma, 110 W. Boyd Street, Devon Energy Hall 150, Norman, OK 73019-1102, USA

Correspondence should be addressed to Ziho Kang; [email protected]
Received November 2015; Revised March 2016; Accepted May 2016
Academic Editor: Hong Fu
Copyright © 2016 Ziho Kang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) the visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interest (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT), which controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations, where air traffic control specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define the dynamic AOIs used to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance.
1. Introduction
Eye tracking research is useful for evaluating usability or analyzing human performance and, more importantly, for understanding the underlying cognitive processes based on the eye-mind hypothesis [1]. This hypothesis asserts that what we observe when performing a task is highly correlated with our cognitive processes. Thus, eye tracking research has been conducted in diverse fields to investigate how objects or spatially fixed areas are interrogated [2–7]. For example, an air traffic control specialist (ATCS) must detect and control multiple aircraft on a radar display in a timely manner in order to maintain a safe and expeditious flow of air traffic. Through eye tracking data, we can identify which aircraft the ATCS interrogates and what visual search pattern the ATCS applies.
However, the analysis of eye tracking data for a task that requires interrogating moving objects (e.g., an ATCS controlling multiple moving aircraft on a radar display or a weather forecaster determining whether to issue a warning by observing the weather features on a radar display) can be difficult due to the different characteristics of the moving objects and the limited capabilities of the eye tracking system. Furthermore, eye tracking analysis becomes more difficult if the object's overall shape can change due to the shape change of the object's elements or the physical relocation of its elements (e.g., an aircraft on a radar screen is composed of elements such as a vector line and a data block, and the length of the vector line can change due to an aircraft speed change, or the data block can be repositioned by the ATCS). The details of the issues are as follows.
In order to map and analyze the eye tracking data for such a task, the different characteristics of those moving objects need to be identified (Figure 1). Objects can have irregular shapes and sizes and different movement
F : Characteristics of multiple moving objects: each object is
in motion except for object “B.” “A
𝑡
indicates a circular object at
time and A
𝑡+1
indicates the change of its location at time +1.
“D” is an object rotating clockwise, and “E” is an object changing
its shape. e red dots on and around “C” indicate the order of eye
xations at times (eye xation ) and +1(eye xations to ).
Figure 2: Area of interest (AOI) and visual angle accuracy error: the AOI approximates the shape of the object and should be slightly bigger than the original object size considering the visual angle error (e.g., 0.5°, illustrated as the offset between the actual fixation point and the perceived fixation point of the eye tracker). The object consists of the aircraft itself (shown as a small diamond shape), the direction indicator (currently flying east), the data block (aircraft ID: UAL480, altitude: cruising at 280, computer ID: 781, and speed: 480 knots), and the leader line which points to its corresponding aircraft.
characteristics and can be in close proximity to or overlap with one another as time progresses. When the eye fixation data is collected, we can overlay the data with the objects to determine whether an eye fixation occurred on an object.
Eye tracking systems return pixel-based coordinates where the eyes fixated; however, we are more interested in (1) whether eye fixations occurred on the objects of interest as well as (2) the order of the eye fixations among those objects of interest. Specifically, we need to consider the following issues when mapping the pixel-based eye fixations with the multielement objects on a display.
One of the difficulties with mapping the eye tracking data to the objects is due to the visual angle accuracy of the eye trackers (Figure 2). Visual angle accuracy (expressed in degrees) is defined as the deviation of the coordinates collected from the eye tracker from the actual location on which the individual fixated [, ] (e.g., 0.5° [–]) when using displays of roughly desktop-monitor size or smaller. For example, if a display is observed from 1 meter away with a visual angle accuracy of 0.5°, then we can have up to roughly a centimeter of error in where the eyes fixated. Therefore, observing the eye fixations shown as red dots in Figure 1, in addition to the first four eye fixations, we could also determine that the fifth eye fixation may have occurred on object "C." In addition to the inherent error of eye tracking systems, accuracy error can also be affected by experimental conditions.
For example, in actual air traffic control rooms, ATCSs sit close to a large monitor in order to better detect and control the many aircraft within their sector. For such an environment, the accuracy of the eye tracker can drastically decrease. These issues occur when measuring eye tracking data not only in an air traffic control task, but also in various other tasks such as driving or a virtual simulation of offshore oil and gas operations. Therefore, the visual angle accuracy is not fixed at 0.5° and can vary based on the experimental conditions when we pursue high face validity.
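To make the relationship between visual angle error and on-screen error concrete, the short Python sketch below converts an assumed visual angle error and viewing distance into an on-screen error radius, which is one way to choose a starting AOI buffer in pixels. The function name and the numeric values are our illustrative assumptions, not the study's configuration.

import math

def gaze_error_pixels(angle_deg, viewing_distance_cm, pixels_per_cm):
    """On-screen error radius (in pixels) implied by a visual angle error.

    The error on the screen is d * tan(theta), where d is the viewing
    distance and theta is the visual angle error of the eye tracker.
    """
    error_cm = viewing_distance_cm * math.tan(math.radians(angle_deg))
    return error_cm * pixels_per_cm

# Illustrative values only: a 0.5 degree error viewed from 100 cm is
# about 0.87 cm, i.e., roughly 26 pixels on a display with 30 pixels/cm.
print(round(gaze_error_pixels(0.5, 100.0, 30.0), 1))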
In addition, the mapping of eye tracking data to moving objects can be difficult if there are multiple small objects moving on the display and each object is composed of several elements (e.g., the aircraft position symbol (or target), vector line, and data block). To accommodate the complex shapes of objects as well as the visual angle accuracy, the concept of an area of interest (AOI) can be applied. An AOI is a convex shape that approximates and represents the complex object shape and can be a simple shape such as a circle or rectangle. For example, the AOI can be a fixed rectangular area [, , ] or a moving rectangular area [, ] on a display, depending on the task type. Note that the size of an AOI should be slightly enlarged to account for the visual angle accuracy [, ].
To determine whether an eye fixation occurred on an object, we need to consider two aspects. First, the eye fixation should have occurred within the visual angle error range (e.g., 0.5° from all edge points of an object). Second, there should be no other object or background image to which the eye fixation could belong. In other words, if two objects are in close proximity, it can be difficult to determine which object the participant was interrogating. Even if the objects arrived from different locations, they can come into close proximity and even overlap as time progresses (Figure 3). Although considerable research has been conducted to investigate the eye movements of air traffic control operations [15–18], it was limited to creating spatially fixed AOIs or did not elaborate on how overlapping issues were addressed.
Additionally, the mapping issue becomes more complex if the shapes of the multielement objects change. For example, if two aircraft come into close proximity, the aircraft position symbols (or targets) as well as the data blocks can overlap, and an ATCS can then reposition the data block (Figure 4). The data block can be repositioned in eight directions relative to the aircraft position symbol (e.g., from the bottom of the target to the top or to the upper right corner of the target) as well as moved farther away from the target.
In this paper, we present designs and algorithms that address the issues raised above in order to facilitate the analysis of the eye
Figure 3: Overlapping objects and defined AOIs over time: the AOIs are designed slightly larger than the objects themselves to accommodate the visual angle error. The overlapping areas are denoted as S_AB and S_A'B'. (a) Overlapping objects A and B over time (overlap S_AB); (b) overlapping AOIs A' and B' over time (overlap S_A'B').
F : e overall shape change of an aircra position indicator along with the data block. An ATCS can freely move around the data
block since the leader line connects the data block to its aircra. e overall shape also changes if the aircra changes its direction.
tracking data for tasks that involve interrogating multielement moving objects that can change their overall shape and overlap with one another, by considering different shapes and sizes of AOIs that are fitted to represent the objects.
2. Conceptual Designs and Algorithms
The main features of our approach are to (1) develop dynamic AOIs that continuously fit the multielement objects into convex or rectangular shapes whenever the objects' overall shapes or locations change, (2) modify the size of the AOIs (through the concept of AOI gap tolerance) to account for the visual angle error, (3) map the pixel-coordinate-based eye fixations to the AOIs, and (4) define eye fixations on overlapping AOIs. Specific to air traffic control operations, the designs and algorithms create AOIs based on matching the pixel coordinates of the flight data block, target, and vector lines with the pixel coordinates of the eye fixations.
Figure 5 represents the data processing flowchart of the overall methodology. The flowchart consists of seven major steps, which are discussed in detail in the subsequent sections. Note that the introduced algorithm is based on discretized movements of the moving objects, and the background (scene) is fixed.
Step 1. Collect and preprocess simulation and eye tracking data.

Step 1.1 (collect and preprocess simulation data). Assume the simulation scenario is of duration m in minutes. Given an update rate (UR) in seconds (e.g., 1 second), defined as the refresh rate of the objects' locations and shapes on the display, the total duration of the scenario (m × 60 seconds) can be divided into time frames of length UR. Thus, if we want to represent the m-minute scenario as discrete time frames, we can represent it as

T = {UR, UR × 2, UR × 3, ..., UR × m × 60},   (1)

where T represents the time frame counter in seconds.
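A minimal sketch of this discretization, assuming an update rate UR in seconds and a scenario length m in minutes (Python, with variable names of our choosing):

def time_frames(update_rate_s, duration_min):
    """Return the frame-end times {UR, 2*UR, ..., m*60} in seconds."""
    n_frames = int(duration_min * 60 / update_rate_s)
    return [update_rate_s * k for k in range(1, n_frames + 1)]

# Example: a 20-minute scenario with a 1-second update rate yields 1200 frames.
frames = time_frames(1.0, 20.0)
print(len(frames), frames[0], frames[-1])   # 1200 1.0 1200.0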
Figure 5: Data processing flowchart: (1) collect and preprocess simulation and eye fixation data; (2) develop different types of AOIs; (3) map eye fixation data with AOIs; (4) visualize/plot eye fixation data on the simulated scenarios; (5) calculate different metrics of interest for the initial (or minimal) AOI gap tolerance value; (6) change (increase) the AOI gap tolerance value and repeat until all AOI gap tolerance values are covered; (7) find the optimal AOI gap tolerance value.
Figure 6: Discretization of the simulation video into time frames: a 20-minute scenario is divided into frames from the start time to the end time, frame "1" (total elapsed time 1 sec), frame "2" (2 secs), frame "3" (3 secs), ..., frame "n" (total elapsed time "n" secs).
Figure 6 represents an example of the discretization process of the simulation output for a 20-minute duration. Note the observable (or systematic) discrete movement of the objects (e.g., the aircraft on the radar display). In other words, no change in position occurs within a time frame; for example, if the simulation starts at 0 seconds, the next change in position of an aircraft will occur at the end of the first second, the next change at the end of two seconds,
Table 1: Example of eye fixation data (columns: X pos (pixels), Y pos (pixels), Start time (secs), Stop time (secs), Duration (secs)).
and so on. After discretizing the time frames as part of the simulation data preprocessing step, the corresponding multielement object data are identified for each time frame. Let N be the set that contains all of the multielement object information for the total time duration. Then N can be represented as

N = {n_{N_UR, UR}, n_{N_{UR×2}, UR×2}, ..., n_{N_{UR×m×60}, UR×m×60}},   (2)

where n_{N_T, T} is the set of multielement objects present for each time frame T.
Step 1.2 (collect and preprocess eye fixation data). The eye fixation data needs to be processed according to the time discretization strategy used for processing the simulation data. Table 1 represents a small sample of eye fixation data. The first and second columns represent the horizontal and vertical pixel coordinates of the eye fixations, respectively. The third and fourth columns show the start and stop time of an eye fixation. The fifth column represents the time duration of an eye fixation. The start and stop time values can be used to determine the time frame in which the eye fixations occurred. The eye fixations during a time frame can be described as

M = {m_{M_UR, UR}, m_{M_{UR×2}, UR×2}, ..., m_{M_{UR×m×60}, UR×m×60}},   (3)

where m_{M_T, T} is the set of eye fixations that occurred during each time frame T.
Figure 7 shows an example of eye fixation durations that occurred over the time frames. The time frames are based on the object movement update rate (i.e., objects make discrete short bursts of movement), and an eye fixation duration can either fall within a single time frame or stretch over more than one time frame.
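As a sketch of this step, the snippet below bins eye fixation records (x, y, start, stop, and duration, as in Table 1) into the time frames defined above; a fixation whose duration stretches over several frames is listed under every frame it touches. The record layout and helper names are our assumptions, not the export format of any particular eye tracker.

import math

def frame_index(t_s, update_rate_s):
    """1-based index of the time frame containing time t_s."""
    return max(1, math.ceil(t_s / update_rate_s))

def fixations_by_frame(fixations, update_rate_s):
    """Map frame index -> list of fixation records overlapping that frame.

    Each fixation is a tuple (x_px, y_px, start_s, stop_s, duration_s).
    """
    by_frame = {}
    for fix in fixations:
        _, _, start, stop, _ = fix
        first = frame_index(start, update_rate_s)
        last = frame_index(stop, update_rate_s)
        for t in range(first, last + 1):
            by_frame.setdefault(t, []).append(fix)
    return by_frame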
Step 2 (develop different types of AOIs). Based on the preprocessed data from Step 1, different types of AOIs were developed. Two types of dynamic AOIs are considered: the convex AOI and the rectangular AOI. The rectangular AOI is an adaptation from [], and in this research the shape and size of the rectangular AOI change for each time frame. The convex AOI is developed by calculating the convex hull [19, 20] of the set of coordinate points used to represent each multielement object. The convex AOIs change their shapes and sizes for each time frame as well. Figure 8 shows the two different types of AOIs (convex and rectangular) for a multielement object. Thus, if an eye fixation occurs within a dynamic AOI, then we conclude that an eye fixation occurred on the multielement moving object.
Figure 7: Example of eye fixation durations: the lengths of the arrows represent the eye fixation durations (ef_i denotes the ith eye fixation; T is the time frame index, T = 1, 2, 3, ..., T_max).
To dene a parameter that governs the size of the buer,
wedenethebuerasthe“AOIgaptolerance(AGT).”Since
any given AOI corresponds to only one multielement object,
𝑁
𝑇
,𝑇
can be substituted by AOI
𝑁
𝑇
,𝑇
,thesetofAOIsduringa
time frame, as
AOI
𝑁
𝑇
,𝑇
=aoi
𝑛
UR
,UR
,aoi
𝑛
𝑈𝑅×2
,UR×2
,...,aoi
𝑛
UR×𝑚×60
,UR×𝑚×60
.
()
Step 3 (map eye fixation data with AOIs). The "AOI mapping (AM)" performs a match between the eye fixation set and the AOI set during the same time frame. AOI mapping identifies whether the eye fixations fell within the boundaries of the AOIs by comparing the coordinates. The AM can be expressed as

AM : m_{M_T, T} → aoi_{N_T, T}.   (5)

The functional mapping described in (5) is a many-to-many mapping. Many-to-many mapping refers to the fact that an eye fixation can be mapped to more than one AOI index and, similarly, an AOI can be mapped to more than one eye fixation during a time frame. For example, in a single time frame, two or more eye fixations (that have different pixel coordinates) can occur within a single AOI, or two or more AOIs can share a single eye fixation (when overlapping).
The resulting mapped AOI for an eye fixation during a time frame can be expressed as AM(m_{i,t}) = aoi_{n,t}. The collection of all mapped AOIs can be defined as a "mapped AOI set (MA)" and written as

MA_{I,T} = {ma_{i,t} | AM(m_{i,t}) = aoi_{n,t}, ma_{i,t} ≡ aoi_{n,t}},   (6)
Figure 8: Example of a multielement object (black and red) represented using dynamic AOIs (green solid and dotted lines surrounding the object): the shape created using the dotted green lines is a tightly fitted AOI, and the shape created using the solid green lines is a slightly enlarged AOI with a buffer equal to the AOI gap tolerance. The labeled elements are the data block, the aircraft location indicator (or target), and the vector line. (a) Convex AOI; (b) rectangular AOI.

Figure 9: Mapping eye fixations with different AOI types: the red "+" indicates the eye fixation location. (a) Rectangular AOI, singular mapping; (b) rectangular AOI, overlapped mapping; (c) convex AOI, singular mapping; (d) convex AOI, overlapped mapping. For (b) and (d), we determine that an eye fixation occurred in all three AOIs.
where MA_{I,T} is the set of mapped AOIs during a time frame and I is the eye fixation index set.

Figure 9 represents a mapping example where the rectangular and convex AOIs are shown in green. The red "+" symbol represents an eye fixation point that falls within an AOI boundary. There may be situations where an eye fixation falls inside the boundary of more than one AOI simultaneously. In other words, the eye fixation falls into a region that lies in the intersection of several AOI boundaries, thus giving rise to the concept of "overlapped AOI mappings." Thus, in this example, the mapped AOI set for this eye fixation will include three elements, which can be shown as MA_{I,T} = {ma_{1,t}, ma_{2,t}, ma_{3,t}}.
Another important concept, which will be useful in the analysis, is the cardinality of the MA set, where cardinality is the number of elements present in that set. This can be expressed as follows:

|ma_{i,t}| = k,   k = 0, 1, 2, 3, ..., n_t,   (7)

where |⋅| is the cardinality function and n_t is the number of multielement objects present at time frame t.

Thus, if k is the cardinality of the ma_{i,t} set, we can say that the corresponding eye fixation index has been mapped to k AOIs simultaneously. The larger the cardinality of the ma_{i,t} set, the greater the difficulty in analyzing those eye fixations. Therefore, an important consideration in the data analysis is the frequency distribution of the different cardinality values of the ma_{i,t} set.
Step 4 (visualize plotted eye fixation data on the simulated scenarios). After the mapping process, the eye fixation data is overlaid on the simulated display as a function of time using the update rate. This process requires plotting both the eye fixations and the AOI data pertaining to the same time frames and covering the time frames sequentially. Example cases are shown in Figure 12.
Step 5 (investigate the mapping effects for different AOI gap tolerance (AGT) values). The metrics of particular interest for this study are (1) the "percentage of the number of eye fixations falling inside AOIs (PNFIA)" and (2) the "percentage of the duration of the eye fixations falling inside AOIs (PDFIA)."

PNFIA is defined as

PNFIA = (Number of eye fixations falling inside AOIs) / (Total number of eye fixations),   (8)

where the number of eye fixations falling inside the AOIs (in (8)) is

Number of eye fixations falling inside AOIs = Σ_{t=1}^{max(T)} Σ_{i=1}^{m_t} I_{i,t},   (9)

where max(T) is the maximum value of the time frame count and m_t is the number of eye fixations during time frame t:

I_{i,t} = 1 if |ma_{i,t}| ≠ 0, and 0 otherwise,   (10)

where the cardinality function is expressed as |⋅| (e.g., |ma_{i,t}|). I_{i,t} is the indicator function that becomes 1 if the cardinality of the corresponding set ma_{i,t} is nonzero; in other words, this function takes the value of 1 if the associated eye fixation falls within at least one AOI boundary. Therefore, using (8) and (9) we get

PNFIA = (1/F) Σ_{t=1}^{max(T)} Σ_{i=1}^{m_t} I_{i,t},   (11)

where F is the total number of eye fixations.
PDFIA is defined as

PDFIA = (Time duration of eye fixations falling within AOIs) / (Total time duration of all eye fixations).   (12)

The total time duration of all eye fixations is calculated as

D = Σ_{t=1}^{max(T)} Σ_{i=1}^{m_t} d_{i,t},   (13)

where d_{i,t} is the time duration of eye fixation index i during time frame t and m_t is the number of eye fixations that occurred during time frame t.

For the purpose of calculating the time duration of eye fixations falling within AOIs, we need to consider only those eye fixation indexes for which the cardinality of the corresponding mapped AOI set is nonzero. Therefore, we can use the indicator function described in (10) to take into account only those eye fixation indexes that fall within at least one AOI boundary. Thus we get the following:

D' = Σ_{t=1}^{max(T)} Σ_{i=1}^{m_t} d_{i,t} × I_{i,t}.   (14)

Using (13) and (14), the percent time duration of eye fixations falling within AOIs is

PDFIA = D' / D.   (15)
The next metric of interest is the frequency distribution of the mapped AOI sets ma_{i,t} of various cardinalities. In other words, it is the frequency distribution of the possible values of "k," where k is as described in (7). This can be found by counting the number of occurrences of each possible value of "k." This frequency distribution is an important metric because it is a qualitative measure of the difficulty associated with the analysis of the eye fixation sequence.
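The metrics in (8)-(15) follow directly from the mapped sets. The sketch below (continuing the assumed data structures of the earlier snippets) computes PNFIA, PDFIA, and the frequency distribution of cardinalities over all frames.

from collections import Counter

def aoi_metrics(fix_by_frame, mapped_by_frame):
    """Return PNFIA, PDFIA, and the frequency of mapped-set cardinalities."""
    n_total = n_inside = 0
    dur_total = dur_inside = 0.0
    k_freq = Counter()
    for t, fixations_t in fix_by_frame.items():
        for fix, ma_it in zip(fixations_t, mapped_by_frame[t]):
            duration = fix[4]
            n_total += 1
            dur_total += duration
            if ma_it:                       # indicator I_{i,t} = 1
                n_inside += 1
                dur_inside += duration
                k_freq[len(ma_it)] += 1
    pnfia = n_inside / n_total if n_total else 0.0
    pdfia = dur_inside / dur_total if dur_total else 0.0
    return pnfia, pdfia, k_freq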
Step 6 (change AOI gap tolerance (AGT) values). Due to the visual angle error, the choice of the AGT value depends on the discretion of the analyst. In the absence of any established relationship between the AGT values and the relevant eye fixation metrics discussed above, the optimal range of the AGT value becomes very much context dependent. As a result, it becomes important to study this relationship for the present context. Thus, the next step involves varying the AGT value to investigate its impact on the relevant metrics of interest. The equation governing the change in AGT can be written as

AGT_{R+1} = AGT_R + δ,   (16)

where AGT_R is the AOI gap tolerance value for iteration R and δ represents the increment of the AGT value (e.g., δ = 5 pixels).

Table 2 shows the various values of the iteration counter R and the associated AGT values. Steps 2–5 need to be performed for each R value.
Step 7 (find the optimal AOI gap tolerance value). Assuming that a participant or a group of participants interrogates one object at a time, one method to find the optimal AGT value is to select the AGT value that provides the highest frequency of the mapped AOI sets of cardinality 1; in other words, we can identify the optimal AGT value for which the number of eye fixations on single AOIs is maximized.
Table 2: AGT values defined for each iteration (R): the iteration counter R runs from 1 to 20, with corresponding AGT values of 5, 10, 15, ..., 100 pixels (increments of δ = 5 pixels).
The equation to find the optimal AGT value (AGT_optimal) is as follows:

AGT_optimal = arg max_{AGT ∈ {5, 10, ..., 100}} freq(k) : k = 1,   (17)

where k is the cardinality of the mapped AOI set and freq(⋅) is the frequency of the mapped AOI sets with cardinality value k.

Note that we can also obtain an overall single near optimal AGT value recommended for an experiment if we use the aggregated eye tracking data obtained from multiple participants.
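Steps 6 and 7 amount to repeating Steps 2-5 over a grid of AGT values and keeping the value whose cardinality-1 frequency is largest, as in (17). A sketch under the same assumptions as the earlier snippets (the callable build_aois is a hypothetical hook that rebuilds the AOIs of one frame for a given AGT):

def optimal_agt(agt_values, build_aois, fix_by_frame, objects_by_frame):
    """Return the AGT value that maximizes the number of fixations on single AOIs."""
    best_agt, best_single = None, -1
    for agt in agt_values:                     # e.g., range(5, 105, 5) pixels
        mapped_by_frame = {
            t: map_frame(fixations_t, build_aois(objects_by_frame[t], agt))
            for t, fixations_t in fix_by_frame.items()
        }
        _, _, k_freq = aoi_metrics(fix_by_frame, mapped_by_frame)
        if k_freq[1] > best_single:            # frequency of cardinality k = 1
            best_agt, best_single = agt, k_freq[1]
    return best_agt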
Pseudocode 1 shows the simplified pseudocode based on the algorithmic flowchart shown in Figure 5.
3. Implementation
The developed approach was benchmarked with retired professional air traffic control specialists (ATCSs) who primarily work as instructors for the Federal Aviation Administration (FAA). The experiment was held at the FAA Civil Aerospace Medical Institute (CAMI), located in Oklahoma City, OK.
3.1. Participants. Ten certified ATCSs with years of experience participated in the experiment. In addition, three FAA employees participated as pseudopilots who maneuvered the aircraft based on the controllers' clearances. Eye tracking data were collected from the certified controllers. Due to unforeseen technical issues when using the eye tracking system and the air traffic control simulator, the data obtained from the first five participants were discarded, and only the data obtained from the subsequent five participants were used.
3.2. Apparatus. The experiment environment closely resembled the actual environment in the field (an Air Route Traffic Control Center) in order to obtain high face validity. The simulated air traffic scenarios were displayed using a large monitor whose size and resolution were equivalent to those of the actual display used in the field. An additional monitor was placed to the right of the simulation monitor to display the En Route Automation Modernization (ERAM) tool, a decision support tool that provides text data with respect to aircraft data, trajectory, and possible conflicts. A keyboard was placed beneath the simulation monitor for an ATCS to input commands.
The eye tracking data were collected only from the simulation monitor to test our designs and algorithms. The faceLAB eye tracker system [11] was used to collect the eye tracking data with a sampling rate of  Hz. The threshold for defining a fixation was set at  ms. The accuracy of the eye tracker was in the range of – degrees of visual angle error, and each participant's eyes were approximately – cm from the simulated display. Kongsberg-Gallium I-Sim software, internally outsourced and used by the FAA, was used for generating the three different air traffic scenarios. The refresh rate of the simulated radar display was 1 second. The obtained raw eye tracking data were exported through the EyeWorks software [21], and the data output was similar to that shown in Table 1.
The structure of the air traffic simulation file is provided in Table 3 (sample data). The output file contains the details of the aircraft movements, their coordinates, and other relevant details of the aircraft representation used for the simulation. The data update rate (UR) was 1 second. In Table 3, the first and second columns show the elapsed time from the start of the experiment and the actual time of day, respectively. The third column, named "aircraft code," shows the code name of the aircraft under consideration. The fourth column is the "target" column, which shows the horizontal (X pos) and vertical (Y pos) coordinates of the targets (aircraft) in pixels. The fifth column is the "data block" column, which has three subparts: (1) the top left corner coordinates of the data block, (2) the bottom right corner coordinates of the data block, and (3) the direction column, which represents the relative location of the data block with respect to the target position (N (north), NE (northeast), E (east), SE (southeast), S (south), SW (southwest), W (west), and NW (northwest)). The last column provides the position coordinates in pixels of the vector line's end point.
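For concreteness, the sketch below gathers the coordinates of one aircraft record (target position, data block corners, and vector line end point, following the Table 3 layout) into the point list consumed by the AOI construction sketched earlier. The field names describe a hypothetical parsed record, not the simulator's actual output format.

def object_points(record):
    """Pixel coordinates describing one aircraft's multielement object."""
    tx, ty = record["target"]                   # aircraft position symbol (target)
    (lx, ly), (rx, ry) = record["data_block"]   # top-left and bottom-right corners
    vx, vy = record["vector_end"]               # vector line end point
    # Target, vector line end point, and the four data block corners.
    return [(tx, ty), (vx, vy), (lx, ly), (rx, ly), (rx, ry), (lx, ry)]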
3.3. Task and Scenarios. The task was a high fidelity representation of air traffic control as performed in the U.S. National Airspace System's Air Route Traffic Control Centers. Controlling simulated traffic such as this requires an experienced ATCS to observe the radar screen and give
for =1till max()(loop to cover all iteration)
for =1till max()(loop to cover all time frames)
for =1till
𝑡
(loop to cover all multi-element objects for the current time frame )
Plot th plane elements for time frame
Plot th AOI boundary for time frame
end for
for =1till
𝑡
(loop to cover all eye xation
𝑖,𝑡
for the current time frame )
Plot th eye xation (
𝑖,𝑡
)fortimeframe
for =1till
𝑡
(loop to check whether the current eye xation falls within the AOI list of the current time frame )
nd whether current xation
𝑖,𝑡
falls inside AOI
𝑗,𝑡
store the result: store for inside, for outside AOI
store the time duration of the eye xation
end for
end for
end for
calculate percent number of eye xations within AOI
calculate percent time duration of eye xations within AOI
calculate the frequency distribution of mapped AOI sets of various cardinalities
end for
calculate the optimal AGT value
P : Pseudocode used for the overall process.
T : Air trac simulation sample output data.
Scenario time Time of the day Aircra code
Target
Data block
Vector line end point
Top le Bottom right
Direction
pos pos pos pos pos pos
pos pos
:: :: DAL       E
 
:: :: EGF       S
 
:: :: NLD       SE

:: :: DAL       E
 
:: :: EGF       W
 
:: :: NLD       SE

T : Characteristics of dierent simulation scenarios.
Scenario name
Average unique
aircra per frame
Min unique
aircra per frame
Max unique
aircra per frame
Std dev unique
aircra per frame
Moderate trac
 
Moderate trac + weather feature

Busy trac
 
clearances to aircraft, adjusting their altitudes, headings, or speeds so as to maintain aircraft-to-aircraft separation and route aircraft through the sector or to their destination airports within the sector. The ATCSs gave voice commands, via the communication system, to pseudopilots who were situated in a remote room. The pseudopilots followed the clearances and provided read-back to the ATCSs. Three scenarios were used (moderate traffic, moderate traffic with convective weather, and busy traffic). The duration of each scenario was 20 minutes. Table 4 and Figure 10 show the details of the scenarios. In Figure 10(b), the blue patch represents the weather feature.
3.4. Data Analysis. The analysis of the convex and rectangular AOIs was automated as follows. Based on the provided simulation output and the eye tracking output, both data sets were synchronized (step (1) in Figure 5). After the preprocessing steps, the two types of AOIs (convex and
Figure 10: Air traffic control scenarios: (a) moderate traffic scenario (Mod); (b) moderate traffic with weather feature scenario (Mod + W); (c) busy traffic scenario (Busy).
rectangular) were created using the aircraft coordinates at every second (step (2)). Then, mapping was performed using the eye tracking data and the simulation data (step (3)). The mapped data were visualized (step (4)), and the relevant metrics, including the PNFIA and PDFIA, were calculated by varying the AGT values (steps (5) and (6)). Finally, the optimal AGT value was obtained by identifying the highest percentage of eye fixations on single AOIs (step (7)).
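Putting the pieces together, a compact driver for steps (1)-(7) could look like the sketch below; it reuses the earlier snippets and is only an outline of the automated analysis under our assumed data structures, not the authors' code.

def analyze(fixations, objects_by_frame, update_rate_s, aoi_type="convex"):
    """Run the AGT sweep for one participant and scenario and report the metrics."""
    fix_by_frame = fixations_by_frame(fixations, update_rate_s)          # step (1)
    make_aoi = convex_aoi if aoi_type == "convex" else rectangular_aoi   # step (2)
    results = {}
    for agt in range(5, 105, 5):                                         # steps (3)-(6)
        mapped = {t: map_frame(fixes,
                               [make_aoi(object_points(o), agt) for o in objects_by_frame[t]])
                  for t, fixes in fix_by_frame.items()}
        results[agt] = aoi_metrics(fix_by_frame, mapped)                 # PNFIA, PDFIA, k_freq
    best_agt = max(results, key=lambda a: results[a][2][1])              # step (7): peak of k = 1
    return best_agt, results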
The complexity of the data processing time was O(K_1 K_2 K_3 K_4 K_5 K_6), where K_1 is the number of participants, K_2 is the number of scenarios, K_3 is the number of AOI types, K_4 is the number of AGT values, K_5 is the number of AOIs per time frame, and K_6 is the number of eye fixations per time frame. Each eye fixation was compared with each AOI in every time frame.
In the Results, the total eye fixation numbers and durations on the display (without using AOIs) are plotted in order to investigate the oculomotor trends. Then, the aggregated PNFIA and PDFIA values for all participants are plotted against the AGT values. Then, the number of eye fixations
Figure 11: Oculomotor trends of the total number and duration of eye fixations among the scenarios (Mod, Mod + W, Busy): (a) total number of eye fixations; (b) total duration of eye fixations (secs). The legend numbers 1–5 denote the participants.
that occurred on single and multiple overlapping AOIs is plotted against the AGT values. The optimal AGT value was computed, and examples of different scanpath sequences (resulting from either different AOI types or different AGT values) were identified.
4. Results
The oculomotor trends are shown in Figure 11. Figure 11(a) shows the total number of eye fixations and Figure 11(b) shows the total duration of eye fixations with respect to the scenario difficulties: moderate traffic (Mod), moderate traffic with weather feature (Mod + W), and busy traffic (Busy). The legend entries 1, 2, 3, 4, and 5 in Figure 11 represent the participant numbers.
Figure 12 displays example snapshots of the visualization process (see Step (4) in Figure 5) for both AOI types. The example snapshots show the dynamic AOIs for a fixed AGT value. In Figure 12, the AOIs are highlighted in green, and the order of eye fixations along with the associated saccades (connections between eye fixations when moving from one to the next) are highlighted in red. Note that the automated illustrations of the ordered eye fixations (shown as numbers) and the saccades linking the eye fixations are accumulated, meaning that the illustrations show all eye fixations from the scenario start time (time frame 1) until the indicated time frame.
Figure 13 depicts the effect of changing the AGT values on (1) the percentages of the numbers of eye fixations that fall within AOIs (PNFIA), shown in grey, and (2) the percentages of the durations of the eye fixations that fall within AOIs (PDFIA), shown in black. The plots show the
T : Mean and standard error for the optimal AGT values for
dierent AOI types.
Optimal AGT
AOI type
Convex AOI Rectangular AOI
Mean (pixels)  .
Standard error (pixels) . .
mean and standard error associated with every AGT value. In addition, the fitted polynomial equations and the R² values are provided.
Figure  depicts the change in the frequency of mapped
AOI sets, of various cardinalities, with respect to the change
in AGT values for convex and rectangular AOIs, respectively.
e plots show the mean and the standard error associated
with the coverage percent values. e maximum possible
observed cardinality of the mapped AOI set is . A general
trendamongthevariousplotsisthatthefrequencycount
of the ma
𝑖,𝑡
set having cardinality  (or in other words
=1(shown in red)) increased and then decreased. As the
AGT values increased, the number of overlapping AOIs also
increased, and the eye xations on a single AOI subsequently
decreased.
The near optimal (or recommended) AGT values (obtained by considering all participants and scenarios) are provided in Table 5. This AGT value captures approximately  % of the total eye fixations that fall within the AOIs. Note that the participants could freely observe other areas of the display that were not defined as AOIs.
Figuredepictsthechangeinthefrequencyofmapped
AOI sets, of various cardinalities, with respect to the change
Figure 12: Examples of visual representations of eye fixation data plotted onto the AOIs: the eye movements (eye fixation orders are numbered, and the saccadic movements are shown as red lines) were accumulated over time. (a), (b) Rectangular AOI at two time frames; (c), (d) convex AOI at two time frames.
in AGT values for the convex and rectangular AOIs, respectively. The plots show the mean and standard error associated with the frequency values for every AGT value. The maximum observed cardinality of the mapped AOI sets is 8. In many cases the frequency of cardinality values higher than five was zero; thus the curves for these cardinalities may not be individually visible on the plots, as they overlap one another. A general trend among the various plots is that the frequency count of the ma_{i,t} sets having cardinality 1 (in other words, k = 1, shown in red) increased and then decreased. As the AGT values increased, the number of overlapping AOIs also increased, and as a result, the eye fixations on single AOIs subsequently decreased.
Figure  shows examples of how dierent AGT values
can aect the resulting AOI-based scanpath sequences. More
relevant eye xations were captured when using the optimal
AGT value of  (obtained from our experiment) than
the AGT value of . As shown in Figure , the identi-
ed scanpath sequence “FFCC(A,B)E” (Figure (b)) shows
much more pertinent mappings compared to the scanpath
sequence “CCA” (Figure (a)). Again, note that the scanpath
sequences can be further collapsed into “FC(A,B)E” and
“CA,” respectively.
5. Discussion
An approach was developed that automatically (1) created rectangular and convex AOIs around multielement objects, (2) mapped eye fixations to the different types of AOIs, (3) systematically evaluated the mapping characteristics by increasing the size of the AOIs to account for the fidelity of the eye trackers, and (4) investigated how the increase of the AOI sizes affects the overlapping of multiple AOIs. This approach was applied to the collection of visual scanning data from a
Figure 13: Plots of the coverage percentages of the numbers and durations of the eye fixations that occurred within the AOIs versus the AGT values: the figures in the left column are the results for the convex AOI type, and the figures in the right column are the results for the rectangular AOI type. Each panel plots "% within AOI" against AGT (pixels) for the % numbers (Y2) and % durations (Y1), with fitted equations (X: AGT): (a) moderate traffic, convex: Y1 = (4.7 × 10^-5)X^3 - 0.013X^2 + 1.5X + 30.17 (R^2 = 0.88), Y2 = (5.8 × 10^-5)X^3 - 0.015X^2 + 1.5X + 28 (R^2 = 0.89); (b) moderate traffic, rectangular: Y1 = -0.0048X^2 + 0.95X + 43 (R^2 = 0.85), Y2 = -0.0045X^2 + 0.93X + 43 (R^2 = 0.92); (c) moderate traffic with weather feature, convex: Y1 = (2.02 × 10^-5)X^3 - 0.009X^2 + 1.3X + 25 (R^2 = 0.60), Y2 = (1.3 × 10^-6)X^3 - 0.005X^2 + 1.18X + 25 (R^2 = 0.67); (d) moderate traffic with weather feature, rectangular: Y1 = -0.0048X^2 + 1.4X + 35 (R^2 = 0.56), Y2 = -0.0048X^2 + 1.047X + 33 (R^2 = 0.65); (e) busy traffic, convex: Y1 = -0.0057X^2 + 1.2X + 30 (R^2 = 0.84), Y2 = -0.0055X^2 + 1.15X + 30 (R^2 = 0.83); (f) busy traffic, rectangular: Y1 = -0.0048X^2 + 1.003X + 40 (R^2 = 0.79), Y2 = -0.0056X^2 + 0.98X + 38 (R^2 = 0.82).
Figure 14: Distribution of the number of eye fixations on single or overlapped AOIs based on the AGT values: each panel plots the frequency of the mapped AOI sets of various cardinalities (number of AOIs, 1 to 8) against AGT (pixels). The top red line shows the change of the number of eye fixations for a single AOI; the subsequent lines show the change of the number of eye fixations on overlapping AOIs (cardinality increasing from 2 to 8). (a) Moderate traffic, convex AOI; (b) moderate traffic, rectangular AOI; (c) moderate traffic with weather feature, convex AOI; (d) moderate traffic with weather feature, rectangular AOI; (e) busy traffic, convex AOI; (f) busy traffic, rectangular AOI.
Figure 15: Examples illustrating how the AOI types can affect the AOI-based scanpath sequences: the red "+" shows the locations of the eye fixations (1-7 over AOIs A-F), and the numbers are the corresponding eye fixation orders. (a) AOI-based scanpath sequence "FFCCAE"; (b) AOI-based scanpath sequence "FFCC(A,B)E." For (a), the eye fixation in question falls inside only one AOI, whereas for (b) it falls inside both AOIs "A" and "B."
Figure 16: Examples illustrating how the AGT values can affect the AOI-based scanpath sequences: the red "+" shows the locations of the eye fixations (1-7 over AOIs A-F), and the numbers are the corresponding eye fixation orders. (a) Smaller AGT value, AOI-based scanpath sequence "CCA"; (b) larger AGT value, AOI-based scanpath sequence "FFCC(A,B)E." For (a), four of the eye fixations fall outside the AOIs, whereas for (b) only one eye fixation falls outside the AOIs.
high delity simulation of an air trac control task. e task
required ATCSs to interrogate multielement moving objects
(that can change their overall shapes) on a radar display. e
approach was applied to eye tracking data collected from the
ATCSs as they performed the conict detection and control
task through interrogating multiple moving aircra within
their sector.
e oculomotor statistics on dierent types of scenarios
show that the overall eye xation numbers and durations on
the display (without considering AOIs) did not signicantly
dier among the scenarios. e results dier from previous
aircra conict detection research [, ]. In [], eye
xation numbers and durations increased as the diculty
level increased (easy: many aircra had dierent altitudes;
moderate: many aircra had similar altitudes; dicult: many
aircra changed altitudes), while setting the number of
aircraonthedisplayattwelveforallscenarios.In[],eye
xation numbers and durations increased as the number of
aircra on the display was increased from twelve to twenty. A
major dierence in the scenario settings was that there was
no time limit on detecting possible collisions for [, ],
whereas the experiment in this research had a time limit of
twenty minutes.
Regarding the ATCSs' cognitive processes, one reason that similar oculomotor trends could be found is that the ATCSs were constantly vigilant in interrogating and controlling the aircraft throughout the experiment. In addition, the reason for a marginal decreasing trend in eye fixations and durations may be an order effect of the scenarios being performed in the sequence of moderate traffic, moderate traffic with convective weather, and busy traffic. The participants could have become more comfortable with the situation as they continued to control the multiple aircraft. Another possibility is that the ATCSs may have spent more time looking at the ERAM display as well as the keyboard. Unfortunately, the exported eye tracking data only provide pixel-based eye fixations that occurred within the defined display; therefore, it is difficult to know where the eye fixations occurred outside the display.
The convex and rectangular AOI types did not generally affect the amount of mapped eye fixations among the participants and the scenarios due to the relatively small size of
the objects as well as the accuracy of the eye tracking system in a high face validity experiment. However, we were able to identify specific examples of different AOI types affecting the resulting scanpath sequences (Figure 15). The analysis of human performance using the scanpath sequences may have substantially differed for the same experiment if the analysts had applied different AOI types. The effect might have been significant overall if the size of the multielement objects had been bigger, due to the increased unnecessary area (Figure 8) created by the rectangular AOI type. The unnecessary areas would also result in creating more overlapping AOI areas.
The AGT values substantially impacted the amount of covered eye fixations and durations for both AOI types, and the trends fitted well to polynomial equations. Up to a certain point, the increase of the AGT value was able to accommodate many of the eye fixations that occurred around the objects; then the increase rate (of the amount of included eye fixations) began to drop, since fewer eye fixations occurred farther away from the objects. The eye fixation numbers and durations were highly correlated in our experiment. Note that the AGT values also affected the resulting scanpath sequences (Figure 16). The use of too tightly fitted AOIs resulted in missing many eye fixations that occurred around the objects. Note that if we used AOIs that were too large, then the cardinality of the mapped AOI sets would increase, leading to either inaccurate mapping or an increase in the complexity of the scanpath sequences by having more overlapping AOIs. Thus, the selection of the AGT value gives rise to a trade-off between the coverage (amount of eye fixations) and the complexity (overlapping AOIs) of the analysis, because the more we increase the coverage, the more we increase the complexity. As the AGT value increases, the coverage of the overlapping AOIs increases accordingly, but the coverage of the single AOIs starts to decrease (Figure 14). The reason is that overlapping AOIs begin to take away the eye fixations that occurred within single AOIs. Therefore, we were able to determine the near optimal AGT value by identifying the coverage peak of single AOIs. Having an adequate AOI size that maps an eye fixation to a single AOI is preferable to having larger AOIs that would create unnecessary overlapping areas. In other words, the more we increase the coverage, the more we increase the complexity for multielement moving objects that can overlap.
6. Limitation and Future Research
Although the different AOI types did not show significant differences when aggregated results were compared, we were able to identify specific cases where differences were indeed present. A follow-up experiment is needed that varies the size of the actual objects in order to identify a threshold at which substantial mapping differences appear when using complex convex approximations versus simple shape approximations. In addition, although the benchmarking of the developed methods showed that trade-offs exist when considering the design of AOIs based on visual angle errors and overlapping objects, more follow-up experiments are needed to refine and better support our methods.
In addition, the near optimal AGT values were obtained from data aggregated across the whole experiment and across the participants. The limitation of this approach is that we apply a constant AGT value for the whole duration. The optimal AGT value might not be constant over all time frames, and further detailed analysis might help to segregate time segments from the whole experimental duration (i.e., identify the amount of variation across different segregated time segments). Note that we would not be able to obtain a trend from which to identify the optimal value if the time length were too short (e.g., for a 1-second time frame, we would only obtain one or two eye fixations). To investigate how the optimal value would vary, we would first need to define the time segments that should be applied.
Another limitation is that we assumed that the multielement objects make discretized movements and that the scene (background) is fixed. If the background is moving or the objects make rapid movements (e.g., from one end of the screen to the other in a very short time), then our approach would not work. These issues are difficult to solve and should be addressed in our subsequent research.
The overarching goal of our research is to obtain more accurate mappings between the eye movements and the moving objects in order to better support the analysis of human performance. This research concentrated on prototyping, implementing, and evaluating new conceptual designs and algorithms to obtain more accurate mappings. Based on the results obtained in this research, we are currently analyzing human performance based on the obtained AOI-based scanpath sequences through Directed Weighted Networks [, ].
Furthermore, the results can be a basis for developing better scanpath analysis methods that build upon existing methods [, –], mimic human performance [], and develop data visualization methods for active learning using experts' visual scanning patterns []. In addition, the visual scanning data could be combined with EEG analysis [34] to better understand how different types of tasks or incidents affect brain response and visual scanning and how the brain response data are correlated with the visual scanning data.
7. Conclusion
To address the issue of mapping eye fixations to multielement objects (that move, can change their shape, and overlap over time), we proposed and implemented dynamic AOIs that represent the multielement objects. In the process, we showed a way to map eye fixations to overlapping AOIs. In addition, the concept of the AGT was applied in order to address the issue of the fidelity of the eye trackers. Our approach was automated and applied to data collected from a high fidelity simulation of an air traffic control task. The benchmark showed that eye tracking data analyses can substantially differ based on how the AOIs are defined and how near optimal values can be obtained to better define the AOIs.
Competing Interests
There are no competing interests to declare.
Acknowledgments
This research was funded under a cooperative agreement with the FAA NextGen Organization's Human Factors Division, ANG-C (Award no. -G-), and was conducted in collaboration with researchers at the Civil Aerospace Medical Institute's Aerospace Human Factors Division. The authors deeply appreciate the support from Dr. Carol Manning.
References
[1] M. A. Just and P. A. Carpenter, "Eye fixations and cognitive processes," Cognitive Psychology.
[2] R. Pieters, E. Rosbergen, and M. Wedel, "Visual attention to repeated print advertising: a test of scanpath theory," Journal of Marketing Research.
[3] C. Holland and O. V. Komogortsev, "Biometric identification via eye movement scanpaths in reading," in Proceedings of the International Joint Conference on Biometrics (IJCB '11), Washington, DC, USA, October 2011.
[4] P. Konstantopoulos, P. Chapman, and D. Crundall, "Driver's visual attention as a function of driving experience and visibility. Using a driving simulator to explore drivers' eye movements in day, night and rain driving," Accident Analysis and Prevention.
[5] G. Underwood, P. Chapman, N. Brocklehurst, J. Underwood, and D. Crundall, "Visual attention while driving: sequences of eye fixations made by experienced and novice drivers," Ergonomics.
[6] P. Kasarskis, J. Stehwien, J. Hickox, A. Aretz, and C. Wickens, "Comparison of expert and novice scan behaviors during VFR flight," in Proceedings of the 11th International Symposium on Aviation Psychology, Columbus, Ohio, USA.
[7] A. P. Tvaryanas, "Visual scan patterns during simulated control of an uninhabited aerial vehicle (UAV)," Aviation, Space, and Environmental Medicine.
[8] K. Holmqvist, M. Nyström, and R. Andersson, Eye Tracking, OUP Oxford, Oxford, UK.
[9] Z. Kang and E. J. Bass, "Supporting the eye tracking analysis of multiple moving targets: design concept and algorithm," in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC '14), IEEE, San Diego, Calif, USA, October 2014.
[10] Tobii Pro X-, Tobiipro.com, http://www.tobiipro.com/product-listing/tobii-pro-x-/.
[11] "faceLAB—Seeing Machines," Ekstremmakina.com, http://www.ekstremmakina.com/EKSTREM/product/facelab/index.html.
[12] "SensoMotoric Instruments GmbH, Gaze and Eye Tracking Systems, Products, RED/RED," Smivision.com, http://www.smivision.com/en/gaze-and-eye-tracking-systems/products/red-red-.html.
[13] M. Burke, A. Hornof, E. Nilsen, and N. Gorman, "High-cost banner blindness: ads increase perceived workload, hinder visual search, and are forgotten," ACM Transactions on Computer-Human Interaction.
[14] S. Mandal and Z. Kang, "Eye tracking analysis using different types of areas of interest for multi-element moving objects: results and implications of a pilot study in air traffic control," in Proceedings of the Human Factors and Ergonomics Society 59th Annual Meeting, Los Angeles, Calif, USA.
[15] D. Crawford, D. Burdette, and W. Capron, "Techniques used for the analysis of oculometer eye-scanning data obtained from an air traffic control display," Tech. Rep., NASA.
[16] E. Stein, Air Traffic Controller Scanning and Eye Movements in Search of Information—A Literature Review, Federal Aviation Administration Technical Center, Atlantic City, NJ, USA.
[17] B. Willems, R. Allen, and E. Stein, Air Traffic Control Specialist Visual Scanning II: Task Load, Visual Noise, and Intrusions into Controlled Airspace, Federal Aviation Administration Technical Center, Atlantic City, NJ, USA.
[18] P.-V. Paubel, P. Averty, and E. Raufaste, "Effects of an automated conflict solver on the visual activity of air traffic controllers," International Journal of Aviation Psychology.
[19] C. B. Barber, D. P. Dobkin, and H. Huhdanpaa, "The quickhull algorithm for convex hulls," ACM Transactions on Mathematical Software.
[20] MathWorks, "Mapping toolbox," http://www.mathworks.com/products/mapping/.
[21] Eyetracking.com, "Powerful eye tracking software developed for researchers," http://www.eyetracking.com/Software/EyeWorks/.
[22] Z. Kang and S. J. Landry, "An eye movement analysis algorithm for a multielement target tracking task: maximum transition-based agglomerative hierarchical clustering," IEEE Transactions on Human-Machine Systems.
[23] S. N. McClung and Z. Kang, "Characterization of visual scanning patterns in air traffic control," Computational Intelligence and Neuroscience.
[24] M. Tory, M. S. Atkins, A. E. Kirkpatrick, M. Nicolaou, and G.-Z. Yang, "Eyegaze analysis of displays with combined 2D and 3D views," in Proceedings of the IEEE Visualization Conference (VIS '05), Minneapolis, Minn, USA, October 2005.
[25] S. Mandal, Z. Kang, J. Crutchfield, and A. Millan, "Data visualization of complex eye movements using directed weighted networks: a case study on a multi-element target tracking task," in Proceedings of the 60th Annual Meeting of the Human Factors and Ergonomics Society, Washington, DC, USA.
[26] J. Ayres, J. Flannick, J. Gehrke, and T. Yiu, "Sequential pattern mining using a bitmap representation," in Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM.
[27] A. Çöltekin, S. I. Fabrikant, and M. Lacayo, "Exploring the efficiency of users' visual analytics strategies based on sequence analysis of eye movement recordings," International Journal of Geographical Information Science.
[28] F. Cristino, S. Mathôt, J. Theeuwes, and I. D. Gilchrist, "ScanMatch: a novel method for comparing fixation sequences," Behavior Research Methods.
[29] R. Dewhurst, M. Nyström, H. Jarodzka, T. Foulsham, R. Johansson, and K. Holmqvist, "It depends on how you look at it: scanpath comparison in multiple dimensions with MultiMatch, a vector-based approach," Behavior Research Methods.
[30] J. Goldberg and J. Helfman, "Scanpath clustering and aggregation," in Proceedings of the ACM Symposium on Eye-Tracking Research and Applications (ETRA '10), Austin, Tex, USA, March 2010.
[31] S. Mathôt, F. Cristino, I. D. Gilchrist, and J. Theeuwes, "A simple way to estimate similarity between pairs of eye movement sequences," Journal of Eye Movement Research.
[32] Z. Kang and S. J. Landry, "Top-down approach for a linguistic fuzzy logic model," Cybernetics and Systems.
[33] Z. Kang and S. J. Landry, "Using scanpaths as a learning method for a conflict detection task of multiple target tracking," Human Factors: The Journal of the Human Factors and Ergonomics Society.
[34] A. N. Belkacem, S. Saetia, K. Zintus-Art et al., "Real-time control of a video game using eye movements and two temporal EEG sensors," Computational Intelligence and Neuroscience.