Research Article
Does Augmented Reality Effectively Foster Visual Learning Process in Construction? An Eye-Tracking Study in Steel Installation
Ting-Kwei Wang,1 Jing Huang,1 Pin-Chao Liao,2 and Yanmei Piao1
1 School of Construction Management and Real Estate, Chongqing University, Chongqing 400045, China
2 Department of Construction Management, Tsinghua University, Beijing 100084, China
Correspondence should be addressed to Pin-Chao Liao; [email protected]
Received 18 April 2018; Accepted 24 June 2018; Published 15 July 2018
Academic Editor: Yingbin Feng
Copyright © 2018 Ting-Kwei Wang et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Augmented reality (AR) has been proposed to be an efficient tool for learning in construction. However, few researchers have
quantitatively assessed the efficiency of AR from the cognitive perspective in the context of construction education. Based on the
cognitive theory of multimedia learning (CTML), we evaluated the predesigned AR-based learning tool using eye-tracking data. In
this study, we tracked, compared, and summarized learners’ visual behaviors in text-graph- (TG-) based, AR-based, and physical
model- (PM-) based learning environments. Compared to the TG-based material, we find that both AR-based and PM-based
materials reduce extraneous processing and thus further promote generative processing, resulting in better learning performance.
The results also show that there are no significant differences between the AR-based and PM-based learning environments,
suggesting that the advantages of AR were not fully exploited. This study lays a foundation for problem-based learning, which is worthy of further investigation.
1. Introduction
Currently, with information technology playing an in-
creasingly important role in various fields, people also pay
increasing attention to the potential of information technol-
ogy in education [1]. The construction industry is a complex
environment, and engineers need to deal with integrated in-
formation. Construction education has long been challenged:
traditional teaching or training is not effective enough to
bridge the gap between academia and practice [2]. However,
information technology enables new education strategies to be
used to assist learning, one of which has gained much attention
in recent years: the application of augmented reality (AR) [3].
AR is a technology that can enhance and augment reality by
generating virtual objects in real environments [4]. Such
coexistence of virtual and real objects helps learners visualize
complex spatial relationships and abstract concepts [5].
The application of AR technology in education has been
developing for more than 20 years, and AR has been applied to
many fields like astronomy, chemistry, biology, mathematics,
and geometry [6]. When the effectiveness of an AR learning environment is discussed, it is usually compared with a text-graph- (TG-) based learning tool. In the construction industry, apprenticeship programs are common site training methods in which risk is unavoidable [7]; by contrast, AR is a significant educational measure with no health or safety risks [8]. Many researchers have proposed frameworks based on AR
to bring remote job sites indoors [9], transform learning
processes [10], or enhance the comprehension of complex
dynamic and spatial-temporal constraints [11]. The use of AR technology can be an efficient way to assist learning, but there is still little quantitative evidence about the effects of AR [3]. Many researchers have evaluated the effects of AR on learning outcomes while ignoring its potential causes during the learning processes.
Eye tracking is a measurement of eye movement, which
can reveal aspects of learners’ learning processes [12]. Be-
cause of the use of eye-tracking software for recording and
producing data, studies on learners’ cognitive processes have
entered a new phase [13].
TG-based and physical model- (PM-) based materials are common
tools for construction learning and training. The authors of
the present study conducted an experiment of construction
class learning to (1) evaluate learning outcomes while
comparing TG-based, AR-based, and PM-based environ-
ments and (2) investigate the underlying causes of the effects
of the learning method from a cognitive perspective and the
potential effects of AR by utilizing eye-movement data.
2. Literature Review
2.1. Does AR Facilitate or Inhibit Learning Efficiency?
Multimedia learning theory suggests that appealing design features can help increase cognitive engagement and retain learner attention when a learning tool is first used [14]. Further investigation has shown that the visual detail in a multimedia resource affects the effectiveness of learning and of instructional multimedia design [15]. According to Mayer [16], cognitive load theory is the basis for instructional design principles [17], and the cognitive theory of multimedia learning (CTML) distinguishes between three kinds of processing demands that arise during learning: (1) extraneous processing, which is caused by the manner in which the material is presented and increases the chance that attention will be split among various pieces of information; poor instruction may increase this processing and thus inhibit transfer of learning; (2) essential processing, which is required to attend to the presented material and is caused by the complexity of the material; and (3) generative processing, which is required to comprehend the material and is caused by the learner's efforts in the learning process, such as selecting, organizing, and integrating. As asserted in
previous studies, both extraneous and germane cognitive
load can be manipulated and intrinsic cognitive loads cannot
[17]. However, according to Mayer, extraneous, generative, and essential processing can all be managed [18]. Furthermore, unnecessary load stemming from the design of instruction imposes extraneous cognitive load [19].
Ineffectively searching for information may increase ex-
traneous cognitive load and disturb essential processing.
Therefore, the reasonable reduction of redundant information is an important way to reduce cognitive load and further enhance learning. The measures include reducing extraneous processing, such as by highlighting crucial materials with colors; managing essential processing, such as by decomposing learning materials into several parts; and fostering generative processing [16].
AR is a useful technology with which to improve
learning, as explained by the CTML [20]. It allows visual
information to be registered to the real world [21]. The visual
information, as instructional materials in this paper, can be
designed following the CTML. Although the materials can
be designed and displayed using 3D model design software,
AR technology differs in that it provides immersive envi-
ronments and has been developed as an immersive language
learning framework that was motivated by the CTML [22].
Many scholars contend that different learning tools lead to
different learning outcomes, as shown in Table 1. Few researchers have paid attention to the design of AR models, the instructional material in this case.
A confounding question arises: Does AR facilitate or inhibit
learning efficiency by highlighting partial but critical
information?
2.2. Manipulation of Extraneous Information with Various
Learning Materials. AR has been proven to be a more efficient way of learning in various studies, as shown in Table 1. Nonetheless, the evaluations against conventional learning environments were basically limited to learning outcomes and to questionnaires examining students' subjective motivation and satisfaction [23, 24]. Because the major function of AR rests in highlighting critical information and labeling extra information as a reference for learning purposes, AR can be perceived as a measure that manipulates extraneous information processing, potentially enhancing the generative process of learning. From this perspective, previous researchers did not answer why and how AR fosters learning in construction. In the educational
domain, AR appears to be a smart technology with which to
create attractive and motivating content and improves the use of the time spent on learning [25]. Moreover, an experiment revealed higher learning achievement and lower cognitive load when a mobile AR application was utilized [26]. For construction education, applying AR can create a realistic learning environment without health and safety risks and enhance students' comprehensive understanding of construction equipment and operational safety [8, 10, 27]. As shown in the "control group" column of Table 1, the advantages of AR noted above were generally concluded from comparisons with conventional learning formats, especially TG-based ones. However, these comparisons ignore the contrast with real PM-based learning materials. Besides, some of the TG-based learning materials in the experiments of Table 1 were colored, which itself manipulates extraneous information; in this paper, the TG-based model is designed according to the Chinese Drawing Collection for National Building Standard Design, which is not highlighted with color. The PM-based learning material is modeled accordingly.
2.3. Eye Tracking for Cognitive Processing Measures.
Although AR design features are proposed to lead to better learning outcomes, there is little substantive evidence that shows how this occurs during cognitive processing. Fortunately, the AR material is designed based on the CTML, and many researchers have studied how to measure the associated cognitive activity. Eye tracking, combined with measures of learning performance, provides information about the focus of cognitive activity [31]. Consequently, to identify how learners behave in AR-based and other conventional learning environments, an eye-tracking device is an effective way to obtain cognitive processing measures.
Eye-tracking techniques can be utilized to record eye movements, which can show how people behave while they are engaged in cognitive processing, through measures such as fixation count, total fixation time, and average fixation duration [32, 33]. However, the use and interpretation of eye-tracking measures differ and depend on the research questions.
A summary of relevant studies in which eye tracking was
used to conduct eye-movement measures in multimedia
learning and cognition is listed in Table 2. Fixation duration
and fixation count are the most prevalently used eye-
tracking measures [34]. Generally, for the learning pro-
cess, both longer fixation duration and lower fixation rates
indicate higher cognitive load, and more fixation counts
mean less efficient information processing. Moreover, a long average fixation duration means that deeper information processing is driven by the complexity of the background information [32, 35, 36]. Besides, the attentional guidance hypothesis proposes that participants pay more attention to salient elements than to other elements, which leads to longer fixation times [37].
Table 1: Overview of experimental studies on AR for teaching and learning.

Reference | Domain | Setting | Participants | AR treatment | Control group treatment | Evaluation content
[24] | Biology | Classroom | 72 fifth-grade children | AR graphic book | A picture book or physical interactions | Error; retention; satisfaction
[26] | Anatomy | Classroom | 171 students: 78 with a medicine degree, 48 with a physiotherapy degree, and 45 with a podiatry degree | AR software | Notes; videos | Acquisition of anatomy contents
[28] | Chinese writing | Classroom and field | 30 12th-grade students | AR-based writing support system | Text-graph writing support materials | Writing performance (subject, content control, article structure, and wording)
[21] | Mathematics | Field experiment | 101 participants: 40 from primary school, 34 from secondary school, and 27 from university | AR mobile application | Physical information | Knowledge retention
[29] | Physics | Classroom | 64 high-school students | AR-learning application | An educational website | Knowledge acquisition; flow experience
[30] | Architecture | Classroom | 57 university students | AR mobile application | Text-graph materials | Academic performance
[23] | Natural science | Field experiment | 57 4th-grade students | AR-based mobile learning approach | Inquiry-based mobile learning approach | Learning achievement and motivation
Table 2: Overview of multimedia learning and cognition studies with eye tracking.

Reference | Materials | Eye tracker | Eye-movement metrics
[38] | Construction scenario images | EyeLink II | Fixation count; run count; dwell-time percentage
[39] | Construction site images | EyeLink II | First fixation time; dwell percentage; run count
[40] | Virtual building construction site | ViewPoint EyeTracker GIG160 | Fixation count; scan path
[41] | Construction site | Tobii Pro Glasses 2 | Visit count; fixation count; total dwell time; time to the first fixation
[42] | Static (text and picture) and dynamic (text and video) recipe | FaceLab 4.6 | Total fixation count; total fixation time; interscanning count
[43] | Web-based multimedia package | SMI iView X 2.4 | Total fixation count; gaze sequence; dwell time in AOI (area of interest)
[44] | Images and texts with and without coloring | Tobii X60 | Time to the first fixation; fixation numbers to the first fixation; total fixation count; fixation count percent
[45] | A digital learning environment with and without visual cues | Tobii T60 | Total fixation time
[46] | Webpage | SMI iView X | Fixation duration; fixation count
[47] | Webpage | FaceLab 4 | Total fixation count; fixation duration; average fixation duration; scan path
[48] | Text and picture | ASL 504 | Total fixation time; transition count
[37] | Color-coded and conventional format of multimedia instruction | Tobii 1750 | Average fixation duration; total fixation time; first fixation time
In summary, three eye-movement measures, total fixation time, fixation count, and average fixation duration, are utilized in this study to demonstrate how learners behaved during the entire formal experimental process, for the following reasons: (1) the higher the fixation count and fixation time, the higher the cognitive load in extraneous processing and the greater the distribution of attention in essential processing; (2) the longer the average fixation duration, the deeper the comprehension of the learning material, the more complex the information generated from various information sources, and the greater the focus on essential processing. The relationship between the eye-movement metrics and the CTML cognitive processing is shown in Figure 1.
3. Research Questions and Methodology
The literature review shows that many related studies explain the effects of AR by comparing AR-based and TG-based environments (Table 1). These studies demonstrate the effectiveness of AR. However, they do not reveal the gap with PM-based education, which is also a common teaching method in construction education. The differences in effectiveness between AR and PM need to be examined to leverage the application of AR. Therefore, it is necessary to compare AR-based with TG-based and PM-based environments to provide convincing evidence with which to explore the effects of AR. In addition, although it has been proven that AR has a positive effect on learning outcomes, there is a lack of research on the exploration and evaluation of AR in the cognitive process. Consequently, the researchers aim to test the following hypotheses:
(1) Compared to TG- and PM-based materials, AR-
based materials promote learning outcomes.
(2) Compared to the use of TG-based materials and PM-
based materials, the use of AR-based materials that
are designed using the CTML can lower learners’
cognitive loads and foster deep information pro-
cessing, which means that AR-based groups will have
lower fixation counts and fixation times but higher
levels of average fixation duration than TG- and PM-
based groups.
To test these hypotheses, an experiment that involved learning and testing was developed. Three groups of participants were exposed to three different learning environments: TG-based, AR-based, and PM-based. Each participant was separately given the same questions, which were answered by referring to the learning material provided in the TG-based, AR-based, or PM-based learning environment.
Figure 2 shows the experimental flow. Before the test,
learning content and corresponding test questions were
prepared. We randomly divided participants into the three
groups (AR, TG, and PM). In the cognitive testing process,
we recorded the participants’ answers and answer times as
their learning outcomes to comparatively analyze the three
groups. During the whole testing process, participants’ eye
movements were recorded using an eye tracker (SMI iView X HED at 50 Hz). The fixation time and fixation count data were obtained using BeGaze (iView software). We defined one area of interest (AOI) for each question, and the total fixation time, fixation count, and average fixation duration for each AOI were recorded and calculated.
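For readers who wish to reproduce this kind of aggregation, the sketch below shows how the three AOI measures can be computed from an exported fixation list. It is a minimal illustration only: the file name and column names (participant, group, question, aoi_hit, duration_ms) are hypothetical placeholders and not the actual BeGaze export schema.

```python
# Minimal sketch: aggregate per-fixation records into the three AOI measures
# used in this study. File and column names are hypothetical placeholders.
import pandas as pd

fixations = pd.read_csv("fixation_export.csv")   # one row per fixation
in_aoi = fixations[fixations["aoi_hit"]]         # keep only fixations inside the question's AOI

metrics = (
    in_aoi.groupby(["participant", "group", "question"])["duration_ms"]
    .agg(total_fixation_time="sum", fixation_count="count")
    .reset_index()
)
# Average fixation duration is the ratio of total fixation time to fixation count.
metrics["average_fixation_duration"] = metrics["total_fixation_time"] / metrics["fixation_count"]
print(metrics.head())
```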
3.1. Participants. A total of 40 senior undergraduate stu-
dents majoring in construction management at Chongqing
University were invited to participate. Because the sample sizes of eye-tracking-related studies range from fewer than ten participants for qualitative studies to about 30 for quantitative studies [49], a total of 40 participants is robust enough for a quantitative eye-tracking study.
Chongqing University is one of the top 10 research
universities in the field of construction management in
China. In this study, we used two approaches to invite par-
ticipants: (1) students of one class were assigned to par-
ticipate in the study as their final project; (2) an invitation
flyer was posted in the laboratory of Chongqing University
to invite volunteers to participate in the experiment. Finally,
we selected 23 students from the class and 17 volunteers who
were attracted by the flyer. To minimize the differences between individuals, we chose participants with the same major (construction management), the same grade (fourth year), and similar ages (21 to 23 years). There were 22 males and 18 females among the participants, and they all took the same courses in college. The students had been trained with 32 credit hours of reinforcement arrangement courses in the third year of college, but they all lacked practical experience in construction, meaning that they had not received any on-site training or had any injury experience in construction. Based on their academic and practical backgrounds, we assumed that these students had similar intrinsic learning abilities. The vision of all participants was either normal or
corrected-to-normal.
3.2. Learning and Test Materials. Learning materials were
about the detailing of longitudinal bars at the tops of
antiseismic corner columns from one Chinese Drawing
Collection for National Building Standard Design, 11G101-1
(drawing rules and standard detailing drawings of an ich-
nographic representing method for construction drawings
of RC structure). According to our previous research and interviews with experts in engineering and construction at Chongqing University, this is quite an important and basic piece of professional knowledge for construction workers. Meanwhile, it is difficult to understand for students who do not have any practical experience.
Figure 1: Relationship between the eye-movement metrics and the CTML cognitive processing (fixation time and fixation count relate to extraneous and essential processing; fixation time, fixation count, and average fixation duration relate to generative processing).
Therefore, we
designed three forms of instructional materials based on this
content with the guidance of a teacher in the field of con-
struction techniques.
For the TG-based learning environment, the learning
material was abstracted from 11G101-1 (Figure 3) and
shown on a computer screen for learners.
Figure 4 shows the design of the AR model. The key steel bars are highlighted and distinguished by their binding methods using various colors, and the others are rendered in gray to reduce their salience. Thus, according to multimedia theory, this design could attract attention and help learners reduce extraneous processing. Besides, the key information can be easily selected to manage essential processing, and learners should gain a better comprehensive understanding of the learning content through more effective generative processing. Following the CTML, it can be supposed that AR-based learning environments may be more attractive than the others, helping learners pay attention to key information.
The AR-based learning environment consisted of a computer with ARToolkit software, a camera, and a paper label. As shown in Figure 4, before the experiment, a virtual model based on the learning content was made with two software programs: Revit Structure and 3D Max. Then, ARToolkit was used to connect the model to a paper label. In the learning process, using a plug-in for ARToolkit that was developed in our previous research, the paper label was put in front of the camera, and the AR model then appeared on the label. The users could observe the model from different angles by rotating the label. Figure 5 shows the workflow of the AR-based learning environment, and the final practical AR-based environment is shown in Figure 6.
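As a rough illustration of the marker-tracking principle behind such an environment, the sketch below uses OpenCV's ArUco module to detect a printed marker in a live camera feed. It is an analogous, hypothetical setup, not the ARToolkit plug-in developed in our previous research, and it assumes opencv-contrib-python 4.7 or later and a connected webcam.

```python
# Analogous marker-tracking sketch with OpenCV ArUco (not the ARToolkit plug-in
# used in the study). Assumes opencv-contrib-python >= 4.7 and a webcam at index 0.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        # In a full AR pipeline, the virtual model would be rendered at the
        # detected marker pose; here we only outline the detected marker.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("Marker tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press "q" to quit
        break
cap.release()
cv2.destroyAllWindows()
```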
As for the PM-based learning environment, a solid
model was made with mini-steel bars based on the actual
situation on a construction site, as shown in Figure 7.
Correspondingly, a test was designed to evaluate learning outcomes within the three different environments. The test consisted of six questions in total: three true-or-false questions and three short-answer questions (Table 3). During the testing process, both the learning material and the test material were presented on the same screen, with the learning material on the left and the test material on the right and one question on each page.
Figure 2: Experimental flow (learning content selection and test question design, learning environment preparation, experimental grouping, calibration, cognitive process and test, and data analysis).
Figure 3: Paper-based learning material, an annotated detailing drawing showing the first- and second-layer bars of the column top, the inner side longitudinal bars of the column, and the top bar of the beam (Chinese characters were used in the formal experiment).
As shown in Figure 8, a cross-sectional drawing was given in the test material, and the configuration of each numbered longitudinal bar was arranged using one of the various ways shown in the learning materials. Learners could consult the learning materials based on the questions, and they were asked to figure out the arrangement of each bar and their spatial relationships to give the correct answers. For each question, there was one corresponding AOI in the learning material that showed the most important information that learners needed to notice and process.
When answering the true-or-false questions, learners were asked to make a judgment about a description associated with the spatial configuration and then answer with "yes" or "no." For the short-answer questions, on the basis of each question, learners were required to give the correct numbers among the 12 bars.
3.3. Experimental Procedure. Every participant was ran-
domly assigned to one of three groups. Each participant was
provided training materials in TG-based, AR-based, or PM-
based form. Referring to these training materials, the par-
ticipants sequentially answered predesigned questions.
Details about the experimental procedure are listed as
follows.
3.3.1. Preexperiment Calibration. Participants were told about the purpose of the experiment. Then, they were asked to identify their dominant eye using the facilitator's instrument so that they could be fitted with the eye tracker (SMI iView X HED) with the proper eyeglass, at a sampling rate of 200 Hz. Participants were seated approximately 50 cm from the front of the screen on which the learning materials were shown. A five-point calibration screen was used to assess the calibration for each participant before each cognitive process. If the accuracy exceeded 1° in the x or y direction, then the calibration was repeated.
3.3.2. Formal Experiment. Every participant was given two minutes to familiarize themselves with the learning content. Six questions were then sequentially shown on the screen (Figure 9). After the participant answered, the research facilitator immediately switched slides to the next question and recorded the participant's answer. No auxiliary verbal instructions were provided during the entire formal experiment in any group.
During the whole process, participants in the AR and PM groups could ask the research facilitator to rotate the paper label or model according to their own requirements if they wanted to observe from different angles. They were not given opportunities to change their answers.
3.4. Data Analysis. Every participant's answers and the completion times for every single question were recorded by the facilitator, and learners' eye movements were recorded by the eye tracker (SMI iView X HED) and the associated software (BeGaze), which was utilized to build the AOIs. The total fixation time and fixation count of each AOI could then be calculated and exported.
Table 4 gives a brief definition of each measure. All data were imported into Excel and SPSS for statistical analysis. To identify whether there were statistically significant differences among the three groups, ANOVA was used to conduct group comparisons. If statistically significant results existed, Bonferroni multiple comparisons were then conducted between pairs of groups to identify the significant differences.
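A minimal sketch of this analysis pipeline is given below, assuming the per-AOI measures have been exported to a CSV with hypothetical columns group and total_fixation_time. It uses a one-way ANOVA followed by Bonferroni-corrected pairwise t-tests, which approximates, but is not identical to, the SPSS procedure used in the study.

```python
# Minimal sketch: one-way ANOVA across the three groups, then Bonferroni-corrected
# pairwise t-tests when the omnibus test is significant. Names are hypothetical.
from itertools import combinations

import pandas as pd
from scipy import stats

df = pd.read_csv("aoi_metrics.csv")
groups = {name: g["total_fixation_time"].dropna() for name, g in df.groupby("group")}  # TG, AR, PM

f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

if p_omnibus < 0.05:
    pairs = list(combinations(groups, 2))
    for a, b in pairs:
        t_stat, p_raw = stats.ttest_ind(groups[a], groups[b])
        p_bonf = min(p_raw * len(pairs), 1.0)  # Bonferroni: multiply by the number of comparisons
        print(f"{a} vs {b}: t = {t_stat:.2f}, corrected p = {p_bonf:.4f}")
```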
4. Results
A total of 40 students participated in this study. However, because eye-tracking data were missing for six participants, we finally had 34 subjects for analysis: 11 in the TG group, 11 in the AR group, and 12 in the PM group. Thus, 204 (34 × 6 = 204) data points for each index were recorded or calculated. Before mathematical calculation was conducted, all data were checked with SPSS to identify outliers. The result showed that five completion-time data points, eight fixation-time data points, six fixation-count data points, and three average-fixation-duration data points were identified as outliers and excluded from the following statistical analysis.
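The paper does not state which outlier criterion was applied in SPSS. As one common possibility, the sketch below flags values outside 1.5 interquartile ranges of the quartiles (the boxplot rule) and drops them before further analysis; the column and file names are hypothetical.

```python
# Minimal sketch of a 1.5 x IQR (boxplot) outlier screen; the actual SPSS criterion
# used in the study is not specified, so this rule is an assumption for illustration.
import pandas as pd

def iqr_outliers(series: pd.Series) -> pd.Series:
    """Boolean mask of values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, q3 = series.quantile([0.25, 0.75])
    iqr = q3 - q1
    return (series < q1 - 1.5 * iqr) | (series > q3 + 1.5 * iqr)

data = pd.read_csv("aoi_metrics.csv")  # hypothetical per-question export
for col in ["completion_time", "total_fixation_time",
            "fixation_count", "average_fixation_duration"]:
    mask = iqr_outliers(data[col])
    print(col, "outliers flagged:", int(mask.sum()))
    data = data.loc[~mask]             # exclude flagged rows before the ANOVA
```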
4.1. Learning Outcomes. As seen in Table 5, generally, the
mean scores of the PM group were the highest, with min-
imum average completion times for both question forms. A
significant difference in scores on the short-answer questions
(p < 0.05) was found among the three groups, and multiple
comparisons (Table 6) showed that the AR group and the
PM group scored significantly higher than the TG group on
the short-answer questions.
Figure 4: Design of the model for the AR-based learning environment. (i) Red frame: essential processing, that is, selection of key information. (ii) Yellow frame: extraneous processing, that is, highlighting the key steel bars and distinguishing them by their binding methods with colors.
No significant differences in scores among the three groups were found for the true-or-false questions, and there were no significant completion-time differences among the three groups for either form of question.
People in the AR and PM groups performed better than those in the TG group, and the increase in scores was much more significant for the short-answer questions. Contrary to the first hypothesis, our findings showed that people in the PM group exhibited the same degree of learning performance as those in the AR group.
4.2. Eye-Tracking Measures. The eye-movement data were analyzed using ANOVA to explore learners' cognitive processes with regard to the key information in the AOIs.
Tables 7 and 8 show that, for fixation time, people in the TG group spent significantly more fixation time on the AOI than those in the PM group for the true-or-false questions, and there were no significant differences for the other pairwise comparisons.
Figure 7: PM-based learning material.
Table 3: Test questions.

Question type | Question
True or false | (1) Do the longitudinal bars distribute in four layers in the node?
True or false | (2) Is the no. 10 bar located in the second layer?
True or false | (3) Do the no. 1 and no. 12 bars anchor in the same way?
Short answer | (4) Please write down the numbers of the bars which anchor in the beam.
Short answer | (5) Please write down the numbers of the bars which anchor in the way of "bending towards the inside of the column."
Short answer | (6) Please write down the numbers of the bars which anchor in the way of "stretching to the edge of the column, then bending downward."
Figure 6: AR-based learning material.
Figure 5: The workflow of AR-based learning environment preparation: node artifacts are drawn based on the learning content and modeled in RVT format, the format is converted in Revit and 3D Max software through FBX and WRL formats, and the resulting model is associated with the AR label.
The results of fixation count show that, for the true-or-false questions, people in the TG group fixated on the AOI significantly more frequently than the other two groups. However, the result was different for the short-answer questions: multiple comparisons showed that there were no significant differences between any two groups.
The average fixation duration results showed significant differences among the three groups for both question forms. Multiple comparisons determined that, for the true-or-false questions, people in the AR group showed a significantly higher average fixation duration than those in the TG group. For the short-answer questions, people in both the AR and PM groups showed a significantly higher average fixation duration than those in the TG group.
The results of all eye-movement measures showed that the AR-based learning material did not reduce learners' fixation counts or fixation times in all conditions. Moreover, no significant difference between the AR-based and PM-based
learning materials was identified. People in the TG group spent significantly more fixation time on the true-or-false questions than those in the PM group, which could not fully support the second hypothesis.
However, the results demonstrate that the effects of AR and PM teaching were different for the two question forms. Although people in the TG group had scores on the true-or-false questions similar to those of people in the other two groups (Table 5), they had significantly longer fixation times and higher fixation counts. Long fixation times indicate that difficulty was faced in extracting information or that the object is more engaging in some way. Moreover, a high fixation count on an AOI indicates inefficiency in identifying relevant information [34, 36, 50]. For the same learning outcomes, the results demonstrate that, compared to the TG-based environment, both the AR-based and PM-based environments reduced learners' cognitive loads and improved their searching efficiency in the learning and test processes.
For the short-answer questions, people in the TG-based group exhibited the same level of fixation time and fixation count as those in the other two groups. However, it should be noted that, on the short-answer questions, participants in the AR and PM groups scored significantly higher than those in the TG group. Consequently, both AR-based and PM-based teaching considerably improved learners' answering accuracy, but it cannot be determined which environment implies lower cognitive load and higher searching efficiency by comparing the eye-tracking data.
Figure 8: Test interface, with the learning material (the anchorage rules of longitudinal bars at the top of the corner column) on the left and the test material (the cross section of the 12 numbered longitudinal bars of a corner column, the question, and the corresponding AOI) on the right; Chinese characters were used in the formal experiment.
Figure 9: Formal experiment.
Unlike the two indicators of fixation time and fixation count, the results for average fixation duration showed that, for both question types, the AR-based group had the highest level while the TG-based group had the lowest (Table 7). A long average fixation duration is thought to be an indication of deep processing [32]. When related information is easy to target and integrate, learners can likely engage in the deep processing of key information required for meaningful learning [37, 51, 52]. This result indicates that the AR-based learning environment helped learners more easily find and focus on the key information for each question, which then led to a deep understanding of the content.
5. Discussion
The main purpose of this study is to understand how AR-based teaching impacts college students' learning outcomes and learning processes in construction compared to TG-based and PM-based teaching. The results showed that AR-based environments lead to better learning outcomes than TG-based environments, but not compared to PM-based environments. However, the differences in eye-tracking data did not follow the same pattern throughout the whole process.
5.1. Effect of Question Form. Participants in the TG group scored significantly lower on the short-answer questions than those in the AR and PM groups, while people in the three groups had similar scores on the true-or-false questions. In this study, to answer the true-or-false questions, learners just had to say "yes" or "no." However, they had to give precise and comprehensive numbers of steel bars in the short-answer questions, which required more exact information processing. This result suggests that, for some limited tasks, learners in TG-based learning or training environments can achieve ideal performance, despite the higher cognitive load and inefficiency compared to AR-based and PM-based environments. Moreover, TG-based teaching has the advantages of low cost and easy implementation. Therefore, for some learning tasks and practical work, TG-based education is the most economical option.
Table 5: Descriptive statistics of score and completion time.

Item | Question form | TG-based mean (SD) | AR-based mean (SD) | PM-based mean (SD) | F
Score | True or false | 0.52 (0.51) | 0.67 (0.48) | 0.72 (0.45) | 1.70
Score | Short answer | 0.06 (0.24) | 0.61 (0.50) | 0.67 (0.48) | 20.90
Completion time (s) | True or false | 30.81 (16.96) | 26.61 (11.62) | 23.19 (14.93) | 2.32
Completion time (s) | Short answer | 36.91 (26.09) | 39.76 (20.74) | 34.09 (24.38) | 0.48
The mean difference is significant at the 0.05 level.
Table 6: Multiple comparisons of items with significant differences.

Item | TG- and AR-based: mean difference (Sig.) | TG- and PM-based: mean difference (Sig.) | AR- and PM-based: mean difference (Sig.)
Score of short-answer questions | 0.55 (0.000) | 0.61 (0.000) | 0.06 (1.00)
The mean difference is significant at the 0.05 level.
Table 7: Descriptive statistics of eye-movement measures.

Item | Question form | TG-based mean (SD) | AR-based mean (SD) | PM-based mean (SD) | F
Fixation time (ms) | True or false | 9.95 (8.03) | 7.38 (7.73) | 5.21 (6.58) | 3.28
Fixation time (ms) | Short answer | 5.35 (7.61) | 9.97 (9.69) | 10.21 (9.17) | 3.17
Fixation count (times) | True or false | 21.68 (20.55) | 8.79 (8.28) | 7.36 (8.28) | 11.27**
Fixation count (times) | Short answer | 10.00 (13.50) | 13.18 (11.38) | 15.24 (13.30) | 1.43
Average fixation duration (ms) | True or false | 0.55 (0.17) | 0.84 (0.45) | 0.64 (0.36) | 5.82**
Average fixation duration (ms) | Short answer | 0.41 (0.23) | 0.69 (0.28) | 0.66 (0.27) | 10.87**
Note. **p < 0.01; *p < 0.05.
Table 4: Definition of measures used in this study.

Measures | Definition
Test score | The score of learners' answers; one point for each right answer
Completion time (s) | Total time spent on answering questions
Total fixation time (ms) | Total time fixated on an AOI
Total fixation count (times) | Total number of fixations counted within an AOI
Average fixation duration (ms) | Average duration of every fixation on an AOI: the ratio of total fixation time to total fixation count
5.2. Effect of Cognitive Load and Emotion. Another reason why the participants in the TG-based group scored significantly worse on the second question form is related to cognitive load and motivation. As a positive emotion in cognitive processing, interest is closely related to motivation and attention, and those with interest show greater persistence on subsequent tasks. Cognitive load may affect emotional state and further hamper effective visual search [53–55].
Before they started to learn, all learners in the three groups were assumed to have positive emotions and motivations, so their performance at the beginning was based on the same emotional state. In this study, the sequence of the test was three true-or-false questions followed by three short-answer questions. The TG-based group scored at the same level as the other two groups but with significantly more fixations on the first three questions. We suppose that learners in the TG-based group experienced excessive cognitive load at the beginning, which further had a negative impact on their motivation, so they were not motivated enough to pay adequate attention to information processing. This led to the increasingly worse learning outcomes on the final three questions.
5.3. Effect of AR. Compared to the PM-based learning environment, the AR-based learning environment did not show a competitive advantage in learning performance or a significant difference in eye-movement data, with the exception of average fixation duration. Although the longer average fixation duration indicated that learners in the AR-based group more easily found and focused on the key information and thus had a better understanding of the learning content than the others, this did not translate into superior learning outcomes. After the experiment, a few students were invited to experience all three learning tools. They generally thought that, compared to the traditional TG-based learning method, both AR and PM are obviously helpful for understanding the learning material. However, they did not indicate that there were significant differences between the effects of AR and PM. Their subjective impressions agree with our experimental results and further indicate that the features and advantages of AR were not sufficiently utilized.
In practical application, AR has superiority in flexibility and convenience. In contrast to PM-based education, users can build AR-based learning or training environments with no limit on time, and the displayed objects can be repeatedly modified and reused. Thus, AR has great potential and prospects. However, efficiently utilizing the features of AR to help learners or trainees achieve improved performance is not only the key to maximizing its value but also the most persuasive reason for its application, which calls for further study. It is worth exploring for which tasks AR is the most suitable environment and whether other methods need to be combined with AR to improve teaching and training efficiency.
6. Conclusion
In this study, we applied TG-based, AR-based, and PM-based
learning environments for construction learning. We com-
pared learners’ learning outcomes and utilized eye tracking to
explore the cognitive processes of the three groups.
For learning outcomes, our research suggests that the effects of learning environments differ across various forms of tasks. A three-dimensional display should have the advantage of showing objects more comprehensively and intuitively than other displays, but our study showed that, in terms of outcomes, conventional TG-based training can achieve the same degree of performance as AR-based and PM-based training in some specific tasks, such as answering true-or-false questions. In practical application, the content and demands of learning and training are diverse across majors and posts; AR and PM are not more effective in all cases, and one should be careful and selective in the application and popularization of the new method.
Eye-tracking data provided quantitative evidence about the cognitive process. Both AR-based and PM-based environments helped learners reduce their cognitive loads compared to the TG-based group. However, lower cognitive loads did not translate into significantly higher test scores or quicker completion times compared to the other groups. Similarly, eye-tracking data showed that AR has the potential to help learners focus on key information and reach a deeper understanding, but learners in the AR-based group did not show better learning performance than those in the other groups. This result suggests that, to achieve improved outcomes, other materials, such as 2D drawings and text, may need to be combined, or more reasonable adjustments may need to be made when modeling. To explore how to take full advantage of AR or other similar technologies in practical application, additional research needs to be developed and integrated to provide an in-depth understanding of learners' mental models and cognitive processes.
Table 8: Multiple comparisons of items with significant differences.

Item | Question form | TG- and AR-based: mean difference (Sig.) | TG- and PM-based: mean difference (Sig.) | AR- and PM-based: mean difference (Sig.)
Fixation time (ms) | True or false | 2.57 (0.532) | 4.74 (0.036) | 2.17 (0.681)
Fixation time (ms) | Short answer | 4.62 (0.111) | 4.86 (0.082) | 0.24 (1.000)
Fixation count (times) | True or false | 12.89 (0.001) | 14.32 (0.000) | 1.43 (1.000)
Average fixation duration (ms) | True or false | 0.29 (0.004) | 0.10 (0.940) | 0.20 (0.061)
Average fixation duration (ms) | Short answer | 0.28 (0.000) | 0.25 (0.001) | 0.03 (1.000)
The mean difference is significant at the 0.05 level.
In summary, this study illustrates the effects of TG-based, AR-based, and PM-based environments on construction learning outcomes and learners' cognitive processes. However, it remains limited by the use of a single learning material and a few independent test questions. Future researchers should apply AR to systematized tasks and perform comprehensive tests to evaluate the effects of doing so.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
The authors would like to extend their appreciation to the Fundamental Research Funds for the Central Universities of China (no. 106112016CDJSK03XK06) and the Natural Science Foundation of China (no. 51578317) for vital support.
References
[1] A. Z. Sampaio, D. P. Rosário, A. R. Gomes, and J. P. Santos,
"Virtual reality applied on Civil Engineering education:
construction activity supported on interactive models," In-
ternational Journal of Engineering Education, vol. 29, no. 6,
pp. 1331–1347, 2013.
[2] K. Ku and P. S. Mahabaleshwarkar, “Building interactive mod-
eling for construction education in virtual worlds,” Electronic
Journal of Information Technology in Construction, vol. 16, 2011.
[3] H.-K. Wu, S. W.-Y. Lee, H.-Y. Chang, and J.-C. Liang,
“Current status, opportunities and challenges of augmented
reality in education,” Computers and Education, vol. 62,
pp. 41–49, 2013.
[4] S. Nivedha and S. Hemalatha, “A survey on augmented re-
ality,” International Research Journal of Engineering and
Technology, vol. 2, no. 2, pp. 87–96, 2015.
[5] T. N. Arvanitis, A. Petrou, J. F. Knight et al., “Human factors
and qualitative pedagogical evaluation of a mobile augmented
reality system for science education used by learners with
physical disabilities,” Personal and Ubiquitous Computing,
vol. 13, no. 3, pp. 243–250, 2009.
[6] K. Lee, “Augmented reality in education and training,”
Techtrends, vol. 56, no. 2, pp. 13–21, 2012.
[7] C. Bilginsoy, "The hazards of training: attrition and retention
in construction industry apprenticeship programs,” Industrial
and Labor Relations Review, vol. 57, no. 1, pp. 54–67, 2003.
[8] L. Carozza, F. Bosche, and M. Abdel-Wahab, “Image-based
localization for an indoor VR/AR construction training
system,” in Paper Presented at 13th International Conference
on Construction Applications of Virtual Reality, pp. 363–372,
London, UK, October 2013.
[9] A. H. Behzadan and V. R. Kamat, “A framework for utilizing
context-aware augmented reality visualization in engineering
education,” in Proceedings of the International Conference on
Construction Application of Virtual Reality, p. 8, Taipei, Taiwan,
November 2012.
[10] A. H. Behzadan, A. Iqbal, and V. R. Kamat, “A collaborative
augmented reality based modeling environment for con-
struction engineering and management education,” in Pro-
ceedings of the 2011 Winter Simulation Conference (WSC),
pp. 3568–3576, Phoenix, AZ, USA, December 2011.
[11] I. Mutis and R. R. A. Issa, “Enhancing spatial and temporal
cognitive ability in construction education through aug-
mented reality and artificial visualizations,” in Proceedings of
International Conference on Computing in Civil and Building
Engineering, pp. 2079–2086, Orlando, FL, USA, June 2014.
[12] B. A. Knight, M. Horsley, and M. Eliot, Eye Tracking and the
Learning System: An Overview, Current Trends in Eye Tracking
Research, Springer International Publishing, Berlin, Germany, 2014.
[13] H. R. Chennamma and X. Yuan, “A survey on eye-gaze
tracking techniques,” Indian Journal of Computer Science
and Engineering, vol. 4, no. 5, 2013.
[14] J. M. Harley, E. G. Poitras, A. Jarrell, M. C. Duffy, and
S. P. Lajoie, “Comparing virtual and location-based aug-
mented reality mobile learning: emotions and learning out-
comes,” Educational Technology Research and Development,
vol. 64, no. 3, pp. 359–388, 2016.
[15] L. Han-Chin, “Investigating the impact of cognitive style on
multimedia learners’ understanding and visual search pat-
terns: an eye-tracking approach,” Journal of Educational
Computing Research, vol. 55, no. 8, pp. 1053–1068, 2017.
[16] R. E. Mayer, “Incorporating motivation into multimedia
learning,” Learning and Instruction, vol. 29, pp. 171–173, 2014.
[17] J. Sweller, J. J. G. V. Merrienboer, and F. G. W. C. Paas,
“Cognitive architecture and instructional design,” Educa-
tional Psychology Review, vol. 10, no. 3, pp. 251–296, 1998.
[18] R. E. Mayer, Multimedia Learning, Cambridge University
Press, Cambridge, UK, 2nd edition, 2009.
[19] R. Moreno, "Does the modality principle hold for different
media? A test of the method-affects-learning hypothesis," Journal of
Computer Assisted Learning, vol. 22, pp. 149–158, 2006.
[20] P. Sommerauer and O. Müller, "Augmented reality in in-
formal learning environments: a field experiment in a math-
ematics exhibition,” Computers and Education, vol. 79,
pp. 59–68, 2014.
[21] S. Zollmann, C. Hoppe, S. Kluckner, C. Poglitsch, H. Bischof,
and G. Reitmayr, “Augmented reality for construction site
monitoring and documentation,” Proceedings of the IEEE,
vol. 102, no. 2, pp. 137–154, 2014.
[22] A. Ibrahim, B. Huynh, J. Downey, T. Höllerer, D. Chun, and
J. O'Donovan, "ARbis Pictus: a study of language learning with
augmented reality," 2017, http://arxiv.org/abs/1711.11243.
[23] T. H. C. Chiang, S. J. H. Yang, and G. J. Hwang, “An aug-
mented reality-based mobile learning system to improve
students’ learning achievements and motivations in natural
science inquiry activities,” Journal of Educational Technology
and Society, vol. 17, no. 4, pp. 352–365, 2014.
[24] Y. H. Hung, C. H. Chen, and S. W. Huang, “Applying aug-
mented reality to enhance learning: a study of different
teaching materials,” Journal of Computer Assisted Learning,
vol. 33, no. 3, pp. 252–266, 2017.
[25] J. Ferrer-Torregrosa, M. Á. Jiménez-Rodríguez, J. Torralba-
Estelles, F. Garzón-Fariños, M. Pérez-Bermejo, and N. Fernández-
Ehrling, "Distance learning effects and flipped classroom in the
anatomy learning: comparative study of the use of augmented
reality, video and notes," BMC Medical Education, vol. 16, no. 1,
p. 230, 2016.
[26] S. Küçük, S. Kapakin, and Y. Göktaş, "Learning anatomy via
mobile augmented reality: effects on achievement and cog-
nitive load," Anatomical Sciences Education, vol. 9, no. 5,
pp. 411–421, 2016.
[27] Q. T. Le, A. Pedro, C. R. Lim, H. T. Park, S. P. Chan, and
K. K. Hong, “A framework for using mobile based virtual
reality and augmented reality for experiential construction
safety education,” International Journal of Engineering Edu-
cation, vol. 31, no. 3, pp. 713–725, 2015.
[28] Y.-H. Wang, “Exploring the effectiveness of integrating
augmented reality-based materials to support writing activ-
ities,” Computers and Education, vol. 113, pp. 162–176, 2017.
[29] M. B. Ibanez, A. Di Serio, D. Villaran, and C. D. Kloos,
"Experimenting with electromagnetism using augmented re-
ality: impact on flow student experience and educational ef-
fectiveness," Computers and Education, vol. 71, pp. 1–13, 2014.
[30] D. Fonseca, N. Martí, E. Redondo, I. Navarro, and A. Sánchez,
"Relationship between student profile, tool use, participation,
and academic performance with the use of Augmented Reality
technology for visualized architecture models," Computers in
Human Behavior, vol. 31, pp. 434–445, 2014.
[31] R. E. Mayer, “Unique contributions of eye-tracking research
to the study of learning with graphics,” Learning and In-
struction, vol. 20, no. 2, pp. 167–171, 2010.
[32] K. Rayner, “Eye movements in reading and information
processing,” Psychological Bulletin, vol. 124, no. 3, pp. 372–
422, 1998.
[33] H. Liu and I. Heynderickx, “Visual attention in objective
image quality assessment: based on eye-tracking data,” IEEE
Transactions on Circuits and Systems for Video Technology,
vol. 21, no. 7, pp. 971–982, 2011.
[34] M. A. Just and P. A. Carpenter, Eye Fixations and Cognitive
Processes, Aldine Publishing, London, UK, 1976.
[35] J. Zagermann, U. Pfeil, and H. Reiterer, “Measuring cognitive
load using eye tracking technology in visual computing,” in
Proceedings of the Workshop on Beyond Time and Errors on
Novel Evaluation Methods for Visualization, pp. 78–85, Bal-
timore, MD, USA, October 2016.
[36] R. J. Jacob and K. S. Karn, "Eye tracking in human-computer
interaction and usability research: ready to deliver the
promises," in The Mind's Eye: Cognitive and Applied Aspects of
Eye Movement Research, pp. 573–605, Elsevier, New York,
NY, USA, 2003.
[37] E. Ozcelik, T. Karakus, E. Kursun, and K. Cagiltay, “An eye-
tracking study of how color coding affects multimedia
learning,” Computers and Education, vol. 53, no. 2, pp. 445–
453, 2009.
[38] S. Hasanzadeh, B. Esmaeili, and M. D. Dodd, “Impact of
construction workers’ hazard identification skills on their
visual attention,” Journal of Construction Engineering and
Management, vol. 143, no. 10, article 04017070, 2017.
[39] B. Esmaeili, “Measuring the impacts of safety knowledge on
construction workers’ attentional allocation and hazard de-
tection using remote eye-tracking technology,” Journal of
Management in Engineering, vol. 33, no. 5, article 04017024,
2017.
[40] R.-J. Dzeng, C.-T. Lin, and Y.-C. Fang, “Using eye-tracker to
compare search patterns between experienced and novice
workers for site hazard identification,” Safety Science, vol. 82,
pp. 56–67, 2016.
[41] S. Hasanzadeh, B. Esmaeili, and M. D. Dodd, "Measuring
construction workers' real-time situation awareness using mobile
eye-tracking," in Proceedings of the Construction Research
Congress, pp. 2894–2904, San Juan, Puerto Rico, June 2016.
[42] C. Y. Wang, M. J. Tsai, and C. C. Tsai, “Multimedia recipe
reading: predicting learning outcomes and diagnosing
cooking interest using eye-tracking measures,” Computers in
Human Behavior, vol. 62, pp. 9–18, 2016.
[43] S. Yeni and E. Esgin, “Usability evaluation of web based
educational multimedia by eye-tracking technique,” In-
ternational Journal Social Sciences and Education, vol. 5, no. 4,
pp. 590–603, 2015.
[44] O. Navarro, A. I. Molina, M. Lacruz, and M. Ortega, “Eval-
uation of multimedia educational materials using eye track-
ing,” Procedia–Social and Behavioral Sciences, vol. 197,
pp. 2236–2243, 2015.
[45] E. Jamet, “An eye-tracking study of cueing effects in multi-
media learning,” Computers in Human Behavior, vol. 32, no. 1,
pp. 47–53, 2014.
[46] Q. Wang, S. Yang, M. Liu, Z. Cao, and Q. Ma, “An eye-
tracking study of website complexity from cognitive load
perspective,” Decision Support Systems, vol. 62, no. 1246,
pp. 1–10, 2014.
[47] H. C. Liu, M. L. Lai, and H. H. Chuang, “Using eye-tracking
technology to investigate the redundant effect of multimedia
web pages on viewers’ cognitive processes,” Computers in
Human Behavior, vol. 27, no. 6, pp. 2410–2417, 2011.
[48] F. Schmidt-Weigand, A. Kohnert, and U. Glowalla, “A closer
look at split visual attention in system- and self-paced in-
struction in multimedia learning,” Learning and Instruction,
vol. 20, no. 2, pp. 100–110, 2010.
[49] K. Pernice and J. Nielsen, How to Conduct Eyetracking Studies,
Nielsen Norman Group, Fremont, CA, USA, 2009.
[50] A. Poole, L. J. Ball, and P. Phillips, “In search of salience:
a response-time and eye-movement analysis of bookmark
recognition,” in People and Computers XVIII–Design for Life,
pp. 363–378, Leeds Metropolitan University, Leeds, UK, 2004.
[51] R. E. Mayer, "The promise of multimedia learning: using the
same instructional design methods across different media,”
Learning and Instruction, vol. 13, no. 2, pp. 125–139, 2003.
[52] T. Seufert, “Supporting coherence formation in learning from
multiple representations,” Learning and Instruction, vol. 13,
no. 2, pp. 227–237, 2003.
[53] N. Berggren, E. H. W. Koster, and N. Derakshan, "The effect of
cognitive load in emotional attention and trait anxiety: an eye
movement study,” Journal of Cognitive Psychology, vol. 24,
no. 1, pp. 79–91, 2012.
[54] X. Li, Z. Ouyang, and Y. J. Luo, "The effect of cognitive load on
interaction pattern of emotion and working memory: an ERP
study,” in Proceedings of the IEEE International Conference on
Cognitive Informatics, pp. 61–67, Beijing, China, July 2010.
[55] D. B. Thoman, J. L. Smith, and P. J. Silvia, "The resource
replenishment function of interest,” Social Psychological and
Personality Science, vol. 2, no. 6, pp. 592–599, 2011.