NCSA Faculty Fellow and Assistant Professor of Civil Engineering, and of
Computer Science, Univ. of Illinois at Urbana-Champaign, Urbana, IL 61801; PH
(217) 300-5226; email: mgolpar@illinois.edu
ABSTRACT
Recent research efforts on improving construction progress monitoring have
mainly focused on model-based assessment methods. In these methods, the expected
performance is typically modeled with 4D BIM and the actual performance is sensed
through 3D image-based reconstruction methods or laser scanning. Previous research
on 4-Dimensional Augmented Reality (D4AR) models, which fuse 4D BIM with
point clouds generated from daily site photologs, and on Scan-vs.-BIM methods has
shown that it is possible to conduct occupancy-based assessments and, as an indicator
of progress, detect whether or not BIM elements are present in the scene. However, to
detect deviations beyond the typical Work Breakdown Structure (WBS) in 4D BIM,
these methods also need to capture operation-level details (e.g., the current stage of
concrete placement: formwork, rebar, concrete). To overcome current limitations,
this paper presents methods for sampling and recognizing construction material from
image-based point cloud data and using that information in a statistical form to infer
the state of progress. The proposed method is validated using the D4AR model
generated for a building construction site. The preliminary experimental results show
that it is feasible to sample and detect construction materials from the images that are
registered to a point cloud model and use frequency histograms of the detected
materials to infer the actual state of progress for BIM elements.
INTRODUCTION
Early detection of performance deviations in field construction activities is
critical as it provides an opportunity for project management to avoid them or
minimize their impacts. Despite its significance, traditional monitoring practice
includes manual site data collection (e.g., daily construction reports), extensive data
extraction from different construction plan documents, and non-systematic reporting.
To overcome these limitations, research efforts on improving construction
monitoring are mainly focused on model-based assessment methods. In these
methods, the expected performance is typically modeled with 4D Building
Information Models (BIMs) and the actual performance is sensed through 3D laser
scanning (Turkan et al. 2013, Kim et al. 2013, Turkan et al. 2012, Bosché and Haas
2008) or 3D image-based reconstruction methods (Golparvar-Fard et al. 2012, 2011,
2009, Wu et al. 2010). These 3D sensing techniques produce raw point cloud models
and thus their products do not contain additional semantics of object oriented data
(e.g., materials, element interconnectivity information).
Assessment of progress deviations using raw point cloud data and BIM has
primarily focused on techniques that register BIM with the point cloud models in
a common 3D environment and then analyze progress deviations by
assessing the density of reconstructed points in 3D. To better deal with both static and
dynamic occlusions, Golparvar-Fard et al. (2012) also proposed a metric to measure
the expected visibility per element from the camera convex hull. The outcome of all
these methods, per element, is a binary assessment of 3D occupancy:
Downloaded from ascelibrary.org by Universidad Militar Nueva Granada on 02/26/18. Copyright ASCE. For personal use only; all rights reserved.
whether the element is within the expected 3D volume as indicated in BIM or not.
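The binary occupancy test described above can be sketched as follows. This is a minimal illustration, not the published method: it assumes an axis-aligned bounding box for the element's expected volume (the cited methods use the element's true BIM geometry and expected visibility), and the `density_threshold` parameter is a hypothetical stand-in for their per-element thresholds.

```python
import numpy as np

def occupancy_assessment(points, element_min, element_max, density_threshold=50):
    """Binary occupancy check for one BIM element (simplified sketch).

    points: (N, 3) array of reconstructed as-built points.
    element_min/element_max: axis-aligned bounding box of the element's
    expected 3D volume taken from BIM (an assumed simplification).
    Returns True if enough points fall inside the expected volume.
    """
    # Point is inside the box when all three coordinates are within bounds.
    inside = np.all((points >= element_min) & (points <= element_max), axis=1)
    return int(inside.sum()) >= density_threshold
```

In this form the element is reported "present" when the count of reconstructed points inside its expected volume exceeds the threshold, which mirrors the density-based binary assessment but carries no appearance information.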
Nonetheless, accurate monitoring still requires detecting construction progress
at the operation-level. For example, for construction of a concrete foundation wall,
the operational details may involve several stages of formwork, installing
reinforcement bars, placing concrete, finishing, waterproofing, insulation, and
backfilling of the excavated areas (See Figure 1).
As a step toward accomplishing this goal, Turkan et al. (2013) expanded the
Scan-vs.-BIM comparison method of Bosché and Haas (2008) by revising the
point-matching metrics to detect secondary and temporary objects (rebar and
formwork for concrete structures) in highly accurate and dense laser-scanning point
cloud models. Their work also proposed a method for analyzing points contained
within open spaces to recognize concrete shoring. Reducing false positives and false
negatives in these detections, even where highly dense point clouds are available (a
challenge the authors also identify as an open research problem), would require:
(1) Understanding and modeling the appearance of elements (i.e., the texture
and color of the 3D elements). Occupancy-based methods cannot easily differentiate
between several types of progress, e.g., concrete vs. waterproofing.
(2) Reasoning about interconnectivity among elements, by formalizing
construction sequencing and using it to infer progress under partial and full
occlusions.
(3) Assessing confidence in each measurement, by estimating the expected
visibility per BIM element as a confidence indicator for progress monitoring.
This paper builds on our prior work on automated progress monitoring using
D4AR (4-dimensional augmented reality) models (Golparvar-Fard et al. 2012) and
proposes two methods for recognizing the appearance of elements and inferring their
state of progress. Our prior work (Golparvar-Fard et al. 2012) addressed the
assessment of confidence using image-based point cloud models as a way of
capturing the as-built state. Here we focus on the problem of recognizing an
up-to-date state of progress, assuming the confidence based on BIM and the camera
convex hull is already assessed. The proposed technique is validated using actual
BIM, construction schedule, and imagery data obtained during construction of a
reinforced concrete building. In the following, we briefly review prior work on
D4AR modeling and relevant material recognition methods. We then present our
method and discuss
experimental results.
BACKGROUND
The D4AR model enables the plan and as-built models to be jointly explored with an interactive,
image-based 3D viewer where deviations are color-coded over the BIM elements.
Figures 3a and 3b show the as-built point cloud, which is generated from 160
two-megapixel images (a video demo is available at http://vimeo.com/16987971).
Subfigures c and d further illustrate an upgraded application where we jointly
visualize images with the BIM, which will be discussed later in this paper.
Figure 3. (a-b) as-built point cloud model; (c-d) new joint visualization
[Figure: overview of the proposed method. Inputs: BIM, schedule, and site images,
combined in the D4AR model. Steps: (1) back-projection of BIM onto site images;
(2) sampling image patches per element; (3) recognizing construction material per
patch; (4) inferring the state of progress, updated and color-coded per BIM element.]
Input: Camera parameters <f, k1, k2, R, T> and BIM elements
Output: Depth map and back-projections of the maximum-area face of every element onto each image
1. for each element
2.   for each face of the element
3.     for each vertex of the face
4.       transform the vertex into camera coordinates and apply perspective division
5.       undo radial distortion
6.       transform onto the image plane
7.   keep the face with the maximum 2D back-projected area for the element
8.   compute the distance between the camera center and the element center in 3D
9.   sort the elements by this distance to form the depth map
10. return the depth map and plot only the maximum-area faces in sorted order
Figure 6. Back-projection and depth map of BIM onto an image
Once the back-projections are completed, for each image we identify the
element face that has produced the maximum 2D area and use those pixels that are
associated with only one face of the element for appearance-based assessment
purposes. The intuition here is to guarantee that the sampling for material
classification is conducted over a flat surface. This strategy provides us with larger
areas for extracting the required patches and also minimizes the risk of taking
samples from locations where edges and corners of elements might be present as
artifacts.
Next, the back-projected elements are sorted in order of their distances to each
camera location. This information is then used for generating a depth map per image
by simply plotting the sorted elements, starting from the farthest (blue) and ending
with the closest (red) to the camera. The pseudo code (Figure 6) outlines the entire
process of back-projection and depth map generation. Figure 7 shows the outcome of
this process for one of the registered site images.
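The per-vertex projection and painter's-algorithm depth ordering outlined in Figure 6 can be sketched as below. The camera parameters <f, k1, k2, R, T> match those named in the paper, but the sign and distortion conventions (camera looking down the negative z-axis, polynomial radial factor) are assumptions borrowed from the Bundler-style structure-from-motion model, not details taken from the paper.

```python
import numpy as np

def project_vertex(X, R, t, f, k1, k2):
    """Project one BIM vertex into an image (sketch, Bundler-style conventions).

    X: 3D vertex in world coordinates; R, t: camera rotation and translation;
    f: focal length; k1, k2: radial distortion coefficients.
    Returns image-centered pixel coordinates and the depth along the view axis.
    """
    P = R @ X + t                      # world -> camera coordinates
    p = -P[:2] / P[2]                  # perspective division (camera looks down -z)
    r2 = p @ p                         # squared radial distance
    distortion = 1.0 + k1 * r2 + k2 * r2 ** 2
    return f * distortion * p, -P[2]   # pixel coords and depth to the camera

def depth_order(elements, camera_center):
    """Sort elements farthest-to-closest for painter's-algorithm depth maps.

    elements: list of (element_id, element_center_3d) pairs.
    Plotting in this order lets closer elements overwrite farther ones.
    """
    dist = lambda e: np.linalg.norm(e[1] - camera_center)
    return [eid for eid, _ in sorted(elements, key=dist, reverse=True)]
```

Plotting each element's maximum-area face in the order returned by `depth_order`, farthest first, yields the per-image depth map described in the text.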
Due to the presence of static and dynamic occlusions in a photo (static: backfilling
in front of an element not modeled in BIM; dynamic: presence of equipment in front
of the BIM element), we expect some of these samples to be irrelevant to the
surfaces under inspection. Nonetheless, because we take a large number of samples
from the scene, we expect the highest sampling frequency to return more image
patches from the relevant surface material. Our results, as shown later in the
experimental results section, further validate this hypothesis.
Steps 2 and 3: Sampling image patches per element and material classification
Once the depth maps are created for every image, the progress of each
element is assessed by 1) selecting several image patches from the relevant
parts of all images that observe the element, 2) classifying the material per patch, and
3) inferring progress using the statistical distribution of the material classes and the
scores from the SVM classifiers. Thus, from the back-projected face of each element
in each image that observes it, sample patches of pixels, which may overlap and
whose pixels all fall within the boundaries of the face, are randomly stored. Each
image patch is passed to the material classifier (steps 1-3 in Figure 8) adopted from
Dimitrov and Golparvar-Fard (2014). Figure 9 shows examples of these material
patches, extracted from the back-projected face of a BIM element.
Input: Depth map and back-projections of the maximum-area face of every element onto all images
Output: The observed material for each element
1. for each element
2.   for all back-projected faces from all images that observe the element
3.     randomly extract sample patches of pixels within each face
4.     for each patch
5.       classify the material and return the class with the highest score
6.   return the material class with the maximum frequency
Figure 8. Extraction of sample patches and material recognition
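The sampling-and-voting loop of Figure 8 can be sketched as below for a single element face in one image. The rejection-sampling detail and attempt cap are assumptions for illustration, and `classify` is a stand-in for the multi-class SVM material classifier of Dimitrov and Golparvar-Fard (2014), which is not reproduced here.

```python
import numpy as np

def infer_material(mask, image, classify, n_samples=10, patch=30, rng=None):
    """Sample random patch x patch windows lying fully inside the back-projected
    face mask, classify each, and return the most frequent material class.

    mask: boolean array marking pixels of the back-projected element face.
    classify: callable taking an image patch and returning a material label
    (hypothetical stand-in for the SVM classifier).
    """
    rng = rng or np.random.default_rng(0)
    h, w = mask.shape
    votes, taken = {}, 0
    for _ in range(100 * n_samples):           # cap attempts to avoid looping forever
        if taken == n_samples:
            break
        y = int(rng.integers(0, h - patch))
        x = int(rng.integers(0, w - patch))
        if not mask[y:y + patch, x:x + patch].all():
            continue                           # patch leaves the face; resample
        label = classify(image[y:y + patch, x:x + patch])
        votes[label] = votes.get(label, 0) + 1
        taken += 1
    return max(votes, key=votes.get) if votes else None
```

Aggregating these votes over all images that observe the element gives the frequency histogram from which the dominant material class is read.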
The point cloud and BIM were registered using 15 control points with 5.6 cm
accuracy. For each image that observes an element, a total of 10 random 30×30-pixel
samples were collected.
Figure 12 further illustrates the results of material inference for all visible
BIM elements that are made only of concrete. In this figure, the frequency of
observing “concrete” is plotted in blue. The second highest frequency among the
detected materials is also plotted in orange for comparison purposes. Our preliminary
results show that even in the presence of static and dynamic occlusions, inaccuracies
in sampling due to BIM-vs.-point cloud registration, and the presence of edges and
corners as artifacts, the method successfully returns concrete as the correct material
class for all observed elements. In our method, the visibility of elements is
maximized by sampling a large number of image patches from different perspectives.
This strategy minimizes the chance that occlusions or artifacts from particular
viewpoints affect the outcome of material classification. Thus, it also reduces the risk
of inferring incorrect states of element progress.
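The final step, mapping the dominant material class to an operation-level state, can be sketched as below. The stage names follow the formwork/rebar/concrete sequence mentioned in the paper, but the lookup table itself is an illustrative assumption, not the published inference rule, which also uses the schedule and SVM scores.

```python
# Hypothetical mapping from detected material class to an operation-level
# state for a cast-in-place concrete element (illustrative assumption).
STAGE_BY_MATERIAL = {
    "wood": "formwork",
    "steel": "rebar placed",
    "concrete": "concrete placed",
}

def infer_state(material_freqs):
    """material_freqs: dict mapping material class -> observed frequency (0..1).
    Returns the operation-level state implied by the dominant material."""
    dominant = max(material_freqs, key=material_freqs.get)
    return STAGE_BY_MATERIAL.get(dominant, "unknown")
```

For example, an element whose samples are mostly classified as concrete would be inferred as having reached the concrete-placement stage, while a dominance of wood would indicate that formwork is still in place.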
[Figure 12 plot: frequency (0 to 1) on the y-axis vs. element number (1 to 131) on
the x-axis; series: material with highest frequency and material with second highest
frequency.]
Figure 12. Highest (concrete) and second highest (others) frequency observed
REFERENCES
Bosché F. and Haas C.T. (2008). “Automated retrieval of 3D CAD model objects in
construction range images.” Automation in Construction, 17 (4), 499-512.
Brilakis, I., Soibelman, L., and Shinagawa, Y. (2005). “Material-based construction
site image retrieval.” J. Comput. Civ. Eng., 19, 341-355.
Brilakis, I. and Soibelman, L. (2006). “Multimodal Image Retrieval from
Construction Databases and Model-Based Systems.” J. Constr. Eng.
Manage., 132(7), 777–785.
Dimitrov, A., and Golparvar-Fard, M. (2014). “Vision-based material recognition for
automated monitoring of construction progress and generating building
information modeling from unordered site image collections.” Advanced
Engineering Informatics.
Golparvar-Fard, M., Peña-Mora, F., Savarese, S. (2009). “Application of D4AR – A
4-Dimensional augmented reality model for automating construction progress
monitoring data collection, processing and communication.” ITcon, 14, 129-153.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2011). “Integrated Sequential
As-Built and As-Planned Representation with D4AR Tools in Support of
Decision-Making Tasks in the AEC/FM Industry.” J. Constr. Eng.
Manage., 137(12), 1099–1116.
Golparvar‐Fard, M., Peña‐Mora, F., and Savarese, S. (2012). “Automated Progress
Monitoring Using Unordered Daily Construction Photographs and IFC‐Based
Building Information Models.” J. Comput. Civ. Eng., 10.1061
Kim C., Son, H., and Kim C. (2013). “Automated construction progress measurement
using a 4D building information model and 3D data”, Auto. Constr., 31, 75-82.
Leung, T., and Jitendra, M. (2001). “Representing and recognizing the visual
appearance of materials using 3D textons.” Int. J. Comp. Vision, 43(1), 29-44.
Turkan Y., Bosché F., Haas C.T., and Haas R. (2012). “Automated progress tracking
using 4D models and 3D sensing technologies.” Auto. Constr., 22 (1), 414-421.
Wu Y., Kim H., Kim C., and Han S.H. (2010). “Object recognition in construction
site images using 3d cad-based filtering.” J. Comput. Civ. Eng., 24 (1), 56-64.