
You can find us here:

metanautvr.com


A Guide To Capturing and Preparing Photogrammetry For Unity

Written By: Andrew Yao-An Lee

Originally Published: 2017.10.21
Last Updated: 2017.10.23

License: Creative Commons Attribution 4.0 International (CC BY 4.0)

A Guide To Capturing and Preparing Photogrammetry For Unity   |   Metanaut   |   v1.0.0   |   Last Updated: 2017.10.23
About this Manual
This manual is written by Andrew Yao-An Lee of Metanaut, an indie VR studio based in Vancouver, BC, Canada. The manual grew out of a project made in collaboration between the University of British Columbia and Metanaut to provide VR field trips built using photogrammetry techniques, showcased on the HTC Vive running in Unity. Some examples in this manual are from the first field trip location made for the project: Prospect Point in Stanley Park, Vancouver, BC, Canada.

This manual provides best practices and a complete workflow, from capturing photos through to creating a reasonably good photogrammetry model of an environment (as opposed to an object) for real-time viewing in Unity. Our methodology covers presenting the captured photogrammetry mesh itself, not remodeling a scene based on photogrammetry meshes.

This manual is by no means complete, nor does it try to define the best way to create photogrammetry models. It does, however, present one workflow for achieving good photogrammetry results.

This manual contains some insight into specific features and tools in Reality Capture, workarounds for some of its quirks, and tips for optimizing photogrammetry meshes.

This manual assumes that you have some knowledge of each piece of software mentioned (e.g. Reality Capture, 3DsMax, Lightroom), and will not explain the basic tools and workflows that the respective documentation already covers.

Example of Results
Here are some screenshots taken from the Prospect Point scene, shown within Unity in real-time:

License
This manual is provided under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

For more information and the full license, please see:

https://creativecommons.org/licenses/by/4.0/legalcode

In summary, you may:

● Share — copy and redistribute the material in any medium or format
● Adapt — remix, transform, and build upon the material for any purpose, even commercially

The licensor cannot revoke these freedoms as long as you follow the license terms.

Under the following terms:

Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.

Viewing
This manual is best viewed in Google Docs (http://bit.ly/2xYl6DX) with Print Layout view mode disabled so that the content isn't split into pages.

Contact
Feel free to contact us about the manual or anything VR or photogrammetry related. If you have any feedback or suggestions, please let us know.

You are welcome to share or remix this manual, but please remember to give credit to our studio, Metanaut.

You may use the following when crediting:

Metanaut
Website: http://metanautvr.com
E-Mail: hello [at] metanautvr.com

About this Manual
Example of Results
License
Viewing
Contact
References
General Workflow

Preparation
General List of Equipment
Computer
Higher CPU Core count
Multi GPU
More RAM
Storage
Software
Capture Device
Camera Choice
Lens Choice
LIDAR Scanners
Camera Lens Properties That Affect Photogrammetry
Focal Length
Sharpness
Chromatic Aberration
Diffraction
Barrel Distortion
Image Stabilization
Lens Flares and Ghosting
Scouting Location and Planning
Photogrammetry Mesh vs Capture Area
Time of Day + Weather

Capturing
Non-Photogrammetry Related Shots
Skybox Panoramas
Length References
Signage or Information Boards
Camera Settings
ISO
Aperture
Shutter speed
Shoot RAW
Photogrammetry Level of Detail
Capture Strategies
General Tips
Mental Model: Flashlight analogy
Shooting Video (Not Recommended)
Ground Level Capture: 360 Approach
Ground Level Capture: Normal-To Approach
Objects In The Middle of the Environment
Tall Objects Or Walls
Offsetting Normal-To Paths
Keeping Context In All Shots
Concave Corners Using The Normal-To Method
Drone Approach
Breaking Down Large Environments

Organization
File Structure

Photo Corrections
Lightroom Workflow
Metadata Filters
Synchronizing White Balance
Synchronizing Lens Profile Corrections
Export Settings

Initial Photogrammetry Processing Using Reality Capture
Setup: Project Cache folder
Aligning
Single Component Goal
Set Distance Constraints
Manual Control Points
Downscale & Alignment Technique
Mesh Reconstruction
Set Reconstruction Region
Simplification
Defining Overall Triangle Budget
First Pass Simplification
Reducing Project Size
First Pass Texturing
Draft Textures
Exporting
Include Texture Data
.rcinfo File and Grouping Files For Working Between Different Programs

Cleanup With 3DsMax
Importing into 3DsMax from Reality Capture
Units Setup + System Units setup For 3DsMax
FBX, OBJ and Texture Assignment Issues For 3DsMax
Trimming
Enable Viewport Textures
Using Mesh or Poly Editing Modes
Enabling Edged Faces
Using The Lasso Selection Tool
Auto Window/Crossing By Direction
Enabling Viewport Stats
Visualizing Problematic Parts of Mesh in 3DsMax
Simplification
Over Simplification May Affect Texture Quality
ProOptimizer
Detaching The Mesh To Apply Different ProOptimizer Settings
Exporting from 3DsMax Back to Reality Capture

Photogrammetry Re-Processing With Reality Capture
Importing
.rcinfo File
Import to Same Project
Invalid Function Call
Texturing
Defining the Texture Budget
Setting some photos to not texture
Unwrapping Tool
Large Triangle Removal Threshold
Texture Quality
Maximal Textures Count
Fixed Texel Size
Adaptive Texel Size
Exporting
Final Export for Unity or Final Project Alignment

Aligning And Merging Individual Meshes In 3DsMax
Importing
Aligning
Zeroing All Pivot Points
Trimming
Reconciling
LOD Generation
Exporting
Export Options

Setting Up For VR Presentation In Unity
Importing
Textures
Materials
LODs Setup
Importing A Model With Multiple Sets of LODs
LOD Issues with VR
Lighting
Static Objects
Development
Locomotion
"Navmesh"
"Armswinger Blocker Mesh"
SpeedTrees

Suggestions for Further Exploration

References
Here are links to good references to read or watch to become familiar with the photogrammetry process. Some of these links may provide deeper insights and more advanced workflows in certain areas than this manual.

Basics of Camera Settings
https://www.youtube.com/watch?v=F8T94sdiNjc

Good Techniques
https://developer.valvesoftware.com/wiki/Destinations/Advanced_Outdoors_Photogrammetry

Good reference site for anything 3D-scanning related (may include photogrammetry in future)
http://3dscanexpert.com/

Skybox Midground Foreground
https://developer.valvesoftware.com/wiki/File:Advanced-photogrammetry-tutorial-overview.jpg

DICE photogrammetry for Battlefront
https://www.youtube.com/watch?v=U_WaqCBp9zo

Wet road and broken car reconstruction
https://steamcommunity.com/sharedfiles/filedetails/?id=814783536

Photogrammetry Workflow Tools
https://80.lv/articles/capturing-british-beauty-with-photogrammetry/

Unity Photogrammetry Guide
https://unity3d.com/solutions/photogrammetry

Reality Capture Help File
(Found within the Reality Capture program. It's moderately detailed and extremely helpful.)

General Workflow
The workflow graphic below describes the steps taken to create and prepare a photogrammetry environment for use in Unity when presenting in VR. The sections of the manual that follow this graphic describe some of the steps in further detail.

For the downloadable PDF of the Workflow Outline, click here:
http://bit.ly/MNPGW

Preparation

General List of Equipment

● Camera with appropriate lens(es): Canon 5D Mark III + Canon 16-35mm F4 L + Canon 24mm 1.4 L
● Lots of memory cards and batteries: 4 x 64GB CF cards + 2 x 500GB external HDs, 3 x Canon 5D Mark III batteries for one trip
● Lots of storage space: 2TB HDD on computer
● Computer: Intel i7-7700K, nVidia GTX 1080, 64GB DDR4 RAM
● Length reference: Meter stick or tape measure

Computer

Higher CPU Core count

Half or more of the processes in Reality Capture use the CPU. The primary factor for increasing performance in CPU-bound processes is core count; clock speed comes next.
Multi GPU
Multiple GPUs can improve Reality Capture performance. The GPUs in a multi-GPU setup do not need to be identical and should not use an SLI bridge. Overall, the speed improvement seems to be between 20-40% with a second card of near-equal power (tested with a 980 Ti + 1070).

At the time of writing, a multi-GPU setup only increases performance the first time model generation is run, which is likely when depth maps are created (GPU selection is found under the depth maps section in settings). The depth maps will likely not be recalculated unless the downscale setting is changed. Multiple GPUs also seem to increase performance in the texture generation process.

Strangely, even when Reality Capture is set to not use the 2nd GPU, it will still help in some calculations, as long as the 2nd GPU is plugged into the motherboard and recognized by Windows and by the NVIDIA drivers.

More RAM
32GB is the absolute minimum amount of RAM; it still runs out of memory for scenes with around 2,000 images. 64GB is the recommended minimum, and 128GB or more would be very good. Faster RAM frequencies should provide faster results as well.

Storage
Use drives with the fastest random read and write speeds for the Reality Capture cache folder; SSDs are far superior to HDDs here. It's OK to put the project files on a non-SSD or slower drive because they are only loaded at the beginning and when the project is saved. At minimum, set the Cache Location to the fastest drive. A large storage capacity is also important for large photogrammetry projects: for our Prospect Point scene, with roughly 20,000 images, we needed about 2TB of space to process it all.
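Disk needs like these can be roughed out before a shoot. The figures below are assumptions for illustration (a typical full-frame RAW size and a cache overhead of a few times the source data, in line with our experience), not measured values:

```python
def estimate_storage_gb(num_images, raw_mb=30.0, overhead_multiplier=3.0):
    """Rough disk budget: source RAW files plus cache/intermediate data."""
    return num_images * raw_mb * overhead_multiplier / 1024.0

# ~20,000 images at ~30MB each with 3x processing overhead comes out
# near the ~2TB we needed for Prospect Point:
print(round(estimate_storage_gb(20000)))  # ~1758 GB
```

Run the estimate with your own camera's RAW sizes before committing to a capacity.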

Software
Here is a list of software that we used for our photogrammetry workflow:

● Lightroom - Processing and preparing RAW photos
● Reality Capture - Creating photogrammetry meshes
● 3DsMax (or Blender, or Maya, etc.) - Optimizing and aligning photogrammetry meshes
● Unity - For presenting photogrammetry in VR
● Granite for Unity (or Amplify Texture) - For texture streaming
● Photoshop - Fixing textures and stitching panoramas
● Microsoft Image Composite Editor (or PTGUI, or Photoshop) - For stitching panoramas

Capture Device

Camera Choice
Ideally, the best camera for photogrammetry is one with a sensor that has the highest resolution and produces the least image noise. Generally, sensors with favourable resolution and lower image noise are found in full-frame cameras. At the time of writing, here are some of the better camera bodies for photogrammetry:

Sony A7S2
Sony A7R2
Nikon D850
Canon 1DX Mark II
Canon 5D Mark IV

Lens Choice
It is best to use camera bodies that support interchangeable lenses, such as DSLRs or mirrorless cameras, so that better lenses can be used. Such cameras are also more likely to house better sensors than cameras with fixed lenses.

At the time of writing, here are some lenses that may be good for photogrammetry:

Sigma 12-24mm f/4 DG HSM Art
Canon EF 16-35mm f/4L IS USM
Sigma 24mm f/1.4 DG HSM Art
Canon EF 35mm f/1.4L II USM
Sigma 50mm f/1.4 DG HSM Art

LIDAR Scanners
Capturing scenes with a LIDAR scanner to complement regular photo cameras is one of the best ways to get an accurate and detailed model of the environment. We have not yet had access to a LIDAR scanner, so we can't comment on what to look out for. Reality Capture will be able to merge data from LIDAR and photography together.

Camera Lens Properties That Affect Photogrammetry

Focal Length

The wider the lens, the more content is captured per shot, and the more safety is built into the photos: they are more likely to have context and overlapping areas, which the photogrammetry software prefers.

If you are short on time and capturing large environments, use a wider lens. We used a Canon 16-35mm F4 L lens at 16mm for the majority of the Stanley Park Prospect Point photos. While this is not the most ideal lens for a few reasons, it provided the fastest turnaround because we were extremely time-limited.

For ultra-wide lenses, only use rectilinear lenses, not fisheye lenses. Fisheye lenses are not recommended because they have too much distortion. It is possible to undistort the images, but that results in extreme loss of texel quality in the corners, leaving only the center area of the photo useful. Other issues with fisheye lenses are severe chromatic aberration and diffraction. Rectilinear lenses come with much of the distortion already corrected and generally keep chromatic aberration better minimized.

The above example shows the same tree taken at 2 different focal lengths and different distances so that the tree fills the same amount of space in the frame. Notice how in the lower focal length photo, the curves of the tree stump are accentuated, and the little branch appears longer and more distorted.

Generally, prime lenses will provide the best results because they don't need to account for the many shifting optical variables caused by zooming.
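The effect of focal length on per-shot coverage can be quantified with the standard rectilinear angle-of-view formula. Below is a minimal sketch (the function name is ours; a full-frame 36 mm sensor width is assumed):

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view of a rectilinear lens on a given sensor."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 16mm lens on full frame covers nearly twice the angle of a 35mm lens,
# so fewer shots are needed to cover the same area with good overlap:
print(round(horizontal_fov_deg(16), 1))  # ~96.7 degrees
print(round(horizontal_fov_deg(35), 1))  # ~54.4 degrees
```

Comparing a few focal lengths this way helps plan how many passes a capture area will need.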

Sharpness

The higher the quality of the lens glass, the sharper the image it delivers to the sensor, which provides more detail for the photogrammetry software to work with. Sharpness needs to exist not only in the center of the image but also in the corners. Cheaper lenses tend to have soft corners, which defeats the purpose of a wider lens because the data from the corners becomes unusable. Refer to MTF charts to aid in choosing a lens.

Chromatic Aberration

Chromatic aberration changes based on the location of the subject in the frame. The photogrammetry software may falsely track chromatic aberration as features, producing a less accurate model; on the flipside, chromatic aberration may make some features untrackable. Use lenses with the lowest amounts of chromatic aberration.

Diffraction

Similar to chromatic aberration, diffraction from lenses is unwanted because it may cause inaccuracies in the photogrammetry model. Use a lens with minimal diffraction. Diffraction also blurs contrasty edges, making it harder for the photogrammetry software to detect trackable features.

Barrel Distortion

It is important for the lens to have as little distortion as possible, as distortion will cause objects on the sides of the frame to "move" in an inconsistent manner across images. This can reduce model accuracy in the photogrammetry software.

Image Stabilization
While image stabilization in lenses or camera bodies helps reduce the chance of motion blur and improves feature trackability, it may also cause unwanted and irreparable micro-distortion in the image due to the shifting of lens elements. It is technically best to turn image stabilization off for photogrammetry.

Lens Flares and Ghosting

Higher quality lenses greatly reduce lens flares and ghosting under bright light sources. Lens flares and ghosting negatively impact photogrammetry because they can be falsely tracked as features and may be projected onto the final model as texture.

Scouting Location and Planning

It's a good idea to scout the location before the shoot and figure out which areas need special attention, how to get better angles of the subject (such as bringing a ladder to enable shooting from a higher perspective), and possible obstructions, such as other people or construction work. Lighting is a key factor in photogrammetry as well: whatever light you capture during the shoot will be baked into the textures of the final model.

Photogrammetry Mesh vs Capture Area

Keep in mind that in order to present a believable rendition of the environment for exploration in VR, the captured area of the site needs to be much bigger than the area the user can virtually walk around in. Unreachable background areas should be represented in 3D as well so they have proper, believable depth; otherwise, a flat 2D backdrop will easily stand out against a 3D photogrammetry mesh in VR. As an example, the 3D mesh of our Prospect Point scene extends at least 20-50 meters beyond the explorable area.

Example of extended photogrammetry meshes used as background, outlined in blue. The area where the user can walk is outlined in green.

Time of Day + Weather

Time of day greatly affects the lighting of the scene. If you capture while there is sunlight in the scene, that sunlight will be represented in the photogrammetry model. At public tourist spots, there will generally be more people during the middle of the day than in the early morning. It is exponentially harder to capture and produce a clean photogrammetry result when there are unwanted people on the site.

You can choose to include sunlight in the capture, but the whole environment's lighting will then be locked in. Shooting on an overcast day gives the most neutral lighting. The advantage of neutral, flat lighting is that you can add artificial lighting later and present the scene any way you want. The disadvantage of adding lighting post-capture is that, without special shaders, some surfaces (such as translucent tree leaves) won't react to light as realistically as they do in nature.

Original scene with overcast lighting

Photogrammetry model re-lit with hard sunlight

Photogrammetry model re-lit with soft close-up lights

Example of half overcast and half direct sunlight morning lighting

Example of morning overcast lighting

Flaring, which usually happens when the sun or a bright light source is in the image or close to the outer edges of the frame, should be avoided at all costs because it will not be processed well in the photogrammetry model. It is one disadvantage of shooting when there is direct sunlight. The photogrammetry program may have trouble discerning details affected by the flare, and even if it can render a model with the details near the flare, the textures projected onto the model will have the flare baked in, which may also be undesirable.

Close-up example of flaring
Close-up example of blown highlights around the edges of leaves

Capturing

Non-Photogrammetry Related Shots

In addition to the photos captured for use in photogrammetry, there are some other photos that should be taken at the site on the same day.

Skybox Panoramas
Take 360 panoramas of the background in the far distance, with as little foreground as possible, so they can be stitched together and used as a skybox in the VR experience. Use stitching software like Adobe Photoshop, PTGUI, or Microsoft Image Composite Editor to make a spherical panorama. This can be applied to the skybox in Unity or onto a very large sphere mesh.

Length References
Take photos of the environment that include an object of known length, like a meter stick, so it can be used to define the correct scale in Reality Capture.
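If you ever need to apply the scale correction manually in a 3D package instead, the fix is a single uniform ratio (a trivial sketch; the function name is ours):

```python
def scale_factor(true_length_m, measured_length_in_model):
    """Uniform scale that makes the model's reference object measure true size."""
    return true_length_m / measured_length_in_model

# If the meter stick spans 1.25 scene units in the reconstruction,
# scale the entire scene by 0.8 to bring it to real-world meters:
print(scale_factor(1.0, 1.25))  # 0.8
```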

Signage or Information Boards

It's a good idea to take separate photos of objects of interest that have flat, graphic- or text-heavy surfaces meant to be read in VR. Such surfaces may be too difficult to reproduce clearly using photogrammetry, so they need to be recreated with separate geometry and separate textures.

Camera Settings
Here are a few tips on how camera settings affect photogrammetry quality.

ISO
The lower the ISO, the less image noise in the photos. Less noise helps the photogrammetry model be more accurate and have smoother surfaces. Image noise may be misread by the photogrammetry software as features, causing more error, or it may obscure fine details in the image.

Aperture

Generally with DSLR lenses, an aperture between f/5.6 and f/8.0 is optimal, providing the best sharpness, a decent depth of field, reduced vignetting, reduced chromatic aberration, and the least diffraction. Read in-depth lens reviews, such as those at DPReview and The Digital Picture, to find the optimal aperture settings for the lens you are using.
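Depth of field at these apertures can be sanity-checked with the standard hyperfocal-distance formula. This is a rough sketch, assuming a full-frame circle of confusion of 0.03 mm (adjust for your sensor):

```python
def hyperfocal_m(focal_length_mm, aperture, coc_mm=0.03):
    """Hyperfocal distance in meters: focusing here keeps everything from
    half that distance out to infinity acceptably sharp."""
    h_mm = focal_length_mm ** 2 / (aperture * coc_mm) + focal_length_mm
    return h_mm / 1000.0

# At 16mm and f/8, focusing at roughly 1.1m keeps about 0.54m-to-infinity
# acceptably sharp, which suits environment capture:
print(round(hyperfocal_m(16, 8.0), 2))
```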

Shutter​ ​speed
Ideally, the shutter speed should be fast enough to completely prevent any motion blur. Shooting on a tripod affords lower shutter speeds (useful to compensate for low light, lower ISO settings, or smaller apertures). Constantly zoom in to your photos on site to check whether they are blurry.
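Checking sharpness by eye on a camera LCD is error-prone. As a sketch of automating it, the variance-of-Laplacian heuristic below flags low-contrast (blurry) images; this is not part of the original workflow, and a real pipeline would run something like OpenCV's Laplacian on the actual photos rather than this pure-Python toy.

```python
def laplacian_variance(image):
    """Variance of the Laplacian of a grayscale image (list of rows).

    A common sharpness heuristic: blurry images have weak edges, so the
    Laplacian response is small and its variance is low.
    """
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian kernel
            lap = (image[y - 1][x] + image[y + 1][x]
                   + image[y][x - 1] + image[y][x + 1]
                   - 4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A sharp checkerboard has strong edges; a flat gradient has almost none.
sharp = [[255 * ((x + y) % 2) for x in range(16)] for y in range(16)]
smooth = [[x * 4 for x in range(16)] for y in range(16)]
print(laplacian_variance(sharp) > laplacian_variance(smooth))  # True
```

In practice you would pick a variance threshold empirically for your camera and reject or reshoot frames that fall below it.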

Shoot​ ​RAW
RAW images provide the best image quality for photogrammetry. JPEGs, on the other hand, suffer from compression artifacts as well as artifacts from in-camera processing algorithms. RAW files can yield much better images when handled by programs such as Lightroom: a JPEG produced from a RAW image using Lightroom can be far superior to a JPEG produced straight from the camera.

Photogrammetry​ ​Level​ ​of​ ​Detail


The photogrammetry model will only be as good as the data you provide it. For example, if you shoot the scene with a drone 30 meters above the ground, the model will likely only be viewable from a distance of about 30 meters. If you shoot from ground level, the model will be better suited for viewing from ground level.
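One way to put numbers on this is ground sample distance: the real-world size a single pixel covers at a given shooting distance. A sketch from pinhole-camera geometry (the sensor and lens figures below are illustrative assumptions, not the gear used for this project):

```python
def ground_sample_distance_mm(distance_m, focal_length_mm,
                              sensor_width_mm, image_width_px):
    """Real-world width (in mm) covered by a single pixel.

    From similar triangles: the sensor sees a swath of
    sensor_width * distance / focal_length, split across the image width.
    """
    swath_mm = sensor_width_mm * (distance_m * 1000) / focal_length_mm
    return swath_mm / image_width_px

# Illustrative numbers: full-frame 36 mm sensor, 24 mm lens, 6000 px wide.
# From 30 m up (drone), each pixel covers 7.5 mm of surface...
print(ground_sample_distance_mm(30, 24, 36, 6000))  # 7.5
# ...while from 2 m away (ground level) it covers only 0.5 mm.
print(ground_sample_distance_mm(2, 24, 36, 6000))   # 0.5
```

A 15x difference in shooting distance means a 15x difference in texel density, which is why drone-only captures fall apart when viewed at ground level.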

An example of a photogrammetry model that was generated entirely from drone photos taken about 36 meters above ground. Note the incorrectly rendered shapes of the trees and the lack of detail.

Close-up screenshot of the same model as above.

100% crop from the source drone photo that was used to generate the model. The texel density provided by the photo is completely unsuitable for ground-level viewing.

Example of the same corner of the deck taken from ground level. Note the drastic increase in detail provided by the ground-level photo.

100%​ ​crop​ ​of​ ​the​ ​same​ ​image​ ​above​ ​shows​ ​detail​ ​to​ ​the​ ​millimeter​ ​level,​ ​which​ ​will​ ​be​ ​suitable​ ​for​ ​viewing​ ​close​ ​up​ ​in​ ​VR.

Example of photogrammetry created from ground-level capture with photos like the one above. Much more detail is shown compared to the model built purely from drone photos. (In this screenshot, the railings are separately modeled rather than generated from photogrammetry.)

Capture​ ​Strategies

General​ ​Tips
Photogrammetry is based on parallax; that's how a 3D model can be generated from multiple 2D images. The program detects common features across multiple images and measures how those features shift between images. From those shifts, it can determine each feature's position in 3D space. Generally, the more photos you feed Reality Capture, the better. Every photo you take should share at least half of its frame with at least 2 other photos.
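As a back-of-the-envelope check on that overlap rule, you can estimate how many shots a full ring around a subject needs for a given lens field of view and overlap fraction (both values below are assumptions for illustration):

```python
import math

def photos_for_ring(horizontal_fov_deg, overlap_fraction):
    """Minimum photos in a full 360-degree ring so that each frame
    overlaps the next by at least `overlap_fraction` of its width."""
    # Each new photo may advance by the non-overlapping part of the FOV.
    step = horizontal_fov_deg * (1 - overlap_fraction)
    return math.ceil(360 / step)

# A lens with a 60-degree horizontal FOV and the recommended >=50%
# overlap needs 12 shots to close the ring.
print(photos_for_ring(60, 0.5))  # 12
```

Note how this also shows why 45-degree steps fail for a narrow lens: the frames barely overlap, so the software cannot match features between them.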

The photogrammetry software doesn't know what a leaf or a brick wall is; it only looks at the shape and arrangement of pixels that make up what we call a leaf or a brick wall. If it can't find the same arrangement of pixels within a certain margin of error, it won't know that it's the same leaf or brick wall. For example, if you walk around a leaf and take photos 45 degrees apart, the drastic change in the arrangement of pixels between images may prevent the photogrammetry software from recognizing that it's the same object. In Reality Capture, this is the main reason that your alignment results in multiple components: the program simply can't find the commonalities between those components.

A good general tip for capturing an area or an object is to "connect all your angles together". If you are shooting from two drastically different angles, you need shots in between to transition smoothly from one angle to the other. If you are shooting in one corner of the environment and then decide to shoot the opposite corner, take photos along the way as you move to the opposite corner.

Try to avoid shooting areas or objects with reflective surfaces. While humans can tell the difference between an object's texture and the reflections on its surface, the photogrammetry software gets confused by the two apparent depths at every pixel of the reflective object. This will result in an extremely uneven or even broken surface.

Also try to avoid shooting flat, untextured surfaces with no detail. As mentioned earlier, photogrammetry software works with features in an image, and high-contrast features are tracked better than low-contrast ones. A flat, untextured surface has no features for the program to track, so it too will come out extremely uneven or broken.

Mental​ ​Model:​ ​Flashlight​ ​analogy


One​ ​way​ ​of​ ​visualizing​ ​how​ ​photos​ ​contribute​ ​to​ ​the​ ​final​ ​photogrammetry​ ​mesh​ ​is​ ​to​ ​imagine
using​ ​a​ ​flashlight​ ​in​ ​a​ ​pitch​ ​black​ ​room​ ​or​ ​environment.​ ​When​ ​you​ ​project​ ​a​ ​light​ ​on​ ​a​ ​subject,
there​ ​will​ ​be​ ​a​ ​lit​ ​area​ ​and​ ​a​ ​shadowy​ ​area​ ​where​ ​the​ ​flashlight’s​ ​light​ ​can’t​ ​reach.​ ​Similarly,​ ​for
any​ ​given​ ​camera​ ​angle,​ ​there​ ​are​ ​areas​ ​visible​ ​to​ ​the​ ​camera​ ​and​ ​areas​ ​that​ ​are​ ​not.​ ​The
flashlight-lit​ ​areas​ ​are​ ​analogous​ ​to​ ​the​ ​areas​ ​seen​ ​by​ ​the​ ​camera.​ ​To​ ​gather​ ​a​ ​complete​ ​set​ ​of
data​ ​of​ ​an​ ​area,​ ​you​ ​need​ ​enough​ ​angles​ ​to​ ​cover​ ​it​ ​so​ ​that​ ​there​ ​are​ ​no​ ​longer​ ​any​ ​“shadowy”
areas.

Individual​ ​“flashlight”​ ​angles.

All​ ​the​ ​“flashlight”​ ​angles​ ​stacked​ ​on​ ​top​ ​of​ ​each​ ​other.​ ​Most​ ​areas​ ​are​ ​now​ ​“lit”

Example​ ​screenshot​ ​of​ ​a​ ​point​ ​cloud​ ​resulting​ ​from​ ​a​ ​LIDAR​ ​scan​ ​from​ ​a​ ​single​ ​angle.​ ​Notice
the​ ​long​ ​“shadows”​ ​behind​ ​the​ ​bush​ ​wall​ ​and​ ​trees.​ ​Those​ ​areas​ ​have​ ​no​ ​data,​ ​and​ ​may​ ​either
be​ ​missing​ ​or​ ​completely​ ​flat​ ​and​ ​void​ ​of​ ​detail​ ​when​ ​a​ ​mesh​ ​is​ ​generated.

Shooting​ ​Video​ ​(Not​ ​Recommended):


Do​ ​not​ ​shoot​ ​using​ ​video​ ​cameras​ ​unless​ ​you​ ​know​ ​exactly​ ​what​ ​you​ ​are​ ​doing.

In essence, a video is a sequence of still images, so it can theoretically provide a very high number of overlapping frames: all you have to do is keep the camera rolling and walk around. Sounds like a great idea, right? Not exactly.

There are many factors that affect the usability of video frames for photogrammetry: sensor type, resolution, image noise, pixel binning, image scaling, aliasing, dynamic range, compression artifacts, and so on.

In general, a single frame from a video is far inferior to a photo in terms of image quality. Even frames from high-end 4K video cameras only provide 8.3 megapixels per frame. In comparison, still cameras can provide upwards of 42 megapixels per frame for a fraction of the price. Frames pulled from video also often have too much overlap and may waste processing time.
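The megapixel figures above follow directly from frame dimensions; for example (the 42 MP resolution below is a typical high-resolution full-frame sensor, not a camera named in this guide):

```python
def megapixels(width_px, height_px):
    """Pixel count of a frame, in megapixels."""
    return width_px * height_px / 1_000_000

# A UHD 4K video frame versus a 42 MP still (7952 x 5304).
print(round(megapixels(3840, 2160), 1))  # 8.3
print(round(megapixels(7952, 5304), 1))  # 42.2
```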

DSLR, mirrorless, and action cameras also typically exhibit the rolling shutter (or "jello effect") phenomenon in video mode. Rolling shutter will severely disrupt photogrammetry integrity (it theoretically causes misalignment and fake parallax). At the very minimum, use a video camera with a global shutter.

Ground​ ​Level​ ​Capture:​ ​360​ ​Approach
This technique can be quite difficult to pull off without a good rig, but it may result in more complete captures because you don't have to keep track of as many things (such as what you captured, from what angle, and at what height) compared to the "Normal-To" capture method described later.

At the time of writing, Reality Capture does not support spherically projected 360 images taken with 360 cameras. As a side note, image quality from consumer- to prosumer-grade 360 cameras is generally too low for high quality photogrammetry.

The "360" mentioned here instead means taking several images from the same spot, covering every angle, with some overlap between each photo. You then take multiple 360s at various points and heights around the entire environment. It may be harder to get a consistent level of detail along walls or long flat surfaces using this method, because the closer the mesh is to a 360 capture point, the higher its quality will be. If you have a rig for shooting 360 with multiple cameras simultaneously, this method becomes very easy to use. You just have to avoid capturing yourself or the rig as much as possible; anything captured will have to be masked out or it will end up in the photogrammetry model. Other nearby capture points can cover that missing information if you capture close enough (within 2 meters).

Ground​ ​Level​ ​Capture:​ ​Normal-To​ ​Approach

This method may be the fastest and most efficient approach, with moderate- to high-quality results. It's also more likely to produce consistent levels of detail because it involves moving parallel to significant features of the environment at a consistent distance, which also keeps the resulting texel density consistent.

Here​ ​is​ ​an​ ​example​ ​of​ ​how​ ​to​ ​take​ ​photos​ ​using​ ​this​ ​method.​ ​If​ ​you​ ​are​ ​in​ ​an​ ​enclosed​ ​space
with​ ​4​ ​walls,​ ​here​ ​are​ ​3​ ​general​ ​paths​​ ​that​ ​you​ ​should​ ​take​ ​to​ ​capture​ ​everything​ ​(in​ ​no
particular​ ​order):

1 - Shoot with the camera facing normal to the wall, at a distance where the frame fits the entire height of the wall. Move in a direction parallel to the wall, taking photos every few steps (or densely enough that more than half of each image overlaps with the previous one), all around the space in a full loop (clockwise or counter-clockwise) so that you end up where you started.

2​ ​-​ ​Have​ ​your​ ​back​ ​against​ ​the​ ​wall​ ​and​ ​follow​ ​the​ ​perimeter​ ​while​ ​pointing​ ​the​ ​camera​ ​at​ ​the
center​ ​of​ ​the​ ​space.

3 - Break down the ground into parallel strips of 3-5 meters and shoot along each strip, pointing the camera about 30-45 degrees down and perpendicular to your movement. Be sure to include some of the background and some of the immediate floor in front of you to capture its details. Keeping some of the background in the photo ensures that there's something for the photogrammetry software to feature-match against.

4 - For each path, keep the camera at a consistent height, then repeat all paths at a minimum of 2 other heights: one higher than all the objects in the environment, and one very close to the ground to capture the bottom surfaces of overhanging objects.

With all the photos from all the paths added up, they should provide a complete capture for the photogrammetry software to work with.
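For path 1, the distance at which the frame just fits the wall height follows from the lens's vertical field of view. A sketch (the wall height and FOV below are hypothetical):

```python
import math

def distance_to_fit_height(wall_height_m, vertical_fov_deg):
    """Camera distance at which a wall of the given height just fills
    the frame vertically, from basic pinhole-camera geometry."""
    half_angle = math.radians(vertical_fov_deg / 2)
    return (wall_height_m / 2) / math.tan(half_angle)

# A 3 m wall with a lens whose vertical FOV is 53 degrees (roughly a
# 24 mm lens on full frame) just fits the frame from about 3 m away.
print(round(distance_to_fit_height(3.0, 53.0), 1))  # 3.0
```

Walking the loop at that computed distance keeps both coverage and texel density consistent along the whole wall.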

Here​ ​are​ ​some​ ​examples​ ​of​ ​scenes​ ​using​ ​the​ ​Normal-To​ ​capture​ ​method:

Objects​ ​In​ ​The​ ​Middle​ ​of​ ​the​ ​Environment
For​ ​isolated​ ​objects​ ​of​ ​interest​ ​in​ ​the​ ​scene,​ ​shoot​ ​in​ ​a​ ​circular​ ​path​ ​around​ ​it.

Tall​ ​Objects​ ​Or​ ​Walls
For really tall walls that can't fit in a single frame, shoot pointing at the base of the wall with some ground in frame, then repeat the same path with the camera pointed higher, keeping about half of the previously captured vertical portion of wall in frame. Repeat until the whole wall is covered. Don't forget to also shoot at different heights.
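With each pass keeping about half of the previous band in frame, the number of passes needed for a given wall height can be estimated like this (the band height and overlap are assumed figures):

```python
import math

def vertical_passes(wall_height_m, band_height_m, overlap_fraction=0.5):
    """Number of horizontal passes to cover a tall wall when each pass
    images a band of `band_height_m` and keeps `overlap_fraction` of the
    previous band in frame."""
    advance = band_height_m * (1 - overlap_fraction)
    # The first pass covers a full band; each later pass adds `advance`.
    remaining = max(0.0, wall_height_m - band_height_m)
    return 1 + math.ceil(remaining / advance)

# A 10 m wall imaged in 3 m bands with 50% overlap takes 6 passes.
print(vertical_passes(10.0, 3.0))  # 6
```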

Offsetting​ ​Normal-To​ ​Paths
Remember to repeat the normal-to paths at various distances, from very close to very far, so that images can be aligned together more easily because there is more overlap.

Keeping​ ​Context​ ​In​ ​All​ ​Shots
When shooting very close to something that's part of a larger environmental capture, always keep some context in the background for the software to identify. Otherwise, the group of close-up photos will get "lost", and the software won't be able to link the component containing the object you took close-ups of with the component containing the rest of the environment.

Concave​ ​Corners​ ​Using​ ​The​ ​Normal-To​ ​Method


For capturing concave corners around the perimeter, shoot along the first wall until you reach the adjacent wall, then move along an arc, with the camera pointed at the spot where the two walls intersect, until you reach the first wall. Then shoot normal to the second wall and move along it. This method ensures that the photos from the two perpendicular paths can be connected to each other, and that the corner has enough parallax detail for the photogrammetry software to work with.

Drone​ ​Approach
Shooting the area with a drone from a top-down bird's-eye view may result in a model that's usable from an overview perspective. At most, the drone's top view of the ground may help align some of the photos together and will reduce the chance of getting separated components when aligning the photos in Reality Capture. Drone photos may also help fill in the tops of objects that can't be reached or captured by ground-level photography.

Be aware of the drone's camera quality. The best-case scenario is to fly a drone with the same camera and lens used for ground-level photogrammetry, for consistent results and quality. At the time of writing, the built-in cameras on drones such as DJI's Phantom series are generally not sufficient to produce detail that holds up at ground-level viewing.

Breaking​ ​Down​ ​Large​ ​Environments

Each​ ​coloured​ ​piece​ ​represents​ ​a​ ​mesh​ ​that’s​ ​exported​ ​from​ ​a​ ​separate​ ​Reality​ ​Capture​ ​project​ ​due​ ​to​ ​the​ ​2500​ ​photo​ ​limitation​ ​for
each​ ​project.

At the time of writing, the Promo license for Reality Capture only allows 2500 photos per project; this was the license we had while working on our project. If the environment you want to capture is large, and you want a level of detail that takes more than 2500 photos to cover the entire site, you may have to break the environment down into smaller areas by creating arbitrary perimeters. It's generally good to set these perimeters along visible seams in the environment, so that when you combine the individual photogrammetry meshes you can transition between them at those same seams, which looks more natural and less jarring. Be sure to reuse some of the same photos between projects at places where the meshes are meant to overlap; this greatly helps with alignment and gives more freedom in where to set seams.

Organization

File​ ​Structure
It's important to keep your files organized in any project. It's also a good idea to keep all your work in case you need to go back and revisit it.
Here's an example structure that we used for our project. Note that it's more complicated than usual due to the 2500-photo-per-project limitation in Reality Capture:

● [&lt;Project Name&gt; Folder]
    ○ [0-Photos Photogrammetry Folder]
        ■ [&lt;Area 1 Name&gt; Folder]
            ● [Original photos folder]
                ○ Raw photos
            ● Corrected photos
        ■ [&lt;Area 2 Name&gt; Folder]
        ■ [&lt;Area 3 Name&gt; Folder]
        ■ etc...
    ○ [1-Reality Capture Project Folder]
        ■ .rcproj files and their working folders
    ○ [2-Export Folder]
        ■ Initial exported model from Reality Capture + its textures (obj/fbx, png/jpg)
        ■ Simplified and edited models using 3DsMax (max, obj/fbx)
    ○ [3-Export Merging Folder]
        ■ Re-imported, re-textured, then re-exported models from Reality Capture
        ■ 3DsMax file that contains the aligned models from Reality Capture
    ○ [4-Final Export Folder]
        ■ Final exported model from the 3DsMax file containing all the final textures.
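A sketch of bootstrapping this skeleton with Python's standard library; the folder and area names mirror the example above and are otherwise arbitrary:

```python
import os

def create_project_skeleton(root, areas):
    """Create the numbered photogrammetry folder layout under `root`."""
    for area in areas:
        # Each area keeps its RAW originals in an "Original" subfolder.
        os.makedirs(os.path.join(root, "0-Photos Photogrammetry",
                                 area, "Original"), exist_ok=True)
    for sub in ("1-Reality Capture Project",
                "2-Export",
                "3-Export Merging",
                "4-Final Export"):
        os.makedirs(os.path.join(root, sub), exist_ok=True)

create_project_skeleton("MyProject", ["Area1", "Area2"])
print(os.path.isdir(os.path.join(
    "MyProject", "0-Photos Photogrammetry", "Area1", "Original")))  # True
```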

When​ ​you​ ​first​ ​copy​ ​your​ ​photos​ ​to​ ​your​ ​computer​ ​or​ ​hard​ ​drive,​ ​keep​ ​them​ ​in​ ​the​ ​same​ ​folder.

For​ ​the​ ​example​ ​above,​ ​all​ ​photos​ ​that​ ​will​ ​be​ ​used​ ​for​ ​photogrammetry​ ​are​ ​placed​ ​in​ ​the
“0-Photos​ ​Photogrammetry”​ ​folder.

Within the "0-Photos Photogrammetry" folder, photos are split into sub-folders based on physical location. This was done only because the Promo license limits each Reality Capture project to a maximum of 2500 photos.

Each location sub-folder contains the corrected photos directly, plus a folder called "Original" with the original RAW photos that Lightroom used to create the corrected ones.

Photo​ ​Corrections

Lightroom​ ​Workflow
Lightroom is a good tool for pre-processing photos for photogrammetry. The goal of pre-processing is to minimize artifacts and distortions, and thereby minimize the error that ends up in the photogrammetry models. In Lightroom you can do the following to ensure the photos are in the best condition when imported into Reality Capture:

● Reduce image noise
● Slightly increase image sharpness
● Balance out overall exposure throughout all photos
● Balance out white balance throughout all photos
● Remove​ ​lens​ ​distortion
● Remove​ ​lens​ ​vignetting
● Remove​ ​chromatic​ ​aberration
● Remove​ ​fringing
● Reduce​ ​blown​ ​out​ ​areas
● Increase​ ​brightness​ ​in​ ​dark​ ​areas

Here are some other tips for Lightroom, with screenshots included (from Lightroom 6 / CC 2015).

Metadata​ ​Filters
Using metadata filters in the Library view can help isolate photos taken with a specific lens or camera. This may be useful when applying lens corrections to specific lens and camera combinations. You can also filter by ISO to apply stronger noise reduction to higher-ISO photos.

Synchronizing​ ​White​ ​Balance


Before synchronizing white balance between photos, it needs to be set to a custom value; using Auto will give every photo a different white balance. At the time of writing, simply setting it to Custom is still not enough to make it synchronizable, so the white balance of the photo with the master settings needs to be adjusted by +1 or -1 to make it actually custom.

Synchronizing​ ​Lens​ ​Profile​ ​Corrections
At the time of writing, if you synchronize "Enable Profile Corrections" and other lens-correction settings, Lightroom will copy the wrong lens and camera profile onto photos that were taken with different equipment. Use the metadata filter to select only photos from the same combination of equipment.

Export​ ​Settings
These settings work for exporting photos from Lightroom for Reality Capture. Make sure to include only the metadata you need: if you include everything, Reality Capture may apply lens-distortion correction on top of the corrections you already made if it recognizes the EXIF data. GPS data, if included in the export, may be useful for alignment.

Initial​ ​Photogrammetry​ ​Processing​ ​Using​ ​Reality
Capture

Setup:​ ​Project​ ​Cache​ ​folder


It's critical that your cache folder is set to a location with a lot of free space. As a quick guide, a project using 2500 images may use up to 1 TB of cache space.
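That estimate works out to roughly 0.4 GB of cache per photo. A small standard-library sketch for checking a candidate cache drive before processing (the per-image figure is simply extrapolated from the 1 TB / 2500 images estimate above):

```python
import shutil

GB_PER_IMAGE = 1000 / 2500  # ~0.4 GB of cache per photo (rule of thumb)

def cache_space_needed_gb(n_images):
    """Estimated Reality Capture cache size for a project, in GB."""
    return n_images * GB_PER_IMAGE

def cache_drive_ok(cache_path, n_images):
    """Check whether the drive holding `cache_path` has enough free
    space for the estimated cache."""
    free_gb = shutil.disk_usage(cache_path).free / 1e9
    return free_gb >= cache_space_needed_gb(n_images)

# A full 2500-photo project needs on the order of 1 TB of cache.
print(cache_space_needed_gb(2500))  # 1000.0
```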

The best solution is to set a custom cache location on a drive that has enough free space. Setting the cache location to an internal SSD should greatly improve processing times; setting it to the same physical drive as the project files will decrease performance.

Warning: At the time of writing, setting the cache location to the Project Folder location does not work; it will still cache to the C drive. See Cache Drive for more details on increasing performance. You must restart Reality Capture after changing the cache location.

Aligning

Single​ ​Component​ ​Goal


The best-case scenario is to end up with one single component containing all the aligned cameras when using automatic alignment. Manual control points should only be used as a last resort, because they may cause cracks or offsets in the photogrammetry mesh, as seen below:

Set​ ​Distance​ ​Constraints

After camera alignment has run and before model reconstruction, you should set two control points and a distance constraint on one of the photos belonging to the main component. This is critical to ensure that texture generation/unwrapping works properly when using the "Texel Quality" modes.
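The distance constraint amounts to a uniform scale factor: if the reconstructed distance between the two control points (say, the ends of a meter stick photographed during capture) differs from the known real-world length, every coordinate scales by their ratio. A sketch of that idea (not Reality Capture's actual implementation):

```python
def scale_model(vertices, measured_len, real_len):
    """Uniformly rescale model vertices so that a reconstructed
    reference length matches its known real-world length."""
    s = real_len / measured_len
    return [(x * s, y * s, z * s) for (x, y, z) in vertices]

# If the meter stick came out as 0.8 units in the reconstruction,
# everything must be scaled by 1.25 to restore true meters.
verts = [(0.0, 0.0, 0.0), (0.8, 0.0, 0.0)]
print(scale_model(verts, measured_len=0.8, real_len=1.0))
```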

Manual​ ​Control​ ​Points


Manual control points should be avoided because manually placed points have a margin of error many times higher than automatically placed points.

If you have to use manual control points, use more rather than fewer. With more control points, as above (5 control points per photo, on 2 photos from each of the 2 components), the alignment of the model should be more accurate, with less chance of an offset like the one seen below:

The best way to avoid these problems is to take more photos of the area so that manual control points are never needed.

Tip: If the model has extreme height offsets like the above and really cannot be fixed in Reality Capture, try fixing it in a modeling tool that has a relax brush.

In 3DsMax: use the Relax/Soften brush, found in Poly Edit &gt; Graphite Tools &gt; Freeform Tools &gt; Paint Deform Tools &gt; Relax/Soften.

Downscale​ ​&​ ​Alignment​ ​Technique


In some situations Reality Capture may not be able to combine components even if they share many overlapping photos. Setting a very high downscale value may help, since it forces the program to look at larger features. If alignment is re-run without removing the existing components, Reality Capture will try to combine them. This also takes less time than running alignment for the first time.

Mesh​ ​Reconstruction

Set​ ​Reconstruction​ ​Region


Setting up a reconstruction region (after alignment is done) to crop out areas you know won't be used will save processing time. If you know specific areas will be replaced by models from other projects, leave a reasonable amount of model for overlap and cut off the rest.

Simplification

Defining​ ​Overall​ ​Triangle​ ​Budget


It's important to set an overall triangle budget for your VR scene to ensure a smooth framerate at runtime. At the time of writing this, a presentation computer with an NVIDIA GTX 1080 (high-end graphics card) should have a maximum of 2-2.5 million triangles in the scene. If your scene consists of multiple Reality Capture projects and models, then you have to distribute the triangle budget across them accordingly.
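To make the distribution concrete, here is a minimal Python sketch that splits a scene-wide triangle budget proportionally. The model names and weights are hypothetical examples, not values from the project:

```python
# Back-of-envelope split of a scene-wide triangle budget across several
# photogrammetry models, based on the ~2-2.5 million triangle ceiling
# mentioned above. Names and weights are hypothetical.

def distribute_triangle_budget(weights, total_budget=2_000_000):
    """Split a scene-wide triangle budget proportionally by weight."""
    total_weight = sum(weights.values())
    return {name: int(total_budget * w / total_weight)
            for name, w in weights.items()}

# Weight models by how prominent they are in the scene (hypothetical):
budgets = distribute_triangle_budget({
    "terrain": 5,      # large, always visible
    "lookout": 3,      # main point of interest
    "background": 2,   # distant filler geometry
})
print(budgets)  # {'terrain': 1000000, 'lookout': 600000, 'background': 400000}
```

The weights are only a starting point; models the user stands close to in VR usually deserve a larger share than distant filler.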

First​ ​Pass​ ​Simplification


Initial models generated from Reality Capture usually contain tens to hundreds of millions of triangles. Before texturing, it's a good idea to simplify the model down to 5 million or so triangles so that it processes faster and produces cleaner unwrapped textures. A model over 10 million triangles may be difficult to handle in modeling programs such as 3DsMax, Blender or Maya.

Original​ ​model​ ​from​ ​mesh​ ​generation:​ ​16​ ​million​ ​triangles.

First​ ​pass​ ​simplification​ ​from​ ​Reality​ ​Capture:​ ​1​ ​million​ ​triangles,​ ​minimal​ ​or​ ​no​ ​visual​ ​difference.

Reducing​ ​Project​ ​Size

If​ ​some​ ​components​ ​are​ ​absolutely​ ​obsolete,​ ​they​ ​can​ ​be​ ​deleted.​ ​Doing​ ​so​ ​will​ ​decrease​ ​the
project​ ​size​ ​if​ ​storage​ ​is​ ​a​ ​concern.

First​ ​Pass​ ​Texturing

Draft​ ​Textures
A first pass of texturing will be used to aid in simplification and trimming of models in programs like 3DsMax, and not for the final presentation. Therefore texture details don't need to be high. For a project with 2500 photos, use the "Maximal Textures" unwrap mode with a maximum of 16 4K textures to ensure that there won't be too many textures exported.

Exporting
Exporting as either FBX or OBJ works for 3DsMax to read. Generally I prefer to use FBX because it is an Autodesk format. Disable "Export texture alpha" if you are using PNGs. However, for this pass, just using JPG is fine.

Include​ ​Texture​ ​Data
When​ ​exporting​ ​the​ ​first​ ​pass​ ​model,​ ​make​ ​sure​ ​to​ ​also​ ​export​ ​textures​ ​by​ ​setting​ ​“Export
Textures”​ ​to​ ​True.​ ​Using​ ​JPGs​ ​is​ ​fine​ ​for​ ​this​ ​pass​ ​as​ ​quality​ ​is​ ​not​ ​a​ ​concern.

.rcinfo​ ​File​ ​and​ ​Grouping​ ​Files​ ​For​ ​Working​ ​Between​ ​Different​ ​Programs
The​ ​.rcinfo​ ​file​ ​helps​ ​Reality​ ​Capture​ ​align​ ​the​ ​mesh​ ​component​ ​with​ ​the​ ​already-aligned
cameras​ ​and​ ​is​ ​essential​ ​for​ ​re-importing​ ​a​ ​modified​ ​mesh​ ​back​ ​into​ ​Reality​ ​Capture.​ ​Make​ ​sure
“Export​ ​Info​ ​File”​ ​is​ ​set​ ​to​ ​“True”​ ​when​ ​exporting​ ​from​ ​Reality​ ​Capture.

Before you import the model with Reality Capture, make sure you duplicate the .rcinfo file that came with the first export and rename it to match the name of the model you exported from your mesh editing program.

For​ ​example,​ ​a​ ​model​ ​with​ ​the​ ​name:


Modelname.fbx

Should​ ​have​ ​an​ ​accompanying​ ​.rcinfo​ ​file​ ​named:


Modelname.fbx.rcinfo

If​ ​that​ ​file​ ​is​ ​not​ ​present​ ​when​ ​you​ ​import​ ​a​ ​model,​ ​Reality​ ​Capture​ ​will​ ​give​ ​you​ ​a​ ​message
about​ ​missing​ ​an​ ​.rcinfo​ ​file.
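The duplicate-and-rename step can be scripted. Below is a minimal Python sketch following the naming convention above; the file names are hypothetical examples:

```python
# Sketch: duplicate the original .rcinfo file so it matches the name of a
# newly exported model, following the "Modelname.fbx.rcinfo" convention
# described above. File names here are hypothetical.
import shutil
from pathlib import Path

def copy_rcinfo(original_model: str, new_model: str) -> Path:
    """Copy e.g. Modelname.fbx.rcinfo to NewModel.fbx.rcinfo."""
    src = Path(original_model + ".rcinfo")
    dst = Path(new_model + ".rcinfo")
    shutil.copyfile(src, dst)
    return dst

# Usage (hypothetical file names):
# copy_rcinfo("Modelname.fbx", "Modelname_simplified.fbx")
```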

Always keep the textures and the .rcinfo file alongside the model in the same folder. Keep each project's model export in a separate folder.

Cleanup​ ​With​ ​3DsMax


For​ ​use​ ​in​ ​real-time​ ​engines​ ​and​ ​for​ ​general​ ​use,​ ​it’s​ ​a​ ​good​ ​idea​ ​to​ ​simplify/decimate​ ​before
texturing.​ ​A​ ​single​ ​model​ ​of​ ​1​ ​million​ ​triangles​ ​should​ ​be​ ​the​ ​max​ ​limit​ ​for​ ​texturing.​ ​Any​ ​more
triangles​ ​may​ ​make​ ​it​ ​difficult​ ​to​ ​unwrap​ ​cleanly,​ ​take​ ​too​ ​long​ ​to​ ​process,​ ​or​ ​cause​ ​memory
problems.​ ​The​ ​reason​ ​for​ ​simplifying​ ​the​ ​first-pass​ ​model​ ​to​ ​5-10​ ​million​ ​triangles​ ​within​ ​Reality
Capture​ ​is​ ​to​ ​make​ ​it​ ​easier​ ​to​ ​handle​ ​in​ ​3DsMax​ ​while​ ​retaining​ ​the​ ​detail,​ ​as​ ​Reality​ ​Capture’s
sweeping​ ​simplification​ ​feature​ ​is​ ​inferior​ ​to​ ​manually​ ​tweaked​ ​and​ ​targeted​ ​simplification​ ​in
mesh​ ​modeling​ ​programs.

Removing​ ​large,​ ​unused​ ​chunks​ ​of​ ​the​ ​mesh​ ​will​ ​greatly​ ​decrease​ ​the​ ​texture​ ​count,​ ​and​ ​help
conserve​ ​the​ ​precious​ ​capacity​ ​of​ ​the​ ​GPU’s​ ​VRAM.

Importing​ ​into​ ​3DsMax​ ​from​ ​Reality​ ​Capture

Units​ ​Setup​ ​+​ ​System​ ​Units​ ​setup​ ​For​ ​3DsMax


Before​ ​importing​ ​a​ ​model​ ​from​ ​Reality​ ​Capture,​ ​you​ ​must​ ​use​ ​these​ ​units​ ​settings​ ​to​ ​ensure
that​ ​the​ ​model​ ​is​ ​imported​ ​and​ ​exported​ ​at​ ​the​ ​correct​ ​scale:

Display​ ​Unit​ ​Scale​:​ ​Meters


System​ ​Unit​ ​Scale​:​ ​1​ ​Unit​ ​=​ ​1.0​ ​Meters

FBX,​ ​OBJ​ ​and​ ​Texture​ ​Assignment​ ​Issues​ ​For​ ​3DsMax
Exporting as FBX seems to provide the correct orientation when first imported into 3DsMax. If exporting as OBJ, one of the axes may be flipped, so uncheck the "Flip ZY axis" setting in the OBJ import settings in 3DsMax.

For 3DsMax 2016, importing FBX files works without problems. Using ASCII format for FBX seems to import faster than Binary format.

For 3DsMax 2018 (and possibly 2017), when Multi-Tile materials are automatically created upon importing FBX files, the mapping gets messed up and texture assignments get corrupted. As a workaround, export OBJ from Reality Capture and import the OBJ instead.

Left: Reality Capture export settings. Right: 3DsMax 2018 OBJ import settings.

Trimming

Enable​ ​Viewport​ ​Textures

It's useful to have Viewport Textures enabled in 3DsMax to aid in identifying features in the model for trimming and fixing. Check the following to ensure that textures show:
- The viewport is configured to show textures
- The viewport shading mode includes the display of textures
- The object's properties are set to show textures
- The material applied to the object has texture previews enabled

Using​ ​“Consistent​ ​Colors”​ ​shading​ ​mode​ ​may​ ​be​ ​the​ ​best​ ​way​ ​to​ ​see​ ​the​ ​textures.​ ​You​ ​may​ ​also
want​ ​to​ ​turn​ ​off​ ​viewport​ ​Ambient​ ​Occlusion​ ​and​ ​viewport​ ​Shadows.

Using​ ​Mesh​ ​or​ ​Poly​ ​Editing​ ​Modes

It’s​ ​best​ ​to​ ​edit​ ​the​ ​photogrammetry​ ​mesh​ ​in​ ​Mesh​ ​or​ ​Poly​ ​mode.​ ​Other​ ​modes​ ​may​ ​be​ ​too​ ​slow
to​ ​operate​ ​on.​ ​Poly​ ​mode​ ​provides​ ​more​ ​tools​ ​for​ ​fixing​ ​and​ ​patching​ ​the​ ​mesh​ ​than​ ​Mesh​ ​mode
but​ ​may​ ​be​ ​less​ ​performant.

For Poly edit mode, these tools and features may be useful for cleanup and fixing:
- Selection modes: Vertex, Edge, Border, Face, Element
- Cap:​ ​Fills​ ​up​ ​a​ ​hole​ ​after​ ​selecting​ ​the​ ​bordering​ ​edges
- Bridge:​ ​Useful​ ​for​ ​breaking​ ​a​ ​large​ ​hole​ ​into​ ​smaller​ ​holes
- Select​ ​By​ ​Angle:​ ​Selecting​ ​adjacent​ ​similarly-angled​ ​surfaces
- Ignore​ ​Backfacing:​ ​Prevents​ ​selecting​ ​a​ ​face​ ​when​ ​the​ ​back​ ​is​ ​facing​ ​you
- Soft​ ​Selection:​ ​Manipulations​ ​to​ ​selection​ ​will​ ​also​ ​affect​ ​its​ ​adjacent​ ​vertices
- Relax​ ​Brush:​ ​Smoothing​ ​out​ ​jagged​ ​edges​ ​or​ ​offset​ ​surfaces
- Shrink​ ​and​ ​grow​ ​selection
- Detach:​ ​Split​ ​mesh​ ​to​ ​separate​ ​objects​ ​so​ ​you​ ​can​ ​have​ ​different​ ​ProOptimizer​ ​settings
- Graphite​ ​Tools:​ ​Collection​ ​of​ ​useful​ ​modelling​ ​tools
- Weld​ ​Vertices:​ ​Can​ ​also​ ​fill​ ​holes
- Remove​ ​Isolated​ ​Vertices:​ ​General​ ​cleanup

Enabling​ ​Edged​ ​Faces


Enabling Edged Faces in the viewport can sometimes help as a visual aid when cleaning up the mesh or looking at its triangle density.

Using​ ​The​ ​Lasso​ ​Selection​ ​Tool

Using​ ​the​ ​Lasso​ ​Selection​ ​Tool​ ​in​ ​3DsMax​ ​may​ ​be​ ​helpful​ ​in​ ​selecting​ ​more​ ​organic​ ​shapes
than​ ​using​ ​the​ ​rectangular​ ​selection​ ​tool.

Auto​ ​Window/Crossing​ ​By​ ​Direction

Enabling this feature in 3DsMax makes selection behave more like a traditional CAD tool. When making a selection, clicking and dragging towards the left will select anything that the selection bounds touch. Clicking and dragging towards the right will select only what completely fits within the selection bounds.

Enabling​ ​Viewport​ ​Stats

Enabling Viewport Stats helps during trimming and optimization so you can keep track of how many triangles are being reduced, or how many triangles make up certain areas.

Visualizing​ ​Problematic​ ​Parts​ ​of​ ​Mesh​ ​in​ ​3DsMax


xView in the 3DsMax viewport may be able to reveal problems with the mesh, such as overlapping edges and incorrect face orientations, to help with troubleshooting.

Simplification
Because​ ​modern​ ​GPUs​ ​are​ ​still​ ​unable​ ​to​ ​render​ ​large​ ​amounts​ ​of​ ​textured​ ​triangles​ ​in​ ​VR​ ​at​ ​an
acceptable​ ​framerate,​ ​it’s​ ​critical​ ​to​ ​simplify​ ​the​ ​mesh​ ​so​ ​that​ ​it​ ​can​ ​run​ ​smoothly.​ ​Textures​ ​can
make​ ​up​ ​for​ ​lost​ ​detail​ ​in​ ​the​ ​mesh,​ ​and​ ​generally​ ​you​ ​won’t​ ​notice​ ​much​ ​difference​ ​on​ ​a
simplified​ ​model​ ​if​ ​the​ ​texture​ ​quality​ ​is​ ​high​ ​enough.

Over​ ​Simplification​ ​May​ ​Affect​ ​Texture​ ​Quality
When the first model is generated in Reality Capture, a depth map is made internally. The depth map is what helps Reality Capture map textures onto the model when texturization is run. When a simplified model is re-imported and texturization is re-run, it is still based on the original depth map. This means that if any given triangle of the simplified model has shifted away from the position of the original non-simplified mesh, the texture will turn out blurrier due to blending and interpolation. See below for a comparison:

High​ ​poly​ ​mesh​ ​with​ ​high​ ​quality​ ​texturing

Over-simplified​ ​mesh​ ​with​ ​same​ ​high​ ​quality​ ​texturing.​ ​See​ ​how​ ​textures​ ​have​ ​become​ ​blurry​ ​due​ ​to​ ​the​ ​geometry​ ​shifting​ ​too​ ​much.

ProOptimizer

Comparison​ ​of​ ​a​ ​part​ ​of​ ​the​ ​model​ ​that​ ​is​ ​optimized​ ​with​ ​ProOptimizer.​ ​Left:​ ​Original​ ​model​ ​with​ ​832,947​ ​triangles.​ ​Right:​ ​Optimized
model​ ​with​ ​172,065​ ​triangles.

The​ ​ProOptimizer​ ​modifier​ ​in​ ​3DsMax​ ​is​ ​extremely​ ​helpful​ ​in​ ​reducing​ ​the​ ​number​ ​of​ ​triangles​ ​in
the​ ​mesh.​ ​Compared​ ​to​ ​the​ ​simplification​ ​tool​ ​in​ ​Reality​ ​Capture,​ ​this​ ​does​ ​a​ ​much​ ​better​ ​job​ ​of
reducing​ ​the​ ​mesh​ ​triangles​ ​and​ ​retaining​ ​its​ ​original​ ​shape.

Before calculating the ProOptimizer, setting the Optimizer Mode to "Exclude Borders" will make sure that the outer edge of the mesh is not affected. This is very useful when the model is split into separate pieces and each piece has the ProOptimizer modifier applied to it. The borders of all the meshes will stay in place and can be merged back together afterwards.

Detaching​ ​The​ ​Mesh​ ​To​ ​Apply​ ​Different​ ​ProOptimizer​ ​Settings

Sometimes your mesh may have many areas that need special attention and a different set of ProOptimizer settings. To apply different settings, select the part that you want separated in Mesh or Poly edit (in Face selection mode) and use Detach to Object. The detached object can then have a new ProOptimizer modifier applied to it.

Caution: It's critical that you use the "Attach" tool in Mesh or Poly edit after you finish running ProOptimizer on the different meshes to combine them all back together. You will also have to select all the vertices and run "Weld" with the lowest tolerance possible (e.g. 0.0001) to make sure that all overlapping vertices between the different mesh parts truly become one piece. A single component in Reality Capture should only have one model in it. If you texture parts of one mesh separately, the textures may not match up.

Signs​ ​of​ ​a​ ​good​ ​unwrap,​ ​where​ ​large​ ​continuous​ ​pieces​ ​of​ ​textures​ ​are​ ​present.

Exporting​ ​from​ ​3DsMax​ ​Back​ ​to​ ​Reality​ ​Capture


Make sure the model inside the mesh editing program has the exact same transforms as when it came in. If you didn't move, rotate, or scale the model while working on it, then it should be fine.

If you encounter import errors going from a 3DsMax FBX into Reality Capture, try unchecking all the FBX export settings. Adding the ProOptimizer modifier to the mesh first (it doesn't need to reduce anything, but it needs to be "calculated") and then converting it to a mesh or poly again may also fix importing issues.

Warning: If multiple meshes are exported together from 3DsMax as an FBX file, Reality Capture may only import one of them. The OBJ format may support exporting multiple objects together, which may then all be importable by Reality Capture at the same time.

Photogrammetry​ ​Re-Processing​ ​With​ ​Reality
Capture

Importing

.rcinfo​ ​File
Once​ ​again​ ​make​ ​sure​ ​the​ ​.rcinfo​ ​file​ ​is​ ​included​ ​with​ ​the​ ​model​ ​that​ ​you​ ​are​ ​about​ ​to​ ​import.

Import​ ​to​ ​Same​ ​Project


Make​ ​sure​ ​to​ ​import​ ​the​ ​simplified​ ​mesh​ ​into​ ​the​ ​same​ ​project​ ​that​ ​you​ ​exported​ ​the​ ​first​ ​pass
mesh​ ​from.​ ​You​ ​don’t​ ​need​ ​to​ ​delete​ ​any​ ​components​ ​or​ ​meshes​ ​in​ ​the​ ​project​ ​before​ ​importing.

Invalid​ ​Function​ ​Call


The project folder can easily end up missing files if transferred to another computer or moved around. Make sure all files are copied over, or make sure the project is never moved once created. If it must be moved, try to use "Save As" in Reality Capture. One instance of the invalid function call error showing up when importing a file back into Reality Capture is when some project files are missing and the camera alignment locations don't show in the 3D view.

Texturing

Defining​ ​the​ ​Texture​ ​Budget


Before you start the final texturing process, it's good to know what GPUs will be running the VR experience. The number of textures that you can use may be limited by the VRAM capacity of the GPUs. As a quick reference, a 4K texture file may take up to 64MB of VRAM at runtime. This means if you are using a GPU with about 8GB of VRAM, then you should only have about 100 4K textures in the scene at any given moment. Also keep in mind that one 4K texture takes up the same amount of space as four 2K textures. Meshes also take up space in VRAM. If you are using texture streaming technology, then these texture limits don't apply.
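The arithmetic behind the ~100-texture figure can be sketched as follows, assuming uncompressed 32-bit RGBA textures without mipmaps (actual usage varies with compression and mip chains, and the 80% headroom factor is an assumption):

```python
# Rough VRAM budget check behind the "~100 4K textures on an 8 GB GPU"
# guideline above. Assumes uncompressed 32-bit RGBA with no mipmaps;
# real usage varies with compression and mip chains.

def texture_vram_mb(size_px: int, bytes_per_pixel: int = 4) -> float:
    """VRAM footprint of one square texture, in MB."""
    return size_px * size_px * bytes_per_pixel / (1024 * 1024)

def max_textures(vram_gb: float, size_px: int, headroom: float = 0.8) -> int:
    """How many textures fit, reserving VRAM for meshes and buffers.

    The 0.8 headroom factor is an assumption, not a measured value.
    """
    usable_mb = vram_gb * 1024 * headroom
    return int(usable_mb // texture_vram_mb(size_px))

print(texture_vram_mb(4096))  # 64.0 MB per 4K texture
print(max_textures(8, 4096))  # 102, i.e. roughly 100 on an 8 GB card
# One 4K texture costs the same as four 2K textures:
print(texture_vram_mb(4096) == 4 * texture_vram_mb(2048))  # True
```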

Setting​ ​some​ ​photos​ ​to​ ​not​ ​texture


If you know some photos should not be used for texturing due to poor lighting or colour inconsistencies, select them with the camera lasso tool in Reality Capture, and in the details panel set "Enable Texturing and Coloring" to "false".

Unwrapping​ ​Tool
Instead​ ​of​ ​running​ ​the​ ​Texturing​ ​feature​ ​in​ ​Reality​ ​Capture​ ​directly,​ ​it’s​ ​generally​ ​better​ ​to​ ​run
the​ ​Unwrap​ ​first​ ​using​ ​the​ ​Unwrap​ ​Tool​ ​so​ ​that​ ​you​ ​have​ ​more​ ​control​ ​over​ ​how​ ​the​ ​textures​ ​are
mapped.

Caution: The Unwrap tool's settings are in the Unwrap Tool's settings rollout, and not in the Reconstruction settings' default texture unwrap settings rollout, even though they look the same.

The settings below have generally worked well for our project so far:

Gutter​ ​size:​ ​1
Texture​ ​Size:​ ​4K

Use​ ​“Render”​ ​or​ ​“Sweet”​ ​mesh​ ​view​ ​to​ ​preview​ ​texel​ ​quality​ ​by​ ​looking​ ​at​ ​checkerboard​ ​texture.
One​ ​black​ ​or​ ​white​ ​square​ ​should​ ​theoretically​ ​represent​ ​one​ ​texel.

Large​ ​Triangle​ ​Removal​ ​Threshold

This setting determines whether or not large triangles will be textured. If set to a high number like 500, then all triangles should be textured.

Texture​ ​Quality

The Texture Quality % in the stats of a textured or unwrapped component indicates how much of the imported photos' resolution is utilized for the given area being textured. A higher number means more of the photos' pixels are being used in the model's unwrapped textures. If the % is at 100, then the texture resolution won't increase on the model even if you texture with a higher quantity of texture maps or a denser texel size.

Optimal​ ​Texel​ ​size​ ​tells​ ​you​ ​the​ ​upper​ ​limit​ ​to​ ​theoretically​ ​reach​ ​100%​ ​texture​ ​quality​ ​/
utilization.

Optimal texel size is (for some odd reason) affected by the scale of the model. It's critical to set the world scale to match the real world. Make sure to use the distance constraint with a real-world value so that the optimal texel size gets adjusted to a reasonable number, which seems to usually be under 0.00xxxxx.

If​ ​the​ ​scale​ ​is​ ​set​ ​wrong,​ ​the​ ​Optimal​ ​Texel​ ​Size​ ​may​ ​be​ ​quite​ ​high,​ ​such​ ​as​ ​0.5.​ ​It​ ​also​ ​may
cause​ ​the​ ​unwrap​ ​to​ ​complete​ ​in​ ​a​ ​few​ ​seconds​ ​and​ ​results​ ​in​ ​an​ ​extremely​ ​low​ ​resolution
texture.

Maximal​ ​Textures​ ​Count


Texturizing with Maximal Textures Count will build as many textures as you specify, or stop earlier once the texel density reaches its calculated optimal texel density. For example, if you set it to a maximum of 50 textures, it may not build 50 textures because at 20 textures it has already reached its optimal texel density.

Theoretically, that should also mean that if you build textures using the Fixed Texel Size setting and set it to the optimal value, the number of textures produced would also be 20, in this example.
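As a back-of-envelope check, the number of texture pages implied by a given texel size can be estimated from the mesh's surface area. This sketch ignores gutter and packing waste, so real counts from Reality Capture will be somewhat higher; the area and texel size values are hypothetical:

```python
# Rough estimate of how many unwrapped texture pages a mesh needs for a
# given texel size. Ignores gutter and packing waste, so real counts
# will be somewhat higher. Input values are hypothetical.
import math

def estimate_texture_count(surface_area_m2: float,
                           texel_size_m: float,
                           texture_px: int = 4096) -> int:
    """Texels needed to cover the surface, divided by texels per page."""
    texels_needed = surface_area_m2 / (texel_size_m ** 2)
    texels_per_page = texture_px ** 2
    return math.ceil(texels_needed / texels_per_page)

# E.g. 500 m^2 of surface at a 2 mm optimal texel size:
print(estimate_texture_count(500.0, 0.002))  # 8 pages of 4K
# Doubling the texel size quarters the texel count:
print(estimate_texture_count(500.0, 0.004))  # 2 pages of 4K
```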

Fixed​ ​Texel​ ​Size


Fixed Texel Size unwrapping is the most straightforward and blunt way of unwrapping a model. It may cause a lot of wasted space, resulting in a higher texture count than necessary. For example, an area may be unwrapped to hold 2000 texels, but the photos of that area may only be able to provide 20 texels worth of data. Those 20 texels will be stretched out and interpolated across the 2000 texels. This is similar to taking a low-res image and scaling it way up: it gives you more pixels, but it doesn't give you a higher image resolution.

Adaptive​ ​Texel​ ​Size
Theoretically, Adaptive Texel Size is supposed to prevent wasting texels in places where the original photos can't provide that detail anyway.

You can set the Minimum Texel Size to equal the Optimal Texel Size, and the Maximal Texel Size to equal [Optimal Texel Size × 10]. This will build textures at the highest possible resolution in areas where the photos can provide it, and at lower resolutions in areas where the photos can't.

After many tests with this mode, it appears to not work as well as it should, or is very difficult to control. If the range between Minimum Texel Size and Maximal Texel Size is set too wide, it may not texture any parts of the mesh at the quality matching the Maximal Texel Size. If the range is set too narrow, then too many parts that should be lower quality than the Minimum Texel Size end up wasting texture space.

Exporting

Final​ ​Export​ ​for​ ​Unity​ ​or​ ​Final​ ​Project​ ​Alignment


Do​ ​not​ ​export​ ​PNGs​ ​with​ ​transparent​ ​textures,​ ​as​ ​it​ ​may​ ​create​ ​an​ ​ugly​ ​white​ ​glow​ ​effect​ ​around
all​ ​edges​ ​of​ ​the​ ​mesh​ ​when​ ​viewing​ ​in​ ​Unity​ ​at​ ​a​ ​distance​ ​or​ ​in​ ​a​ ​3D​ ​renderer.

Aligning​ ​And​ ​Merging​ ​Individual​ ​Meshes​ ​In​ ​3DsMax

Importing
Remember​ ​to​ ​import​ ​the​ ​model​ ​with​ ​the​ ​same​ ​unit​ ​settings​ ​as​ ​the​ ​first​ ​time​.

Aligning
At​ ​the​ ​time​ ​of​ ​writing,​ ​up​ ​to​ ​the​ ​2018​ ​version​ ​of​ ​3DsMax,​ ​there’s​ ​still​ ​no​ ​easy​ ​way​ ​to
automatically​ ​align​ ​different​ ​photogrammetry​ ​meshes​ ​together.​ ​There​ ​may​ ​be​ ​plugins​ ​that​ ​can
help​ ​with​ ​alignment​ ​for​ ​3DsMax​ ​and​ ​Blender.

Below​ ​outlines​ ​the​ ​steps​ ​to​ ​manually​ ​align​ ​individual​ ​photogrammetry​ ​meshes​ ​together​ ​in​ ​the
default​ ​version​ ​of​ ​3DsMax.

1​ ​-​ ​Bring​ ​the​ ​photogrammetry​ ​meshes​ ​that​ ​you​ ​want​ ​to​ ​align​ ​into​ ​the​ ​same​ ​3DsMax​ ​scene.​ ​They
should​ ​each​ ​have​ ​repeating​ ​geometry​ ​at​ ​the​ ​place​ ​of​ ​overlap​ ​to​ ​help​ ​with​ ​alignment.​ ​The​ ​larger
the​ ​overlap​ ​distance,​ ​the​ ​more​ ​accurate​ ​the​ ​alignment​ ​will​ ​be.

2 - Find two points that exist on both meshes. Ideally, the farther apart they are, the more accurate the alignment will be. Also pick points where there is more geometric detail and more photogrammetry model accuracy. For this example we will call the turquoise coloured mesh "Mesh A", and the dark orange mesh "Mesh B". The circled point on the left will be "Point A", and the circled point on the right will be "Point B".

3​ ​-​ ​Activate​ ​3D​ ​snap​ ​and​ ​make​ ​it​ ​only​ ​snap​ ​to​ ​vertices.

4 - Toggle on creation of the "Tape" helper object

5 - Create the Tape helper object by clicking and dragging, starting from Point A of Mesh A to Point B of the same mesh. The first creation does not have to be exact, as you can zoom in and use the transform tool to move either point to the exact vertex you prefer. Make sure the corresponding point exists on Mesh B as well.

6​ ​-​ ​Move​ ​the​ ​target​ ​of​ ​the​ ​Tape​ ​helper​ ​object​ ​to​ ​Point​ ​B​ ​of​ ​Mesh​ ​A.​ ​Make​ ​sure​ ​the
corresponding​ ​point​ ​exists​ ​on​ ​Mesh​ ​B​ ​as​ ​well.​ ​When​ ​completed,​ ​the​ ​Tape​ ​object​ ​should​ ​look
like​ ​the​ ​shot​ ​below:

7​ ​-​ ​Repeat​ ​steps​ ​5​ ​and​ ​6​ ​to​ ​create​ ​another​ ​Tape​ ​object​ ​from​ ​Point​ ​A​ ​to​ ​B​ ​for​ ​Mesh​ ​B.

8​ ​-​ ​Use​ ​the​ ​Select​ ​and​ ​Link​ ​tool​ ​to​ ​link​ ​Mesh​ ​A​ ​to​ ​the​ ​Tape​ ​object​ ​for​ ​Mesh​ ​A.​ ​While​ ​the​ ​tool​ ​is
active,​ ​drag​ ​from​ ​the​ ​Mesh​ ​to​ ​the​ ​Tape​ ​object​ ​(not​ ​the​ ​target​ ​object).​ ​If​ ​you​ ​succeeded,​ ​the
mesh​ ​should​ ​move​ ​and​ ​rotate​ ​with​ ​the​ ​Tape​ ​object​ ​if​ ​you​ ​move​ ​the​ ​Tape​ ​object​ ​around.

9​ ​-​ ​Activate​ ​“Affect​ ​Pivot​ ​Only”​ ​while​ ​Mesh​ ​A​ ​is​ ​selected​ ​to​ ​manipulate​ ​Mesh​ ​A’s​ ​pivot​ ​point.

10​ ​-​ ​While​ ​“Affect​ ​Pivot​ ​Only”​ ​is​ ​activated​ ​and​ ​while​ ​Mesh​ ​A​ ​is​ ​still​ ​selected,​ ​use​ ​the​ ​Tools​ ​>
Align​ ​>​ ​Align...​ ​tool​ ​to​ ​align​ ​the​ ​pivot​ ​point​ ​of​ ​Mesh​ ​A​ ​to​ ​the​ ​pivot​ ​point​ ​of​ ​the​ ​Tape​ ​object​ ​on
Mesh​ ​A.​ ​This​ ​will​ ​help​ ​with​ ​scaling​ ​the​ ​mesh​ ​later​ ​on.

11 - With the Tape object for Mesh A selected, select the Tools > Align > Align... tool and then click on the Tape object for Mesh B. Also align Tape A's target object to Tape B's target object. If done correctly, it should look similar to the shot below, where the meshes are overlapping and aligned at the correct angles, but the scale may be off:

12​ ​-​ ​Select​ ​Mesh​ ​A

13​ ​-​ ​Select​ ​the​ ​Scale​ ​transform​ ​tool​ ​and​ ​make​ ​sure​ ​the​ ​scale​ ​mode​ ​is​ ​set​ ​to​ ​uniform.​ ​Use​ ​the
Scale​ ​transform​ ​gizmo​ ​to​ ​scale​ ​Mesh​ ​A​ ​uniformly​ ​until​ ​it​ ​matches​ ​and​ ​properly​ ​overlaps​ ​Mesh​ ​B.
Below​ ​is​ ​a​ ​comparison​ ​between​ ​before​ ​scaling​ ​and​ ​after​ ​scaling:

Before​ ​scaling​ ​Mesh​ ​A​ ​to​ ​match​ ​Mesh​ ​B

After​ ​scaling​ ​Mesh​ ​A​ ​to​ ​match​ ​Mesh​ ​B
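The uniform scale step can also be computed rather than eyeballed: the scale factor is simply the tape length measured on Mesh B divided by the same tape length measured on Mesh A. A minimal sketch (the coordinates are illustrative, not from the project):

```python
import math

def uniform_scale_factor(tape_a_start, tape_a_end, tape_b_start, tape_b_end):
    """Uniform scale that makes Tape A's measured span match Tape B's."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return dist(tape_b_start, tape_b_end) / dist(tape_a_start, tape_a_end)

# Mesh A's tape spans 2 units, Mesh B's tape spans 5 units of the same feature:
factor = uniform_scale_factor((0, 0, 0), (2, 0, 0), (10, 0, 0), (10, 5, 0))
print(factor)  # 2.5
```

Typing the computed factor into the uniform-scale field gives the same result as dragging the Scale gizmo, but with no eyeballing error.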

Zeroing​ ​All​ ​Pivot​ ​Points


It is beneficial to set all of the meshes' pivot point positions to the same point, such as 0,0,0, and
their rotations to 0,0,0, so that if the pieces are accidentally shifted in the future, they can
easily be moved back into position. If you also want to set the scale to 1,1,1, you may have to use
the Reset XForm utility in 3DsMax.

Trimming

After​ ​all​ ​photogrammetry​ ​meshes​ ​are​ ​aligned​ ​together,​ ​the​ ​inferior​ ​parts​ ​of​ ​the​ ​overlapping
mesh​ ​can​ ​be​ ​trimmed​ ​away.​ ​Use​ ​the​ ​Mesh​ ​or​ ​Poly​ ​edit​ ​mode​ ​to​ ​select​ ​unwanted​ ​faces​ ​or
vertices​ ​and​ ​delete​ ​them.

Reconciling

The​ ​soft​ ​selection​ ​feature​ ​in​ ​Mesh​ ​or​ ​Poly​ ​edit​ ​mode​ ​can​ ​be​ ​useful​ ​to​ ​adjust​ ​the​ ​edges​ ​of​ ​the
overlapping​ ​meshes​ ​so​ ​that​ ​they​ ​can​ ​blend​ ​together​ ​better.

LOD​ ​Generation
Optionally, LODs (Levels of Detail) of the photogrammetry meshes can be created so that the
scene is better optimized for viewing in VR on lower-end hardware, or if the scene exceeds the
limits of the VR presentation computer. Creation of LODs starts in 3DsMax (or your mesh editing
software).


At the time of writing, Unity 5.6 can only handle meshes with a maximum of 65535 vertices. If
you import a mesh with more than 65535 vertices, Unity will automatically split it into
separate pieces by arbitrarily picking out random triangles throughout the mesh. This is
undesirable because occlusion culling, another form of optimization, will be less effective, and
it may also cause more texture draw calls. It is therefore better to manually split the mesh into
pieces of fewer than 65535 vertices each, in a grid-like fashion throughout the scene. Since the
following tutorial covers how to set up meshes for Unity's LOD system, it's recommended that
you understand how Unity's LOD system works, or read Unity's official documentation on
LODs, before proceeding.

Having small pieces in the general shape of a square tile will also help with optimization via
frustum culling.
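The grid-like split can be sketched as bucketing triangles into XZ grid cells by centroid and then re-indexing each cell's vertices. This is a simplified illustration (it ignores UVs and materials), not 3DsMax's or Unity's actual splitting logic:

```python
from collections import defaultdict

MAX_VERTS = 65535  # Unity 5.6 per-mesh vertex limit

def split_into_grid(vertices, triangles, cell_size):
    """Bucket triangles into XZ grid cells by centroid so each resulting
    piece stays under the vertex limit (cell_size is found by trial)."""
    cells = defaultdict(list)
    for tri in triangles:  # tri = (i0, i1, i2) vertex indices
        cx = sum(vertices[i][0] for i in tri) / 3.0
        cz = sum(vertices[i][2] for i in tri) / 3.0
        key = (int(cx // cell_size), int(cz // cell_size))
        cells[key].append(tri)
    pieces = []
    for tris in cells.values():
        used = sorted({i for t in tris for i in t})
        remap = {old: new for new, old in enumerate(used)}
        piece_verts = [vertices[i] for i in used]
        piece_tris = [tuple(remap[i] for i in t) for t in tris]
        assert len(piece_verts) <= MAX_VERTS
        pieces.append((piece_verts, piece_tris))
    return pieces

pieces = split_into_grid(
    [(0, 0, 0), (1, 0, 0), (0, 0, 1), (30, 0, 0), (31, 0, 0), (30, 0, 1)],
    [(0, 1, 2), (3, 4, 5)],
    cell_size=10.0,
)
print(len(pieces))  # 2 pieces: the two triangles fall in different cells
```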

Use​ ​the​ ​Viewport​ ​Statistics​ ​in​ ​conjunction​ ​with​ ​the​ ​Detach​ ​feature​ ​in​ ​Edit​ ​Poly​ ​or​ ​Edit​ ​Mesh
modes​ ​to​ ​break​ ​large​ ​meshes​ ​into​ ​smaller​ ​pieces.

1​ ​-​ ​For​ ​each​ ​small​ ​piece​ ​of​ ​the​ ​mesh,​ ​use​ ​the​ ​Clone​ ​function​ ​to​ ​duplicate​ ​it.​ ​Remember​ ​to​ ​use
“Copy”​ ​instead​ ​of​ ​“Instance”​ ​in​ ​the​ ​Clone​ ​options.

2 - Add the suffix "_LOD0" to the name of one of the meshes, and the suffix "_LOD1" to the other
copy of the mesh. For example, the resulting meshes can be called something like this:

● FlowerBeds_LOD0
● FlowerBeds_LOD1
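Unity's model importer groups meshes into LOD sets by this `_LOD<n>` suffix. A sketch of that grouping logic (illustrative only, not Unity's actual implementation):

```python
import re
from collections import defaultdict

LOD_SUFFIX = re.compile(r"^(.*)_LOD(\d+)$")

def group_lods(mesh_names):
    """Group mesh names into LOD sets keyed by base name,
    mirroring the _LOD0/_LOD1 naming convention above."""
    groups = defaultdict(dict)
    for name in mesh_names:
        m = LOD_SUFFIX.match(name)
        if m:
            groups[m.group(1)][int(m.group(2))] = name
    return dict(groups)

# "FlowerBeds" forms one two-level LOD set; "Cliff" has only LOD0.
sets = group_lods(["FlowerBeds_LOD0", "FlowerBeds_LOD1", "Cliff_LOD0"])
print(sets)
```

Anything that doesn't match the suffix pattern is ignored, which is why consistent naming matters before export.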

3​ ​-​ ​Keep​ ​the​ ​_LOD0​ ​version​ ​of​ ​the​ ​mesh​ ​untouched,​ ​but​ ​use​ ​ProOptimizer​ ​on​ ​the​ ​_LOD1​ ​mesh
with​ ​the​ ​following​ ​settings:

“Keep​ ​Material​ ​Boundaries”​ ​Unchecked

“Keep​ ​Textures”​ ​Checked
“Keep​ ​UV​ ​Boundaries”​ ​Unchecked

Exporting

Export​ ​Options
Export​ ​from​ ​3DsMax​ ​as​ ​an​ ​FBX​ ​or​ ​OBJ​ ​so​ ​that​ ​it​ ​can​ ​be​ ​used​ ​in​ ​Unity.​ ​Again,​ ​make​ ​sure​ ​that​ ​all
the​ ​texture​ ​files​ ​from​ ​Reality​ ​Capture​ ​are​ ​kept​ ​in​ ​the​ ​same​ ​folder​ ​as​ ​the​ ​final​ ​FBX​ ​or​ ​OBJ​ ​file.

If the meshes show correctly mapped textures in the viewport, then exporting them will
generally allow Unity to automatically create and assign materials using those textures.

Optional: There's an option in the FBX export settings to "Embed Media", which will include all the
referenced textures in the file. However, it's normally better to leave this option disabled and
keep all the referenced textures outside of the model file.

Setting​ ​Up​ ​For​ ​VR​ ​Presentation​ ​In​ ​Unity


This section of the manual assumes that you already know how to use Unity, or that you have
watched tutorials and read documentation on how to use it. The notes below are specific
things to look out for and tips pertaining to setting up the photogrammetry mesh inside
Unity. It is also by no means the best method.

Importing
Be​ ​aware​ ​that​ ​importing​ ​a​ ​high​ ​triangle​ ​count​ ​mesh​ ​and​ ​all​ ​its​ ​corresponding​ ​textures​ ​into​ ​Unity
may​ ​take​ ​many​ ​minutes​ ​to​ ​process.

Textures
One​ ​useful​ ​benefit​ ​to​ ​creating​ ​4K​ ​textures​ ​with​ ​Reality​ ​Capture​ ​is​ ​that​ ​they​ ​can​ ​be
non-destructively​ ​resized​ ​within​ ​Unity​ ​at​ ​any​ ​time​ ​before​ ​creating​ ​the​ ​final​ ​build.

By​ ​default,​ ​the​ ​textures​ ​are​ ​imported​ ​at​ ​2K​ ​resolution.​ ​If​ ​4K​ ​is​ ​preferred,​ ​then​ ​select​ ​all​ ​the
imported​ ​textures​ ​and​ ​set​ ​the​ ​Max​ ​Size​ ​to​ ​4096​ ​and​ ​click​ A ​ pply​.

If​ ​you​ ​are​ ​using​ ​texture​ ​streaming​ ​technology,​ ​then​ ​the​ ​Max​ ​Size​ ​setting​ ​may​ ​be​ ​irrelevant.

Materials

One of the most performant shader choices for viewing the photogrammetry model is
Unlit/Texture. It ignores all lighting in the scene and presents the original lighting
captured on site as faithfully as possible. If re-lighting in Unity is preferred, then don't use the
Unlit/Texture shader.

LODs​ ​Setup
Please​ ​read​ ​up​ ​on​ ​Unity’s​ ​LOD​ ​system​ ​for​ ​more​ ​information​ ​on​ ​how​ ​to​ ​set​ ​it​ ​up​ ​properly.

Importing​ ​A​ ​Model​ ​With​ ​Multiple​ ​Sets​ ​of​ ​LODs


When Unity imports meshes, it automatically sets up LOD components on them. It has been
noted, however, that if the model file contains more than one full set of LODs (e.g. the several
pieces of photogrammetry mesh that are each below 65535 vertices), the LOD Group
component will be incorrectly applied to the parent game object.

The​ ​incorrectly​ ​applied​ ​“LOD​ ​Group”​ ​component​ ​needs​ ​to​ ​be​ ​removed​ ​from​ ​the​ ​parent​ ​game
object​ ​in​ ​this​ ​case,​ ​as​ ​seen​ ​below:

Create a new empty parent game object to contain each set of LODs, and apply the LOD
Group component to that empty parent game object.

Apply​ ​renderers​ ​to​ ​each​ ​LOD​ ​as​ ​it​ ​is​ ​normally​ ​done.

LOD​ ​Issues​ ​with​ ​VR
At the time of writing, in Unity 5.6 (not confirmed whether this also applies to Unity 2017), the
LOD transition distances are inconsistent between the editor view, the VR camera, and the VR
camera in a built executable. Examples of the issue include:

● Objects​ ​are​ ​visible​ ​in​ ​the​ ​scene​ ​view​ ​but​ ​are​ ​completely​ ​gone​ ​in​ ​the​ ​build
● Objects​ ​in​ ​the​ ​build​ ​are​ ​always​ ​showing​ ​at​ ​LOD1​ ​whereas​ ​they​ ​are​ ​showing​ ​as​ ​LOD0​ ​in
the​ ​scene​ ​view

LOD transitions are affected by the FOV of the camera, and because the FOV of the VR
camera is different from that of the Scene viewport camera, the LOD transition behaviour will
differ between the two.

For example, you may have to view and tune the LOD transition settings while looking through
a dedicated camera in the scene with an FOV of around 110-120. You may also have to
tweak the LOD bias in the project's Quality Settings (ex. 3.8) for the behaviours to match.

Use guess-and-check until the LOD transition behaviour in the editor matches the behaviour
in the build.
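The FOV dependence can be reasoned about numerically. Assuming the LOD metric is roughly the fraction of screen height an object covers (an approximation of Unity's relative-height test, not its exact formula), the bias needed for a wide-FOV VR camera to transition at the same distances as a narrower editor camera is the ratio of the half-FOV tangents:

```python
import math

def relative_screen_height(object_size, distance, fov_deg):
    """Approximate fraction of screen height an object covers, which is
    what an LOD Group compares its transition thresholds against."""
    return object_size / (2.0 * distance * math.tan(math.radians(fov_deg) / 2.0))

def bias_to_match(editor_fov_deg, vr_fov_deg):
    """LOD bias that makes a wide-FOV VR camera switch LODs at roughly
    the same distances as a narrower editor camera."""
    return math.tan(math.radians(vr_fov_deg) / 2.0) / math.tan(math.radians(editor_fov_deg) / 2.0)

print(round(bias_to_match(60.0, 110.0), 2))  # ~2.47
```

A 60-degree editor camera versus a 110-degree VR camera gives a bias of about 2.47; since values like 3.8 are found empirically in practice, treat the computed ratio only as a starting point for guess-and-check.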

Lighting
Any form of lighting on the photogrammetry mesh may be expensive, especially if the mesh is
composed of hundreds of thousands to millions of triangles. If you don't need to project any
lighting or shadows onto the photogrammetry mesh, then select all the meshes and disable
"Cast Shadows" and "Receive Shadows" on their Mesh Renderers. If you must use custom
lighting, it may be better to use baked lighting rather than real-time methods.

Static​ ​Objects
If your photogrammetry meshes won't be moved and the lighting will stay static, then it's best to
mark the game objects containing the photogrammetry meshes as Static for optimization.

Development

Locomotion
A form of locomotion, such as teleporting or "armswinging", will need to be implemented in the
project for the viewer to move around the scene.

“Navmesh”

Note:​ ​This​ ​“Navmesh”​ ​is​ ​unrelated​ ​to​ ​the​ ​traditional​ ​“Navmesh”​ ​term​ ​used​ ​for​ ​AI​ ​and​ ​Pathing​ ​in
Unity.

For our project, we used both teleportation and armswinging as forms of locomotion. A simplified
copy of the ground mesh was used as a guide to control where teleportation was valid. It also
has a Mesh Collider (with the Convex option unchecked) to prevent physical objects in the scene
from falling through the ground. The navmesh is created on top of the final aligned
photogrammetry model to ensure accuracy.

“Armswinger​ ​Blocker​ ​Mesh”

Because we used "Armswinger" and later "Walk In Place" from VRTK, another mesh had to be
created along the boundaries to prevent the viewer from walking past them; we called this the
"Armswinger Blocker Mesh." The mesh had to be broken into fully convex pieces, each with a
Mesh Collider component applied in Unity with the "Convex" option enabled.

Examples​ ​of​ ​fully​ ​convex​ ​objects​ ​that​ ​the​ ​Armswinger​ ​Blocker​ ​mesh​ ​is​ ​broken​ ​into.
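Convexity of a piece's footprint can be sanity-checked programmatically. The 2D sketch below is a simplification (Unity's convex Mesh Colliders operate on the full 3D mesh); it checks that the cross products of all consecutive edge pairs share the same sign:

```python
def is_convex_polygon(points):
    """Check whether a 2D polygon footprint (vertices in consistent
    winding order) is convex: every cross product of consecutive
    edges must share the same sign."""
    n = len(points)
    sign = 0
    for i in range(n):
        ax, ay = points[i]
        bx, by = points[(i + 1) % n]
        cx, cy = points[(i + 2) % n]
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True

print(is_convex_polygon([(0, 0), (4, 0), (4, 4), (0, 4)]))  # True: square
print(is_convex_polygon([(0, 0), (4, 0), (2, 1), (2, 4)]))  # False: has a notch
```

A piece that fails this kind of check will be silently "hulled" by the convex collider, letting the viewer pass through the concave region, which is why the blocker mesh must be broken up first.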

SpeedTrees
If​ ​you​ ​aren’t​ ​able​ ​to​ ​capture​ ​trees​ ​properly​ ​in​ ​the​ ​scene,​ ​you​ ​can​ ​use​ ​SpeedTree​ ​assets​ ​to
replace​ ​the​ ​photogrammetry​ ​trees.​ ​Doing​ ​so​ ​will​ ​also​ ​save​ ​a​ ​lot​ ​of​ ​texture​ ​space.

Suggestions​ ​for​ ​Further​ ​Exploration


● Capturing​ ​with​ ​LIDAR​ ​scanners​ ​to​ ​complement​ ​regular​ ​photo​ ​cameras
● Using​ ​ShaderMap​ ​for​ ​baking​ ​normal​ ​maps​ ​to​ ​further​ ​reduce​ ​poly​ ​count
● Using​ ​a​ ​multi-camera​ ​rig​ ​for​ ​instant​ ​stereo​ ​/​ ​instant​ ​parallax​ ​shots
● Using​ ​Simplygon​ ​to​ ​automatically​ ​make​ ​LOD​ ​for​ ​the​ ​pieces​ ​of​ ​photogrammetry​ ​mesh
● Using​ ​GPS​ ​to​ ​geo-tag​ ​photos​ ​taken​ ​from​ ​camera.​ ​Reality​ ​Capture​ ​has​ ​geotagging
support​ ​to​ ​partially​ ​help​ ​with​ ​alignment.​ ​(Shows​ ​up​ ​as​ ​orange​ ​lines​ ​after​ ​camera
alignment)
● Using​ ​a​ ​Colour​ ​Checker​ ​passport​ ​in​ ​conjunction​ ​with​ ​photos​ ​to​ ​get​ ​extremely​ ​accurate
and​ ​consistent​ ​colour​ ​reproduction
● Using​ ​vertex​ ​paint​ ​to​ ​paint​ ​opacity​ ​near​ ​photogrammetry​ ​mesh​ ​seams​ ​so​ ​that​ ​they​ ​blend
into​ ​each​ ​other

Final​ ​Word
Thank​ ​you​ ​for​ ​taking​ ​the​ ​time​ ​to​ ​read​ ​this​ ​document.

And special thanks to the University of British Columbia for entrusting us to lead their team of
professors and students, which led to the creation of this manual.

The​ ​online​ ​Google​ ​Docs​ ​version​ ​(​http://bit.ly/2xYl6DX​)​ ​will​ ​be​ ​kept​ ​up​ ​to​ ​date​ ​as​ ​much​ ​as
possible.

If​ ​you​ ​have​ ​any​ ​questions,​ ​suggestions,​ ​or​ ​would​ ​like​ ​to​ ​contribute​ ​to​ ​this​ ​document,​ ​don’t
hesitate​ ​to​ ​contact​ ​us.

Or,​ ​if​ ​you​ ​are​ ​a​ ​company​ ​and​ ​would​ ​like​ ​to​ ​hire​ ​us​ ​for​ ​your​ ​project,​ ​you​ ​can​ ​contact​ ​us​ ​here:

● Website:​ ​http://metanautvr.com
● E-Mail:​ ​hello​ ​[at]​ ​metanautvr.com

We​ ​hope​ ​that​ ​this​ ​document​ ​has​ ​been​ ​a​ ​helpful​ ​resource​ ​for​ ​you​ ​and​ ​your​ ​team.

Cheers,
Metanaut​ ​Team

You​ ​can​ ​find​ ​us​ ​here:

metanautvr.com
twitter.com/metanautvr
facebook.com/metanaut
