Project Pitch and Overview: the goal of the project was to create an accurate reconstruction of a real-life scene within an Unreal VR project.
This involved accurate scale and measurements, backed by hard copies of the layout.
To help achieve this goal I explored many tutorials on Unreal 4 lighting and setup, VR setup, texturing, and photogrammetry.
Workflow breakdown:
Measurements, layout, and accurate block-out and reconstruction: this involved field work gathering references, then putting that information into a document or program that makes it easy to review and present.
This involved looking into Adobe Illustrator, SketchUp, and LayOut; for the most part this stage of the project was straightforward, with the only research being around the basics of these programs.
-SketchUp
This was the main tutorial I used, followed by the more recent 2018 one. I had the others running just to see if anything had been missed; what one person considers beyond beginner level, another might treat as an important thing to have.
-Illustrator: same as above, an overview of operating within the program itself.
I didn’t get an opportunity to try this app out, but I’ll look into it next time a project like this rolls around.
The challenges I encountered in this section, aside from unfamiliarity with new software, were that some of my measurements didn’t line up well at the block-out stage. A lot of it came down to the slope of my backyard combined with height measurements, causing things not to line up ideally.
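The mismatch between tape measurements taken along sloped ground and a flat plan view is just Pythagoras; a quick sketch of the correction (the 10 m / 1.5 m figures below are made-up example values):

```python
import math

def horizontal_run(slope_length, rise):
    """Horizontal distance covered by a tape measurement taken along
    sloped ground, given the height change over that span."""
    if rise > slope_length:
        raise ValueError("rise cannot exceed the measured slope length")
    return math.sqrt(slope_length ** 2 - rise ** 2)

# A 10 m tape measurement down a yard that drops 1.5 m
# corresponds to roughly 9.89 m on the flat plan.
print(round(horizontal_run(10.0, 1.5), 2))  # 9.89
```

Small errors like this compound when block-out walls meet at different heights, which matches the kind of misalignment described above.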
Workflow breakdown:
Texturing: this was a good process to cover, as up until this point I had been using libraries and tileable textures in my past projects but had never gone out of my way to create my own from photographs. This of course involves a lot of time in Photoshop adjusting perspective, warping images, easing out shadows, creating tileable sections with the heal brush, and lining everything up with the UV maps. There was a big push to spend more time refining my textures, but sometimes I just had to pull the pin due to time constraints.
Once the base image was complete and lined up with the UVs, I generated the normal maps via Quixel. For the most part these came out looking decent, though I am aware of another way to generate them out of Photoshop itself and am curious to see whether that would get me a better result.
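Whichever tool generates it, a normal-from-height filter boils down to taking slopes across the height image with finite differences. A minimal pure-Python sketch of the idea (real tools work on image files and add blur and strength controls on top of this):

```python
import math

def height_to_normals(height, strength=1.0):
    """Turn a 2-D grid of height values (0..1) into per-pixel normals
    using central differences -- the core of a normal-from-height filter."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # indices wrap at the edges, so a tileable height map
            # produces a tileable normal map
            dx = (height[y][(x + 1) % w] - height[y][(x - 1) % w]) * strength
            dy = (height[(y + 1) % h][x] - height[(y - 1) % h][x]) * strength
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            row.append((-dx / length, -dy / length, 1.0 / length))
        normals.append(row)
    return normals

# A perfectly flat height map yields straight-up normals.
flat = [[0.5] * 4 for _ in range(4)]
assert height_to_normals(flat)[0][0] == (0.0, 0.0, 1.0)
```

The `strength` parameter plays the same role as the intensity slider in normal-generation tools: larger values exaggerate the slopes.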
Once the normal map was generated, I opened DDO in Quixel, loaded in my normal map and a pre-baked albedo (my original image), and generated to the UE4 specifications. Once it had run through its process, I created a blank layer, set the albedo opacity to 0%, and tweaked the metalness and roughness maps as needed, followed by an export of these maps at the end.
Images and videos
These were the best when it came to flattening out images, with tips and tricks for getting your textures to blend.
-This other video was about using Megascans to get a realistic result. It was an interesting watch; I didn’t use any of the techniques, but it did cover a lot of good information on LODs.
-Creating second UV channels and lightmaps
Discuss challenges: one of the biggest challenges was the photo manipulation itself: identifying areas that are not fit for the texture, pushing or reducing shadows with the dodge tool without causing pixelation, and overall eyeballing the image across multiple passes. There was also the general atlasing, not only on one object but getting the pattern to flow evenly along multiple objects of different sizes and dimensions. This required scaling things in a way that fit well but did not distort or blur the original texture image.
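Getting a pattern to flow evenly across objects of different sizes comes down to texel density: every surface should receive roughly the same number of texture pixels per metre. A minimal sketch of that arithmetic (the texture size, density target, and object sizes are made-up example values):

```python
def uv_tiling_for_density(world_size_m, texture_res_px, target_density_px_per_m):
    """How many times a texture should repeat across a surface of a given
    size so that every object shares the same texel density."""
    return world_size_m * target_density_px_per_m / texture_res_px

# Hypothetical numbers: a 1024 px fence texture at a 512 px/m target.
fence = uv_tiling_for_density(4.0, 1024, 512)    # 4 m fence -> tile 2.0 times
planter = uv_tiling_for_density(1.0, 1024, 512)  # 1 m planter -> tile 0.5 times
print(fence, planter)
```

Working the scaling out this way, rather than eyeballing each object, keeps the pattern from looking stretched on one asset and cramped on its neighbour.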
An issue I ran into was that I mapped my UVs to the block-out before refining some of my models, applying chamfers afterward. This made some of the UVs look a bit strange in areas, but for the most part it worked out fine. The big issue I faced was that some of my lightmass maps were all screwy, because the chamfers weren’t UVed properly. I created another UV unwrap in the modifier stack and flatten-mapped to a second UV channel, so Unreal could use that channel for its lightmass mapping.
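The light bleeding from badly spaced lightmap UVs relates to how much padding the islands get at a given resolution. As a rough rule of thumb (not UE4's actual packing logic), the gutter between islands in UV space times the lightmap resolution gives the padding in texels, which suggests a minimum resolution:

```python
import math

def min_lightmap_resolution(gutter_uv, min_padding_texels=2):
    """Smallest power-of-two lightmap resolution at which the gap between
    UV islands covers at least `min_padding_texels` texels, so baked
    lighting is less likely to bleed between charts (rule of thumb)."""
    needed = min_padding_texels / gutter_uv
    return 2 ** math.ceil(math.log2(needed))

# Islands packed with a 0.01 UV gutter need at least a 256 px lightmap
# to guarantee two texels of padding between them.
print(min_lightmap_resolution(0.01))  # 256
```

The same relation works in reverse: if the lightmap resolution is fixed, it tells you how wide the gutters in the second UV channel need to be.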
The consumption of time was another challenge with this process, due to the downtime between steps and the constant bouncing between programs, coupled with hardware limitations and crashes from running a lot of programs at once over an extended period of time.
Workflow breakdown:
Photogrammetry was an interesting tool that I had a great time learning about, but sadly I didn’t get to utilize it to its fullest extent.
The workflow I am aware of: take photos of the object under ideal conditions, then load them into PhotoScan. Once in PhotoScan, you generate points from the images; after tidying up the object you want by culling or cropping down to the points you need, you generate a dense cloud, followed by a mesh.
The mesh is for the most part far too dense to be used effectively in other programs, so there are options for retopologizing it down to a more optimized form. I would use ZBrush for this, then, if need be, bounce it into TopoGun to create a low poly.
Research and images:
-This first video outlined the process of taking an object from a photoscan to a game-ready asset. I haven’t watched the texturing part, mainly because I didn’t get an asset rendered out to run through the pipeline.
The guide to turntables also had some great ideas in it, and I tried to utilize these when creating a turntable for the mech and Lego man objects, unfortunately with little success.
In addition to these, Steve linked me a workshop he had put together that outlined a heap of information, as well as a good breakdown of the camera settings required for photogrammetry.
Challenges: the main challenge I faced was getting it to operate the way I wanted. I tried turntables, bright lighting, dim lighting, and even walking around objects, and I was still getting issues with objects rendering out. One issue is that many of the test objects had glossy surfaces, and moving refractions of light on those surfaces were probably throwing it out. The other thing I feel was causing issues was the size of the objects I was trying to capture. It would seem I was having more luck with larger objects and outdoor environments than I was with single objects, which ranged from a Lego man, to a mech model the size of my fist, to a guitar. These all had plastic or clear-coat finishes on them, and I feel that could have been part of the reason they were not forming up the way I had hoped.
The other issue I had was the time it took to render out some of these objects. Even without rendering at the best settings, I had used far too many photos, and it took too much time. I thought more photos would generate a better result, but now I’m thinking I may have had too many, with the conflicting information causing my point clouds to not form properly.
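A rough way to budget photo counts, rather than shooting hundreds and hoping, is to pick a target overlap between consecutive shots and derive the number of photos per orbit ring from the camera's horizontal field of view. This is only a rule of thumb, and the 60-degree FOV and 70% overlap below are assumed example values:

```python
import math

def photos_per_ring(h_fov_deg, overlap=0.7):
    """Rough count of photos for one full orbit of an object so that
    consecutive frames overlap by `overlap` of the frame width
    (rule of thumb: angular step = FOV * (1 - overlap))."""
    step = h_fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / step)

# A ~60 degree lens at 70% overlap needs about 20 shots per ring,
# so three rings at different heights is ~60 photos, not hundreds.
print(photos_per_ring(60))  # 20
```

Keeping the set small and evenly spaced also shortens the alignment and dense-cloud stages, which were the slow steps described above.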
Something I still need to look into is what steps to take to retain the captured vertex colours, or whether that is even possible after converting the dense mesh into a more practical one. I’m aware of the steps up to that point but didn’t look into anything beyond it. I’m assuming I would have to take my low poly into Max, create a colour map, and bake a normal map from the high-poly version out of ZBrush, and go from there, but this is me just taking a punt at the process.
Workflow breakdown:
Unreal scene construction and VR headset integration.
To get this working with as little blueprinting as possible, I constructed my scene within the UE4 VR template, which allowed me to nearly plug and play with the Oculus DK2.
Once I had imported all my assets and upped the lightmass resolution, I just had to go about making materials and changing the post-processes and world settings to get what I wanted. For the most part I left the VR settings close to base, but for my renders I tweaked a lot of my lighting and post-processing to get some afternoon shots and tried to line up my shadows as closely to the reference shots as I could.
Research and images:
The 51dDeadalus videos had a heap of great information in them, from working with colour tables, to light channels and assigning them to specific objects, to working from reference and even using the console to change the colour grading. There is a lot of quality information within these tutes, and I’m definitely going to use them in the future. There are a total of six videos in the series.
VR vids: these helped me solve some of the VR issues I had with this project.
Snippet of my VR
Challenges: one of my biggest hurdles at the start was getting the old DK2 linked up and functioning properly. I had a lot of software and driver conflicts and am still dealing with some issues due to dated software. Also, the DK2 only supports a roughly 720p-equivalent resolution, so it was hard to gauge how high-res my textures needed to be; on the DK2 most things looked fine, but on closer inspection on the display some things looked “gluggy” in their appearance.
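One way to reason about how much texture detail a headset can actually show is pixels per degree: panel pixels divided by the field of view they are spread across. A tiny sketch using approximate DK2-style figures (roughly 960 px per eye over roughly 100 degrees; treat both numbers as assumptions):

```python
def pixels_per_degree(panel_width_px, h_fov_deg):
    """Angular resolution of an HMD: how many panel pixels cover one
    degree of the view. Low values mean fine texture detail is wasted,
    since a texel lands on fewer screen pixels than it has."""
    return panel_width_px / h_fov_deg

# Approximate DK2-style figures: ~960 px per eye across ~100 degrees.
print(pixels_per_degree(960, 100))  # 9.6 px/deg
```

At under 10 px/deg, very high-resolution textures add little on the display itself, which matches the "gluggy" close-up impression described above.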
I spent a lot of time researching UE4, mainly because I intend to utilize it as my primary engine to render out of. I gathered a lot of information and new techniques but didn’t get a chance to try them all out, as time ran short.
The other challenge was getting an exported executable working, even though the preview in VR worked fine. As of writing, I am still unsure what is going on; I’m going to have a comb through the error log and see if anything stands out. EDIT: got the export working.
Appraisal of the finished project:
So the UE4 finished product looked OK. I had set a rather high bar for myself: I had expected to finish the base version and be able to create a fake event, like a 21st or a BBQ, and change the yard around, but I spent a lot of time on the texturing and another chunk on photogrammetry.
The other disappointment was with photogrammetry. I know that with a bit more time and another overview I can get it working; learning new things with a deadline looming causes some stress, and I feel learning this at my leisure will get better results.
Overall this specialization was a bit stranger than my others. In my past ones I was learning single skill sets used in the asset-creation stages. This project was more focused on final assembly, with a lot of different aspects being tied together to achieve the final result, which led to me doing a lot of research across multiple steps and ideas.
Future lessons and goals:
A big takeaway from this is why Megascans is a big thing and why it would be worth looking into. Building textures from scratch is rather involved, and being able to cut out the middleman is always a benefit.
Another thing I plan on doing is revisiting photogrammetry. I want to be able to scan single objects and create assets from them. I plan on attacking this over the holidays, and I know I can get a result now that I don’t have the combined pressure of the VR environment.
I want to continue learning lighting and environmental design for Unreal Engine, and just the overall functionality of the software itself. I have huge respect for Epic and its engine, and I’m looking forward to doing a lot more projects within it.