Updated: Oct 4, 2019
Written by Anonymous
Extended Reality (XR) technologies can enhance users’ perception of the real world (i.e., AR) or immerse them in fully synthetic scenarios (i.e., VR). To this end, XR needs to render 3D content (scenes, objects, or people) into real or virtual environments. One of the most important challenges here is how to create such 3D models (geometry and color).
3D models can be handcrafted by artists using 3D software (e.g., Maya, Blender, 3ds Max). This method guarantees compelling, artifact-free 3D models, but the creation process is time-consuming and requires considerable expertise.
Another way to create 3D models is to automatically recover them from the real world, a process called 3D reconstruction. We can do this with specialized hardware or via photogrammetry, which only requires photos. Regarding the former, 3D scanners (e.g., EinScan, RealSense, Artec Eva) generate geometrically accurate models, but they are bulky and expensive, and learning the most suitable scanning configuration may put off casual users. The latter approach consists of capturing a set of photos with a smartphone and loading them into photogrammetry PC software (e.g., RealityCapture, Metashape). This may reduce the learning effort compared to 3D scanners, but it is still not a consumer-level solution.
So, is there a way in which consumers can become 3D content creators? The answer is yes. In the following, we introduce two consumer-friendly 3D reconstruction solutions: photogrammetry apps for smartphones and 3D smartphones.
Photogrammetry apps such as Sony 3D Creator and Qlone allow users to reconstruct objects within seconds. Using Sony’s 3D Creator app, the user points the smartphone’s rear camera at the target object while walking around it. The UI shows which parts have already been reconstructed. The Qlone app takes a slightly different approach. The user first prints out a checkered mat on which the object to be scanned is placed. To scan, the user simply rotates the mat instead of walking around. With such apps, 3D content creation seems convenient for a broad audience. But photogrammetry has some algorithmic constraints. The recovered 3D model lacks global scale (i.e., the true size in meters), since only color images are used in the computation. In addition, textureless objects cannot be reconstructed, as geometric triangulation is difficult in the absence of image features.
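To see why color images alone cannot recover the true size in meters, here is a minimal sketch (with made-up numbers and a simplified pinhole camera with identity rotation): scaling the whole scene and the camera positions by the same factor leaves every image projection unchanged, so the photos look identical at any scale.

```python
import numpy as np

def project(point_3d, cam_center, f=500.0):
    """Pinhole projection of a 3D point seen from a camera at cam_center
    (identity rotation assumed for simplicity; f is a made-up focal length)."""
    p = point_3d - cam_center
    return f * p[:2] / p[2]

point = np.array([0.2, 0.1, 2.0])  # a point 2 m in front of the camera
cam = np.array([0.0, 0.0, 0.0])

# Scaling the scene 10x (a 10x bigger object, 10x farther away) yields
# the exact same pixel coordinates as the original scene:
for s in (1.0, 10.0):
    print(project(s * point, s * cam))  # prints [50. 25.] both times
```

Since every photo is consistent with infinitely many scaled copies of the scene, photogrammetry can only recover shape up to an unknown global scale factor.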
Several smartphones equipped with 3D sensors have been released in recent years, e.g., the Lenovo Phab 2 Pro (2016), Apple iPhone X (2017), and Huawei Mate 20 Pro (2018). Some cool apps exploit this 3D information. The Bellus3D FaceApp for iPhone X and later scans a high-resolution 3D model of your face. You just hold your phone in a selfie-like manner while turning your face to the left and right. You can later use your 3D face for virtual makeup or to customize an avatar. Another example is the 3D Live Maker app for the Huawei Mate 20 Pro. To scan with this app, you hold the phone with one hand so that the front camera points at the target object and the screen stays visible to you. You then use your other hand to freely grab and move the toy until all of its surface has been reconstructed. The app automatically rigs the reconstructed 3D model so that you can control it: you can make it walk, jump, dance, and so on. 3D smartphones fill the gap between 3D scanners and photogrammetry apps for 3D content creation. These devices offer the best of both worlds: they are easy to use and fairly accurate, while overcoming the scale and texture limitations of conventional photogrammetry. However, 3D sensors have their own handicaps. Depending on the 3D sensor technology, the reconstruction quality can be greatly affected by the scene itself (e.g., illumination, the material’s reflective properties, object color).
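The reason a depth sensor removes the scale ambiguity can be sketched in a few lines: each pixel comes with a depth measured in meters, so back-projecting it through the camera intrinsics yields a 3D point at true metric scale, regardless of texture. The intrinsics below (fx, fy, cx, cy) are made-up example values, not those of any particular phone.

```python
import numpy as np

def backproject(u, v, depth_m, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Map a pixel (u, v) with a measured depth (in meters) to a 3D point
    in the camera frame, also in meters. Intrinsics are example values."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# A pixel 100 px right of the image center, measured 1.5 m from the sensor:
p = backproject(420.0, 240.0, 1.5)
print(p)  # prints [0.3 0.  1.5] -- an absolute size in meters, not a relative one
```

Because the depth measurement anchors the reconstruction in meters, a scanned toy comes out toy-sized and a scanned face comes out face-sized, with no scale guesswork.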
For now, photogrammetry apps and 3D smartphones provide a good solution for consumer-friendly 3D content creation. Both have limitations regarding which types of objects can be scanned, and you’ll likely need several attempts before your first successful 3D reconstruction. In the near future, these approaches will hopefully evolve and overcome these drawbacks with the help of learning-based algorithms and cloud computing.