How to create VR content
Virtual Reality (VR) headsets continue to grow in popularity, with the likes of Apple taking the technology in new directions, and a host of cheaper devices hitting the market. With VR entering the mainstream, there has also been a boom in demand for 3D models. In this article, we explain how you can make your own ultra-realistic VR assets for the virtual world.
The immersive world of VR is rapidly entering how we live, work, watch, and play
Wherever you look, from the ever-expanding Metaverse to the Apple Vision Pro headset, VR is edging closer to mainstream adoption. Previously used almost exclusively for creating immersive video games, the technology is now making inroads into the professional space – not just in CGI, but in e-commerce, training, and even productivity apps.
As the applications of VR continue to grow and diversify, there has been an increase in demand for 3D models. After all, there’s no point building a virtual world if it’s empty. Recreating real objects, people, and places is vital to constructing an immersive virtual reality experience that really makes users believe they’re somewhere else.
Let’s start by taking a look at the different types of VR assets out there, before exploring how you can make 3D models of your own.
Key point
Growing VR headset popularity has led to a rise in demand for the realistic 3D models needed to create immersive simulations.
Types of VR content
When it comes to making immersive web content or capturing environments for VR, there are two main methods: creating 360° videos and building interactive simulations. The former are captured from all directions, putting viewers in the shoes of the camera operator. With stereoscopic lenses, it’s even possible to film these from left- and right-eye perspectives for VR content creation.
Viewed side-by-side on a PC, 360° videos and interactive simulations can look almost identical. But once hooked up to a VR device, videos can feel quite restrictive, limiting users to the exact route taken by the camera operator. By contrast, simulations allow users to explore virtual spaces in their own time.
Key point
Creating 360° videos may be easier than making lifelike simulations, but they offer a limited VR experience.
3D models for VR, meanwhile, are made in two quite different ways: sketching and reality capture. ‘Sketching’ describes a manual process whereby 3D artists digitally sculpt models, either using photos as a reference or freestyling to create entirely new designs.
Though this workflow is time-tested – many popular films, TV shows, and video games feature sketched CGI models – it’s also time-consuming and requires a high skill level. Reality capture, on the other hand, allows designers to digitize objects, people, and environments as they stand, so they don’t need to make models from scratch.
Collecting data is often a key part of creating VR content
As well as being faster, reality capture picks up texture details that may otherwise be difficult to identify, while also being much easier to master. These benefits have allowed photogrammetry to establish a strong foothold in CGI, and other reality capture technologies like 3D scanning and smartphone capture are gaining ground in this area – we’ll cover this in greater detail later.
How to create VR content
Your workflow will vary depending on your application and the technology you use, but there are generally two main ways of creating VR assets: one starts with photos as reference material, the other with 3D scanning.
1. Sketching
Once you’ve come up with a concept and decided what exactly you need to 3D model, the next stage is traditionally digital sculpting. This involves gathering photos of a real area, object, or person from several angles, then using these as sketching references. It’s a particularly useful process for turning 2D concept art into full 3D models for CGI or video game applications.
Of course, you can also come up with designs organically, creating models from scratch using basic geometric shapes to define proportions, before manually adding details. This approach rewards creativity – but beware, it can be slow, hard work!
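If you work in Blender, even this blocking-out stage can be scripted. Below is a minimal Python sketch (run from Blender’s Scripting workspace) that roughs out a character from primitive shapes; the object names and dimensions are purely illustrative, and the real detailing would still happen by hand in Sculpt Mode.

```python
# Minimal Blender Python sketch: blocking out a model from primitives before sculpting.
# Object names and dimensions are illustrative only.
import bpy

# Start from an empty scene so the block-out is easy to inspect
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Rough proportions first: a box for the torso, a sphere for the head
bpy.ops.mesh.primitive_cube_add(size=1.0, location=(0, 0, 1.0))
torso = bpy.context.active_object
torso.name = "torso_blockout"
torso.scale = (0.5, 0.3, 1.0)

bpy.ops.mesh.primitive_uv_sphere_add(radius=0.25, location=(0, 0, 2.3))
head = bpy.context.active_object
head.name = "head_blockout"

# Details are then added manually in Sculpt Mode
bpy.ops.object.mode_set(mode='SCULPT')
```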
Key point
Sketching is a time-consuming process that can be accelerated with 3D scanning and other advanced technologies.
2. Retopology
For advanced 3D modeling applications like animation, gaming, or VR, it’s usually necessary to turn a high-resolution sculpted mesh into a simplified base. This is also vital for reducing file sizes – after all, if your models consist of millions of vertices, they’re going to be too heavy for any commercial VR headset to handle.
Retopology can get highly complex, with manual tools enabling the creation of mind-blowing visuals for AAA blockbusters, but it’s also possible to streamline. ZBrush ships with ZRemesher, which retopologizes 3D models automatically, while 3DCoat makes retopology quick and easy, allowing complex surfaces to be simplified down to just a few vertices.
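Dedicated tools like ZRemesher and 3DCoat handle retopology through their own interfaces, but if you just need a quick, automated polygon reduction, Blender’s Decimate modifier offers a rough stand-in. The sketch below assumes a high-poly object named “sculpted_character” and an illustrative target ratio of 5%.

```python
# Minimal Blender Python sketch: reducing a dense sculpt with the Decimate modifier.
# A simple, automated stand-in for dedicated retopology tools; names and values are illustrative.
import bpy

obj = bpy.data.objects["sculpted_character"]  # assumed name of the high-poly sculpt
bpy.context.view_layer.objects.active = obj

# Add a Decimate modifier and keep roughly 5% of the original polygons
mod = obj.modifiers.new(name="Retopo_Decimate", type='DECIMATE')
mod.ratio = 0.05

# Apply the modifier so the simplified mesh can be exported for VR
bpy.ops.object.modifier_apply(modifier=mod.name)

print(f"Remaining polygons: {len(obj.data.polygons)}")
```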
3. UV mapping
Next up, you’ll need to tell your 3D modeling software where you’d like textures to go. It may seem counterintuitive, but this starts with ‘unwrapping’ your model’s surface and flattening it into a 2D layout, so textures can later be applied accurately via ‘UV mapping.’
Think of it like carefully unfolding a candy wrapper: the model’s surface is laid out flat, so that when textures are wrapped back around it, they land on the correct geometry. Free programs like Blender simplify unwrapping into four basic steps: scaling, projection, seam marking, and unwrapping.
UV mapping is essential to achieving realistic models
You can also download extensions for platforms such as SketchUp that allow you to unwrap in segments and work with double-curvature shapes. Whatever software you end up using, careful UV mapping is essential to achieving realistic models, so take your time to get it right.
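In Blender, the unwrapping step itself can be reduced to a few lines of Python. The sketch below assumes a retopologized mesh named “retopo_character”; in practice you’d mark seams by hand on chosen edge loops (bpy.ops.mesh.mark_seam) before unwrapping for cleaner UV islands.

```python
# Minimal Blender Python sketch of the unwrap step: select the mesh, enter Edit Mode,
# and unwrap it into a 2D UV layout. The object name is illustrative.
import bpy

obj = bpy.data.objects["retopo_character"]  # assumed name of the retopologized mesh
bpy.context.view_layer.objects.active = obj
bpy.ops.object.mode_set(mode='EDIT')

# Select the whole mesh, then unwrap; seams marked beforehand control where islands split
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.02)

bpy.ops.object.mode_set(mode='OBJECT')
```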
4. Texture baking
After UV mapping, it’s then worth asking yourself ‘what details would I like to bake into my final model?’ This could be color or fine surface details – even dynamic effects like corrosion and lighting can be made static and transferred onto model surfaces.
Transferring textures (or texture baking) also allows you to take realism-boosting data from high-poly models and apply it to retopologized ones – so, as the retopology step showed, you don’t need to sacrifice detail to keep models lightweight.
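As a concrete illustration, here’s a minimal Blender Python sketch of baking high-poly surface detail down to a retopologized mesh as a normal map using Cycles. The object names are hypothetical, and the low-poly model is assumed to already have a UV map and an image texture node selected in its material.

```python
# Minimal Blender Python sketch: bake detail from a high-poly sculpt onto a low-poly mesh
# as a normal map (Cycles). Object names are illustrative; the low-poly object is assumed
# to have a UV map and an image texture node selected in its material.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.bake_type = 'NORMAL'
scene.render.bake.use_selected_to_active = True   # bake FROM high-poly TO low-poly
scene.render.bake.cage_extrusion = 0.02           # small offset to catch surface detail

high = bpy.data.objects["sculpted_character"]
low = bpy.data.objects["retopo_character"]

# Selection order matters: source selected, target active
bpy.ops.object.select_all(action='DESELECT')
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low

bpy.ops.object.bake(type='NORMAL')
```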
Key point
When 3D modeling metal surfaces, texture baking is vital to retaining tiny details through later design stages like retopology.
5. Preparing for VR
How your workflow ends will depend on your model’s application. If it’s a character model for a VR video game or experience, you’ll need to give it a skeleton for movement via ‘rigging.’ This can be achieved in many 3D modeling programs, including Blender, Autodesk Maya, and more dedicated platforms like Houdini, which is packed with advanced animation features.
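As a rough illustration of the rigging idea, the Blender Python sketch below adds an armature and parents a character mesh to it with automatic weights. The object name is hypothetical, and a production rig would use a full bone hierarchy rather than a single default bone.

```python
# Minimal Blender Python sketch of rigging: add an armature and parent the character mesh
# to it with automatic weights. The mesh name is illustrative.
import bpy

mesh = bpy.data.objects["retopo_character"]

# Create a simple armature at the origin
bpy.ops.object.armature_add(location=(0, 0, 0))
armature = bpy.context.active_object
armature.name = "character_rig"

# Parent the mesh to the armature so its bones deform the model (automatic weights)
bpy.ops.object.select_all(action='DESELECT')
mesh.select_set(True)
armature.select_set(True)
bpy.context.view_layer.objects.active = armature
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
```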
On the other hand, models made for animation or proof of concept purposes will need to be 3D rendered – a process that sees lighting and textures fine-tuned for photorealism.
How realistic can VR get? The future will show us – and the future is now
Such models often require multiple iterations before they’re sufficiently lifelike for integration into simulations. But high-end models are absolutely vital to building virtual worlds that closely match our own – so it’s worth applying polish to them. There are also ways of making VR 3D modeling faster with advanced technologies, as we’ll see next.
Key point
Combining structured-light and LiDAR scanning allows you to capture entire areas in high resolution for stunningly realistic VR assets.
How to make VR content with 3D scanning
As mentioned earlier, technologies like photogrammetry and 3D scanning are increasingly accelerating 3D modeling. Photogrammetry – the process of photographing an object or area from multiple angles and stitching the captured images together – really thrives in texture capture, and can be used to create photorealistic VR 3D models.
However, photogrammetry still relies on taking photos, whether with a professional camera or a smartphone, and this can get pretty time-consuming. 3D scanning, on the other hand, lets you capture any object, person, or space in real time – more easily, more accurately, and far faster.
Professional handheld devices like Artec Leo digitize bodily features in seconds, making character capture quick and easy. Digitizing objects in HD Mode also means picking up fine details with 0.2 mm resolution, so you can use them as references during sketching.
To scan wider areas and turn them into VR environments, it’s also possible to use LiDAR devices such as Artec Ray II. In its own right, Ray II is a great tool for capturing buildings or open spaces, but once combined with Leo, it takes on another dimension. Scans captured by both can be merged to make models using the highest-resolution data from each.
It’s possible to boost the texture quality of 3D scans with photo-texturing to create lifelike scenes
This is made possible by algorithms unique to Artec Studio, advanced 3D scan capture and processing software. Through global registration, fusion, and scan editing, the program also provides ample opportunities for users to maximize detail capture. It’s even possible to deploy photo-texturing as a means of further sharpening textures and boosting resolution.
Overall, adopting 3D scanning is key to achieving the best possible VR results, as it allows you to base models on real assets, capture their most intricate details for reference during sketching, and accelerate your design workflow.
Key point
With photo-texturing, it’s possible to boost the texture quality of 3D scans further and achieve even more realistic models.
Bring your 3D models into VR worlds
Once you’ve finished 3D modeling, you’ll need to create a virtual world for your models to populate. There’s a lot of crossover between VR and video gaming, so many developers use Unreal Engine and Unity, although dedicated programs like SimLab Composer are also popular. Whichever software you choose, there are four key steps to building a VR environment.
1. Creating a scene
First of all, you’ll need to establish a base on which your VR world can be built. This could either take the form of a plane or a 3D cube that acts as a ‘room’ you can fill with 3D models. After that, it’s necessary to create a character that lets you view the space in first person.
Position the camera at eye level, since most simulations are designed to be viewed as if you were really there – you can always adjust this later. It’s also worth noting that SimLab Composer allows you to build virtual worlds in first person, without a VR headset, which can take the edge off lengthy world-building sessions.
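The exact clicks differ between engines, but the underlying setup is simple. As a rough illustration, here’s a Blender Python sketch that creates a floor plane as the base of the scene and places a camera at roughly standing eye height (around 1.7 m); all names and values are illustrative.

```python
# Rough Blender Python illustration of step 1: a floor plane as the scene base and an
# eye-level camera for first-person previews. Values are illustrative only.
import bpy

# The 'room' base: a 10 x 10 m ground plane
bpy.ops.mesh.primitive_plane_add(size=10.0, location=(0, 0, 0))
floor = bpy.context.active_object
floor.name = "vr_floor"

# A camera at roughly standing eye height, looking into the scene
bpy.ops.object.camera_add(location=(0, -4.0, 1.7), rotation=(1.5708, 0, 0))
camera = bpy.context.active_object
camera.name = "vr_viewpoint"
bpy.context.scene.camera = camera
```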
Key point
With Unreal Engine, Unity & SimLab Composer, you can create scenes in virtual reality and watch them grow in real time.
2. Setting up your controls
Next up, you’ll need to configure your character so they can move. This works slightly differently depending on which software you’re using, but in general, it means mapping game controllers and other input devices to in-world actions – looking around, moving, and triggering interactions on command.
In Unity, for example, you can add a locomotion system that allows users to freely roam the worlds they build, as well as snap-turn and teleportation functionalities for fast travel. It’s also possible to integrate ‘anchors,’ which hold characters in place. If you were to create a virtual auditorium, these would allow you to transport attendees into a designated viewing spot.
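Unity itself handles this through the XR Interaction Toolkit and C# components, so the following is only an engine-agnostic Python sketch of the snap-turn logic described above: the view rotates in fixed increments whenever the thumbstick is pushed past a threshold, which tends to be more comfortable in VR than smooth rotation.

```python
# Engine-agnostic sketch of snap-turn locomotion logic. Constants are illustrative.
SNAP_ANGLE = 45.0   # degrees per snap turn
DEADZONE = 0.7      # how far the stick must be pushed before a turn registers

def snap_turn(current_yaw: float, stick_x: float, ready: bool) -> tuple[float, bool]:
    """Return (new_yaw, ready_for_next_turn)."""
    if ready and abs(stick_x) > DEADZONE:
        direction = 1.0 if stick_x > 0 else -1.0
        return (current_yaw + direction * SNAP_ANGLE) % 360.0, False
    # Re-arm only once the stick has returned close to center
    return current_yaw, ready or abs(stick_x) < 0.2

# Example: pushing the stick right once rotates the player 45 degrees
yaw, ready = 0.0, True
yaw, ready = snap_turn(yaw, 0.9, ready)
print(yaw)  # 45.0
```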
Specialized software for animating and editing VR scenes
3. Adjusting lighting & starting to build
Once you’ve given yourself a blank slate, it’s time to start creating. Depending on the type of scene you want to build, you’ll have to set up lighting in a certain way. The simplest method of ensuring objects and characters are lit as desired is to establish an internal light source such as a lamp – with these, it’s even possible to see how shadows are affected by movement.
With lighting all sorted, you can then begin building your VR world block-by-block. Most programs come with generic 3D models and allow you to create using primitive shapes, but it’s easy enough to find additional resources on sites like Sketchfab and CGTrader.
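As a small illustration of the ‘internal light source’ idea, here’s a Blender Python sketch that adds a point lamp to the scene; the position and brightness values are illustrative, and game engines expose equivalent light components of their own.

```python
# Minimal Blender Python sketch: a point lamp as an internal light source so objects
# cast shadows that respond to movement. Values are illustrative only.
import bpy

bpy.ops.object.light_add(type='POINT', location=(2.0, -2.0, 2.5))
lamp = bpy.context.active_object
lamp.name = "room_lamp"
lamp.data.energy = 500.0           # brightness in watts
lamp.data.shadow_soft_size = 0.25  # softer, more natural shadow edges
```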
4. Import, scale & animate 3D models
Bringing 3D models – complete with vibrant, lifelike textures – into your platform of choice is usually quick and easy, requiring only the dragging and dropping of files.
In VR world-building programs, it’s also possible to scale, orientate, and animate objects – turning them into interactive elements like doors, vehicles, and more.
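For example, here’s a rough Blender Python sketch of that last step: importing a textured glTF model, scaling it to real-world size, and keyframing a simple door-style swing. The file path, object name, and scale factor are all hypothetical.

```python
# Minimal Blender Python sketch: import a glTF model, scale it, and animate a door swing.
# File path, names, and values are hypothetical.
import bpy

bpy.ops.import_scene.gltf(filepath="/path/to/scanned_door.glb")  # hypothetical file
door = bpy.context.selected_objects[0]

# Scale and position the model so it sits correctly in the scene
door.scale = (0.01, 0.01, 0.01)   # e.g. converting centimeter units to meters
door.location = (0.0, 3.0, 0.0)

# Animate a 90-degree opening swing over 48 frames
door.rotation_euler = (0.0, 0.0, 0.0)
door.keyframe_insert(data_path="rotation_euler", frame=1)
door.rotation_euler = (0.0, 0.0, 1.5708)
door.keyframe_insert(data_path="rotation_euler", frame=48)
```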
Key point
Making VR worlds can be straightforward – simply create a scene, set up controls, import models, and start creating!
Applications
Immersive video games
Hyper-realism has long been a target for video game developers, and if anything, it’s even more important in VR titles. Fortunately, many models made for regular games are already based on real-world objects, so they can deliver the visual fidelity VR demands.
Take the team at Creative Mesh, who combined Artec Leo & Ray II to capture farming and emergency service equipment, creating 3D models for Farming Simulator. A similar approach can also be used to develop lifelike CGI for movies or TV shows. For instance, using scans captured with the lightweight, flexible Artec Eva and ultra high-resolution Artec Space Spider, VFX experts created an eerily realistic character for Sleepy Hollow.
Commercial VR content creation
E-commerce is already an established industry for 3D scanning innovation. At leading sports equipment manufacturer ASICS, high-quality footwear 3D models are used for product quality inspection, as well as creating engaging animated marketing content.
Companies like HEMO are taking this approach in a different direction, capturing machines weighing up to 30 tons with Artec Leo & Ray II, and turning them into VR exhibition models. Instead of exhibiting colossal machinery to visitors at trade shows, the firm can now showcase products via a VR simulation, saving them huge amounts on transportation.
Save space (and 30 tons of weight!) by showcasing huge objects in VR
With fashion sites also starting to introduce virtual product try-ons, it’s clear that there’s still significant headroom for VR content creation to grow in the e-commerce space.
Key point
VR simulates entire environments, while AR brings interactive overlays into the real world. Together, they have the potential to turn 3D models into tools with tangible productivity benefits.
Augmented reality
Often used interchangeably, AR and VR are actually different but closely related technologies. While VR describes immersive videos or simulations, AR sees virtual objects overlaid onto the real world, so that they appear to float in real-life environments. Using the Apple Vision Pro headset, which combines VR and AR, it’s possible to digitally pick up, view, and place objects around the room you’re actually in.
See the world differently!
The potential for 3D scanning, then modeling and integrating objects into AR applications, is massive. In industries ranging from healthcare to heavy industry, people already undergo training via simulations on VR headsets. As other players enter the market, it will be fascinating to see how VR continues unlocking opportunities to connect the real and virtual worlds.
Virtual viewings
You may not typically view them via headset, but virtual 360° tours that allow you to see properties in VR are rapidly becoming the new normal. These are great for scoping out potential living areas and ensuring that office furniture will fit into proposed business spaces.
At the moment, many property videos are designed for viewing on a PC, smartphone, or tablet, but they can also be observed using VR headsets – and it’s not beyond the realms of possibility that they could be captured with stereoscopic lenses for greater immersion.
More broadly, who’s to say which other spaces could be captured for VR content creation? With the sky being the limit, it’s clear why there’s so much hype around the technology in this area.
Key point
Advanced technologies continue to make 3D modeling easier to achieve – now VR content creation is open to anyone.
Conclusion
As you can see from the workflow above, 3D modeling for VR is becoming easier and more accessible than ever before. There are now plenty of programs out there – some designed for modeling newcomers, others with advanced feature sets – and all of them are capable of delivering great results.
Using professional-grade 3D scanning, it’s also proving possible to simplify and accelerate the sketching part of the process. As the technology continues to drive 3D modeling forward, it will no doubt create fresh opportunities for VR experiences indistinguishable from the real world.
Interested in finding out more about Artec 3D scanning for 3D modeling? Check out our in-depth breakdown on how to create video game 3D models here.