360-degree Spherical (VR) Video Basics
An introduction to the concepts and basic workflow
The fundamental processes this guide aims to introduce are:
- acquisition – filming/photographing 360 panoramic content
- processing (stitching) – combining multiple images to form panoramas (stills or video)
- viewing/publishing – included because typical video players and image viewers don’t handle spherical images well
Use a 360 rig to capture images (or video) from multiple sensors (cameras) simultaneously. These rigs position the cameras so that each camera’s field of view overlaps that of at least one other camera in the rig. Most types of “action cams” can be used (GoPro, Xiaomi, SJCam, etc.). SimplifyVR tries to make every rig as universal as possible – but check the product description/documentation for notes regarding compatibility.
The resulting files from each camera are combined using stitching software, which merges multiple images (or videos) into a single spherical image. This is ideally a largely automated process. Stitching quality can be improved with skillful use of the software, as well as careful positioning of the rig. A quality 360 rig, combined with knowledgeable positioning of the cameras and planning of the shot, can drastically reduce or eliminate the post-processing effort required.
Freely-available software is used to view/navigate the panoramas on the local computer, and numerous online services and widgets are available to interact with this content in the browser.
Acquiring 360 Content
This requires a camera, or array of cameras (360 rig), capable of capturing images in all directions.
Should you choose one of the dedicated 360 cameras that have recently become available, or a 360 rig? That depends on your goal and budget.
The dedicated cameras have some obvious advantages:
- price (a few hundred to a couple thousand dollars)
- ease of stitching (if any manual effort is required at all)
- ease of use
Why go with a more complicated, and typically more expensive, array of cameras in a 360 rig then?
360 cameras just can’t match the capabilities of a multi-camera rig. The various technical reasons for this are outside the scope of this guide, but we will provide a brief explanation to help you understand why sacrificing some of the advantages in the first list is likely worthwhile.
A 4K 360 camera typically uses one 4K-capable sensor (the component of a digital camera that captures images). 4K video is typically (and appropriately) considered to be very high quality – why would that not be enough? The field of view of a standard, non-spherical video is limited to where a single camera is pointing. All 8,847,360 (4096 x 2160) of its pixels are dedicated to representing that area. Now consider the additional viewpoints available in a virtual reality video. With a single sensor, we have the same number of pixels available to represent all of that additional data. The resulting video is nowhere near the quality people typically associate with “4K”.
With a rig, we’ve got multiple sensors (one per camera), and while the resulting video resolution isn’t as simple as “4K times the number of cameras” it is much, much better than what can be achieved with a single sensor.
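The pixel-budget argument above can be made concrete with a rough back-of-the-envelope comparison. The 90° horizontal field of view for a standard camera is our illustrative assumption, not a spec:

```python
# Rough pixel-budget comparison between a standard video and a
# single-sensor 360 camera. The 90-degree FOV is an illustrative
# assumption, not a measured spec.
w, h = 4096, 2160            # 4K sensor resolution
total_px = w * h             # 8,847,360 pixels

# Standard camera: all 4096 horizontal pixels cover ~90 degrees.
standard_density = w / 90    # pixels per degree of view

# Single-sensor 360 camera: the same 4096 pixels must cover the
# full 360 degrees of an equirectangular panorama.
spherical_density = w / 360

print(total_px)                              # 8847360
print(standard_density / spherical_density)  # 4.0 - a quarter of the detail
```

Under these assumptions, a single-sensor spherical video delivers only a quarter of the horizontal detail of a standard 4K shot over any given patch of the scene.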
There are two basic methods to gain a wider field of view (capture more of the spherical view) than is possible with a single, standard action camera: more cameras, and wider-angle (“fisheye”) lenses. Changing lenses can reduce the number of cameras required – but it can be complicated and expensive, and a rig and stitching are still required.
We’ll talk more about resultant quality considerations when selecting a 360 rig and cameras in another article. For now, it will suffice to say that for professional-quality spherical videos, you’ll want a rig that holds 6 GoPros or similar action cameras (Xiaomi Yi, SJ4000, and so on). We will also discuss later the differences in these cameras and things you should keep in mind when selecting a camera model. For now, we will just say that any of these models will work, but you want to use the same type of camera for each position in the array (rig).
You’ve got cameras and a 360° rig – now what?
The purpose of this article isn’t to document the usage of a specific camera rig, but we do have some general tips that are fairly universal. One thing to remember is to always put the same camera in the same location. While not absolutely necessary, it helps when trying to diagnose problems and with using stitching templates (more on that later).
Once you’ve got a rig and your cameras in it, there are a couple of considerations when the time comes to capture:
- How should the rig be mounted?
- How will I activate/deactivate the cameras? (starting/stopping recording)
You’ve got to be creative here. Your rig likely has a 1/4-20 SAE (Imperial) threaded connector designed to mount to a tripod. Remember this thread specification if you aren’t familiar with it – it opens up a whole lot of possibilities at the hardware store. Eye bolts (metal rings with a threaded shaft that screws into the tripod hole) allow you to hang the rig from an overhead object. Much more is possible, but tripods and eye bolts are an excellent start.
A note on tripods – depending on the tripod head, it may be difficult to get the threads on the head to reach the threaded hole on the rig. Extenders are readily available (see Amazon/eBay, or contact us for help until we start carrying them) and alleviate this problem. Most rigs are designed to keep the cameras as close to one another as possible (this minimizes parallax error and is a very good thing – more on this later), but it also decreases the clearance between cameras.
A great choice of “tripod” is a light stand, available from around $30 and up on Amazon. These typically extend higher than a standard tripod, which is great for getting your rig up and away from objects (and reducing parallax error), with the added benefit that they don’t have large camera heads on top that can require an extension to mount a rig.
We like to keep both a tripod and a light stand in the back of the truck when looking for opportunities to grab some good 360 views. A tripod is typically only about the height of a person, but has the advantage of adjustable legs for setting up on uneven terrain (think about setting up your rig on rocks in the middle of a stream rushing down a mountain – level ground is not likely to be found). The light stand’s height is great not just for avoiding parallax issues, but simply for getting above obstacles at “man height” – 10 or more feet is common. You do need to consider the effect of wind at these heights.
Read reviews and look around before making your selection, but we HIGHLY recommend this combination for effectiveness, versatility, portability, and affordability.
Light Stand – besides the already-mentioned benefits of a light stand, this thing works great for recording hikes in 360. It attaches easily to the side of my Eberlestock backpack (this one) and even at its lowest height produces good 360 video while hiking, with minimal parallax error (my head and the top of the backpack typically are slightly distorted). Extending it a foot or two is an option as well if you want to get rid of the stitching error, but I love the way it carries at its lowest height. Cost is only about $40 – so the tripod and light stand we suggest come in at a total of around $110.
Tripod – This tripod is very lightweight and compact, legs are quick to extend/retract. Excellent for hiking. Note that it does have a ball head which is great for “normal” cameras but makes it difficult to attach to some of our 360 rigs. You’ll want a 1/4-20 extension. Not a big deal when this awesome tripod is only around 70 bucks. The aluminum material is great too for the way I use it – corrosion resistance is essential when you like to set it in streams and carry it deep into the mountains. Carbon fiber is great as well, but is more expensive and less rugged. Seriously, this tripod is the one – packs small, super light and even more effective in difficult situations.
I’m trying to avoid getting too in-depth here, but I keep mentioning the term “parallax error” and it deserves some explanation and a little consideration. When you look at the same scene from a slightly different angle, the same objects appear differently due to the changed perspective. Since (for this example anyway) we are combining six images taken from different positions, there will always be slight “errors” between each image. This can usually be addressed via the stitching process, but should be considered when selecting a position for the rig.
Keeping moving subjects on a single camera is ideal. With the capabilities of today’s stitching software this is less of an issue, but to minimize post-processing effort and keep the process as simple as possible, account for it when selecting a position whenever you can.
Similarly, the field of view of the individual cameras, combined with their position in the rig, is worth consideration. When viewing 360° videos on YouTube or elsewhere, move around to view objects that are closest to the camera. Frequently, (hopefully) minor issues are evident. Stitching can only be done where images overlap. Think of the field of view as a pyramid with its tip at the lens of each camera, extending outward. Where these pyramids overlap, images can be stitched (depending on the details in the images, but let’s stick to basics). When using a well-designed rig, any object beyond some distance (x) from the rig should be in the field of view of at least 2 cameras, allowing stitching to work. A reality of any multi-camera rig is that an object positioned closer than distance (x) and on the boundary between the FOV (field of view) of multiple cameras may not stitch well. This can lead to distortion in the final image/video product.
Well-designed camera rigs keep the cameras as close to a central point as possible to minimize this issue. With proper consideration it is rarely a problem – even in extreme close-up situations, orienting the close subject to a single camera can frequently eliminate problems.
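As a toy illustration of why distance from the rig helps (the geometry is deliberately simplified and the numbers are assumptions, not specs for any particular rig): for two adjacent cameras pointing in roughly the same direction with lenses a baseline apart, a point midway between them only falls inside both fields of view beyond a minimum distance:

```python
import math

# Toy geometry (our illustration - simplifying assumptions, not specs
# for any particular rig): two adjacent cameras point in roughly the
# same direction, lenses separated by baseline_m metres, each with a
# horizontal field of view of fov_deg degrees. A point midway between
# them enters BOTH fields of view only beyond roughly:
#     d = (baseline / 2) / tan(fov / 2)
def min_overlap_distance(baseline_m, fov_deg):
    half_fov = math.radians(fov_deg / 2)
    return (baseline_m / 2) / math.tan(half_fov)

# Lenses ~6 cm apart with ~120-degree fisheye lenses (assumed values):
print(round(min_overlap_distance(0.06, 120) * 100, 1), "cm")
# Narrower ~90-degree lenses push the overlap boundary farther out:
print(round(min_overlap_distance(0.06, 90) * 100, 1), "cm")
```

The takeaway matches the advice above: the closer the lenses sit to a common center (smaller baseline) and the wider their lenses, the closer a subject can get before it lands in a blind seam.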
Bottom line – try to put some distance between your subject and the rig. It makes things much easier. If that is absolutely impossible, consider different lenses (but this should only be necessary in extreme circumstances).
A rig mounted on a light stand like this one, above and away from the subject, is ideal. As with everything, practice makes perfect. When you get your first rig, devote as much time as possible to acquiring content in various places under various conditions. Take the time to learn the capabilities of whatever software you choose to use – automatic settings are typically good but you can work wonders with some skill. Try indoors, outdoors, underwater (we shouldn’t have to say this – but ensure your equipment is up to it). Have people walk between the fields of view of cameras, both close and farther away. It won’t take a whole lot of practice to learn both what the software can do in your hands and what conditions are ideal with your cameras/rig. This investment will pay dividends.
Be sure to consider the safety of your cameras given the protection the rig offers and the way they are being used. Even if it isn’t raining, a rig mounted atop a vehicle moving at highway speed is likely to encounter dust, insects, and even pieces of gravel. A fully-enclosed housing is highly recommended here. A rig like the Ultra360 provides maximum flexibility.
Starting and Stopping Recording – and some comments on the stitching process
When recording a single standard video, this is usually as simple as pressing a button once to start and again to stop. Essentially, it is the same here – just multiplied by the number of cameras in your rig.
It really isn’t more complicated (other than more buttons to press – and read on for a solution to even that) when filming VR, however synchronization of the video files is related to this action. A very simple explanation of the video stitching process is that a single reference panorama is created from a single stitched image (frame) from each of the component videos, and then the coordinates used in this reference panorama are used for the rest of the video. A video is essentially just a large series of still images, so why wouldn’t this method result in a great video? If the cameras remain at fixed positions relative to one another, doesn’t this work? Yes and no.
For many videos, stitching this single reference panorama and applying the same stitching to the rest of the video will produce OK results. Assuming the orientation of each camera with respect to the others remains constant, depending on the subject it may be perfectly fine with no additional effort required. We are getting way ahead here, but let’s sum things up and simply say that stitching depends on overlapping video files taken at the same *time* from a fixed location.
Note the emphasis on *time*. Stitching is done on a set of still images (frames taken from a video). If your rig, camera settings, and positioning are OK, they should stitch. The big variable here is time.
Remember that stitching is done on a set of still images (one per camera) taken from the video files. Let’s say your rig is stationary at 00:00:01, and then rotates or moves at 00:00:02. If we then extract still images from 3 of the cameras at 00:00:01 and the other 3 at 00:00:03, these images will likely not stitch.
Consider the time needed to begin recording, and the innate differences in the hardware of each individual camera. Even starting multiple cameras “simultaneously” via mechanisms such as the GoPro smart remote does not produce files synchronized to the individual frame (sometimes the start frame is off by a second or more).
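To see why these start-time differences matter, here is a toy calculation (the per-camera offsets below are made-up numbers, not measurements): when each camera begins recording at a slightly different moment, the frame index representing the same wall-clock instant differs from file to file:

```python
# Toy numbers (the per-camera start offsets below are made up) showing
# why "frame N" is not the same moment in every file.
FPS = 30.0

# Hypothetical wall-clock start times, in seconds, for three cameras
# that were all triggered "simultaneously":
start_times = {"cam1": 0.00, "cam2": 0.40, "cam3": 1.10}

def frame_at(camera, wall_clock_s):
    """Frame index in this camera's file for a given wall-clock moment."""
    return round((wall_clock_s - start_times[camera]) * FPS)

# To stitch the scene as it looked at t = 2.0 s, the software must pull
# a DIFFERENT frame index from each file:
offsets = {cam: frame_at(cam, 2.0) for cam in start_times}
print(offsets)  # {'cam1': 60, 'cam2': 48, 'cam3': 27}
```

Synchronization is exactly the job of finding those per-file offsets so the stitcher always compares frames captured at the same instant.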
This is all OK – it’s handled by software as long as the user is mindful of the need for synchronization. Software sync is accomplished by one of two available methods:
Audio sync compares the time when a sound is recognized by each camera to match frames; motion sync does the same based on motion perceived by each camera. We recommend making both available in your video whenever possible:
- clap your hands close to the rig a few times when starting recording
- spin the rig/tripod around a few times
Synchronization is often possible without doing either of these, but take our word for it and please try to do one or preferably both of these actions each time after you begin recording.
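A minimal sketch of the audio method (our own illustration, not the internals of any particular stitching package): cross-correlate two cameras' audio tracks and take the peak as the time offset. This is why the claps help, since a sharp spike every camera hears makes the peak unambiguous:

```python
import numpy as np

# Our own minimal sketch of audio-based sync (not the internals of any
# particular stitching package): cross-correlate two cameras' audio
# tracks and take the correlation peak as the time offset. A sharp clap
# near the rig gives every track a spike, making the peak unambiguous.
def audio_offset(a, b, sample_rate):
    """Seconds by which events in track a occur after the same events in b."""
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)
    return lag / sample_rate

# Synthetic example: the same "clap" spike, with the first track
# starting 0.5 s later than the second, sampled at 1 kHz.
rate = 1000
clap = np.zeros(3000)
clap[100] = 1.0
delayed = np.roll(clap, 500)  # same spike, 500 samples later
print(audio_offset(delayed, clap, rate))  # 0.5
```

Real stitching software works on noisy multi-camera audio rather than clean spikes, but the principle is the same: a distinct, loud event shared by all tracks makes the offset easy to find.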
A quick tip – the GoPro smart remote, capable of controlling multiple cameras with one button press, is a great convenience here. Not to say that pressing a “start” button 6 times (once for each camera) is a great inconvenience, but once you are used to this as part of your workflow the idea of pressing multiple buttons sounds like a real hassle. We live by the “simplify” part of our name (sure, call us lazy) and love the smart remote.
This seems simple enough – you have your files on the cameras’ SD cards, and you copy them to your computer to be processed. It is simple – but tedious due to the number of cards (multiplied by the number of files per card). Fortunately, there are solutions. A small investment in enough multi-card readers (usually 2 for 6 cameras) is worth its weight in gold once you have experienced inserting 6 or more SD cards, one at a time, and copying files before moving on to the next one. We have software that will soon be released (for free) to further automate the process – one action results in all of your files being copied (or moved) to your hard drive in a logical manner. Even without any scripts to manage this process for you, the multi-readers are a godsend. We stick 2 together with a square of double-sided tape to make what is effectively a single 6-card reader.
The Stitching Process
Once you’ve returned from acquiring your content and copied it to your computer, your next step is to combine the individual files to form 360° panoramas – this is referred to as “stitching”.
There are several software packages available to help you here, all of which we cover in detail elsewhere. We will simply list the relevant players here:
- Hugin – still panorama tools (free and open source)
- PTGui – still panorama tools (not free, but popular and widely documented)
- VRDL Panomore
- VideoStitch Studio
- Kolor Autopano Video/Giga
We will not provide a detailed analysis of these tools here. Hugin and PTGui are extremely useful but are not adequate (for the typical user) for creating VR videos. They are, however, excellent for creating reference panoramas (“templates”) for stitching within other software packages.
Stitching software provides automated algorithms for determining where the individual files overlap and should connect to form a spherical image. You should plan to become intimately familiar with whatever software you choose to use – it will make the difference between “neat” 360 content and professional-level results.
You’ve got a stitched video – what now?
For viewing on your computer, use the freely-available Kolor Eyes (recently renamed GoPro VR).
YouTube supports spherical video; Facebook supports both 360° video and still panoramas.
There are a variety of widgets available for rendering this content on your own web site, which we will cover elsewhere.
As always, if you have any additional questions please contact support [AT] simplifyvr.net