VideoStitch Studio
DESCRIPTION
VideoStitch Studio takes footage from multiple cameras and stitches it into one seamless 360° spherical video. VideoStitch SAS originally developed the software in Paris, and when the company closed in early 2018, a non-profit organization called stitchEm acquired the source code and made it available under the MIT License. The program is available for Windows and macOS and supports a wide range of camera rigs.
A typical project progresses through four stages: the user imports video clips from each camera, synchronizes them on a common timeline, sets up the stitch geometry along with color and exposure correction, and exports a standard equirectangular file that plays on YouTube 360, Facebook 360, Vimeo, and consumer VR headsets.
STITCHING AND CALIBRATION
The automatic calibration engine prompts the user to select one or a few representative frames, then computes an optimal stitching template in seconds. Two algorithms handle different aspects of the job. The geometric algorithm searches the overlapping regions of adjacent camera views for corresponding control points and pulls the images into alignment. The photometric algorithm measures the vignetting and exposure response of each lens, then corrects for these differences so the seams blend evenly.
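The photometric idea can be sketched as a small least-squares problem: given the mean luminance each camera measures in its overlap regions, solve for per-camera gains that make adjacent views agree. This is an illustrative model only, not VideoStitch's actual solver; the function name and pair format are invented for the sketch.

```python
import numpy as np

def solve_gains(pairs, n_cams):
    """Solve per-camera exposure gains from overlap measurements.

    pairs: list of (i, j, mean_i, mean_j) tuples, where mean_i and mean_j
    are the mean luminance cameras i and j record in their shared overlap.
    Solves log(g_i) - log(g_j) = log(mean_j / mean_i) in least squares,
    anchored so the gains average to 1 (log-gains sum to 0).
    """
    rows, rhs = [], []
    for i, j, mi, mj in pairs:
        row = np.zeros(n_cams)
        row[i], row[j] = 1.0, -1.0          # log(g_i) - log(g_j)
        rows.append(row)
        rhs.append(np.log(mj / mi))
    rows.append(np.ones(n_cams))            # fix the global scale
    rhs.append(0.0)
    log_g, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(log_g)
```

Applying each gain to its camera's footage evens out exposure before blending; with two cameras measuring 100 and 200 in their overlap, the solved gains bring both to the same corrected level.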
When automatic calibration is insufficient, as in scenes with no texture detail or with objects very close to the camera rig, the user can import a template from PTGui or Hugin instead. Both tools specialize in calibrating panoramic images and work well as a fallback. The user can also reposition and level the output sphere by dragging directly on the preview, instead of typing in numerical values.
SYNCHRONIZATION
VideoStitch Studio offers four synchronization methods: audio-based, motion-based, flash-triggered, and a combined audio-video mode. Each method targets frame-accurate alignment between cameras that recorded independently, without a hardware sync signal. The preview window reflects any manual offset adjustment immediately at full resolution, so the user can dial in the correct sync by eye without waiting for a render. Changing the offset does not require re-importing the source clips.
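Audio-based sync of this kind is commonly done by cross-correlating the two soundtracks and reading the offset off the correlation peak. The sketch below shows that idea under invented names; it is not VideoStitch's implementation.

```python
import numpy as np

def audio_offset(track_a, track_b):
    """Estimate the sample offset between two recordings of the same scene.

    Returns the lag such that track_b[n] ~ track_a[n + lag]; a negative
    lag means track_b is delayed relative to track_a.
    """
    a = track_a - np.mean(track_a)           # remove DC bias
    b = track_b - np.mean(track_b)
    corr = np.correlate(a, b, mode="full")   # correlation at every lag
    return int(np.argmax(corr)) - (len(b) - 1)
```

In practice the lag in samples divides by the audio sample rate to give an offset in seconds, which then converts to a frame offset for the video timeline.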
COLOR AND EXPOSURE
Each source clip has its own color and exposure controls, and the stitched output has a separate layer of controls on top. All adjustments support keyframes, meaning that values can change gradually over the duration of the clip. The automatic exposure compensation mode scans the overlap areas between adjacent cameras and calculates a correction to even out the brightness and color differences over the entire sphere. The Feather slider in the output settings controls the hardness or softness of the blend at each stitch seam.
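The effect of a feather control can be illustrated with a one-dimensional blend across a seam: a linear ramp whose width is the feather value, where zero gives a hard cut and larger values soften the transition. This is a toy model with invented names, not the application's blending code.

```python
import numpy as np

def feather_blend(left, right, feather):
    """Blend two 1-D luminance rows covering the same overlap region.

    The ramp runs 0 -> 1 over `feather` samples, centered in the overlap;
    feather=0 degenerates to a hard cut at the seam center.
    """
    n = len(left)
    center = (n - 1) / 2
    ramp = np.clip((np.arange(n) - center) / max(feather, 1e-9) + 0.5,
                   0.0, 1.0)
    return left * (1.0 - ramp) + right * ramp
```

With a wide feather, brightness differences between the two cameras spread gradually across the overlap instead of showing as a visible line.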
PERFORMANCE AND GPU SUPPORT
The rendering engine splits one project across multiple graphics cards simultaneously, reducing overall processing time on workstations with more than one GPU. The user can dedicate one card to the live preview in the Studio interface and another to the batch processor, so both tasks run in parallel. Version 2.3 added support for AMD and Intel graphics in addition to Nvidia CUDA, which opened the software to most Mac computers from 2013 onwards and to Intel NUC systems.
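One simple way to picture splitting a render across cards is partitioning the frame range into contiguous chunks, one per device. The scheduling sketch below is a hypothetical illustration, not how the engine actually divides work.

```python
def split_frames(n_frames, n_gpus):
    """Partition [0, n_frames) into n_gpus contiguous (start, end) ranges.

    Earlier devices absorb the remainder, so range sizes differ by at
    most one frame.
    """
    base, extra = divmod(n_frames, n_gpus)
    ranges, start = [], 0
    for g in range(n_gpus):
        count = base + (1 if g < extra else 0)
        ranges.append((start, start + count))
        start += count
    return ranges
```

Each GPU then stitches its own range independently, and the output segments concatenate in order.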
EXPORT AND OUTPUT
VideoStitch Studio exports in equirectangular 360×180 format, the standard that is natively accepted by YouTube 360, Facebook 360, Littlstar, GearVR and Cardboard players. The user can select from ProRes (four quality levels), H.264, MPEG-2, or Motion JPEG for video, or export individual frames as sequences of JPEGs, PNGs, or TIFFs. H.264 supports variable and constant bitrate modes. Projects that require additional editing in Adobe Premiere Pro or DaVinci Resolve benefit from a High Quality ProRes export or an uncompressed image sequence. The batch stitcher allows the user to queue multiple projects and walk away while the software processes them in order. A resolution of 4096×2048 pixels provides the best quality for full equirectangular output.
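The 2:1 aspect ratio of equirectangular output follows directly from the projection: each pixel column maps to a fixed longitude step over 360° and each row to a latitude step over 180°. A small sketch of that mapping (illustrative arithmetic, with an invented function name):

```python
def pixel_to_lonlat(x, y, width, height):
    """Map an equirectangular pixel center to (longitude, latitude) degrees.

    Longitude spans -180..180 left to right; latitude spans 90..-90
    top to bottom. At 4096x2048 each pixel covers about 0.088 degrees.
    """
    lon = (x + 0.5) / width * 360.0 - 180.0
    lat = 90.0 - (y + 0.5) / height * 180.0
    return lon, lat
```

A 4096×2048 frame therefore covers the whole sphere at uniform angular sampling, which is why that resolution is quoted as full quality for equirectangular output.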
VIDEO STABILIZATION
The built-in stabilization tool analyzes footage and corrects shake and vibration introduced by handheld shooting, vehicle-mounted rigs, or action-sports capture. This step runs within the same project, with no need to switch to an external application.
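Stabilization of a spherical video amounts to estimating the camera's smooth intended path and rotating out the per-frame residual. The sketch below shows that idea for a single yaw axis using a moving average; the names are invented and the real tool works on full 3-D orientation, not one angle.

```python
import numpy as np

def smooth_path(yaw, window):
    """Moving-average estimate of the smooth camera path.

    yaw: per-frame yaw angle in degrees; window: odd average length.
    Edges are padded by repetition so the output keeps the input length.
    """
    pad = window // 2
    padded = np.pad(yaw, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

def shake_correction(yaw, window):
    """Per-frame rotation to apply so only the smooth path remains."""
    return smooth_path(yaw, window) - yaw
```

A steady shot yields zero correction; high-frequency jitter shows up as the difference between the raw and smoothed paths and is rotated away frame by frame.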
SYSTEM REQUIREMENTS
VideoStitch Studio is available for Windows 7 or higher (64-bit) and macOS with a compatible AMD, Intel or Nvidia GPU. The Nvidia CUDA requirement is only for the companion live stitching tool Vahana VR. VideoStitch Studio itself accepts all three GPU vendors from version 2.3 onwards. Projects at high output resolutions benefit from 8 GB of RAM or more. Available storage depends on the amount of raw multi-camera footage involved in the project.
COMPATIBLE CAMERA RIGS
The software takes footage from nearly any multi-camera 360° setup. Tested rigs include four-, five-, and six-camera GoPro Hero arrays, single-camera panoramic setups with the Canon 5D Mark II, and the four-camera Elmo QBiC Panorama X. Custom rigs built from action cameras or industrial lenses work too, as long as the user has a valid calibration template; templates from PTGui or Hugin import directly into the project. The official download page contains pre-built sample projects for common rigs, including a six-camera GoPro Hero2 paramotor rig.
LICENSING AND AVAILABILITY
The free demo mode has no time limit and locks no features, but a watermark is added to any output video larger than 1000×500 pixels. A paid license removes the watermark. Each license key supports two activations by default; the support team handles requests for additional activations. The full source code for VideoStitch Studio and Vahana VR is available at github.com/stitchEm/stitchEm under the MIT License.