With the development of mirrorless camera technology, the distinction between photo-taking devices and video cameras has become further blurred. Today, there are plenty of mirrorless stills/video cameras available that can shoot footage comparable to the standard of professional video tools, but at a fraction of the cost.
In this guide we’ll be explaining the key technologies and features of today’s hybrid stills/video cameras, to help you make the right buying decision. To help you navigate the jargon-heavy world of video, we’ve created a glossary of terms which you’ll find at the end of this article.
The most quoted video specification you’ll see for a camera is the output resolution, typically 1080p/Full HD, 4K or even 8K on the latest cameras. Most recent TVs can display 1080p/Full HD, and the ability to show 4K video, which has four times as many pixels (twice the resolution in each dimension), is becoming increasingly common. Shooting 4K footage gives some flexibility during the editing process, even if your final output will be 1080, but the files tend to be a lot larger and require more storage and a more powerful machine for editing.
The same is true to an even greater degree with 8K capture: it affords you some creative flexibility (in terms of cropping or stabilizing your footage) if you’re outputting a 4K video, but the storage and processing requirements are even greater. Most people will find good quality 4K more useful than 8K footage, most of the time.
An important consideration beyond the quoted output resolution is how the footage is captured: the best cameras capture greater-than-4K resolution and downscale to give highly detailed 4K output, but other models have to sub-sample (only capturing some lines of their sensor, or lumping pixels together) which gives a less-detailed result that is more prone to glitches. Finally, some cameras have to crop in and use a small area of their sensor, which lowers quality (especially in low lighting conditions), and means your footage is more ‘zoomed-in’ than in your camera’s photo mode, making it harder to get a wideangle view. This is a detail most manufacturers don’t publish, so you’ll need to learn the lingo and read reviews to be certain.
The final factor to consider is rolling shutter: the wobbly, Jello-like distortion of subjects that move quickly past the camera. This happens because cameras capture their video one line at a time, scanning down the sensor: on a camera where this readout is slow, there’s more of a risk of your subject moving and being in a different position by the time the camera captures the bottom of each frame. Generally, cameras with smaller sensors are quicker to read out, so are less prone to this problem.
Most video is shot at approximately 24 frames per second or 30 frames per second (with 25 fps being the standard for TV broadcast outside North America). But many cameras offer faster frame rates, which can be used in a number of ways. 60p footage can do a better job of representing motion, so can be a good way of capturing bursts of action. The alternative is to capture 60p or faster and then slow it down to 24 or 30p, to give a slow-motion effect. Most cameras can’t offer fast frame rates at their highest resolution, but 1080 capture at 120 fps or faster is not uncommon, which can be great if your project doesn’t have to be 4K.
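The slow-motion arithmetic is straightforward: the slowdown factor is simply the capture rate divided by the playback rate. Here’s a minimal Python sketch (the function name is ours, purely illustrative):

```python
def slowmo_factor(capture_fps: float, playback_fps: float) -> float:
    """How many times slower footage appears when conformed to a slower timeline."""
    return capture_fps / playback_fps

# A 10-second burst captured at 120 fps, conformed to a 24p timeline,
# plays back 5x slower, stretching to 50 seconds of screen time.
factor = slowmo_factor(120, 24)
print(factor, 10 * factor)  # 5.0 50.0
```

The same arithmetic explains why 60p conformed to a 30p timeline gives a gentler 2x slow-motion effect.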
Another detail to check is whether a camera has any recording restrictions. Some models can only record for 29 minutes and 59 seconds (an old restriction that related to import duty), but most end up being limited simply because high-resolution video capture generates a lot of heat.
The processing needed to capture video generates heat and most stills/video cameras aren’t very effective at dissipating this heat, eventually requiring them to shut down to cool off. Pro video cameras have cooling fans but most stills/video hybrids simply try to transfer this heat to the camera’s body panels, where it can escape into the environment. The best of these designs can continue shooting for extended periods, while other models let you disable their overheat limits (or, at least, make them less stringent). This is rarely a problem if you plan to shoot lots of short clips to edit together but will prevent you leaving the camera running at something like a school recital, especially if you try to shoot in 4K or higher. Fast frame rates can cause similar headaches in terms of heat and storage.
Once you’ve found a camera that shoots good footage at the resolution you want, a key thing to consider is audio. Most audiences are more forgiving of poor-looking footage than they are of bad-sounding video, and it’s a factor easily overlooked if most of your experience is photographic.
A microphone input socket is a must: the internal microphones in cameras tend to be simple affairs that will pick up every movement of the operator’s hands or clothes moving nearby, so you’ll want to be able to attach an external microphone. The next most valuable feature is a headphone socket, so that you can check the volume level and monitor for distracting background sounds: the human brain is great at filtering out the sound of a car passing or an airplane flying overhead, but you won’t be able to remove it from your audio recording when you watch the footage back.
|More video-focused models let you attach audio adapters for connecting and controlling high-end microphones with XLR connectors.|
One of the biggest distinctions in modern cameras is how reliably their autofocus works when capturing video. Unlike stills shooting, video captures all of the camera’s attempts to focus, as well as the moments it’s in focus, so you’ll need a camera that’s decisive and dependable if you’re hoping to trust it to autofocus while you’re recording.
The best performers are able to reliably track subjects (especially human subjects), and let you decide whether they should re-focus rapidly (to keep a moving subject in focus), or slowly and smoothly, for when you want to draw attention from one subject to another. Autofocus depends on both the camera and the design of the lens you use, so it’s worth doing a degree of research (and, perhaps, testing), before you decide to rely heavily on autofocus.
|That little AF/MF switch in the center of the image represents the classic dilemma of video: put faith in autofocus or take control yourself. The most modern cameras make autofocus much more dependable.|
Focus peaking and Zebras
The alternative to autofocus is, as you might expect, to focus manually. This is the way a lot of professional video is still shot. Most modern cameras let you use autofocus to set your initial focus position, before you start recording, then provide a ‘focus peaking’ function that highlights the edges of the in-focus points in your scene. When used with an appropriate lens (ideally one with linear focus response, where the focus always changes by the same amount as you turn the focus ring) and a bit of practice, manual focus is pretty workable, and many scenes can be arranged so that you don’t need to re-focus very often.
|Focus peaking has added a red highlight to the highest-contrast (sharpest-focused) parts of this image, making it easy to see what you’re doing if you manually focus a shot.|
As well as focus peaking, most cameras let you ‘punch-in’ to the video: giving a magnified view of part of the scene to check critical focus. Whereas nearly all cameras will punch-in before you start recording, only some will let you zoom-in to double-check your focus while you’re recording, which is a useful option to have.
The other useful video tool worth checking for, when researching a video camera, is the option to overlay a Zebra pattern onto the screen, indicating a specified brightness. It’s a useful tool for judging exposure, and can be adjusted to check for over-exposed regions or to check you’re exposing skin tones correctly (getting exposure right in video is much more critical than in stills, where you can shoot Raw to preserve some latitude for adjustment).
One detail that won’t be mentioned on a camera maker’s website is whether exposure and other settings are carried over from stills to video shooting. The ideal photo settings are often drastically different from the ideal video settings, so we prefer when exposure, white balance and focus modes are kept separate.
Even with separate (or separable) settings for stills and video, it’s not uncommon to have to add darkening (neutral density) filters to your lens when jumping from stills to video capture, but not having to constantly adjust your settings can help make switching back and forth a lot simpler.
High Dynamic Range capture
Unhelpfully, the term ‘High Dynamic Range’ is used to refer to two things: modes that try to squeeze a wide range of bright and dark tones into standard footage, and modes that capture a wide range of bright and dark tones for playback on HDR TVs that can properly display them as bright and dark.
This second approach can arguably have more of an impact on the viewer than the jump from Full HD resolution to 4K. The most common system for doing this is Hybrid Log Gamma (HLG), a system developed for broadcast TV, designed to show wider dynamic range on the latest TVs but still look good on older sets. The other option is called ‘PQ,’ which is a more sophisticated system, but doesn’t necessarily offer a dramatic difference from HLG. Both systems are supported by YouTube, which will also generate a standard DR (SDR) version for viewers without HDR TVs.
Most HDR standards require 10-bit capture (which has sufficient space to encode the additional color and tonal range that HDR footage needs).
|Log footage (left) looks very flat and washed-out, but retains a lot of information about the scene being shot, allowing more flexibility to achieve a specific ‘look’ and retaining a wider range of bright and dark tones.|
The other type of video that benefits from 10-bit capture is Log recording: a way of capturing and retaining more information about the original scene, to provide greater flexibility when you come to edit the footage.
Log footage tends to look very low contrast and desaturated, to prevent color or tonal data clipping and becoming harder to edit. One downside is that Log capture usually encourages lower exposure levels, which capture more highlight information but risk other parts of your footage looking noisier. The other is that you’ll definitely need to edit and color-grade your footage. This can be as simple as applying a color preset (called a LUT), but it’s an extra step you’ll have to go through.
On the subject of LUTs, most cameras that shoot Log let you apply some kind of correction to their screen or viewfinder to preview what the processed footage might look like, so you’re not looking at grey, washed-out footage.
|Raw video output is becoming increasingly common, but often requires an external recording device and some more work when editing the footage.|
Some cameras can capture or output Raw footage, often requiring an external recorder to encode the results into a quasi-standard format. In principle this gives a level of control over the brightness and white balance of the footage, beyond what’s possible with well-shot Log footage. However, because the Raw output doesn’t have the camera’s processing, sharpening and noise reduction applied, it requires more work to make it look good. And, at present, most editing software has been designed to work with regular compressed footage or Log-encoded video, so the workflow isn’t as smooth as it could be.
Getting the most out of your video camera
Whatever camera you choose, the ability to shoot and edit high resolution video can be an exciting creative adventure. There’s a lot to learn, even if you’re familiar with photography, but the satisfaction of capturing and creating video footage can be immense. There are extensive resources on the internet to help you as you learn, and with the latest models, it won’t be your equipment holding you back.
Glossary of video terms
Sensor sampling – The method used to derive video resolution (eg 4K) from a sensor that may well have more capture pixels than needed. The most common methods are: native (1:1) sampling, line skipping, pixel binning and oversampling.
Native (1:1) sampling – Utilizes the same number of capture pixels as the output video resolution. This often means cropping-in to use a central portion of the sensor. Gives detailed video but the crop results in a narrower field of view, making it hard to achieve wide-angle shooting, and impacts image quality (especially in low light).
Line skipping – Only uses information from select horizontal rows of pixels, skipping the ones in-between. Gives fast readout, but increases the risk of moiré and lowers image quality, as the whole sensor isn’t being used. Common on high-res sensors.
Pixel binning – Combines information from neighboring pixels to create larger effective pixels. Boosts readout speed and can give detail levels comparable to native sampling, but uses more of the sensor, so is less noisy.
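As an illustrative sketch of the binning idea (not any real camera’s pipeline, and ignoring the Bayer color pattern that real sensors must account for), 2x2 binning averages each block of four neighboring pixels into one, quartering the pixel count while averaging away some noise:

```python
def bin_2x2(sensor):
    """Average each 2x2 block of pixel values into one output pixel."""
    return [
        [(sensor[r][c] + sensor[r][c + 1] + sensor[r + 1][c] + sensor[r + 1][c + 1]) / 4
         for c in range(0, len(sensor[0]), 2)]
        for r in range(0, len(sensor), 2)
    ]

# A toy 4x4 'sensor' becomes a 2x2 output:
sensor = [[10, 20, 30, 40],
          [10, 20, 30, 40],
          [50, 60, 70, 80],
          [50, 60, 70, 80]]
print(bin_2x2(sensor))  # [[15.0, 35.0], [55.0, 75.0]]
```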
Oversampling – Captures more pixels than are needed, processes them and then downsizes the footage. Gives more detail than native sampling can with good noise performance and reduced risk of moire. Challenging for the camera in terms of speed and temperature build-up.
Bitrate – The typical amount of data generated by each second of footage. Bigger numbers aren’t automatically better, as different codecs vary in the algorithms they use and in how much information they retain about each frame. But generally, higher bitrates are better at conveying subtle motion and detail.
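Bitrate translates directly into storage requirements. A back-of-the-envelope sketch (the function name is ours; this ignores audio and container overhead):

```python
def megabytes_per_minute(bitrate_mbps: float) -> float:
    """Approximate storage for one minute of footage: megabits/s -> megabytes."""
    return bitrate_mbps * 60 / 8  # 60 seconds, 8 bits per byte

print(megabytes_per_minute(100))  # 100 Mbps fills roughly 750 MB per minute
print(megabytes_per_minute(400))  # a 400 Mbps codec needs about 3 GB per minute
```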
Bit depth – The number of data values used to describe each pixel. Most cameras (and displays) are 8-bit, but 10-bit capture preserves more information about the original scene, so is preferable for HDR TV footage with a wide range of tones, and for Log capture, where you’re likely to make large adjustments.
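The jump from 8-bit to 10-bit is bigger than it sounds: each extra bit doubles the number of values available per channel, as this trivial sketch shows:

```python
def tonal_levels(bit_depth: int) -> int:
    """Distinct values available per color channel at a given bit depth."""
    return 2 ** bit_depth

print(tonal_levels(8))   # 256 levels per channel
print(tonal_levels(10))  # 1024 levels: four times finer tonal gradation
```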
Codec – Shorthand for ‘coder-decoder’, this indicates the method a camera uses to compress video for storage. The most common codecs are H.264 and the newer H.265 (sometimes called HEVC). H.265 is about twice as efficient, meaning that it can record the same quality as H.264 at half the bit rate or higher quality at the same bitrate. H.265 is usually more challenging for computers to edit, though.
Frame rate (eg 23.98p) – The number of frames of video that will be played each second. Cinema tends to use 24p, TV usually uses 30p in North America and 25p elsewhere, and 60p can be used for sports and fast motion. Most phones and monitors will happily show any of these, so it becomes a creative choice that helps define the ‘look’ of your footage.
LUT – Short for Look-up-table, a LUT is a series of values used to take footage shot with one color and contrast ‘look’ and translate it to a give a different ‘look.’ Most commonly used to make flat, low-contrast Log footage look more like regular footage.
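Conceptually, a 1D LUT is just a precomputed table: every possible input level maps to an output level, so grading becomes one table read per pixel. A toy Python sketch (the S-curve here is purely illustrative, not a real grading LUT):

```python
import math

# Precompute a table mapping every 8-bit input level to an output level.
# This example applies a gentle contrast-boosting S-curve.
lut = [round(255 * (0.5 - 0.5 * math.cos(math.pi * v / 255))) for v in range(256)]

def apply_lut(pixels, table):
    """Remap each pixel value through the lookup table -- one read per pixel."""
    return [table[p] for p in pixels]

# Shadows are pushed down, highlights pushed up, midtones left near-unchanged:
print(apply_lut([0, 64, 128, 192, 255], lut))
```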
Moiré – More correctly called aliasing in most instances, it’s a visual error in video footage caused by trying to capture finer detail than the sensor can correctly capture. Moiré is usually seen as bands of false color or ‘dancing’ patterns in fine textural detail.
Resolution – 1080/Full HD, 4K and 8K are the most common resolutions used in video. Full HD is also called 1080, and is 1920 x 1080 pixels. 4K can refer to DCI (4096 x 2160 pixels) or the more common UHD (3840 x 2160 pixels); 8K also has DCI and UHD versions, which double the resolution in each dimension. DCI footage has a wider, approximately 1.90:1 aspect ratio, whereas UHD is the same 16:9 shape as most TVs.
Rolling shutter / Jello effect – A type of motion distortion caused when your subject moves while each video frame is still being captured, causing warped vertical lines and a disconcerting ‘wobble’ to the footage. It’s less noticeable on cameras with fast sensor readout.
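The size of the effect is easy to estimate: the skew is simply how far the subject moves during the sensor’s readout. An illustrative Python sketch (the numbers are chosen for round results, not taken from any real camera):

```python
def rolling_shutter_skew(subject_speed_px_per_s: float, readout_time_s: float) -> float:
    """Pixels of horizontal offset between the top and bottom of one frame."""
    return subject_speed_px_per_s * readout_time_s

# A subject crossing a 3840px-wide frame in one second, on a sensor with a
# slow 1/30s readout, leans by about 128 pixels from top to bottom:
print(rolling_shutter_skew(3840, 1 / 30))
```

Halve the readout time and you halve the lean, which is why fast-readout (and smaller) sensors show less wobble.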