This article illustrates when and where you can optimize quality by shooting in progressive rather than interlaced mode.

Shooting for Streaming – Progressive or Interlaced?


Introduction:

This article compares the streaming quality produced by interlaced and progressive source video. Before I started this article, I knew the following:

· That if you were shooting for streaming, and had full frame rate (24 fps or higher) progressive capabilities, you should definitely shoot in progressive mode.

· That if you were buying a camcorder for streaming, you should buy one with progressive capture capabilities (like the Canon XH A1 I used in my tests).

What I didn’t know was whether to recommend that you ditch your interlaced-only camcorder (like my Sony HDR-FX1) in favor of a progressive camcorder if you were currently shooting for streaming. Since affordable, high-quality progressive camcorders only recently became available, this is a critical issue for many producers equipped with interlaced camcorders. As with many quality-related issues, the answer is a definite “it depends.”

Let me give you the CliffsNotes version right away. In a controlled office environment, with relatively low motion and low detail, which is pretty much your standard, set-based streaming shoot, you’ll see little difference between shooting in progressive and interlaced mode, even with a high-energy performer like John Madden or Dick Vitale.

If you’re shooting sports or other high-motion video outside or under very good lighting, where you can sustain a high shutter speed, put your interlaced camcorder away and buy a progressive camcorder. Finally, if you frequently shoot under conditions that you can’t control, like documentaries, roving podcasts and the like, a progressive camcorder will likely also improve the quality of your streaming video.

Let’s discuss the theoretical background of interlaced vs. progressive shooting, and then we’ll have a look at my tests.

Technology Background

As you probably know, analog TVs don’t display 29.97 frames per second; they display 59.94 fields per second, with the first field of each frame containing the odd lines (1, 3, 5, 7) and the second field the even lines (2, 4, 6, 8). This interlaced display technique was deemed necessary to promote smoothness during the early days of television.
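
To make the field structure concrete, here is a minimal sketch in Python with NumPy (my own illustration, not anything used in the tests described below) of how the odd and even lines of a frame separate into, and weave back from, two fields:

import numpy as np

def split_fields(frame):
    """Split a frame (height x width) into its two fields.
    With zero-based array indexing, display lines 1, 3, 5, ... are rows
    0, 2, 4, ... and display lines 2, 4, 6, ... are rows 1, 3, 5, ..."""
    return frame[0::2, :], frame[1::2, :]

def weave_fields(field1, field2):
    """Interleave two fields back into a full frame, which is effectively
    what an interlaced TV does. If the fields were captured 1/60th of a
    second apart and the scene moved, the woven frame shows the comb
    artifacts visible in Figure 1."""
    frame = np.empty((field1.shape[0] + field2.shape[0], field1.shape[1]),
                     dtype=field1.dtype)
    frame[0::2, :] = field1
    frame[1::2, :] = field2
    return frame

# A 480-line frame of luma samples, round-tripped through its fields.
frame = np.random.randint(0, 256, size=(480, 720), dtype=np.uint8)
field1, field2 = split_fields(frame)
assert np.array_equal(weave_fields(field1, field2), frame)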

On traditional, interlaced camcorders, these fields are captured 1/60th of a second apart. This obviously works well when playing back on interlaced television sets, but creates problems for streaming formats, which are all progressive formats, and display complete frames from top to bottom, not fields. This is shown in Figure 1.

Figure 1: Interlaced video on the left, deinterlaced video on the right.

On the left is the original video, where we see two observers, two lines of text and a blurry skateboarder coming down a ramp. The effect is most clear with the observer, whose position during each of the two fields that comprise the frame is quite distinct. In the image on the right, I applied a deinterlacing filter that intelligently combined the two fields into a single frame, minimizing the artifacts significantly. If you produce lots of streaming video from interlaced source, as I have, the ability to enable and perfect deinterlacing filters is a critical skill.
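
To show roughly what such a filter does, here is a deliberately naive sketch in Python/NumPy (my own construction, not the algorithm in any of the products mentioned in this article): it keeps the first field and rebuilds the second field’s lines by interpolation.

import numpy as np

def deinterlace_naive(frame):
    """Keep the lines from the first field and replace each line from the
    second field with the average of the lines above and below it.
    Commercial filters are far smarter, but the principle is the same:
    discard or repair the lines that disagree with their neighbors."""
    out = frame.astype(np.float32)
    height = frame.shape[0]
    for row in range(1, height, 2):   # lines from the second field
        above = out[row - 1]
        below = out[row + 1] if row + 1 < height else out[row - 1]
        out[row] = (above + below) / 2.0
    return out.astype(frame.dtype)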

At least for me, deinterlacing has proved such a valuable technique that I tend to forget that it’s like a bandage you put on a cut; in some instances it works well, but you’d rather avoid the cut in the first place. That’s where progressive video comes in. Rather than shooting two fields for each frame, progressive camcorders capture one complete frame at a time and, if necessary, divide those frames into fields for display on analog TVs. Since progressive camcorders capture a complete frame, there’s no interlacing, and no need for deinterlacing. That’s why, if buying new, you should definitely buy a camcorder with progressive capabilities, like Canon’s XH A1, used in our testing.

But that’s where we started. The question on the table is whether, if you’re currently shooting in interlaced mode, a progressive camcorder will deliver better-quality results. To set up this discussion, let’s review the scene-related characteristics that tend to highlight deinterlacing artifacts, and therefore favor progressive camcorders. Then we’ll analyze some delivery-related factors that control whether your viewers will actually see the difference between a progressive and an interlaced camcorder.

When Interlacing Artifacts Matter

Deinterlacing artifacts are most pronounced during high motion sequences with lots of sharp detail, as we saw in Figure 1, but in many instances, only when the shutter speed is fast enough to preserve the detail. These three criteria make perfect sense. If there is minimal or no motion, there is little or no difference between the two fields, so no interlacing artifacts to start with.

If there is minimal sharp detail, say when shooting fast-moving clouds or waves breaking on the shore, it’s very challenging to perceive interlacing issues. For example, if you look at the blue back wall in Figure 1, you really can’t tell that there’s an interlacing problem. Rather, it’s the text on the wall, the skateboarder and the observer, along with the edges on the platform and back wall, that highlight the problem.
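
A crude way to quantify this is to measure how much the two fields of a frame disagree; the toy metric below (again Python/NumPy, purely illustrative) stays near zero for the static blue wall but climbs for the skateboarder, the text and the platform edges.

import numpy as np

def field_difference(frame):
    """Mean absolute difference between the two fields of a frame.
    If there is little motion, or only soft detail like clouds or a blurry
    wall, the fields nearly match and the score stays low; sharp detail
    that moves between the two field exposures drives it up. (Very fine
    static vertical detail also raises it, since adjacent lines differ.)"""
    field1 = frame[0::2, :].astype(np.float32)
    field2 = frame[1::2, :].astype(np.float32)
    rows = min(field1.shape[0], field2.shape[0])
    return float(np.mean(np.abs(field1[:rows] - field2[:rows])))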

Finally, the shutter speed has to be high enough to preserve the detail during the motion. For example, in Figure 2, shot at a shutter speed of 100 in both interlaced and progressive modes, the young girl walking is a bit sharper in the deinterlaced frame (center) than in the progressive frame (right). That’s because a shutter speed of 100 wasn’t fast enough to preserve the moving detail, most notably in her legs.

For completeness, on the left is the original frame of the interlaced video, pre-deinterlacing. The center frame shows the results after deinterlacing, while the frame on the right is the progressive frame, which obviously needed no processing. I shot this sequence of shots in HDV mode, which accounts for the funky interlaced pattern on the left, and all others in DV mode for reasons explained below.

Figure 2. The original interlaced image on the left, deinterlaced in the middle and progressive on the right.

As you probably know, shutter speed often isn’t a choice as much as a compromise. In this gymnasium, the lighting was perfect for playing and watching, but inadequate for shooting, and I had to pump the gain in both camcorders up to 12 dB to support the 100 shutter speed and produce even decent exposure. More gain would have enabled a faster shutter speed, but might have degraded the quality of the compressed footage more noticeably than the additional preserved detail would have improved it.

Contrast this image with that shown in Figure 3, which was shot outside at a shutter speed of 1/2000th of a second. Even with my 120 mph plus (har, har) golf swing, the progressive camcorder captured the image perfectly, and you see clear jaggies on the interlaced club, even after deinterlacing.

Figure 3. Here we see jaggies on the club even after deinterlacing.

I should state for the record that not all tools deinterlace equally well, and that not all encoders even offer deinterlacing filters. For example, Adobe’s Flash 8 Video Encoder does not deinterlace, though the CS3 version does. Microsoft’s Windows Media Encoder has a tragically poor deinterlacing filter, as does QuickTime Player. When encoding with these tools, you should scale and deinterlace in your video editor, not in the encoding tool.

The best deinterlacing tool I’ve ever used is in a product called AlgoSuite, produced by Algolith (www.algolith.com). It’s expensive and slow, but could prove invaluable if you’ve got problematic interlaced source video to convert to streaming. Obviously, the better the deinterlacing tool, the less difference between interlaced and progressive source video. If you’re using AlgoSuite or a similar tool already, expect a smaller quality benefit from switching to a progressive source camcorder, though not having to pre-process your video through AlgoSuite would save tons of time. For these tests, I deinterlaced in Adobe Premiere Pro.
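
To illustrate why deinterlacers differ so much, here is a toy motion-adaptive variant of the earlier sketch, again in Python/NumPy and again my own illustration rather than the approach used by Premiere Pro, AlgoSuite or any encoder named here: it interpolates only where adjacent lines disagree strongly, so static fine detail survives.

import numpy as np

def deinterlace_adaptive(frame, threshold=12.0):
    """Interpolate a second-field line only where it differs noticeably
    from its vertical neighbors (a crude per-pixel motion test); leave
    the rest alone, so static detail isn't blurred away. Better tools
    refine the motion test, which is where most of the quality (and
    processing time) differences come from."""
    out = frame.astype(np.float32)
    for row in range(1, frame.shape[0] - 1, 2):   # lines from the second field
        neighbors = (out[row - 1] + out[row + 1]) / 2.0
        moving = np.abs(out[row] - neighbors) > threshold
        out[row][moving] = neighbors[moving]      # repair only the moving pixels
    return out.astype(frame.dtype)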

When Interlacing Artifacts Are Noticeable in the Final Stream

The image shown in Figure 3 is the original uncompressed video, straight from the Premiere Pro timeline. When will the artifacts be most noticeable in the compressed stream?

Well, obviously, if you distribute the video at 160×120 at 28.8 kbps, changing from an interlaced to a progressive camcorder will make little, if any, difference. On the other hand, at the rough parameters that ESPN uses (440×330 at about 600 kbps), the difference is clearly visible. This is shown in Figure 4, where I zoomed in on both videos to make the artifacts more prominent in the figure.

Figure 4. This video stream works out to 440×330 at about 500 kbps, and the jaggies from the deinterlacing artifacts are noticeable.

The interlaced source clip is a bit fuzzier than the progressive image, in part because of deinterlacing, and in part because I had to apply more zoom to make it equal the size of the progressive image. That said, the jaggies on the club are all deinterlacing artifacts, and the club on the right is much clearer. Moreover, you also notice slight jaggies throughout the image, on the sweater, in the neckline and arms.

Overall, it takes a confluence of factors to actually see the benefit of shooting in progressive mode as opposed to interlaced. That is, you need high-motion video with sharp detail, shot at a very high shutter speed, along with video delivered at a resolution large enough and a data rate high enough to show off and preserve the quality differences.

With this as background, let’s walk through my test scenarios and view the results. In the case of the high-motion videos, I’ll identify the shutter speed tipping point where interlaced mode produces results similar to progressive. In addition, the final test revealed a completely different set of circumstances where progressive video totally outshone interlaced.

Office Environment

The first test emulates the carefully crafted streaming environment, with a dark background, flat lighting and minimal detail. The shot is a medium close-up, chest and higher, with minimal room for arm waving. In this and all subsequent tests, I shot in interlaced mode with the Sony HDR-FX1 and in progressive mode with the Canon XH A1. I shot all these tests in DV mode, some in 16:9 and some in 4:3, eschewing HDV because the XH A1 has higher-resolution CCDs than the FX1 and produces a better high-def image. At DV resolutions, resolution was virtually identical, focusing the test on the format differences rather than the qualitative differences between the cameras themselves.

Figure 5. The original interlaced image on the left, deinterlaced in the middle and progressive on the right.

For these indoor shots, I disabled gain, set shutter speed at 60 and adjusted aperture and/or lighting until exposure on the face was optimal, using the waveform monitor in Adobe’s DV Rack product as a guide. I used single key lighting with a back light for this and the next shoot, and with 1,000 watts shining down from behind the camcorders, both were at or near the lowest available aperture. I perhaps could have boosted shutter speed to 100 without darkening the image, but no further.

As with all the videos I produced for this article, I output the two files, side by side, using multiple codecs, including Flash, Windows Media and QuickTime, and multiple resolutions, including 640×480, 440×330 and 320×240. None of the videos in this test run showed any noticeable difference in quality between the progressive and interlaced source.

Analyzing the source video and shooting parameters, this isn’t really surprising. There’s very low motion due to the tight framing, no sharp detail like the shaft of a golf club, and the shutter speed was relatively slow, dulling the minimal detail in the shot. After viewing these results, I wondered whether shooting a medium shot with increased motion would make a difference – the John Madden/Dick Vitale scenario. Specifically, does the result change if you have a highly energetic speaker shot in a medium shot to make room for the motion?

Figure 6. Lots of interlacing artifacts on the left, but deinterlacing worked well (center) and the progressive image was slightly blurry due to the relatively low shutter speed.

So, using the same camera settings, I zoomed out and shot again, playing Dickie V (which I found very freeing). Though the increased motion produced more visible interlacing artifacts (on the left in Figure 6), the deinterlaced image looks almost identical to the progressive image. In real-time playback trials, the interlaced video looked almost identical to the progressive source video. My conclusion? In a controlled, compression-friendly environment, with minimal detail, relatively low motion and typical studio lighting, progressive source video won’t look substantially better than interlaced.

Sports

We’ve already seen that at a shutter speed of 1/2000th of a second, progressive source video looks better than interlaced source video. This test was designed to determine how low you can go in terms of shutter speed and still see the difference. To find out, I shot the same motions – golf and baseball swings – while adjusting the shutter speed from 2000 down to 30 and controlling exposure via a combination of aperture and ND filters. Gain was set to zero in all tests.

Figure 7. The shaft starts to get blurry at a shutter speed of 500.

With a pitching wedge, the shaft started to get blurry at a shutter speed of 500, as shown in Figure 7, making the deinterlaced video tough to distinguish from the progressive video. What conclusions can we draw from this? Overall, if you’re shooting high-motion video and don’t currently shoot at a shutter speed of 500 or higher, you may not see a significant difference between progressive and interlaced source, at least as it relates to motion-related deinterlacing artifacts.

This is significant, since at least some sports videographers recommend shooting at a shutter speed of 60, blurry still frames and all, to minimize jitter and maintain a smooth looking image. Remember, your viewers will watch the entire video in real time, and won’t get to study an individual frame for blurriness.
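
For a rough sense of how shutter speed translates into blur, here is a back-of-the-envelope calculation in Python; the club speed, framing width and DV frame width are my assumptions for illustration, not measurements from the shoot.

# Motion blur (in pixels) is roughly object speed across the frame times exposure time.
club_speed_mph = 90.0                    # assumed club-head speed
speed_m_per_s = club_speed_mph * 0.447   # mph to meters/second (about 40 m/s)
framing_width_m = 3.0                    # assumed width of the scene captured in frame
frame_width_px = 720                     # DV frame width

speed_px_per_s = speed_m_per_s / framing_width_m * frame_width_px

for shutter in (1 / 2000, 1 / 500, 1 / 100, 1 / 60):
    blur_px = speed_px_per_s * shutter
    print(f"1/{round(1 / shutter)} sec -> about {blur_px:.0f} pixels of blur")

# Under these assumptions, 1/2000th blurs the club head by only a few pixels,
# while 1/60th smears it across well over a hundred, which is why slower shutter
# speeds hide the interlaced vs. progressive difference behind plain motion blur.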

Interestingly, however, in other areas of the frame, it seemed clear that the progressive frame retained much greater detail than the interlaced frame, particularly in areas of fine detail like branches in the background. This impression was confirmed on my final shoot, which I’ll affectionately call the horror show.

The Horror Show

The horror show was a series of three one-act plays performed by the Galax Theater Guild at a local gathering spot called String Bean. The stage was next to a plate glass window overlooking the street, and the plays started at 7:00 under full daylight and ended at about 8:30, well after sunset. A thin rattan curtain covered the window, dulling the incoming light and muting the main street traffic and passersby, but it provided tons of extraneous detail. I could have removed the curtain, but was more concerned with traffic and backlighting.

All stage lighting was provided by overhead fluorescent lights of the normal office variety, and some of the lights directly over the stage were out with no replacement bulbs in sight, exacerbating the backlighting. With shutter speed set at 60 and each camera’s aperture wide open, I still had to pump gain up to 12 dB to brighten the actors’ faces sufficiently.

To complete the picture, the head of the theater guild, who had asked me to shoot, breezed in about 30 seconds before show time, and it wasn’t until the shoot was over that I noticed he was wearing a black-and-white checkered coat. Between the curtain, the coat, the backlighting and the gain, I had no hope for the video, at least not without some significant filtering that would blur all detail. As it turned out, these circumstances tended to confirm that the progressive camcorder retained significantly more detail than the interlaced camcorder.

Figure 8. The progressive camcorder retains more detail in the coat than the interlaced camcorder.

For example, Figure 8 shows Theater Guild president Donn Bogert belting out an introductory number in the coat I should have ripped from his back before letting him onstage. As you can see, the detail is much better preserved on the right, in the progressive image.

Figure 9. Ditto for the detail in the rattan curtain, which is much sharper on the right.

The progressive camcorder also reproduced much better detail with the rattan curtain, as shown on the right in Figure 9. This was unexpected at first, particularly because resolution tests that I had performed previously with the Canon XL H1, which shares the same optics as the XH A1, showed no difference in resolution when shooting in progressive vs. interlaced.

Then I remembered that unlike my resolution chart, which was fixed to a wall, Mr. Bogert’s coat, and the curtains, were probably moving slightly during the shooting, the coat sharing his body movements and the curtains shifting slightly with air conditioning, or motion on the stage.

None of these objects were moving much, of course, but with such fine detail, all it takes is a shift of one line, and the editing or encoding software will have to deinterlace the results. As you can see in Figures 8 and 9, in frame to frame comparisons, the progressive source video simply retained more detail. During real time playback of the streaming file, the interlaced source also showed more distracting motion in the background. In addition, the detail in the scene also highlighted minor deinterlacing artifacts, like jagged edges on chairs or eyeglasses, or the paper scripts used by the actors, that obviously appeared only in the interlaced source video.

What do I take from this? If you shoot lots of footage bound for streaming in the real world, you often can’t control factors like foreground or background detail, or even what your subjects are wearing. In these instances, shooting in progressive mode could yield significant quality benefits over interlaced mode, even at slow shutter speeds and in scenes with limited motion. This may be especially true when shooting with handheld or even shoulder-mounted cameras, where it’s impossible to keep the image perfectly stable.

To be honest, this finding was new to me, and I couldn’t confirm similar results in any other tests of progressive vs. interlaced source footage. Though the results make sense, that (of course) doesn’t mean they are correct; they could have been idiosyncratic to the last two setups I described, or even to the two cameras used in testing.

Overall, if you’re shooting in interlaced mode, and your streaming video doesn’t exhibit lots of noticeable deinterlacing artifacts, or lost fine detail, keep doing what you’re doing. On the other hand, if your output has been plagued by these issues, moving to progressive source could definitely resolve them.

About Jan Ozer

I help companies train new technical hires in streaming media-related positions; I also help companies optimize their codec selections and encoding stacks and evaluate new encoders and codecs. I am a contributing editor to Streaming Media Magazine, writing about codecs and encoding tools. I have written multiple authoritative books on video encoding, including Video Encoding by the Numbers: Eliminate the Guesswork from your Streaming Video (https://amzn.to/3kV6R1j) and Learn to Produce Video with FFmpeg: In Thirty Minutes or Less (https://amzn.to/3ZJih7e). I have multiple courses relating to streaming media production, all available at https://bit.ly/slc_courses. I currently work at www.netint.com as a Senior Director in Marketing.
