Video: Canon Lenses – A Tale of Three Formats

Lenses are seen by some as a black art, by others as a mass of complex physics equations, and by others still as their creative window onto the stories that need to be told. Whilst there is an art to using lenses, and it’s true that making them is complex, understanding how to choose them doesn’t require a PhD.

SMPTE Fellow Larry Thorpe from Canon is here to make the complex accessible as he kicks off talking about lens specifications. He discusses the 2/3-inch image format, comparing it with Super 35 and full frame. He outlines the specs that are most discussed when purchasing and choosing lenses and shows the balancing act that every lens embodies: maximising sharpness whilst minimising chromatic aberration. On the subject of sharpness, Larry moves on to discuss how the camera’s ability to sample the video interacts with the lens’s ability to capture optical resolution.

Larry considers a normal 1920×1080 HD raster with reference to the physical size of a TV 2/3-inch sensor. That works out to be approximately 100 line pairs per millimetre. Packing that into 1mm is tricky if you also wish to maintain the quality of the lines. The ability to transfer this resolution is captured by the MTF – the Modulation Transfer Function – which documents the contrast you would see when certain frequencies are viewed through the lens. Larry shows that for a typical lens, these 100 line pairs would retain 70% of the original contrast. The higher the frequency, the lower the contrast, until the image just becomes a flat grey. Larry then looks at a 4K lens, showing that its needs are 200 line pairs per mm and, looking at the MTF, we see that we’re only reaching 50% contrast.
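The 100 line pairs per millimetre figure is simple to sanity-check. A minimal sketch, assuming a nominal 9.6mm active width for a 2/3-inch sensor (a commonly quoted figure, not one from the talk):

```python
# Back-of-the-envelope check of the lp/mm figures quoted for a
# 2/3-inch sensor. Assumed active width: 9.6 mm (nominal).

SENSOR_WIDTH_MM = 9.6   # 2/3-inch format, assumed value
HD_PIXELS_WIDE = 1920
UHD_PIXELS_WIDE = 3840

def line_pairs_per_mm(pixels_wide: int, sensor_width_mm: float) -> float:
    """One line pair needs two pixels: one light, one dark."""
    return (pixels_wide / 2) / sensor_width_mm

print(line_pairs_per_mm(HD_PIXELS_WIDE, SENSOR_WIDTH_MM))   # 100.0
print(line_pairs_per_mm(UHD_PIXELS_WIDE, SENSOR_WIDTH_MM))  # 200.0
```

Doubling the pixel count on the same sensor doubles the spatial frequency the lens must resolve, which is exactly why the 4K case lands at 200 lp/mm.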

Aberrations are important to understand as every lens suffers from them. Larry walks through the five classical aberrations, both focus-related and chromatic. To the beginner, chromatic aberrations are perhaps the most obvious, where colours – often purple – are seen on the edges of objects. This is also known as colour fringing. Larry talks about how aperture size can minimise the effect, and keeping your image above the 50% contrast limit in the MTF will stop chromatic aberration from being obvious. As a reality check, we then see the calculated limits beyond which it’s simply not possible to improve. Using these graphs, we see why 4K lenses offer less opportunity to stop down than HD lenses.

Sharpness zones are zones within a lens optimised for different levels of sharpness. The centre, unsurprisingly, is the sharpest as that’s where most of the action is. There is then a middle and an outer zone which are progressively less sharp. The reason for this is to recognise that it’s not possible to make the whole image sharp to the same degree. By designing this way, we are able to create a flatter central zone with a managed decrease towards the corners.

Larry moves on to cover HDR and mentions a recent programme on Fox which was shot in 1080p HDR, making the point that HDR is not a ‘4K technology’. He also makes the point that HDR is about the low-lights as well as the specular highlights, so a lens’s ability to be low-noise in the blacks is important. Whilst this is not often a problem for SDR, with HDR we are now seeing it come up more often. For dramas and similar genres, it’s actually very important to be able to shoot whole scenes in low light, and Larry shows that the large number of glass elements in lenses is responsible for suboptimal low-light performance. With up to 50% of light not making it through the lens, this light can be reflected internally, travelling around the lens and splashing the blacks. Larry explains that coating the elements can correct a lot of this, and careful choice of the internal surfaces of the lens mechanisms is also important in minimising such reflections.

Larry then turns to long telephoto zoom lenses. He shows how Canon developed a lens that can fully frame a 6-foot athlete from 400 metres away on a 2/3″ sensor, whilst still offering a wide angle of 60 degrees at the other end of the zoom range. With such a long zoom, internal stabilisation is imperative, which is achieved with a very quick active feedback sensor.
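The focal length such a shot demands can be estimated with similar triangles. A rough sketch, assuming a nominal 5.4mm sensor height for the 2/3-inch format and a 1.83m (6-foot) subject – these figures are illustrative, not Canon’s:

```python
# Thin-lens estimate of the focal length needed to fill the frame
# vertically with a 6-foot athlete from 400 m on a 2/3-inch sensor.
# Assumed sensor height: 5.4 mm (nominal, illustrative).

def focal_length_mm(sensor_h_mm: float, subject_h_m: float, distance_m: float) -> float:
    # Similar triangles: image height / focal length = subject height / distance
    return sensor_h_mm * distance_m / subject_h_m

print(round(focal_length_mm(5.4, 1.83, 400)))  # 1180
```

Over a metre of focal length on a 2/3″ sensor makes clear why optical stabilisation at the long end is not optional.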

So far, Larry has talked about TV’s standardised 2/3″ image sensor. He now moves on to cover motion-picture format sizes. He shows that for Super 35, you only need 78 line pairs per millimetre, which has the knock-on effect of allowing sharper pictures. Next, Larry talks about the different versions of ‘full frame’ formats, emphasising the creative benefits of larger formats. One is a larger field of view, which Larry both demonstrates and explains; another is greater sharpness; and by having a camera which can choose how much of the sensor you actually use, you can mount all sorts of different lenses. Depth of field is a well-known property of larger formats: the depth of field is much shallower which, creatively, is often much desired. It should be noted that for entertainment shows on TV that’s much less desirable, whilst in films it is an intrinsic part of the ‘grammar’.
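The 78 lp/mm figure for Super 35 follows from the same line-pair arithmetic as before, just with a wider sensor. A quick comparison across formats for a 4K-wide (3840-pixel) raster, using assumed nominal sensor widths:

```python
# The same lp/mm arithmetic applied across formats for a 3840-pixel-wide
# raster. Sensor widths are assumed nominal values for illustration.

formats_mm = {
    "2/3-inch": 9.6,
    "Super 35": 24.6,
    "Full frame": 36.0,
}

lp_per_mm = {name: (3840 / 2) / width for name, width in formats_mm.items()}

for name, lp in lp_per_mm.items():
    print(f"{name}: {lp:.0f} lp/mm")
```

The bigger the sensor, the lower the spatial frequency the lens has to resolve for the same pixel count, which is why larger formats make sharp pictures easier to achieve.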

As the talk comes to a conclusion, Larry discusses debayering, whereby a single sensor has to record red, green and blue. He explains the process and its disadvantages versus the separate sensors found in larger cameras. As part of this conversion, he shows how oversampling can improve sharpness and avoid aliasing. The talk finishes with an overview of solid-state storage options.
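To illustrate what debayering involves, here is a toy sketch: under a Bayer filter, each photosite records only one of R, G or B, and the two missing values at each position must be interpolated. This nearest-neighbour version is the crudest possible approach, shown only to make the idea concrete; real cameras use far more sophisticated filters.

```python
# Toy nearest-neighbour debayer for an RGGB Bayer pattern.
# raw: 2D list of single-channel sensor values (even dimensions assumed).

def debayer_nearest(raw):
    """Return a 2D list of (r, g, b) per pixel, taking each missing
    channel from a photosite of that colour in the same 2x2 cell."""
    h, w = len(raw), len(raw[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Top-left corner of the 2x2 RGGB cell containing (y, x)
            cy, cx = (y // 2) * 2, (x // 2) * 2
            r = raw[cy][cx]          # red site
            g = raw[cy][cx + 1]      # one of the two green sites
            b = raw[cy + 1][cx + 1]  # blue site
            row.append((r, g, b))
        out.append(row)
    return out

raw = [[10, 20],
       [30, 40]]  # R=10, G=20 and 30, B=40
print(debayer_nearest(raw))
```

Every interpolated value is a guess, which is precisely the disadvantage versus three-sensor designs, and why oversampling (capturing more photosites than output pixels) helps recover sharpness and suppress aliasing.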

Watch now!

Larry Thorpe
National Marketing Executive,
Canon USA Inc.

On Demand Webinar: The Technology of Motion-Image Acquisition

A lot of emphasis is put on the tech specs of cameras, but this misses a lot of what makes motion-image acquisition an art form as much as a science. To understand the physics of lenses, it’s vital we also understand the psychology of perception. And to understand what ‘4K’ really means, we need to understand how the camera records the light and how it stores the data. Getting a grip on these core concepts allows us to navigate a world of mixed messages where every camera manufacturer, from webcam to phone, from DSLR to cinema, is vying for our attention.

In the first of four webinars produced in conjunction with SMPTE, Russell Trafford-Jones from The Broadcast Knowledge welcomes SMPTE Fellows Mark Schubin and Larry Thorpe to explain these fundamentals, providing a great intro for those new to the topic and filling in some blanks for those who have heard it before!

Russell will start by introducing the topic and exploring what makes some cameras suitable for certain types of shooting, say, live television, and others for cinema. He’ll talk about the place for smartphones and DSLRs in our video-everywhere culture. Then he’ll examine the workflows needed for different genres, which drive the definitions of these cameras and lenses; if your live TV show is going to be seen 2 seconds later by 3 million viewers, this determines many features of your camera that digital cinema doesn’t have to deal with, and vice versa.

Mark Schubin will be talking about lighting, optical filtering, sensor sizes and lens mounts. Mark spends some time explaining how light is created and what it is made up of: the ‘white’ that we see may comprise thousands of wavelengths of light, or just a few. So the type of light can be important for lighting a scene, and knowing about it important for deciding on your equipment. The sensors that then receive this light are also well worth understanding. It’s well known that there are red-, green- and blue-sensitive pixels, but less well known is that there is a microlens in front of each one. Granted, it’s pricey, but the lens we think most about is one among several million. Mark explains why these microlenses are there and the benefits they bring.

Larry Thorpe, from Canon, will take on the topic of lenses, starting from the basics of what we’re trying to achieve with a lens and working up to explaining why we need so many pieces of glass to make one. He’ll examine the important aspects of a lens which determine its speed and focal length. Primes and zooms are important types of lens to understand as each represents a compromise. Furthermore, we see that zoom lenses take careful design to ensure that focus is maintained throughout the zoom range, also known as tracking.

Larry will also examine the outputs of the cameras, the most obvious being the SDI out of the CCU of broadcast cameras and the raw output from cinema cameras. For film use, maintaining quality is usually paramount so, where possible, nothing is discarded, hence the creation of ‘raw’ files, so named because they record, as close as practical, the actual sensor data received. The broadcast equivalent is predominantly Y’CbCr with 4:2:2 colour subsampling, meaning the sensor data has been interpreted and processed to create full pixels and half the colour information has been discarded. This still looks great for many uses, but when you want to put your image through a meticulous post-production process, you need the complete picture.
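The ‘half the colour information’ claim can be shown directly. A minimal sketch of 4:2:2 subsampling: luma (Y) is kept per-pixel, but each horizontal pair of pixels shares one pair of chroma samples (here simply averaged, one common approach):

```python
# Minimal 4:2:2 chroma subsampling sketch: full-resolution luma,
# half-resolution chroma, halving the colour information per row.

def subsample_422(row):
    """row: list of (y, cb, cr) tuples (even length assumed).
    Returns (luma_per_pixel, shared_chroma_per_pixel_pair)."""
    luma = [y for y, _, _ in row]
    chroma = []
    for i in range(0, len(row), 2):
        pair = row[i:i + 2]
        # Average the chroma of each two-pixel pair into one shared sample
        cb = sum(p[1] for p in pair) / len(pair)
        cr = sum(p[2] for p in pair) / len(pair)
        chroma.append((cb, cr))
    return luma, chroma

row = [(16, 128, 128), (235, 100, 160), (120, 90, 90), (80, 110, 130)]
luma, chroma = subsample_422(row)
print(luma)    # [16, 235, 120, 80]
print(chroma)  # [(114.0, 144.0), (100.0, 110.0)]
```

Four pixels yield four luma samples but only two chroma pairs – fine for viewing, since the eye resolves colour less finely than brightness, but a real loss once heavy grading or keying begins.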

The SMPTE Core Concepts series of webcasts is both free to all and aims to help individuals deepen their knowledge. This webinar is in collaboration with The Broadcast Knowledge which, by covering a new video or webinar every day, helps empower each person in the industry by offering a single place to find educational material.

Watch now!

Mark Schubin
Engineer and Explainer
Larry Thorpe
Senior Fellow,
Canon U.S.A., Inc.
Russell Trafford-Jones
Editor, The Broadcast Knowledge
Manager, Services & Support, Techex
Exec Member, IET Media