Video: Canon Lenses – A Tale of Three Formats

Lenses are seen by some as a black art, by others as a mass of complex physics equations, and by others still as their creative window onto the stories that need to be told. Whilst there is an art to using lenses, and it's true that making them is complex, understanding how to choose them doesn't require a PhD.

SMPTE Fellow Larry Thorpe from Canon is here to make the complex accessible as he kicks off talking about lens specifications. He discusses the 2/3-inch image format, comparing it with Super 35 and full frame. He outlines the specs that are most discussed when purchasing and choosing lenses and shows the balancing act that every lens design is: maximising sharpness whilst minimising chromatic aberration. On the subject of sharpness, Larry moves on to discussing how the camera's ability to sample the video interacts with the lens's ability to capture optical resolution.

Larry considers a normal 1920×1080 HD raster with reference to the physical size of a TV 2/3-inch sensor. That works out to be approximately 100 line pairs per millimetre. Packing that into 1mm is tricky if you also wish to maintain the quality of the lines. The ability to transfer this resolution is captured by the MTF, the Modulation Transfer Function, which documents the contrast you would see when certain frequencies are viewed through the lens. Larry shows that for a typical lens, these 100 line pairs would retain 70% of the original contrast. The higher the frequency, the lower the contrast, until it just becomes flat grey. Larry then looks at a 4K lens, which needs to resolve 200 line pairs per mm; looking at the MTF, we see that we only reach 50% contrast.
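
To see where those figures come from, here's a minimal sketch of the arithmetic, assuming a 2/3-inch sensor with an active picture width of roughly 9.6mm (the exact dimension varies by camera) and the rule of thumb that one line pair needs two pixels:

```python
# Nyquist-style estimate of the lens resolution a sensor demands.
SENSOR_WIDTH_MM = 9.6  # assumed active width of a 2/3" sensor

def lp_per_mm(horizontal_pixels: int, sensor_width_mm: float) -> float:
    """One line pair needs two pixels, so the finest resolvable grating
    is half the horizontal pixel count spread across the sensor width."""
    return (horizontal_pixels / 2) / sensor_width_mm

for label, pixels in [("HD 1920x1080", 1920), ("4K 3840x2160", 3840)]:
    print(f"{label}: {lp_per_mm(pixels, SENSOR_WIDTH_MM):.0f} lp/mm")
    # HD gives 100 lp/mm and 4K gives 200 lp/mm, matching Larry's figures
```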

Aberrations are important to understand as every lens suffers from them. Larry walks through the five classical aberrations, both focus-related and chromatic. To the beginner, chromatic aberrations are perhaps the most obvious: colours, often purple, seen on the edges of objects, also known as colour fringing. Larry talks about how aperture size can minimise the effect, and how keeping your image above the 50% contrast limit in the MTF will keep chromatic aberration from being obvious. As a reality check, we then see the calculated limits beyond which it's simply not possible to improve. Using these graphs, we see why 4K lenses offer less opportunity to stop down than HD lenses.

Sharpness zones are regions of the lens optimised for different levels of sharpness. The centre, unsurprisingly, is the sharpest as that's where most of the action is. There are then a middle and an outer zone which are progressively less sharp. The reason for this is to recognise that it's not possible to make the whole image sharp to the same degree. By doing this, we are able to create a flatter central zone with a managed decrease towards the corners.

Larry moves on to cover HDR and mentions a recent programme on Fox which was shot in 1080p HDR, making the point that HDR is not a '4K technology'. He also makes the point that HDR is about the low-lights as well as the specular highlights, so a lens's ability to keep the blacks clean is important, and whilst this is not often a problem for SDR, with HDR we are seeing it come up more often. For dramas and similar genres, it's actually very important to be able to shoot whole scenes in low light, and Larry shows that the large number of glass elements in a lens is responsible for suboptimal low-light performance. With up to 50% of light not making it through the lens, that light can be reflected internally and travel around the lens, splashing into the blacks. Larry explains that coating the elements corrects a lot of this, and careful choice of the internal surfaces of the lens mechanism is also important in minimising such reflections.
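
As a back-of-the-envelope illustration of why element count costs light, here's a sketch in which the element count and per-surface transmittance figures are illustrative assumptions, not Canon's data:

```python
# Each glass element has two air-glass surfaces; light lost at each
# surface is partly reflected back into the barrel, where it can bounce
# around and contaminate the blacks.
def lens_transmission(elements: int, per_surface_t: float) -> float:
    return per_surface_t ** (2 * elements)

for coating, t in [("uncoated, ~96% per surface", 0.96),
                   ("multi-coated, ~99.7% per surface", 0.997)]:
    print(f"20 elements, {coating}: {lens_transmission(20, t):.0%} transmitted")
    # roughly 20% uncoated versus roughly 89% multi-coated
```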

Telephoto lenses offer very long focal lengths and, in broadcast, huge zoom ranges. Larry shows how Canon developed a lens that can fully frame a 6-foot athlete from 400 metres away on a 2/3″ sensor, yet still zoom out to a 60-degree wide angle of view. With such a long zoom, internal stabilisation is imperative, achieved with a very fast active-feedback sensor.
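
Here's a rough sketch of the thin-lens framing arithmetic behind such a lens; the 2/3″ sensor dimensions (about 9.6mm × 5.4mm) and the exact figures are assumptions for illustration, not the specs of Canon's actual lens:

```python
import math

SENSOR_H_MM, SENSOR_W_MM = 5.4, 9.6  # assumed 2/3" active area

# Focal length for a 6 ft (~1.83 m) athlete to fill the 5.4 mm sensor
# height from 400 m: focal length = image size x distance / subject size.
f_tele = SENSOR_H_MM * 400_000 / 1_830                       # ~1180 mm
# Focal length for a 60-degree horizontal field of view at the wide end.
f_wide = (SENSOR_W_MM / 2) / math.tan(math.radians(60 / 2))  # ~8.3 mm

print(f"telephoto end ~{f_tele:.0f} mm, wide end ~{f_wide:.1f} mm, "
      f"zoom ratio ~{f_tele / f_wide:.0f}x")
```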

So far, Larry has talked about TV's standardised 2/3″ image sensor. He now moves on to cover motion-picture format sizes. He shows that for Super 35, you only need 78 line pairs per millimetre, which has the knock-on effect of allowing sharper pictures. Next, Larry talks about the different versions of 'full frame' formats, emphasising the creative benefits of larger formats. One is a larger field of view, which Larry both demonstrates and explains; another is greater sharpness; and with a camera that can choose how much of the sensor you actually use, you can mount all sorts of different lenses. Depth of field is a well-known characteristic of larger formats: it is much shallower which, creatively, is often desired, though it should be noted that for TV entertainment shows that's much less desirable, whilst in films it is an intrinsic part of the 'grammar'.
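
Extending the earlier sketch shows why larger formats relax the demands on the lens for the same 4K raster; the sensor widths below are typical assumed values and real cameras vary slightly:

```python
# Lens resolution demanded by a 3840-pixel-wide raster on each format.
FORMAT_WIDTHS_MM = {"2/3-inch": 9.6, "Super 35": 24.6, "Full frame": 36.0}

for name, width_mm in FORMAT_WIDTHS_MM.items():
    print(f"{name:>10}: {(3840 / 2) / width_mm:5.0f} lp/mm")
    # 2/3-inch: 200 lp/mm, Super 35: ~78 lp/mm, Full frame: ~53 lp/mm.
    # The bigger the sensor, the easier the job the lens has.
```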

As the talk comes to a conclusion, Larry discusses debayering, whereby a single sensor has to record red, green and blue. He explains the process and its disadvantages versus the separate sensors in larger cameras. As part of this conversion, he shows how oversampling can improve sharpness and avoid aliasing. The talk finishes with an overview of solid-state storage options.
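
For a feel of what debayering involves, here's a naive bilinear sketch, assuming an RGGB Bayer pattern; real cameras use far more sophisticated filters, but it illustrates why interpolating two-thirds of the colour data trades away some sharpness versus three-sensor designs:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_rggb(raw: np.ndarray) -> np.ndarray:
    """Naive bilinear demosaic of an RGGB mosaic (H x W, even dims)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red photosites
    masks[0::2, 1::2, 1] = True   # green photosites on red rows
    masks[1::2, 0::2, 1] = True   # green photosites on blue rows
    masks[1::2, 1::2, 2] = True   # blue photosites
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    for c in range(3):
        samples = np.where(masks[..., c], raw, 0.0)
        density = masks[..., c].astype(float)
        est = (convolve2d(samples, kernel, mode="same")
               / np.maximum(convolve2d(density, kernel, mode="same"), 1e-9))
        # Keep measured values where we have them; interpolate elsewhere.
        rgb[..., c] = np.where(masks[..., c], raw, est)
    return rgb

print(demosaic_rggb(np.random.rand(8, 8)).shape)  # toy mosaic -> (8, 8, 3)
```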

Watch now!
Speakers

Larry Thorpe
National Marketing Executive,
Canon USA Inc.

Video: Building Television Systems in a Time of Multiple Technology Transitions

Major technology transitions can be hard to keep up with, and when you have a project requiring you to decide which one to go with, it can seem unmanageable. This panel, put together by SMPTE New York, gives the view from systems integrators on how to make this work and covers their experience with a wide range of new technologies.

John Turner kicked off by explaining the reasoning for choosing SDI over SMPTE ST 2110 in some circumstances. For that project, his client had a fixed space so wouldn't see the benefits of 2110 in terms of expansion. Their workflow already worked well in SDI and, at the time, the costs of 2110 would have been higher. Overall, the project went with SDI, was successful and left a happy customer. Karl Paulsen agreed that new technology shouldn't be adopted 'for the sake of it' and added that whilst individual products built on a new technology may be stable, that's not certain to be the case when they interoperate within a whole system. This pushes implementation time up, meaning the incumbent technologies do tend to get chosen when time is at a premium.

Turning to 5G, Karl answered the question "what are the transformational technologies?". For some applications, for instance back-of-camera RF in a stadium, 5G is a major leap compared to microwave packs, but early in a technology's life, as we are with 5G, it's a matter of working out where it does and doesn't work well. In time, it will probably adapt to some of the use cases it wasn't suited to initially. John Turner highlighted the elements that ATSC 3.0 transforms in a big way: from an RF perspective, its modulation is so much stronger and more flexible that it's able to drive new business models.

John Mailhot’s view on the transformational challenge is ‘the people’. He puts forward the idea that the technical constraints of router size and max cable length, to name two examples, embedded themselves into the routines, assumptions and architectures that people embody in their work. With SMPTE ST-2110, most of these constraints are removed. This means you are a lot freer to work out the workflows the business wants. The challenge here is to have the imagination and fortitude to forge the right workflow without getting paralysed by choice.

“SMPTE ST 2110 is an entire paradigm shift”, John Humphrey

After responding to the moderator's question on how much turmoil these transitions are causing, Mark Schubin summarises the situation by saying we need to work out which of these technologies is like a fridge (replacing the previous technology), which is like a microwave (used alongside a conventional oven) and which is like an induction cooker (requiring a change of cookware, so seeing little adoption). John Humphrey adds that ST 2110 is a technology viewers don't notice since the visual quality is the same; HDR is the opposite, so the two need different approaches.

During the last 45 minutes, the panel took questions from the audience covering how to hire talent, the perspective of younger people on technology, programming specifically made for smartphones, ATSC 3.0 implementation, reliability of home internet, PTP and more.

Watch now!
Speakers

Mark Schubin
Consultant & Explainer
John Humphrey
VP, Business Development,
Hitachi Kokusai Electric America Ltd.
Karl Paulsen
CTO,
Diversified
John Turner
Principal Engineer,
Turner Engineering Inc.
John Mailhot
Systems Architect for IP Convergence,
Imagine Communications

Video: Efficient Carriage of Sub-Rasters With ST 2110-20

One of the main promises of IP video is flexibility and what better way to demonstrate that than stepping off the well-worn path of broadcast resolutions? 1920×1080 is much loved nowadays, but not everything needs to be put into an HD-sized frame. SMPTE ST-2110 allows video of all shapes and sizes, so let’s not be afraid to use the control given to us.

Paul Briscoe, talking on behalf of Evertz, takes the podium to explain the idea. Using logo insertion as an example, he shows that if you want to put a small BUG/DOG/graphic on screen with a key, there's really not a lot of data that needs to be transferred. Typically a graphic needs a key and a fill; whilst the key is typically luma-only, the fill needs to be full colour.

In the world of SDI, sending your key and fill around would need two whole HD signals and up to 6Gbps of data. When your graphic is only a small logo, these SDI signals are mostly redundant data. Using ST 2110-20 in the IP domain, however, we can be much more efficient. 2110 allows resolutions up to 32,000 pixels square, so we should be able to send just the information that is necessary.
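
To put rough numbers on the saving, here's a sketch using a hypothetical 480×270 logo at 60fps in 10-bit; the graphic size is an assumption for illustration:

```python
def video_bps(width: int, height: int, fps: int, bits_per_pixel: int) -> int:
    """Uncompressed video payload rate in bits per second."""
    return width * height * fps * bits_per_pixel

fill = video_bps(480, 270, 60, 20)   # 10-bit 4:2:2 fill: 20 bits/pixel
key  = video_bps(480, 270, 60, 10)   # luma-only 10-bit key
sdi  = 2 * 3_000_000_000             # two full 3G-SDI signals

print(f"sub-raster key+fill: {(fill + key) / 1e6:.0f} Mbps "
      f"vs SDI: {sdi / 1e9:.0f} Gbps")
# Roughly 233 Mbps versus 6 Gbps, before RTP and IP overheads.
```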

Paul introduces the idea of a "pixel group" (pgroup), which is the minimal group of video data samples that makes up an integer number of pixels and also aligns to an octet boundary. Along with defining a size, we also get to define an X,Y position. Paul explains how using pgroups helps, and hinders, sending video this way and then delves into how timing would work. To finish off, Paul examines edge cases and talks about other examples such as stock tickers, not to mention the possibility of motion, since we get to define the X,Y position.
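
The octet-alignment rule can be sketched in a few lines. The sampling formats below follow the pgroup values published in RFC 4175, on which ST 2110-20 builds; each is described by the bits and pixels in one sampling unit:

```python
from math import lcm

SAMPLINGS = {
    # name: (bits per sampling unit, pixels per sampling unit)
    "YCbCr-4:2:2 10-bit": (40, 2),   # Cb' Y' Cr' Y'': 4 x 10 bits / 2 px
    "YCbCr-4:2:2 8-bit":  (32, 2),
    "RGB 10-bit":         (30, 1),
    "RGB 8-bit":          (24, 1),
}

for name, (unit_bits, unit_pixels) in SAMPLINGS.items():
    bits = lcm(unit_bits, 8)   # smallest span landing on an octet boundary
    print(f"{name}: pgroup = {bits // 8} octets / "
          f"{(bits // unit_bits) * unit_pixels} pixels")
    # e.g. YCbCr-4:2:2 10-bit gives 5 octets covering 2 pixels
```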

Watch now!
This wall chart gives more info on pgroups and other low-level ST 2110-20 constructs.
Download the slides from this presentation

Speakers

Paul Briscoe
Principal Consultant,
Televisionary Consulting

On Demand Webinar: The Technology of Motion-Image Acquisition

A lot of emphasis is put on the tech specs of cameras, but this misses a lot of what makes motion-image acquisition as much an art form as a science. To understand the physics of lenses, it's vital we also understand the psychology of perception. And to understand what '4K' really means, we need to understand how the camera records the light and how it stores the data. Getting a grip on these core concepts allows us to navigate a world of mixed messages where every camera manufacturer, from webcam to phone, from DSLR to cinema, is vying for our attention.

In the first of four webinars produced in conjunction with SMPTE, Russell Trafford-Jones from The Broadcast Knowledge welcomes SMPTE Fellows Mark Schubin and Larry Thorpe to explain these fundamentals, providing a great intro for those new to the topic and filling in some blanks for those who have heard it before!

Russell will start by introducing the topic and exploring what makes some cameras suitable for some types of shooting, say live television, and others for cinema. He'll talk about the place of smartphones and DSLRs in our video-everywhere culture. Then he'll examine the workflows needed for different genres, which drive the definitions of these cameras and lenses; if your live TV show is going to be seen 2 seconds later by 3 million viewers, that's going to determine many features of your camera that digital cinema doesn't have to deal with, and vice versa.

Mark Schubin will be talking about lighting, optical filtering, sensor sizes and lens mounts. Mark spends some time explaining how light is made up and created, whereby the 'white' that we see may be made of thousands of wavelengths of light, or just a few. So the type of light can be important for lighting a scene, and knowing about it important for deciding on your equipment. The sensors that receive this light are also well worth understanding. It's well known that there are red-, green- and blue-sensitive pixels, but less well known is that there is a microlens in front of each one. Granted it's pricey, but the lens we think most about is just one among several million. Mark explains why these microlenses are there and the benefits they bring.

Larry Thorpe, from Canon, will take on the topic of lenses, starting from the basics of what we're trying to achieve with a lens and working up to explaining why we need so many pieces of glass to make one. He'll examine the important aspects of a lens which determine its speed and focal length. Prime and zoom are important types of lens to understand as each represents a compromise. Furthermore, we see that zoom lenses take careful design to ensure that focus is maintained throughout the zoom range, also known as tracking.

Larry will also examine the outputs of the cameras, the most obvious being the SDI out of the CCU of broadcast cameras and the raw output from cinema cameras. For film use, maintaining quality is usually paramount so, where possible, nothing is discarded, hence the creation of 'raw' files, so named because they record, as close as practical, the actual sensor data received. The broadcast equivalent is predominantly YCbCr with 4:2:2 colour subsampling, meaning the sensor data has been interpreted and processed into finished pixels and half the colour information has been discarded. This still looks great for many uses, but when you want to put your image through a meticulous post-production process, you need the complete picture.
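
A quick sketch of the storage arithmetic behind that trade-off, for a UHD frame at 10 bits per sample (the frame size and bit depth are illustrative):

```python
W, H, BITS = 3840, 2160, 10

raw_bayer = W * H * BITS        # raw: one sample per photosite
ycbcr_444 = W * H * 3 * BITS    # full colour: 3 samples per pixel
ycbcr_422 = W * H * 2 * BITS    # 4:2:2: chroma halved, 2 samples per pixel

for name, bits in [("raw (Bayer)", raw_bayer),
                   ("4:4:4", ycbcr_444),
                   ("4:2:2", ycbcr_422)]:
    print(f"{name:>12}: {bits / 8 / 2**20:5.1f} MiB per frame")
# 4:2:2 carries a third less data than 4:4:4, while raw is smaller still
# per frame but defers all the processing choices to post.
```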

The SMPTE Core Concepts series of webcasts is free to all and aims to help individuals deepen their knowledge. This webinar is in collaboration with The Broadcast Knowledge which, by talking about a new video or webinar every day, helps empower each person in the industry by offering a single place to find educational material.

Watch now!
Speakers

Mark Schubin
Engineer and Explainer
Larry Thorpe
Senior Fellow,
Canon U.S.A., Inc.
Russell Trafford-Jones
Editor, The Broadcast Knowledge
Manager, Services & Support, Techex
Exec Member, IET Media