On Demand Webinar: The Technology of Motion-Image Acquisition

A lot of emphasis is put on the tech specs of cameras, but this misses much of what makes motion-image acquisition an art form as much as a science. To understand the physics of lenses, it’s vital we also understand the psychology of perception. And to understand what ‘4K’ really means, we need to understand how the camera records light and how it stores the data. Getting a grip on these core concepts allows us to navigate a world of mixed messages where every camera manufacturer, from webcam to phone, from DSLR to cinema, is vying for our attention.

In the first of four webinars produced in conjunction with SMPTE, Russell Trafford-Jones from The Broadcast Knowledge welcomes SMPTE Fellows Mark Schubin and Larry Thorpe to explain these fundamentals, providing a great intro for those new to the topic and filling in some blanks for those who have heard it all before!

Russell will start by introducing the topic and exploring what makes some cameras suitable for some types of shooting, say live television, and others for cinema. He’ll talk about the place of smartphones and DSLRs in our video-everywhere culture. Then he’ll examine the workflows needed for different genres, which drive the definitions of these cameras and lenses. If your live TV show is going to be seen 2 seconds later by 3 million viewers, that will determine many features of your camera that digital cinema doesn’t have to deal with, and vice versa.

Mark Schubin will be talking about lighting, optical filtering, sensor sizes and lens mounts. Mark spends some time explaining how light is created and made up, whereby the ‘white’ we see may be made of thousands of wavelengths of light, or just a few. So the type of light can be important for lighting a scene, and knowing about it important for deciding on your equipment. The sensors which then receive this light are also well worth understanding. It’s well known that there are red-, green- and blue-sensitive pixels, but less well known is that there is a microlens in front of each one. The camera lens may be pricey and get most of our attention, but it is just one lens among several million. Mark explains why these microlenses are there and the benefits they bring.

Larry Thorpe, from Canon, will take on the topic of lenses, starting from the basics of what we’re trying to achieve with a lens and working up to explaining why we need so many pieces of glass to make one. He’ll examine the important aspects of a lens which determine its speed and focal length. Prime and zoom are important types of lens to understand, as each represents a compromise. We also see that zoom lenses take careful design to ensure that focus is maintained throughout the zoom range, a property known as tracking.

Larry will also examine the outputs of cameras, the most obvious being the SDI out of the CCU of broadcast cameras and the raw output of cinema cameras. For film use, maintaining quality is usually paramount, so where possible nothing is discarded, hence ‘raw’ files, so named because they record, as closely as practical, the actual sensor data received. The broadcast equivalent is predominantly Y’CbCr with 4:2:2 colour subsampling, meaning the sensor data has been interpreted and processed into finished pixels and half the colour information has been discarded. This still looks great for many uses, but when you want to put your image through a meticulous post-production process, you need the complete picture.
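To make the 4:2:2 point concrete, here is a minimal sketch (our illustration, not from the webinar) of what horizontal chroma subsampling does to one scanline; the sample values are arbitrary:

```python
# Toy 4:2:2 chroma subsampling: luma (Y) is kept at full resolution,
# chroma (Cb, Cr) is kept only for every second pixel horizontally,
# i.e. half the colour information is discarded. In a real camera this
# happens after debayering and RGB -> Y'CbCr conversion.
def subsample_422(y_row, cb_row, cr_row):
    """One scanline: full Y, horizontally halved Cb/Cr."""
    return y_row, cb_row[::2], cr_row[::2]

y  = [16, 32, 48, 64, 80, 96, 112, 128]        # arbitrary sample values
cb = [110, 112, 114, 116, 118, 120, 122, 124]
cr = [130, 132, 134, 136, 138, 140, 142, 144]

y2, cb2, cr2 = subsample_422(y, cb, cr)
print(len(y2), len(cb2), len(cr2))  # 8 4 4: luma intact, chroma halved
```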

The SMPTE Core Concepts series of webcasts is free to all and aims to help individuals deepen their knowledge. This webinar is in collaboration with The Broadcast Knowledge which, by covering a new video or webinar every day, aims to empower everyone in the industry by offering a single place to find educational material.

Watch now!
Speakers

Mark Schubin
Engineer and Explainer
Larry Thorpe
Senior Fellow,
Canon U.S.A., Inc.
Russell Trafford-Jones
Editor, The Broadcast Knowledge
Manager, Services & Support, Techex
Exec Member, IET Media

Video: Working remotely in a crisis

We’ve perhaps all seen the memes saying that the ‘digital transformation’ of a company is driven not by ‘leadership vision’ or adapting to the competition, but by ‘COVID-19’. Whilst this is trite yet often true, there is value in understanding what broadcast companies have done to deal with the pandemic.

Robert Ambrose introduces and talks to our guests to find out how their companies have changed to accommodate remote working. First to speak is Jack Edney of The Farm Group, a post-production company. They looked closely at the communication needed within the organisation, managing the priorities of tasks, and maintaining safety and resources. Jack shows the stark difference between pre-lockdown and lockdown workflows, demonstrating how much is now remote, and explains how engaged his technical teams have been in making this work so quickly.

Brian Leonard from IMG tells a similar story, as IMG have moved towards remote working, going from 300 people on site to around three, with everything else remote. Brian talks about how they’d expanded into a local building to make life easier in the earlier days. He then considers the pros and cons of relying on a significant freelance workforce, one advantage being the option of using their pre-existing equipment at home. Finally, we look at how their computer-based SimplyLive production software gives them the immediate ability to produce video remotely.

OWNZONES is up next with Rick Phelps, who gives a real example of a customer’s on-premise workflow, showing before and after diagrams as it moved to remote working. These workflows were extended into the cloud by, say, using proxies and editing with an EDL, with encoding and amending of metadata all done in the cloud. Rick suggests that while this is a short-term trend, much will remain this way in the longer term.

Finally, Johan Sundström from Yle in Finland takes to the stand to give the point of view of a public broadcaster. He explains how they have created guest booths near their main entrance, connected to the news channels, to facilitate low-contact interviews. Plexiglass is being installed in control rooms and people are doing their own makeup. He also highlights some apps which allow for remote contribution of audio. They are also using software-based mixers like the TriCaster plus Skype TX to keep producers connected and involved in their programmes. The session concludes with a Q&A.

Watch now!
Speakers

Jack Edney
Operations Director,
The Farm Group
Johan Sundström
Head of Technology Vision,
Yle Finland
Rick Phelps
Chief Commercial Officer,
OWNZONES
Brian Leonard
Head of Engineering: Post and Workflows,
IMG
Robert Ambrose
Managing Consultant,
High Green Media

Video: Reducing peak bandwidth for OTT

‘Flattening the curve’ isn’t just about dealing with viruses, we learn from Will Law. Rather, it’s also a way to deal with the network congestion brought on by the rise in broadband use during the global lockdown. This, along with techniques such as per-title encoding and removing the top tier, is explored in this video from Akamai and Bitmovin.

Will Law starts the talk by explaining why congestion happens in a world where ABR (adaptive bitrate streaming) is supposed to deal with it. With Akamai’s traffic up by around 300%, it’s perhaps no surprise there’s a contest for bandwidth. As not all traffic is a video stream, congestion will still happen when streams fight with other, static, data transfers. Deeper than that, however, even with two ABR streams, the congestion protocol in use has a big impact, as Will shows with a graph comparing Akamai’s FastTCP with BBR, where BBR steals all the bandwidth rather than ‘playing fair’.

Using a webpage constructed for the video, Will shows us a baseline video playback and the metrics associated with it, such as data transferred and bitrate, which he uses to demonstrate the benefits of the different bitrate-reduction techniques. The first is covered by Bitmovin’s Sean McCarthy, who explains Bitmovin’s per-title encoding technology. This approach ensures that each asset has encoder settings tuned to get the best out of the content whilst reducing bandwidth, as opposed to simply setting your encoder to a fairly high, safe, static bitrate for all content no matter how complex it is. Will shows on the demo that the bitrate reduces by over 50%.
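As a rough illustration of the per-title idea (our sketch, not Bitmovin’s actual algorithm), one common approach is to run a quick constant-quality probe encode, measure the bitrate the content actually needs, and scale a generic ladder accordingly:

```python
# Illustrative per-title sketch: scale a one-size-fits-all bitrate
# ladder by measured content complexity. All numbers are made up.
def per_title_ladder(probe_kbps, default_ladder_kbps, reference_kbps=6000):
    """Scale a generic ladder by the complexity a probe encode revealed.

    probe_kbps: bitrate produced by a fixed-quality (e.g. CRF) probe encode
    reference_kbps: probe bitrate of 'average' content the ladder was built for
    """
    complexity = probe_kbps / reference_kbps
    return [round(rung * complexity) for rung in default_ladder_kbps]

default = [6000, 3000, 1500, 800]       # generic 1080p..360p ladder, kbps
print(per_title_ladder(2500, default))  # easy content: [2500, 1250, 625, 333]
print(per_title_ladder(7200, default))  # complex sport: [7200, 3600, 1800, 960]
```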

Swapping codecs is an obvious way to reduce bandwidth. Unlike per-title encoding, which is transparent to the end-user, using AV1, VP9 or HEVC requires support on the final device. Whilst you could offer multiple versions of your assets to make sure you still cover all your players despite fragmentation, this has the downside of extra encoding costs and time.

Will then looks at three ways to reduce bandwidth by stopping the highest-bitrate rendition from being used. Method one is to manually modify the manifest file. Method two demonstrates how to do the same using the Bitmovin player API, and method three uses the CDN itself to manipulate the manifests. Doing this in the CDN allows much more flexibility, as you can use geolocation rules, for example, to deliver different manifests to different locations.
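For a concrete sense of the manifest approach, here is a minimal sketch (our illustration; a real deployment would use a proper HLS parser and also handle audio and I-frame renditions) that drops the highest-bandwidth variant from an HLS master playlist:

```python
# Remove the top-bitrate variant from an HLS master playlist.
import re

def strip_top_tier(master_playlist: str) -> str:
    lines = master_playlist.strip().splitlines()
    variants = []  # (bandwidth, index of the #EXT-X-STREAM-INF line)
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF"):
            m = re.search(r"BANDWIDTH=(\d+)", line)
            if m:
                variants.append((int(m.group(1)), i))
    if len(variants) < 2:
        return master_playlist  # nothing sensible to remove
    _, top = max(variants)
    # Drop the #EXT-X-STREAM-INF line and the URI line that follows it.
    del lines[top:top + 2]
    return "\n".join(lines) + "\n"

example = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720
720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
1080p.m3u8
"""

print(strip_top_tier(example))  # the 1080p variant is gone
```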

The final method to reduce peak bandwidth is to use the CDN to throttle the download speed of the stream chunks. This means that while you may, if you are lucky, have the ability to download at 100Mbps, the CDN only delivers at 3 to 5 times the real-time bitrate. This goes a long way towards smoothing out the peaks, which is better for the end user’s equipment and for the CDN. Seen in isolation this does very little, as the video bitrate and the data transferred remain the same. However, delivering the video in this much more co-operative way is much less likely to cause knock-on problems for other traffic. It can, of course, be used in conjunction with the other techniques. The video concludes with a Q&A.
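To see what that pacing means in practice, here is a back-of-envelope sketch (illustrative numbers, not from the talk):

```python
# How CDN pacing changes segment delivery: same bits, gentler peaks.
segment_seconds = 6.0
video_bitrate_mbps = 5.0
line_rate_mbps = 100.0   # what the client could pull unthrottled
pacing_factor = 4        # CDN delivers at 4x real time

segment_megabits = segment_seconds * video_bitrate_mbps
burst_time = segment_megabits / line_rate_mbps                         # 0.30 s
paced_time = segment_megabits / (video_bitrate_mbps * pacing_factor)  # 1.50 s

print(f"unthrottled: {burst_time:.2f}s at {line_rate_mbps:.0f} Mbps")
print(f"paced:       {paced_time:.2f}s at {video_bitrate_mbps * pacing_factor:.0f} Mbps")
```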

Watch now!
Speakers

Will Law
Chief Architect,
Akamai
Sean McCarthy
Technical Product Marketing Manager,
Bitmovin

Video: RIST in the Cloud

Cloud workflows are starting to become an integral part of broadcasters’ live production. However, the quality of video transport is often not sufficient for high-end broadcast applications where cloud infrastructure providers such as Google, Oracle or AWS are accessed through the public Internet or leased lines.

A number of protocols based on ARQ (Automatic Repeat reQuest) retransmission technology, including SRT, Zixi, VideoFlow and RIST, have been created to solve the challenge of moving professional media over the Internet, which is fraught with dropped packets and unwanted delays. Protocols such as SRT and RIST enable broadcast-grade video delivery at a much lower cost than fibre or satellite links.
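The core ARQ idea is simple enough to sketch (a hypothetical illustration of the principle, not the actual RIST wire format, which is based on RTP/RTCP as specified in VSF TR-06-1): the receiver tracks sequence numbers and sends NACKs for any gaps so the sender can retransmit just those packets:

```python
# Sketch of the ARQ principle behind protocols like RIST: watch the
# incoming sequence numbers and NACK anything that never arrived.
def gaps_to_nack(arriving_seqs):
    """Yield sequence numbers to request again, given arriving packets."""
    expected = None
    for seq in arriving_seqs:
        if expected is not None and seq > expected:
            yield from range(expected, seq)   # these never arrived
        # seq < expected would be a late or retransmitted packet: ignore
        expected = seq + 1 if expected is None else max(expected, seq + 1)

print(list(gaps_to_nack([1, 2, 3, 6, 7, 10])))  # -> [4, 5, 8, 9]
```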

The RIST (Reliable Internet Streaming Transport) protocol has been created as an open alternative to commercial options such as Zixi. The protocol merges technologies from around the industry, built upon current standards in IETF RFCs, providing an open, interoperable and technically robust solution for low-latency live video over unmanaged networks.

In this presentation, David Griggs from Amazon Web Services (AWS) talks about how the RIST protocol, combined with cloud technology, is transforming broadcast content distribution. He explains that delivery of live content is essential for broadcasters, who look for ways to deliver this content without using expensive private fibre or satellite links. With unmanaged networks you can get content from one side of the world to the other with very little investment in time and infrastructure, but this is only possible with ARQ-based protocols like RIST.

Next, David discusses the major advantages of cloud technology: it is dynamic and flexible. Historically, dimensioning the entire production environment for peak utilisation was financially challenging. Now it is possible to dimension for average use while leveraging cloud resources for peak demand, providing a more elastic cost model. Moreover, the cloud is a good place to innovate and experiment because the barrier to entry, in terms of cost, is low. It encourages both customers and vendors to experiment, innovate and ultimately build better, more compelling solutions.

David believes that open and interoperable QoS protocols like RIST will be instrumental in building complex distribution networks in the cloud. He hopes that AWS, by working together with Net Insight, Zixi and Cobalt Digital, can start to build innovative and interoperable cloud solutions for live sports.

Watch now!

Speaker

David Griggs
Senior Product Manager, Media Services
AWS Elemental