On Demand Webinar: The Technology of Motion-Image Acquisition

Now available on demand. Just follow the registration link to watch.

A lot of emphasis is put on the tech specs of cameras, but this misses much of what makes motion-image acquisition an art form as much as a science. To understand the physics of lenses, it’s vital we also understand the psychology of perception. And to understand what ‘4K’ really means, we need to understand how the camera records the light and how it stores the data. Getting a grip on these core concepts allows us to navigate a world of mixed messages where every camera manufacturer, from webcam to phone, from DSLR to cinema, is vying for our attention.

In the first of four webinars produced in conjunction with SMPTE, Russell Trafford-Jones from The Broadcast Knowledge welcomes SMPTE Fellows Mark Schubin and Larry Thorpe to explain these fundamentals, providing a great intro for those new to the topic and filling in some blanks for those who have heard it before!

Russell will start by introducing the topic and exploring what makes some cameras suitable for some types of shooting, say live television, and others for cinema. He’ll talk about the place for smartphones and DSLRs in our video-everywhere culture. Then he’ll examine the workflows needed for different genres, which drive the definitions of these cameras and lenses: if your live TV show is going to be seen 2 seconds later by 3 million viewers, this is going to determine many features of your camera that digital cinema doesn’t have to deal with, and vice versa.

Mark Schubin will be talking about lighting, optical filtering, sensor sizes and lens mounts. Mark spends some time explaining how light is created and what it’s made of: the ‘white’ that we see may be made up of thousands of wavelengths of light, or just a few. So the type of light can be important for lighting a scene, and knowing about it, important for deciding on your equipment. The sensors which then receive this light are also well worth understanding. It’s well known that there are red-, green- and blue-sensitive pixels, but less well known is that there is a microlens in front of each one. Granted it’s pricey, but the lens we think most about is just one among several million. Mark explains why these microlenses are there and the benefits they bring.
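As a rough sketch of the sensor layout being described (a generic RGGB Bayer mosaic is assumed here for illustration; the webinar itself covers the details), each photosite sits behind a single colour filter and its own microlens, and full-colour pixels are only reconstructed later by interpolation:

```python
import numpy as np

# A minimal sketch of a Bayer colour-filter mosaic (generic RGGB layout,
# assumed for illustration). Each photosite records only one colour; the
# microlens above it funnels light onto the photosite's sensitive area so
# less light is wasted in the gaps between pixels.
def bayer_pattern(height, width):
    """Return 'R'/'G'/'B' labels for an RGGB Bayer mosaic."""
    pattern = np.empty((height, width), dtype="<U1")
    pattern[0::2, 0::2] = "R"  # red on even rows, even columns
    pattern[0::2, 1::2] = "G"  # green appears on every row...
    pattern[1::2, 0::2] = "G"  # ...so half of all photosites are green
    pattern[1::2, 1::2] = "B"  # blue on odd rows, odd columns
    return pattern

print(bayer_pattern(4, 4))
# [['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']
#  ['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']]
# 'Demosaicing' later interpolates the two missing colours at each site
# from neighbouring photosites to produce full RGB pixels.
```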

Larry Thorpe, from Canon, will take on the topic of lenses, starting from the basics of what we’re trying to achieve with a lens and working up to explaining why we need so many pieces of glass to make one. He’ll examine the important aspects of a lens which determine its speed and focal length. Prime and zoom are important types of lens to understand as each represents a compromise. Furthermore, we see that zoom lenses take careful design to ensure that focus is maintained throughout the zoom range, also known as tracking.
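For reference, a lens’s ‘speed’ is conventionally expressed as its f-number, the ratio of focal length to entrance-pupil diameter; the figures below are purely illustrative, not from Larry’s talk:

```python
# Standard f-number relationship: N = f / D, with focal length f and
# entrance-pupil diameter D. Example figures are illustrative only.
focal_length_mm = 50.0
f_number = 2.8

pupil_diameter_mm = focal_length_mm / f_number
print(f"f/{f_number} at {focal_length_mm:.0f}mm implies a "
      f"{pupil_diameter_mm:.1f}mm entrance pupil")
# f/2.8 at 50mm implies a 17.9mm entrance pupil.
# Each full stop (e.g. f/2.8 -> f/4) halves the light gathered, which is
# one reason long, fast zooms need such large front elements.
```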

Larry will also examine the outputs of the cameras, the most obvious being the SDI out of the CCU of broadcast cameras and the raw output from cinema cameras. For film use, maintaining quality is usually paramount so, where possible, nothing is discarded, hence the creation of ‘raw’ files, so named because they record, as closely as practical, the actual sensor data received. The broadcast equivalent is predominantly Y’CbCr video with 4:2:2 colour subsampling, meaning the sensor data has been interpreted and processed into finished pixels and half the colour-difference information has been discarded. This still looks great for many uses, but when you want to put your image through a meticulous post-production process, you need the complete picture.
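As a rough sketch of what 4:2:2 discards (an illustration with random data, not any camera’s actual processing), luma keeps full resolution while the two colour-difference channels are halved horizontally:

```python
import numpy as np

# Illustrative sketch of 4:2:2 chroma subsampling. Y' (luma) keeps full
# resolution; the Cb/Cr colour-difference channels are halved horizontally.
height, width = 4, 8
rng = np.random.default_rng(0)
y  = rng.random((height, width))   # luma: one sample per pixel
cb = rng.random((height, width))   # chroma at full (4:4:4) resolution...
cr = rng.random((height, width))

# 4:2:2 keeps one chroma sample per pair of pixels horizontally (here by
# averaging neighbouring pairs, one common approach).
cb_422 = (cb[:, 0::2] + cb[:, 1::2]) / 2
cr_422 = (cr[:, 0::2] + cr[:, 1::2]) / 2

full = y.size + cb.size + cr.size            # 4:4:4 sample count
sub  = y.size + cb_422.size + cr_422.size    # 4:2:2 sample count
print(f"4:4:4 samples: {full}, 4:2:2 samples: {sub}")  # 96 vs 64
# Half of the colour-difference samples are gone, while every luma
# sample survives - which is why the loss is hard to see on screen but
# matters for heavy post-production such as chroma keying.
```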

The SMPTE Core Concepts series of webcasts is free to all and aims to help individuals deepen their knowledge. This webinar is produced in collaboration with The Broadcast Knowledge which, by talking about a new video or webinar every day, helps empower each person in the industry by offering a single place to find educational material.

Watch now!

Speakers

Mark Schubin
Engineer and Explainer
Larry Thorpe
Senior Fellow,
Canon U.S.A., Inc.
Russell Trafford-Jones
Editor, The Broadcast Knowledge
Manager, Support & Services, Techex

Video: Online Streaming Primer

A trip down memory lane for some, a great intro to the basics of streaming for others, this video from IET Media looks at the history of broadcasting and how it has moved over the years to online streaming, posing the question: with so many people now watching online, is that broad enough to be considered broadcast?

The first of a series of talks from IET Media, the video starts by highlighting that recording video only became practical 20 years after the first television broadcasts, then looks at how television has moved on to add colour, increase resolution and go digital. The ability to record video is critical to almost all of our use of media today. Whilst film worked well as an archival medium, it didn’t work well, at scale, for recording live broadcasts. So in the beginning, broadcasting from one, or a few, transmitters was all there was.

Russell Trafford-Jones, from IET Media, then discusses the advent of streaming, from its precursor, file-based music on portable players, through the rise of online radio, and how this naturally evolved into the urge to stream video in much the same way.

This being a video from the IET, Russell then looks at the technology behind getting video onto a network and over the internet. He talks about cutting the stream into chunks, i.e. small files, and how sending a sequence of files can create a seamless stream of data. One key advantage of this method is Adaptive BitRate (ABR): the ability to change from one quality level to another, typically by changing bitrate, to adapt to changing network conditions.
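A minimal sketch of that idea (hypothetical ladder and logic, not code from the talk): before fetching each chunk, an ABR player measures recent throughput and picks the highest-bitrate rendition it can sustain.

```python
# Hypothetical ABR rendition ladder (bitrates in kilobits per second);
# real ladders and player heuristics are considerably more sophisticated.
LADDER_KBPS = [400, 1200, 2500, 5000]

def pick_rendition(measured_throughput_kbps: float,
                   safety_factor: float = 0.8) -> int:
    """Pick the highest rung that fits within a safety margin of the
    measured network throughput, falling back to the lowest rung."""
    budget = measured_throughput_kbps * safety_factor
    viable = [rate for rate in LADDER_KBPS if rate <= budget]
    return max(viable) if viable else LADDER_KBPS[0]

# The player re-evaluates before each chunk request and may switch rungs,
# which is what makes the stream 'adaptive'.
print(pick_rendition(4000))  # 2500 -> a mid-quality rung fits
print(pick_rendition(900))   # 400  -> congestion forces the lowest rung
```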

Finishing by talking about the standards available for online streaming, this talk is a great introduction to streaming and an important part of anyone’s foundational understanding of broadcast and streaming.

Watch now!

This video was produced by IET Media, a technical network within the IET which runs events, talks and webinars for networking and education within the broadcast industry. More information

Speakers

Russell Trafford-Jones
Exec Member, IET Media
Manager, Support & Services, Techex
Editor, The Broadcast Knowledge

Video: Where can SMPTE 2110 and NDI co-exist?

When are two video formats better than one? Broadcasters have long sought ‘best of breed’ systems, matching equipment as closely as possible to their ideal workflow. In this talk we look at getting the best of both compressed, low-latency video and uncompressed video. NDI, a lightly compressed, ultra-low-latency codec, allows full productions in visually lossless video with about a field of latency. SMPTE’s ST-2110 allows full productions with uncompressed video and almost zero latency.

Bringing together the EBU’s Willem Vermost, who paints a picture from the perspective of public broadcasters planning their moves into the IP realm; Marc Risby from UK distributor and integrator Boxer, who brings a more general view of the market’s interest; and Will Waters, who spent many years at NewTek, the company that invented NDI, we hear how the two approaches of compressed and uncompressed complement each other.

This panel took place just after the announcement that NewTek had been bought by VizRT, the graphics vendor, which sees a lot of benefit in being able to work in both types of workflow, for clients large and small, and which has made NewTek its own entity under the VizRT umbrella to ensure continued focus.

A key differentiator of NDI is its focus on 1-gigabit networking. Its aim has always been to enable ‘normal’ companies to deploy IP video easily so they can rapidly gain the advantages that IP workflows bring over SDI and other baseband video technologies. A keystone of this strategy is enabling everything to happen on the ordinary 1Gbit switches which are prevalent in most companies today. Other key elements of the codec are: a free software development kit, bi-directionality, resolution independence, audio sample-rate agnosticism, tally support, auto-discovery and more.
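Some back-of-the-envelope arithmetic shows why fitting on 1Gbit networks implies compression (illustrative figures, not numbers from the panel; the NDI bitrate is a commonly cited ballpark, assumed here):

```python
# Rough data-rate arithmetic for 1080p60 video (illustrative only).
# Uncompressed 10-bit 4:2:2: each pixel carries one luma sample plus,
# on average, one chroma sample (Cb/Cr at half horizontal rate).
width, height, fps = 1920, 1080, 60
bits_per_sample, samples_per_pixel = 10, 2  # Y plus averaged Cb/Cr

uncompressed_bps = width * height * fps * bits_per_sample * samples_per_pixel
print(f"Uncompressed: ~{uncompressed_bps / 1e9:.2f} Gbit/s")  # ~2.49 Gbit/s

# A single uncompressed HD flow already exceeds a 1GbE link, which is why
# ST-2110's uncompressed essences lean on 10GbE-plus networks, while NDI
# compresses each stream so several fit on ordinary 1Gbit switches.
ndi_estimate_mbps = 125  # assumed ballpark for HD NDI, for illustration
print(f"Streams per 1GbE link at ~{ndi_estimate_mbps} Mbit/s: "
      f"{1000 // ndi_estimate_mbps}")  # 8
```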

In the talk, we discuss the pros and cons of this approach, where interoperability is assured because everyone uses the same receive and transmit code, against having a standard such as SMPTE ST-2110. SMPTE ST-2110 has the benefit of being uncompressed, assuring the broadcaster that they have captured the best possible quality of video, and promises better management at scale, tighter integration into complex workflows, lower latency and the ability to treat the many different essences separately. Whilst we discuss many of the benefits of SMPTE ST-2110, you can get a more detailed overview from this presentation from the IP Showcase.

Watch now!

This panel was produced by IET Media, a technical network within the IET which runs events, talks and webinars for networking and education within the broadcast industry. More information

Speakers

Willem Vermost
Senior IP Media Technology Architect,
EBU
Marc Risby
CTO,
Boxer Group
Will Waters
Vice President Of Worldwide Customer Success,
VizRT
Moderator: Russell Trafford-Jones
Exec Member, IET Media
Manager, Support & Services, Techex
Editor, The Broadcast Knowledge

Video: Streaming Live Events: When it must be alright on the night

Live streaming is an important part not only of online viewing, but increasingly of broadcast in general. It’s well documented that live programming is key to keeping linear broadcast’s tradition of ‘everyone watching at once’, a tradition which has been diluted, with both pros and cons, by non-linear viewing in recent years.

This panel, part of IBC’s Content Everywhere, looks at the drivers behind live streaming, how it’s evolving and its future. Bringing together ultra-low-latency platform nanocosmos with managed service provider M2A Media and video player specialists VisualOn, Editor of The Broadcast Knowledge Russell Trafford-Jones starts the conversation by asking what gamification is and how this plays into live streaming.

nanocosmos’s Oliver Lietz explains how gamification is an increasing trend, not only in terms of monetising existing content but as a genre in and of itself, providing content which is either entirely a game or has a significant interactive element. With such services, it’s clear that latency needs to be almost zero, so his company’s ability to deliver one-second latency is why he has experience in these projects.

We also hear from VisualOn’s Michael Jones, who explains the low-latency service they were involved in delivering. Here, low-latency CMAF was used in conjunction with local synced-screen technology to ensure that not only was latency low, but second-screen devices were not showing video any earlier or later than the main screen. The panel then discusses the importance of latency compared to synchronised viewing, and where ultra-low latency is unnecessary.

Valentijn Siebrands from M2A talks about the ability to use live streaming and production in the cloud to deliver lower-cost sports events and also new types of programming. Valentijn then takes us into the topic of analytics, underlining the importance of streaming analytics which reveal the health of your platform and infrastructure as much as the analytics most usually talked about: those which tell you the quality of experience your viewers are having and their activities in your app.

The talk concludes with a look to the future, talking about the key evolving technologies of the moment and how they will help us move forward between now and IBC’s Content Everywhere Hub in 2021.

Watch now!

Speakers

Oliver Lietz
CEO & Founder,
nanocosmos
Michael Jones
SVP and Head of Business Development,
VisualOn Inc
Valentijn Siebrands
Solutions Architect,
M2A Media
Russell Trafford-Jones – Moderator
Manager, Support & Services – Techex
Executive Member – IET Media Technical Network