Video: Power Talks – ATSC 3.0

ATSC 3.0 is the next major step in broadcasting for the US, South Korea and other countries, and it updates the ATSC standard in so many ways that getting across it all is not trivial. All terrestrial broadcasting in the US is done with ATSC, as opposed to many other places, including Europe, which use DVB.

ATSC 3.0 brings in OFDM modulation, a tried and tested technology also used in DVB. But the biggest change in the standard is that all of the transport within ATSC 3.0 is IP. Using broadband as a return path, broadcasters now have two-way communication with their viewers, allowing transfer of data as well as media.
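
To make 'everything is IP' concrete, here's a minimal sketch – ours, not from the talk – of how a receiver with the demodulated broadcast stream on its network interface could pick up ATSC 3.0's Low Level Signalling. We're assuming the well-known multicast address and port from A/331; everything else is illustrative:

```python
import socket
import struct

# ATSC 3.0 Low Level Signalling (LLS) is carried as plain UDP multicast
# on a well-known address (per A/331): 224.0.23.60, port 4937.
LLS_GROUP, LLS_PORT = "224.0.23.60", 4937

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", LLS_PORT))

# Join the multicast group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(LLS_GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, addr = sock.recvfrom(65535)
    # The first byte of an LLS packet identifies the table type, e.g. the
    # Service List Table that tells the receiver what services are on air.
    print(f"LLS packet from {addr}: table id {packet[0]}, {len(packet)} bytes")
```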

In this talk from Imagine Communications, we take a look into the standard which, as is common nowadays, is a suite of standards. These standards cover emergency alerting, immersive audio, DRM, return paths and more. We then have a look at the system architecture of the ATSC 3.0 broadcast deployed in Phoenix.

South Korea has been pushing ATSC 3.0 forward, and Chet Dagit looks at what broadcasters there have been doing and how they've delivered high-quality UHD channels to consumers. He then looks at what the US can learn from this work, and also from DVB deployments in Europe.

Finally, Yuval Fisher looks at how the data and granularity available in ATSC 3.0 allow for more targeted ads, and at how you would manage that data internally and harness it for ad campaigns.

Watch now!

Speakers

Steve Reynolds
President, P & N Solutions,
Imagine Communications
Mark Corl
SVP of Emergent Technology,
Triveni Digital
Chet Dagit
Founder & Managing Member
RTP Holdings-Lokita Solutions
Yuval Fisher
CTO, Distribution
Imagine Communications

Video: 2019 Display Trends and Hot Display Apps

Display technology has always been deeply intertwined with broadcasting. After all, when John Logie Baird first demonstrated his working television, he had to invent both the camera and the display device, then known as the televisor. He worked tirelessly on improving television and, less than 20 years after his black and white debut, was working on a colour television which used two CRTs (Cathode Ray Tubes) to produce its picture, culminating in the world's first demonstration of a fully electronic colour TV display in 1944 – incidentally discovering, demonstrating and patenting 3D TV on the way!

So it is today that displays define what we can show to viewers. Is there any point in mastering a video at 10,000 nits if there is no display that can show something so bright? Pushing all of Europe's and the US's television programmes to 8K resolution is of limited benefit when 8K TVs are in limited supply and in few homes.

This talk looks at the state of the art in display technology, seeing where it's being used and how. Digital signage is covered, and of course this is where high-brightness technology is developed for outdoor signs – technology which could influence the more conventional TVs on which we want to watch HDR (High Dynamic Range) video.

When OLED technology first came along, it was quickly slated as a great option for TVs, and yet all these years later its adoption in large panels is low. This shows how difficult it can sometimes be to overcome the technical challenges of great technologies. We now see OLEDs in wearable devices and smaller screens, and the number of these screens is quickly increasing as IoT devices, watches and other electronics start to adopt full screens instead of just flashing LEDs. This increase in manufacturing should lead to renewed investment in the field, potentially allowing OLEDs to be incorporated into full-sized, large TVs.

The talk finishes with a look at the TV market, covering quantum dots and what people really mean when they mention 'LED TVs'.

This webinar is from the Society for Information Display and is produced in partnership with SMPTE.

Watch Now!

Speaker

Sri Peruvemba
CEO,
Marketer International

Video: Building Large SMPTE ST 2110 Systems Using JT-NM TR-1001-1

With the SMPTE ST 2110 suite of standards largely published and the related AMWA IS-04 and IS-05 specifications stable, people's minds are turning to how to implement all these standards, bringing them together into a complete working system.

JT-NM TR-1001-1 is a technical recommendation which describes how such a system should work – for instance, how do new devices on the network start up? How do they know which PTP domain is in use on the network?

John Mailhot starts by giving an overview of the standards and documents available, showing which ones are published and which are still in progress. He then looks at each of them in turn to summarise its use on the network and how it fits into the system as a whole.

Once the groundwork is laid, we see how the JT-NM working group has looked at five major behaviours and what it has recommended for making them work in a scalable way. These cover things like DNS discovery, automated multicast address allocation and other considerations.
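
To give a flavour of the DNS discovery behaviour: rather than anything proprietary, a TR-1001-1 device finds its NMOS registry with a standard DNS-SD SRV lookup. A minimal sketch using the dnspython library, where the domain is hypothetical and we're assuming the IS-04 service type '_nmos-register._tcp':

```python
import dns.resolver  # pip install dnspython

DOMAIN = "facility.example.com"  # hypothetical search domain handed out by DHCP

# DNS-SD: an SRV query for the IS-04 registration service type returns
# the host and port of the registry the device should register with.
answers = dns.resolver.resolve(f"_nmos-register._tcp.{DOMAIN}", "SRV")

for srv in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print(f"Registry candidate: {srv.target}:{srv.port} "
          f"(priority {srv.priority}, weight {srv.weight})")
```

The point of doing it this way is that a freshly racked device needs nothing more than DHCP and DNS to find its way into the system.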

Watch now

Speaker

John Mailhot
CTO, Networking & Infrastructure,
Imagine Communications

Video: Per-title Encoding at Scale

Mux is a very proactive company pushing forward streaming technology. At NAB 2019 they announced Audience Adaptive Encoding, which offers encodes tailored both to your content and to the typical bitrates of your viewing demographic. Underpinning this technology are machine learning and their per-title encoding technology, which was released last year.

This talk with Nick Chadwick looks at what per-title encoding is, how you can work out which resolutions and bitrates to encode at and how to deliver this as a useful product.

Nick takes some time to explain Mux's 'convex hulls', which give a shape to the content's performance at different bitrates and help visualise the optimum encoding parameters for the content. Moreover, using this technique we see some surprising circumstances in which it makes sense to start with high resolutions, even for low bitrates.
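
To make the idea concrete, here's a toy sketch of the hull-picking step – our own illustrative numbers and code, not Mux's. Each trial encode is a (bitrate, quality) point, and the frontier keeps only the encodes that no cheaper encode beats:

```python
# Toy per-title data: (resolution, bitrate in kbps, VMAF quality score)
# for a handful of trial encodes of one piece of content.
trial_encodes = [
    ("1920x1080", 6000, 96.0), ("1920x1080", 3000, 91.5), ("1920x1080", 1200, 78.0),
    ("1280x720",  3000, 90.0), ("1280x720",  1200, 82.0), ("1280x720",   600, 70.0),
    ("640x360",   1200, 75.0), ("640x360",    600, 68.0), ("640x360",    300, 55.0),
]

def quality_frontier(encodes):
    """Walk the encodes in bitrate order, keeping one winner per bitrate
    and dropping any encode that doesn't improve on a cheaper one."""
    frontier = []
    for res, kbps, vmaf in sorted(encodes, key=lambda e: (e[1], -e[2])):
        if not frontier or vmaf > frontier[-1][2]:
            frontier.append((res, kbps, vmaf))
    return frontier

# Note the resolution switches: the best rendition at a given bitrate
# is not always the same resolution, which is the whole point.
for res, kbps, vmaf in quality_frontier(trial_encodes):
    print(f"{kbps:>5} kbps -> {res:>9} (VMAF {vmaf})")
```

Picking the bitrate ladder from this frontier is what makes the encoding 'per-title': different content produces a differently shaped hull and therefore a different ladder.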

Looking then at how to actually work this out on a title-by-title basis, Nick explains the pros and cons of the different approaches, going on to explain how Mux used machine learning to generate the model that makes this work.

Finishing off with an extensive Q&A, this talk is a great overview of how to pick great encoding parameters, manually or otherwise.

Watch now!

Speaker

Nick Chadwick
Software Engineer,
Mux Inc.

Video: Into the Depths: The Technical Details behind AV1

As we wait for the dust to settle on this NAB's AV1 announcements, hearing who's added support for AV1 and what innovations have come because of it, we know that the feature set is frozen and that some companies will be using it. So here's a chance to go into some of the detail.

AV1 is being created by AOM, the Alliance for Open Media, of which Mozilla is a founding member. The IETF is considering it for standardisation under its NetVC working group, and implementations have started. On The Broadcast Knowledge, we have seen explanations from Xiph.org, one of the original contributors to AV1. We've seen how it fares against HEVC with Ian Trow, and how HDR can be incorporated into it from Google and Warwick University. For a complete list of all AV1 content, have a look here.

Now we join Nathan Egge, who talks us through many of the different tools within AV1, including one which often captures the imagination: AV1's ability to remove film grain ahead of encoding and then add synthesised grain back in on playback. In the Q&A, Nathan also looks ahead, talking about integration with RTP and WebRTC, and why broadcasters would want to use AV1.
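
If you want to try the grain tools Nathan describes, recent libaom builds expose them through FFmpeg. A hedged example – flag availability depends on your FFmpeg/libaom build, and the file names are placeholders – asking the encoder to denoise the source, model the removed grain, and signal it in the bitstream for the decoder to re-synthesise:

```python
import subprocess

# Denoise the source, estimate film grain parameters and write them into
# the AV1 bitstream; the decoder re-synthesises the grain at playback,
# so no bits are wasted trying to code the noise itself.
subprocess.run([
    "ffmpeg", "-i", "grainy_source.mov",
    "-c:v", "libaom-av1",
    "-denoise-noise-level", "25",  # 0 disables; higher = stronger grain model
    "-b:v", "2M",
    "av1_with_synthetic_grain.mkv",
], check=True)
```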

Watch now!

Speaker

Nathan Egge
Video Codec Engineer,
Mozilla

Video: Routing AES67

Audio moved to uncompressed over IP well ahead of video and has been reaping the benefits ever since. With longer-established workflows and, as has always been the case, far more feeds than video, the solutions have a higher maturity.

Anthony Kuzub from Ward-Beck Systems talks about the advantages of audio over IP and the things which weren't possible before. In a very accessible talk, you'll hear as much about soup cans as you will about the more technical aspects, like SDP.
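
For anyone who hasn't met SDP, it's the short text description that tells a receiver how to subscribe to a stream. Below is a hand-written example of roughly what an AES67 stream's SDP looks like – the addresses and names are made up, while the attributes follow AES67 conventions (L24 audio, 48 kHz, 1 ms packets, PTP reference clock):

```
v=0
o=- 1311738121 1311738121 IN IP4 192.168.1.10
s=Stagebox 1, channels 1-2
c=IN IP4 239.69.1.10/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/2
a=ptime:1
a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-12-34-56:0
a=mediaclk:direct=0
```

The 'c=' line carries the multicast address to join, 'rtpmap' says how to interpret the samples, and the last two lines tie the stream to the PTP clock so receivers can align it with everything else.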

Whilst uncompressed audio over IP started a while ago, that doesn't mean it's not still being developed – in fact, a lot of the focus is now on the interface with the video world, with SMPTE ST 2110-30 and -31 determining how audio can flow alongside video and other essences. As has been seen in other talks here on The Broadcast Knowledge, there's a fair bit to know (here's a full list).

To simplify this, Anthony, who is also Vice Chair of AES Toronto, describes the work the AES is doing to certify equipment as AES67 'compatible' – and what that would actually mean.

This talk finishes with a walk-through of a real-world OB deployment of AES67, which included simple touches such as using Google Docs for sharing links as well as more technical techniques such as a virtual sound card.

Packed full of easy-to-understand insights which are useful even to those who live for video, this IP Showcase talk is worth a look.

Watch now!

Speaker

Anthony P. Kuzub
IP Audio Product Manager,
Ward-Beck Systems

Video: Multicast ABR

Multicast ABR is a mix of two very beneficial technologies which are seldom seen together. ABR (Adaptive Bitrate) allows a player to change the bitrate of the video and audio it's playing to adapt to changing network conditions. Multicast is a network technology which sends a single video stream to many receivers at once, with the network replicating packets only where needed rather than duplicating bandwidth at the source.

ABR has traditionally been deployed for chunk-based streaming like HLS, where each client downloads its own copy of the video in blocks several seconds in length. This means that the bandwidth you use to distribute your video increases a thousandfold if 1,000 people play it.

Multicast works with live streams, not chunks, but allows the bandwidth used for 1,000 players to increase – in the best case – by 0%.

Here, the panellists look at the benefits of combining multicast distribution of live video with techniques that allow players to switch between different quality streams.
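
One technique in this space is to carry each quality level on its own multicast group, so 'adapting' simply means leaving one group and joining another. A hedged Python sketch of that switch – the addresses are hypothetical, and real deployments add buffering and packet repair on top:

```python
import socket
import struct

# Each rendition of the same live channel on its own multicast group.
RENDITIONS = {
    "hd":  ("239.1.1.1", 5000),   # ~6 Mbps
    "sd":  ("239.1.1.2", 5000),   # ~2.5 Mbps
    "low": ("239.1.1.3", 5000),   # ~0.8 Mbps
}

def make_socket(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    return s

def mreq(group):
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))

def switch_rendition(sock, old, new):
    """Adapt bitrate the multicast way: leave one group, join another.
    The network stops forwarding the old stream; no unicast session state."""
    if old:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq(RENDITIONS[old][0]))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq(RENDITIONS[new][0]))

sock = make_socket(5000)
switch_rendition(sock, None, "hd")   # start on the top rendition
# ...on measured packet loss or decoder underrun:
switch_rendition(sock, "hd", "sd")   # step down without touching the server
```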

This type of live streaming is actually backwards-compatible with old-style STBs: since the video sent is a live transport stream, it's possible to deliver it to a legacy STB using a converter in the home, at the same time as delivering a better, more modern stream to other TVs and devices.

It thus allows pure-streaming providers to compete with conventional broadcast cable providers, and can result in cost savings both in equipment provided and in bandwidth used.

There’s lots to unpack here, which is why the Streaming Video Alliance have put together this panel of experts.

Watch now and find out more!

Speakers

Phillipe Carol
Senior Product Manager,
Anevia
Neil Geary
Technical Strategy Consultant,
Liberty Global
Brian Stevenson
VP of Ecosystem Strategy & Partnerships,
Ericsson
Mark Fisher
VP of Marketing & Business Development,
Qwilt
Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: Blockchain & the Hollywood Supply Chain

At The Broadcast Knowledge, we’re continuing to cut through the hype and get to the bottom of blockchain. Now part of the NAB drinking game along with words like AI and 5G, it’s similarly not going away. The principle of blockchain is useful – just not useful everywhere.

So what can broadcasters do with blockchain and – given this is a SMPTE talk – what can film studios do with it? Blockchain undoubtedly makes secure, trusted systems possible, so the mind immediately jumps to using it to ensure all the files needed to create films are distributed securely and with an audit trail.

Here, Steve Wong looks at this but also explores the new possibilities it creates. He starts with the basics of what blockchain is and how it works, but soon moves on to how this could work for Hollywood, explaining what could exist and what already does.
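
To ground the audit-trail idea, here's a toy sketch – ours, not Steve's – of the core mechanism: each supply-chain event records the hash of the previous event, so editing any historical entry breaks every hash after it:

```python
import hashlib
import json
import time

def add_event(chain, event):
    """Append a supply-chain event, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "time": time.time(), "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    """Recompute every hash; any edited entry invalidates the rest of the chain."""
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False, i
        if i and entry["prev_hash"] != chain[i - 1]["hash"]:
            return False, i
    return True, None

ledger = []
add_event(ledger, "VFX house received reel_04, checksum abc123")
add_event(ledger, "Colourist delivered graded master")
print(verify(ledger))            # (True, None)
ledger[0]["event"] = "tampered"  # rewrite history...
print(verify(ledger))            # (False, 0) - the audit trail catches it
```

A real blockchain adds distributed consensus on top of this chaining, which is what removes the need to trust any single party's copy of the ledger.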

Watch now!

Speaker

Steve Wong
Cloud & Platform Services General Manager, Telecom, Media & Technology
DXC Technology

Video: Holographic update: Light Fields and the Future of Video

Recording light fields sounds like sci-fi: it allows you to record a video and then move around within that video as you please, changing your position and the angle you look from. This is why it's also referred to as holography.

It works by recording the video from many different viewpoints rather than just one angle. Processing all of these different videos of the same scene allows a computer to build a 3D video model which you can then watch using VR goggles or a holographic TV.
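
For the mathematically curious, the computer graphics literature (not this talk specifically) usually formalises a light field with the two-plane parameterisation:

```latex
% Each ray is indexed by where it crosses two parallel planes:
% (u, v) on the camera plane and (s, t) on the focal plane.
% The light field assigns a radiance to every such ray:
L = L(u, v, s, t)
```

A conventional camera fixes a single (u, v) viewpoint and records only (s, t); capturing many (u, v) samples is what lets playback re-render the scene from new positions and angles.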

In this talk from San Francisco Video Tech, Ryan Damm from Visby.io talks us through some of the basics of light fields and brings us up to date with the current status. Google, Microsoft and Intel are some of the big players investing in R&D, among many smaller startups.

Ryan talks about the need for standardisation of light fields. The things we take for granted in 2D video are compared with what you have in light field video, by way of explaining the challenges and approaches being seen today in this active field.

Watch now and learn!

Speaker

Ryan Damm
Co-founder,
Visby

Video: A Basic Guide For Real-Time IP Video

There are a lot of videos looking into the details of uncompressed video over IP, but not many for those still starting out – and let's face it, there are a lot of people who are only just embarking on this journey. Here, Andy Jones takes us through the real basics, which prove very useful as building blocks for understanding today's IP technologies.

Andy Jones is well known by many broadcast engineers in the UK, having spent many, many years working in the BBC's Training and Development department and subsequently running training for the IABM. The news that he passed away on Saturday is very saddening, and I'm posting this video in recognition of the immense amount he contributed to the industry through his years of tireless work. You can see in this video from NAB 2018 his passion, energy and ability to make complicated things simple.

In this talk, Andy looks at the different layers that networks operate on, including the physical layer, i.e. the cables. This is because the different ways in which traffic gets from A to B in networking are interdependent and need to be considered as such. He looks at an example network which shows all the different standards in use in an IP network and talks about their relevance.

Andy briefly looks at IP addresses and the protocol that makes them work, which underpins much of what happens on most networks, before looking at the Real-time Transport Protocol (RTP), which is heavily used for sending audio and video streams.
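
To show how little magic there is in RTP, here's a purely illustrative sketch that unpacks the fixed 12-byte header (RFC 3550) found at the start of every RTP packet, whether it's carrying ST 2110 video or AES67 audio:

```python
import struct

def parse_rtp_header(packet: bytes):
    """Unpack the fixed 12-byte RTP header (RFC 3550)."""
    if len(packet) < 12:
        raise ValueError("too short to be RTP")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version":      b0 >> 6,          # always 2
        "payload_type": b1 & 0x7F,        # dynamic types (96+) used by ST 2110
        "marker":       bool(b1 & 0x80),  # e.g. flags end of a video frame
        "sequence":     seq,              # detect loss and reordering
        "timestamp":    timestamp,        # media clock, tied to PTP in 2110/AES67
        "ssrc":         ssrc,             # identifies the stream source
    }

# A made-up packet: version 2, payload type 96, marker set, seq 1.
example = struct.pack("!BBHII", 0x80, 0xE0, 1, 90000, 1) + b"payload"
print(parse_rtp_header(example))
```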

After looking at how timing is done in IP (as opposed to with black and burst), he has laid enough foundations to look at SMPTE ST 2110 – the suite of standards which shows how different media (essences) are sent over networks delivering uncompressed streams. AES67 for the audio is also covered, before looking at how to control the whole kit and caboodle.

A great primer for those starting out, watch now!

Speaker

Andy Jones