Video: DVB-I – Linear Television with Internet Technologies

Outside of computers, life is rarely binary. There’s no reason for all TV to be received online, like Netflix or iPlayer, or all over the air by satellite or DVB-T. In fact, by using a hybrid approach, broadcasters can reach more people and deliver more services than before, including securing an easier path to higher definition or next-gen pop-up TV channels.

Paul Higgs explains the work the DVB Project has been doing to standardise a way of delivering on this promise: linear TV with internet technologies. DVB-I is split into three parts:

1. Service discovery

DVB-I lays out ways to find TV services, including auto-discovery and recommendations. The A177 Bluebook provides a mechanism for finding IP-based TV services. Service lists bring together channels and geographic information, whereas service list registries are specified to provide a place to go in order to discover service lists. A sketch of how a client might use one follows below.
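
To make the discovery flow concrete, here is a minimal sketch of a client querying a service list registry and pulling channel names out of the returned service list. The registry URL is a placeholder and the element names are simplified stand-ins for the full A177 XML schema.

```typescript
// Minimal sketch of DVB-I service discovery against a hypothetical
// registry endpoint; the real schema in Bluebook A177 is far richer.
async function discoverServices(registryUrl: string): Promise<string[]> {
  // Step 1: ask the registry for a service list (an XML document).
  const response = await fetch(registryUrl);
  const xml = await response.text();

  // Step 2: pull out each service's display name. A real client would use
  // a proper XML parser and read much more: logos, genres, regions, etc.
  const names: string[] = [];
  for (const match of xml.matchAll(/<ServiceName[^>]*>([^<]+)<\/ServiceName>/g)) {
    names.push(match[1]);
  }
  return names;
}

// Usage: list whatever channels the registry offers this client.
discoverServices("https://registry.example.com/servicelist.xml")
  .then((channels) => console.log(channels));
```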

2. Delivery
Internet delivery isn’t a reason for low-quality video. It should be as good as, or better than, traditional methods because, at the end of the day, viewers don’t actually care which medium was used to receive the programmes. Streaming with DVB-I is based on MPEG DASH and defined by DVB-DASH (Bluebook A168). Moreover, DVB-I services can be simulcast so they are co-timed with broadcast channels. Viewers can, therefore, switch between the broadcast and internet versions of a service.

3. Presentation
Naturally, a plethora of metadata can be delivered alongside the media for use in EPGs and on-screen displays, including logos, banners, programme guide data and content protection information.

Paul explains that this is brought together with three tools: the DVB-I reference client player, which works on Android and HbbTV, DVB-DASH reference streams and a DVB-DASH validator.

Finishing up, Paul adds that network operators can take advantage of the complementary DVB Multicast ABR specification to reduce the bitrate coming into the home. DVB-I will be expanded in 2021 and beyond to include targeted advertising, home redistribution and delivery of IP video over traditional over-the-air broadcast networks.

Watch now!
Speaker

Paul Higgs
Chairman – TM-I Working Group, DVB Project
Vice President, Video Industry Development, Huawei

Video: Broadcasting WebRTC over Low-Latency DASH

Using sub-second WebRTC with the scalability of CMAF: allowing panellists and presenters to chat in real time is really important for fostering fluid conversations, but broadcasting that out to thousands of people scales more easily with CMAF, which is based on MPEG DASH. In this talk, Mux’s Dylan Jhaveri (formerly CTO of Crowdcast.io) explains how they’ve combined WebRTC and CMAF to keep latencies low for everyone.

Speaking at the San Francisco VidDev meetup, Dylan explains that the Crowdcast webpage allows you to watch a number of participants talking in real time as a live stream, with live chat down the side of the screen. The live chat naturally feeds into the live conversation, so latency needs to be low for the viewers as much as for the on-camera participants. For the participants, WebRTC is used as it is one of the very few options that provides reliable sub-second streaming. To keep the interactivity between the chat and the participants, Crowdcast decided to look at ultra-low-latency CMAF, which can deliver between 1 and 5 seconds of latency depending on your risk threshold for rebuffering. So the task became converting a WebRTC call into a low-latency stream that could easily be received by thousands of viewers.

Dylan points out that they were already taking WebRTC into the browser, as that’s how people were using the platform. Therefore, using headless Chrome should allow them to pipe the video from the browser into ffmpeg and create an encode without having to composite individual streams, whilst giving Crowdcast full layout control.
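
The talk doesn’t detail the exact capture mechanism, but a common way to realise this pattern is to render headless Chrome to a virtual display (such as Xvfb) and have ffmpeg grab that display. Below is a hedged sketch in Node/TypeScript; the display number, resolution and RTMP endpoint are illustrative assumptions, not Crowdcast’s actual configuration.

```typescript
import { spawn } from "child_process";

// Hedged sketch: encode whatever Chrome is rendering on an Xvfb display.
// Display ":99", 1280x720@30 and the RTMP URL are assumptions for
// illustration, not Crowdcast's real configuration.
const ffmpeg = spawn("ffmpeg", [
  "-f", "x11grab",                  // capture the virtual X display
  "-video_size", "1280x720",
  "-framerate", "30",
  "-i", ":99",                      // the display headless Chrome draws to
  "-f", "pulse", "-i", "default",   // pick up browser audio from PulseAudio
  "-c:v", "libx264",
  "-preset", "veryfast",
  "-tune", "zerolatency",           // keep encoder buffering to a minimum
  "-c:a", "aac",
  "-f", "flv", "rtmp://localhost/live/stream", // hand off for packaging
]);

// Surface encoder logs so failures are visible.
ffmpeg.stderr.on("data", (chunk) => process.stderr.write(chunk));
```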

After a few months of tweaking, Dylan and his colleagues had Chrome going into ffmpeg and then into a Node.js server which delivers CMAF chunks and manifests. In order to scale this, Dylan explains the logic implemented in the CDN to use the Node.js server, running in a Docker container, as an origin server. Using HLS they see a 95% cache hit rate and achieve 15 seconds of latency. Tests at the time of the talk, Dylan explains, show the CMAF implementation hitting 3 seconds of latency and working as expected.
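
The origin code itself isn’t shown in the talk, but the caching logic that gets to a high hit rate is easy to sketch: completed media segments are immutable and can be cached aggressively by the CDN, while the manifest changes constantly and needs a very short TTL. The paths and TTLs below are assumptions for illustration.

```typescript
import express from "express";

// Hedged sketch of a CMAF origin behind a CDN. Completed media segments
// never change, so they can be cached for a long time; the manifest is
// rewritten constantly, so it gets a very short TTL.
const app = express();

app.get("/live/:segment.m4s", (req, res) => {
  res.set("Cache-Control", "public, max-age=3600"); // immutable once written
  res.sendFile(`/media/live/${req.params.segment}.m4s`);
});

app.get("/live/manifest.mpd", (_req, res) => {
  res.set("Cache-Control", "public, max-age=1"); // manifest must stay fresh
  res.sendFile("/media/live/manifest.mpd");
});

app.listen(8080); // the CDN points at this container as its origin
```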

The talk ends with a Q&A covering how they get the video out of the headless Chrome instance, whether CMAF latency could be improved and why there are so many Docker containers.

Watch now!
Speaker

Dylan Jhaveri
Senior Software Engineer, Mux
Formerly CTO & Co-founder, Crowdcast.io

Video: Low-latency DASH Streaming Using Open Source Tools

Low-Latency DASH, also known as LL-DASH, is a modification of MPEG DASH that allows it to operate with close to two seconds’ latency, bringing it down to meet, or beat, standard broadcast signals.

Brightcove’s Bo Zhang starts by outlining the aims and the methods of getting there. For instance, he explains, HTTP/1.1 chunked transfer encoding is key to low-latency streaming as it allows the server to start sending a video segment while it’s still being written, rather than waiting until the file is complete. LL-DASH also has the ability to state an availability window (‘availabilityTimeOffset’).
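
To make the chunked-transfer idea concrete, here is a minimal sketch of a segment server in Node/TypeScript. Because no Content-Length header is set, Node falls back to HTTP/1.1 chunked transfer encoding, so each write() reaches the player as soon as it happens; the file-based chunk source is a stand-in for a real packager.

```typescript
import http from "http";
import fs from "fs";

// Minimal sketch of HTTP/1.1 chunked transfer for LL-DASH. With no
// Content-Length header, Node sends each write() as an HTTP chunk, so the
// player can receive CMAF chunks while the segment is still growing.
http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "video/mp4" });

  // Stand-in for the packager: stream the segment file as it is written.
  const source = fs.createReadStream(`/media/live${req.url}`);
  source.on("data", (chunk) => res.write(chunk)); // one HTTP chunk per read
  source.on("end", () => res.end());              // close once the segment completes
}).listen(8080);
```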

As LL-DASH is a living standard, there are updates on the way: resync points will allow a player to receive a list of places where it can join a stream, using SAP types from the ISO BMFF spec; the server can send a ‘service description’ to the player, which can use the information to adjust its latency; and event messages can now be inserted in the middle of segments.

Bo then moves on to explain that he and the team have set up an experiment to gain experience with LL-DASH and test overall latency. He shows that they decided to stream RTMP out of OBS into a GitHub project called ‘node-gpac-dash’ and then on to the dash.js player, all between Boston and Seattle. This test runs at 800×600, 30fps, with a bitrate of 2.5Mbps and shows results of between 2.5 and 5 seconds depending on network conditions.
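
On the player side, dash.js needs to be told to run in low-latency mode and what latency to target. A minimal setup looks something like the sketch below; the option names follow the dash.js v3 settings API of around the time of the talk (they have since moved between versions) and the manifest URL is a placeholder.

```typescript
import dashjs from "dashjs";

// Hedged sketch of a dash.js v3-era low-latency setup; option names have
// changed in later dash.js releases and the manifest URL is a placeholder.
const url = "https://example.com/live/manifest.mpd";
const player = dashjs.MediaPlayer().create();

player.updateSettings({
  streaming: {
    lowLatencyEnabled: true,      // consume segments as chunked transfers
    liveDelay: 3,                 // target latency behind live, in seconds
    liveCatchUpMinDrift: 0.05,    // drift tolerated before catching up
    liveCatchUpPlaybackRate: 0.5, // max playback-rate change used to catch up
  },
});

player.initialize(document.querySelector("video") as HTMLMediaElement, url, true);
```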

As Bo moves towards the Q&A, he says that low-latency streaming is less scalable because a TCP connection needs to be kept open between the player and the CDN, which is a burden. Another compromise is that smaller chunk sizes in LL-DASH reduce latency but increase I/O, meaning you may sometimes have to increase the chunk size (and hence the latency of the stream) to get better performance. He also adds that advertising is more difficult with low-latency streams due to the short amount of time available to request and receive the adverts.

Watch now!
More detail about the experiments in this talk can be found in Bo’s blog post.
Speaker

Bo Zhang
Staff Video System Engineer, Research
Brightcove

Video: Web Media Standards

The internet has been a continuing story of proprietary technologies being overtaken by open ones, from the precursors to TCP/IP, to Flash/RTMP video delivery, to HLS. Understanding the history of why these technologies appear, why they are subsumed by open standards and the boost in popularity that happens at that transition is important to help us make decisions now and foresee how the technology landscape may look in five or ten years’ time.

This talk, by John Simmons, is a talk of two halves. Looking first at the history of how our standards coalesced into what we have today fills in many blanks and makes the purpose of current technologies like MPEG DASH and CMAF clearer. He then looks at how we can understand what we have today in light of similar situations in the past, answering the question of whether we are at an inflexion point in technology.

John first looks at the importance of making DRM-protected content as portable as non-protected content, which was always easy to move between computers and systems. This was in response to a WIPO analysis which, as many would agree, concluded that such portability was essential to enable legal video use on the internet. In 2008, Microsoft analysed all the elements needed, beyond the simple encryption, to allow such media to be portable: it would require HTML extensions for delivery, DRM signalling, authentication, a standard protocol for adaptive delivery (also known as ABR) and an adaptive container format. We then take a walk through the timeline, starting in 2009 and running through to 2018, seeing the beginnings and published availability of technologies such as Common Encryption, MPEG DASH and CMAF.

Milestones for Web Media Portability

John then walks through these key technologies, starting with the importance of Common Encryption (also known as CENC). Previously, each DRM method had its own container format. Harmonisation of DRM is likely never going to happen, so we’ll always have Apple’s own, Google’s own, Microsoft’s and plenty of others. For streaming providers, delivering all the different formats is a major problem and makes for messy, duplicative workflows. Common Encryption allows for one container format which can carry the signalling for any DRM system, allowing for a single workflow with different inputs. On the player side, the player can now simply accept a single stream of DRM information, authenticate with the appropriate service and decode the video.
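
To make that concrete, the sketch below shows one simplified way a player could enumerate the DRM signalling in a Common Encryption file: each DRM system contributes a ‘pssh’ box tagged with its 16-byte SystemID, and the player picks the one it supports. A real implementation would walk the ISO BMFF box tree properly rather than naively scanning bytes.

```typescript
// Hedged sketch: naively scan an ISO BMFF buffer for 'pssh' boxes. With
// Common Encryption, one file carries a pssh box per DRM system, each
// identified by a 16-byte SystemID, so a single container serves them all.
function listDrmSystems(buf: Uint8Array): string[] {
  const systemIds: string[] = [];
  for (let i = 4; i + 24 <= buf.length; i++) {
    // Match the 4-character box type 'pssh' (0x70 0x73 0x73 0x68).
    if (buf[i] === 0x70 && buf[i + 1] === 0x73 &&
        buf[i + 2] === 0x73 && buf[i + 3] === 0x68) {
      // Box layout: [size][type='pssh'][version+flags][SystemID, 16 bytes]...
      const id = buf.subarray(i + 8, i + 24);
      systemIds.push(
        Array.from(id, (b) => b.toString(16).padStart(2, "0")).join("")
      );
    }
  }
  return systemIds;
}

// A Widevine-protected stream would report
// 'edef8ba979d64acea3c827dcd51d21ed' here, for example.
```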

CMAF is another key technology called out by John as enabling portability of media. It was co-developed with Apple to enable a common media format for HLS and DASH. We’ve covered this before on The Broadcast Knowledge, starting with the ISO BMFF format on which DASH and CMAF are based, Will Law’s famous ‘Chunky Monkey’ talk and many more. We recently covered FuboTV’s talk on how they approach multi-codec encoding and packaging for HLS and DASH.

Also highlighted by John are the JavaScript Media Source Extensions and Encrypted Media Extensions, which allow browsers/JavaScript to interact with both ABR/adaptive streaming and DRM. He then talks about CTA WAVE, a project that specifically aims to improve streamed media experiences on consumer devices, the CTA being the Consumer Technology Association behind the annual CES exhibition in Las Vegas.
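
As a flavour of what EME exposes to JavaScript, here is a pared-down sketch of the licence-acquisition flow. The key system string is Widevine’s well-known identifier, but the licence server URL, codec string and error handling are placeholders.

```typescript
// Pared-down sketch of the EME flow; the licence server URL and codec
// string are placeholders, and real players handle many more edge cases.
async function setUpDrm(video: HTMLVideoElement): Promise<void> {
  // Ask the browser for a CDM that can handle this key system and codec.
  const access = await navigator.requestMediaKeySystemAccess(
    "com.widevine.alpha",
    [{
      initDataTypes: ["cenc"], // Common Encryption init data from the stream
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.64001f"' }],
    }]
  );
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // The 'encrypted' event fires when the stream signals DRM init data.
  video.addEventListener("encrypted", async (event) => {
    const session = mediaKeys.createSession();
    session.addEventListener("message", async (msg) => {
      // Forward the key request to the licence server (placeholder URL)
      // and hand the response back to the CDM.
      const res = await fetch("https://licence.example.com/widevine", {
        method: "POST",
        body: msg.message,
      });
      await session.update(await res.arrayBuffer());
    });
    await session.generateRequest(event.initDataType, event.initData!);
  });
}
```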

What is often less apparent is the current work happening to develop new standards and specifications. John calls out a number of projects within W3C and MPEG, such as low-latency support for CMAF and MSE, and codec switching in MSE. Work on ad signalling, period boundaries and SCTE-35 is making its debut in JavaScript, with ongoing work to create the link between ad markers and JS applications. He also calls out VVC and AV1 mappings into CMAF.

In the second part of the presentation, John asks ‘where will we end up?’ and draws upon two examples. One is the number of TCP/IP hosts between 1980 and 1992: once TCP/IP was publicly available there was an exponential increase in its adoption, moving on from the proprietary network interfaces available in the years before. It was similar with websites between 1990 and 1997: exponential growth happened after 1993, when the standard was set for web clients. This did take a few years to have a marked effect, but the number of websites moved from a flat ‘less than 100’ to 600, then 10,000 in 1994, increasing to a quarter of a million by 1995 and over one million in 1996. This shows the difference in power between ‘walled garden’ environments and the open internet.

John sees media technology today as still having a number of ‘traditional’ walled gardens, such as DISH and Sky TV. He sees people, typically known as ‘cord cutters’, self-serving from multiple walled gardens to create their own larger pool of media options. He therefore sees two options for the future. One is ever-larger walled gardens where large companies aggregate the content of smaller content owners and providers. The other is cloud services that act as a one-stop shop for your media but dynamically authenticate against whichever service is needed. This is a much more open environment, without the need to subscribe separately to each and every outlet in the traditional sense.

Watch now!
Speaker

John Simmons
W3C Evangelist, Media & Entertainment
W3C