Video: How CBS Sports Digital streams live events at scale

Delivering streaming at high scale really exposes the weaknesses at every point of your workflow, so even for those of us not streaming at maximum scale, there are many lessons to be learnt. CBS Sports Digital delivered the Super Bowl on the principles of ‘practice, practice, practice’, keeping the solution as simple as possible, and prioritising the mitigation of problems over solving them.

Taylor Busch walks us through their solution, explaining how it supported these key principles and highlighting the technology used. Starting with acquisition, he covers the SDI fibre delivery to a backup facility as well as the AWS Direct Connect links for their Elemental Live encoders. The origin servers were in two different regions and both received data from both sets of encoders.

CBS used ‘output locking’, which ensures that the TS segments are aligned even across different encoders. This is done by respecting the timecode in the SDI and helps in encoder failover situations. QVBR is a method of encoding up to a quality level rather than simply saying ‘7000kbps’. QVBR makes more efficient use of bandwidth since, when a scene doesn’t require a lot of bandwidth, it won’t be sent. This variability, even if you run in capped mode to limit the bandwidth of particularly complex scenes, can look like a failing encoder to some systems, so the fact that the feed is now effectively in ‘VBR’ mode needs to be understood by all the departments and companies who monitor it.
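
To make the idea concrete, here’s roughly what capped QVBR looks like as encoder settings. The field names below are borrowed from AWS Elemental MediaConvert’s public API for illustration, and the values are invented; CBS’s actual Elemental Live configuration isn’t shown in the talk.

```typescript
// Sketch of capped QVBR (field names from the MediaConvert API; values illustrative).
const h264Settings = {
  RateControlMode: 'QVBR',               // encode to a quality target, not a fixed rate
  QvbrSettings: { QvbrQualityLevel: 8 }, // quality target on a 1-10 scale
  MaxBitrate: 7_000_000,                 // the cap: complex scenes can't exceed this
};
```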

Advertising is famously important for the Super Bowl, so Taylor gives an overview of how they used the CableLabs ESAM protocol and SCTE messaging to receive information about, and trigger, the adverts. This combined SCTE-104, ESAM and SCTE-35, as well as allowing clients to use VAST for tracking. Extra caching was provided by Fastly’s Media Shield, which also let them test for problems with manifests, origin servers and encoders. This fed a multi-CDN setup using four CDNs which could be switched between, with a decision point for each request to determine which CDN should answer.
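
As a sketch of what such a decision point might do, here is a minimal example; the metrics and weights are illustrative, not CBS’s actual logic.

```typescript
// A hedged sketch of a multi-CDN decision point: score each CDN from recent
// client measurements and send new sessions to the healthiest one.
interface CdnHealth {
  name: string;
  errorRate: number;      // fraction of failed requests, 0..1
  rebufferRatio: number;  // fraction of playback time spent rebuffering, 0..1
}

function pickCdn(cdns: CdnHealth[]): string {
  // Higher score means healthier; weights are invented for illustration.
  const score = (c: CdnHealth) => 1 - (0.6 * c.errorRate + 0.4 * c.rebufferRatio);
  return cdns.reduce((best, c) => (score(c) > score(best) ? c : best)).name;
}
```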

Taylor then looks at the tools, such as Mux’s dashboard, which they used to spot problems in the system: both NOC-style tools and multiviewers. They set up three war rooms which looked at different aspects of the system (connectivity, APIs etc.). This allowed them to focus on what should be communicated, keeping ‘noise’ down to give people the space they needed to do their work while still providing the information required. Taylor then opens up to questions from the floor.

Watch now!
Speaker

Taylor Busch
Senior Director Engineering,
CBS Sports Digital

Video: Edge Compute

Delivering personalised video at scale, live or otherwise, is a tradeoff between speed and complexity. In this lightning talk at Demuxed 2019, Kyle Boutette from Cloudflare explains the benefits of running code on the ‘edge’.

Kyle starts by highlighting the reason to use CDNs: they take the management of a whole fleet of servers off your hands, allowing you to concentrate on delivering a video service and deploying the technology to do just that. This works really well, and CDNs are the backbone of most of the large sites on the internet. Some companies build their own, whilst others use Cloudflare or Amazon CloudFront among the many CDNs out there. Apart from dealing with the admin of the servers, CDNs are careful to provide servers as close to your users as practical, which helps reduce latency.

The problem that Kyle exposes is that any personalisation needs to be done on the player itself or on the server. The former requires implementing the same features on many platforms; the latter destroys much of the value of the CDN, since it relies on the central server(s) to calculate the new information and send it to the CDN, bringing us back to square one.

The solution that Cloudflare has developed allows JavaScript to run on the CDN’s computers, referred to as the ‘edge’. This allows much of the logic to be carried out close to the consumer, giving the highest chance of reusing CDN assets whilst also reducing the latency of requests compared to talking to the central server infrastructure. Doing this in JavaScript provides a well-understood environment for web developers. Kyle provides examples showing how this can be done with relatively simple code.
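
For a flavour of what such edge code looks like, here is a minimal sketch in the style of a Cloudflare Worker (TypeScript, assuming the Workers types). The manifest placeholder and per-country ad playlist are hypothetical, purely to show personalising a manifest at the edge while the segments themselves stay cacheable.

```typescript
// Minimal Worker sketch: personalise HLS manifests at the edge, pass
// everything else straight through to the CDN cache.
addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(handle(event.request));
});

async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (!url.pathname.endsWith('.m3u8')) {
    // Video segments need no personalisation: serve them unchanged so the
    // CDN cache keeps doing its job.
    return fetch(request);
  }
  const upstream = await fetch(request);  // manifest fetched from origin/cache
  const manifest = await upstream.text();
  // Hypothetical personalisation: point each viewer at a regional ad playlist.
  const country = request.headers.get('cf-ipcountry') ?? 'US';
  const personalised = manifest.replace('{{AD_PLAYLIST}}', `/ads/${country}.m3u8`);
  return new Response(personalised, {
    headers: { 'content-type': 'application/vnd.apple.mpegurl' },
  });
}
```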

Watch now!
Speaker

Kyle Boutette
Systems Engineer,
Cloudflare

Video: Benefits of IP Systems for Sporting Venues

As you walk around any exhibition, there seems to be a myriad of ‘benefits’ of working over IP, many of which don’t resonate for particular use cases. Only the most extraordinary businesses need all of the benefits, so in this talk, Imagine Communications’ John Mailhot discusses how IP helps sports venues.

John sets the scene by separating out the function of OB trucks from that of the ‘inside production’ facilities, which have a whole host of non-TV production to do, including driving scoreboards, in-venue displays and replays, and which, importantly, have to deal with over 250 events a year, not all of which will have an OB truck.

We see that the scale at which IP can work is a great benefit, as many signals can fit down one fibre, and 2022-7 seamless switching can easily provide full redundancy for every fibre and SFP. This is a level of redundancy which is simply not seen in SDI systems. With stadia being very large, necessitating cable runs of over 500m, the fact that IP needs fewer cables overall is a great benefit.

John shows an example of an Arista switch, only 7U in height, which provides 144x 100G ports, meaning it could support over 4,000 inputs and 4,000 outputs. Such density is unprecedented and for OB trucks can be a dealbreaker. For sports venues, this can also be a big motivator, but it also allows more flexibility in distributing the solution rather than relying on a massive central interconnect such as an 1100×1100 SDI router in a central technical area (CTA).
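
To sanity-check that density claim, here’s a back-of-envelope calculation; the ~3 Gb/s per-flow figure for uncompressed 1080p ST 2110-20 and the 90% usable-bandwidth allowance are my assumptions, not John’s.

```typescript
// Rough capacity check: HD flows per 100G port, times 144 ports.
const ports = 144;
const flowsPerPort = Math.floor((100 * 0.9) / 3); // 30 uncompressed HD flows per port
console.log(ports * flowsPerPort);                // 4320 each way (ports are full
                                                  // duplex), consistent with "over 4000"
```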

TV is nothing without audio, and the benefits to audio in 2110 are non-trivial: with the audio split off from the video, we are no longer limited to just 16 channels per video, nor do we have to de-embed audio from a video frame any time we want to touch it.

Timing is an interesting benefit. I say this because, whilst PTP can end up being quite complex compared to black and burst, it has some big benefits. First off, it can live in the same cables as your data, whereas black and burst requires a whole separate cable infrastructure. PTP also allows you to timestamp all essences, which helps with lip-sync throughout your workflow.

John leads us through some examples of how this works for different areas, finishing by summing up the relevant benefits such as scalability, multi-format support, space efficiency and timing, amongst others.

Watch now!
Download the slides
Speaker

John Mailhot
CTO, Networking & Infrastructure,
Imagine Communications

Video: DASH: from on-demand to large scale live for premium services

A bumper video here with 7 short talks from VideoLAN, Will Law and Hulu among others, all exploring the state of MPEG DASH today, the latest developments and the hot topics such as low latency, ad insertion, bandwidth prediction and one red-letter feature of DASH – multi-DRM.

The first 10 minutes set the scene, introducing the DASH Industry Forum (DASH IF) and explaining who takes part and what it does. Thomas Stockhammer, chair of the Interoperability Working Group, explains that DASH IF is made up of member companies, headline members including Google, Ericsson, Comcast and Thomas’ employer Qualcomm, who work to promote the adoption of MPEG-DASH by improving the specification, advising on how to put it into practice in real life, promoting interoperability, and acting as a liaison point for other standards bodies. The remaining talks in this video exemplify the work being done by the group to push the technology forward.

Meeting Live Broadcast Requirements – the latest on DASH low latency!
Akamai’s Will Law takes to the mic next to look at the continuing push to make low-latency streaming available as a mainstream option for services to use. Will has spoken about low latency before: at Demuxed 2019 he discussed the three main file-based approaches to delivering low latency (DASH, LHLS and LL-HLS), and in his famous ‘Chunky Monkey’ talk he explains, in light-hearted detail, how CMAF, a common segmented media format which both DASH and HLS can use, works.

In today’s talk, Will sets out what ‘low latency’ means and revises how CMAF allows latencies of below 10 seconds to be achieved. A lot of people focus on the duration of the chunks when reducing latency, and while it’s true that it’s hard to get low latency with 10-second chunks, Will puts much more emphasis on the player buffer, rather than the chunks themselves, in producing a low-latency stream. This is because, even with very small chunks, choosing when to start playing (immediately, or waiting for the next chunk) is an important part of keeping down the latency between live and your playback position. A common technique for managing that latency is to slightly increase and decrease playback speed to manage the gap, hopefully without the viewer noticing.
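
A minimal sketch of that playback-speed technique is below; the target latency and thresholds are illustrative, not from Will’s talk.

```typescript
// Hold the player a target distance behind the live edge by nudging
// playbackRate, gently enough that viewers shouldn't notice.
function holdLatency(video: HTMLVideoElement, liveEdgeTime: number,
                     targetLatency = 3): void {
  const latency = liveEdgeTime - video.currentTime;
  if (latency > targetLatency + 0.5) {
    video.playbackRate = 1.05;   // slightly fast: reel the latency back in
  } else if (latency < targetLatency - 0.5) {
    video.playbackRate = 0.95;   // slightly slow: rebuild a little buffer
  } else {
    video.playbackRate = 1.0;    // within tolerance: play at normal speed
  }
}
```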

Segment-based streaming protocols like traditional HLS make Adaptive Bitrate (ABR) relatively easy: the player monitors the download of each segment. If a, say, 5-second segment arrives within 0.25 seconds, the player knows it could safely choose a higher-bitrate rendition next time. If, however, the segment arrives in 4.8 seconds, it can choose a lower bitrate for the next segment so as to receive it with more headroom. With CMAF this is not easy to do, since the chunks all arrive in near real-time: the transferred files represent very small sections of media and are sent as soon as they are created. This problem is addressed in a later talk in this video.
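
The segment-timing heuristic can be sketched in a few lines; the names and the 25% headroom factor are hypothetical, not from any particular player.

```typescript
// Estimate throughput from the last segment download and step up the
// ladder only when there is comfortable headroom.
function nextBitrate(segmentBits: number, downloadSeconds: number,
                     ladder: number[]): number {
  const throughputBps = segmentBits / downloadSeconds;
  // Keep ~25% headroom: only rungs well below measured throughput qualify.
  const affordable = ladder.filter(bps => bps * 1.25 <= throughputBps);
  return affordable.length > 0 ? Math.max(...affordable) : Math.min(...ladder);
}
```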

To finish off, Will talks about ‘Resync Elements’, which are a way of signalling mid-chunk IDRs. These help players find all the points at which they can join a stream or switch bitrate, which is important when some are not at the start of chunks. For live streams, these are noted in the manifest file, which Will walks through on screen.

Ad Insertion in Live Content: Pre-, Mid- and Post-rolling
Whilst not always a hit with viewers, ads are important to many services in terms of generating the revenue needed to continue delivering content. In order to provide targeted ads, to ensure they are available, and to ensure there is a record of which ads were played when, the ad-serving infrastructure is complex. Hulu’s Zachary Cava walks us through the parts of the infrastructure that are defined within DASH, such as exchanging information on ‘Ad Decision Parameters’ and ad metadata.

In chunked streams, ads are inserted at chunk boundaries. This presents challenges in making sure that certain parameters are maintained during the swap, which is given the general name of ‘content splice conditioning’. This conditioning can, for example, align the first segment with the period start time. Zachary lays out the three options provided for this splice conditioning before finishing his talk by covering prepared content recommendations, ad metadata and tracking.
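
As a simple model of that splice, consider this sketch; the types and names are hypothetical, purely to illustrate keeping period starts aligned with segment boundaries.

```typescript
// Split a content period at the splice point and drop an ad period in,
// shifting the remainder so every period start lines up with a segment.
interface Period { id: string; start: number; duration: number; }

function spliceAd(content: Period, spliceTime: number, ad: Period): Period[] {
  const before: Period = {
    id: `${content.id}-pre`,
    start: content.start,
    duration: spliceTime - content.start,
  };
  const after: Period = {
    id: `${content.id}-post`,
    start: spliceTime + ad.duration,              // content resumes after the ad
    duration: content.duration - before.duration, // remaining content
  };
  return [before, { ...ad, start: spliceTime }, after];
}
```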

Bandwidth Prediction for Multi-bitrate Streaming at Low Latency
Next up is Comcast’s Ali C. Begen, who follows on from Will Law’s talk to cover bandwidth prediction when operating at low latency. As an example of the problem, consider HTTP/1.1 chunked transfer, which allows us to download a file before it’s finished being written. This means we can receive a 10-second segment as it’s being written, i.e. at the same rate the live video is being encoded. As a consequence, the time each segment takes to arrive will be the same as its real-time duration (in this example, 10 seconds). When you are dealing with already-written segments, your download time depends on your bandwidth, and that time can therefore indicate whether your player should increase or decrease the bitrate of the stream it’s pulling. Getting this indicator back for low-latency streams is what Ali presents in this talk.

Based on this paper, which Ali co-authored with Christian Timmerer, he explains a way of looking at the idle time between consecutive chunks and using a sliding window to generate a bandwidth prediction.
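
In very simplified form, the idea looks something like the sketch below; the paper’s actual method is considerably more sophisticated, and the names here are mine.

```typescript
// Low-latency chunks arrive in bursts separated by idle gaps, so measure
// throughput only over the time actually spent downloading, then smooth
// the per-chunk rates with a sliding window.
interface ChunkSample { bits: number; busySeconds: number; } // idle time excluded

function predictBandwidth(samples: ChunkSample[], windowSize = 5): number {
  const recent = samples.slice(-windowSize);
  const rates = recent.map(s => s.bits / s.busySeconds); // burst rate per chunk
  return rates.reduce((sum, r) => sum + r, 0) / rates.length;
}
```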

Implementing DASH low latency in FFmpeg
Open-source developer Jean-Baptiste Kempf, who is well known for his work on VLC, discusses his work writing a low-latency MPEG-DASH implementation for FFmpeg, known as DASH-LL. He explains how it works and how to use it, with examples you can copy and paste from the PDF of his talk.

Managing multi-DRM with DASH
The final talk, ahead of the Q&A, is from NAGRA and discusses the use of DRM within MPEG-DASH. MPEG-DASH uses Common Encryption (CENC), which allows the DASH protocol to use more than one DRM scheme, typically allowing the use of ‘FairPlay’, ‘Widevine’ and ‘PlayReady’ encryption schemes on a single stream depending on the OS of the receiver. There is complexity in having a single server which can talk to multiple DRM licence services and negotiate licences with each, which is the difficulty that Laurent Piron discusses in this final talk, before the Q&A led by Ericsson’s VP of International Standards, Per Fröjdh.
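
From the player’s side, CENC’s multi-DRM promise often comes down to probing for the first supported key system via the browser’s EME API. A hedged sketch, not NAGRA’s implementation; the FairPlay entry in particular varies by Safari version.

```typescript
// Probe key systems in preference order and return the first one the
// platform supports.
const KEY_SYSTEMS = [
  'com.widevine.alpha',       // Widevine (Chrome, Firefox, Android)
  'com.microsoft.playready',  // PlayReady (Edge, many smart TVs)
  'com.apple.fps',            // FairPlay (Safari; naming varies by version)
];

async function pickKeySystem(): Promise<string | undefined> {
  for (const ks of KEY_SYSTEMS) {
    try {
      await navigator.requestMediaKeySystemAccess(ks, [{
        initDataTypes: ['cenc'],
        videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.64001f"' }],
      }]);
      return ks; // first supported key system wins
    } catch {
      // Not supported on this platform; try the next one.
    }
  }
  return undefined;
}
```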

Watch now!
Speakers

Thomas Stockhammer
Director of Technical Standards,
Qualcomm
Will Law
Chief Architect,
Akamai
Zachary Cava
Software Architect,
Hulu
Ali C. Begen
Technical Consultant, Video Architecture, Strategy and Technology group,
Comcast
Jean-Baptiste Kempf
President & Lead VLC Developer,
VideoLAN
Laurent Piron
Principal Solution Architect,
NAGRA
Moderator: Per Fröjdh
VP International Standards,
Ericsson