Video: The End of Broadcast? Broadcast to IP Impacts

It’s very clear that internet streaming is growing, often at the expense of traditional over-the-air broadcast viewership. This panel explores the progress of IP-delivered TV and the changes in viewing habits it is already prompting, and looks at the resulting future impacts on broadcast television.

Speaking at the IABM Theatre at IBC 2019, Ian Nock, chair of IET Media, sets the scene. He highlights stats such as 61% of Dutch viewing being non-linear and DirecTV publicly declaring they ‘have bought their last transponder’, and discusses the full-platform OTT services available in the marketplace now.

To add detail to this, Ian is joined by DVB, the UK’s DTG and Germany’s Deutsche TV-Plattform, which is dealing with the transformation to IP within Germany. Yvonne Thomas, from the Digital Television Group, takes to the podium first, starting with the youngest part of the population, which shows a clear tendency to watch streamed services over broadcast compared to other generations. Yvonne talks about research showing UK consumers are willing to have three subscriptions to media services, which is not in line with the number and fragmented nature of the options. She then finishes with the DTG manifesto for a consolidated, and thus simplified, way of accessing multiple services.

Peter Siebert from DVB looks at viewing time averaged across Europe, which shows that the amount of time spent watching linear broadcast is actually staying stable – as is the amount of time spent watching DVDs. He also highlights the fact that the TV itself is still very much the most used device for watching media, even if it’s not RF-delivered; the TV still provides the best quality of video and a shared experience. Looking at history to understand the future, Peter shows a graph of cinema popularity before and after the introduction of television. Cinema was, indeed, impacted but, importantly, it did not die. We are left to conclude that linear broadcast will similarly not disappear, but simply have a different place in the future.

Finally, André Prahl, head of the panel session, explains the role of the Deutsche TV-Plattform, which is focussing on ‘media over IP’ with respect to delivery of video to the end user, both in terms of internet bandwidth and Wi-Fi frequencies within the home.

Watch now!

This panel was produced by IET Media, a technical network within the IET which runs events, talks and webinars for networking and education within the broadcast industry. More information

Speakers

André Prahl
Deutsche TV-Plattform
Peter Siebert
Head of Technology,
DVB Project
Yvonne Thomas
Strategic Technologist
Digital TV Group
Moderator: Ian Nock
Chair,
IET Media Technical Network

Video: Real-Time Remote Production For The FIFA Women’s World Cup

We hear about so many new and improved cloud products and solutions to improve production that, once in a while, you really just need to step back and hear how people have put them together. This session is just that: a look at the whole post-production workflow for FOX Sports’ production of the Women’s World Cup.

This panel from the Live Streaming Summit at Streaming Media West is led by FOX Sports’ Director of Post Production, Brandon Potter, as he talks through the event with three of his key vendors: IBM Aspera, Telestream and Levels Beyond.

Brandon starts by explaining that this production built on the work they did for the Men’s World Cup in Russia, both having SDI delivery of media in PAL at the IBC. For this event, all the edit crew was in LA, which created problems with some fixed frame-rate products still in use in the US facility.

Data transfer, naturally, is the underpinning of any event like this, with a total of a petabyte of data being created. Network connectivity for international events is always tricky. With so many miles of cable, whether on land or under the sea, there is a very high chance of the fibre being cut. At the very least, the data can be switched to take a different path and, in that moment, there will be data loss. All of this means that you can’t assume the type of data loss; it could be seconds, minutes or hours. On top of creating, and affording, redundant data circuits, the time needed for transfer of all the data needs to be considered and managed.

Ensuring complete transfer of files in a timely fashion drove the production to auto-archive all content in real time into Amazon S3 in order to avoid long post-match ingest times of multiple hours. “Every bit of high-res content was uploaded,” stated Michael Flathers, CTO of IBM Aspera.

Dave Norman, from Telestream, explains how the live workflows stayed on-prem with the high-performance media and encoders and then, “as the match ended, we would then transition…into AWS”. In the cloud, the HLS proxies would then be rendered into a single MP4 proxy for editing.

Daniel Gonzales explains the benefits of the full API integrations they chose to build their multi-vendor solution around, rather than simple watch-folders. Having every platform know where the errors were was very valuable, and it was particularly useful for remote users to know in detail where their files were. This reduced the number of times they needed to ask someone for help and meant that, when they did need to ask, they had a good amount of detail with which to specify the problem.

The talk comes to a close with a broad analysis of the different ways that files were moved and cached in order to optimise the workflow. There was a mix of TCP-style workflows and Aspera’s UDP-based transfer technology. It’s worth noting, also, that HLS manifests needed to be carefully created to reference only chunks that had been transferred, rather than simply any that had been created. Live creation of clips from growing files was also an important tool: the in- and out-points were chosen by viewing a low-latency proxy stream, then the final file was clipped from the growing file in France and delivered within minutes to LA.
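The manifest discipline described above – never referencing a chunk that hasn’t finished transferring – can be sketched in a few lines. This is an illustrative toy, not FOX Sports’ actual tooling; the segment names, durations and helper function are all assumptions.

```python
# Sketch: build an HLS media playlist that references only segments
# which have fully arrived at the destination, stopping at the first
# gap so a player never requests a chunk that is still in flight.

def build_manifest(created_segments, transferred, target_duration=6):
    """Return an HLS playlist listing only the leading run of
    segments whose transfer has completed."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for name in created_segments:
        if name not in transferred:
            break  # first gap found: reference nothing beyond it
        lines.append(f"#EXTINF:{target_duration:.1f},")
        lines.append(name)
    return "\n".join(lines) + "\n"

created = ["seg0.ts", "seg1.ts", "seg2.ts", "seg3.ts"]
arrived = {"seg0.ts", "seg1.ts", "seg3.ts"}  # seg2 still transferring
print(build_manifest(created, arrived))
```

Note that seg3 is excluded even though it has arrived: advertising it would imply seg2 is playable too.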

Overall, this case study gives a good feel for the problems and good practices which go hand in hand with multi-day events relying on international connectivity, and shows that large-scale productions can successfully, and quickly, provide full access to all media to their production teams, maximising the material available for creative uses.

Watch now!
Speakers

Mike Flathers
CTO,
IBM Aspera
Brandon Potter
Director of Post Production,
FOX Sports
Dave Norman
Principal Sales Engineer,
Telestream
Daniel Gonzales
Senior Solutions Architect,
Levels Beyond

Video: There and back again: reinventing UDP streaming with QUIC

QUIC is an encrypted transport protocol promising increased performance over HTTP as delivered today, but will this help video streaming platforms? Often conflated with HTTP/3, QUIC is a UDP-based evolution of HTTP/2 which, in turn, was a shake-up of the standard HTTP/1.1 delivery method of websites. HTTP/3 uses the same well-known security handshake from TLS 1.3, now widely adopted by websites around the world, to provide encryption by default. Importantly, QUIC creates a connection between the two endpoints into which data streams are multiplexed. This removes the need to constantly negotiate new connections, as found in HTTP/1.x, helping with speed and efficiency. These multiplexed flows are known as QUIC streams.
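The multiplexing idea can be illustrated with a toy model: frames from several independent streams share one connection, each frame tagged with a stream ID so the receiver can reassemble the streams separately. This is purely conceptual – it is not the QUIC wire format, and the stream IDs and payloads are invented for the example.

```python
# Conceptual sketch of QUIC-style stream multiplexing: many streams,
# one connection, frames tagged with a stream ID.

from collections import defaultdict

def multiplex(streams):
    """Interleave (stream_id, chunk) frames from several streams
    into one shared frame sequence (the 'connection')."""
    frames = []
    iters = {sid: iter(chunks) for sid, chunks in streams.items()}
    while iters:
        for sid in list(iters):
            try:
                frames.append((sid, next(iters[sid])))
            except StopIteration:
                del iters[sid]  # this stream is finished
    return frames

def demultiplex(frames):
    """Reassemble each stream from the shared frame sequence."""
    out = defaultdict(list)
    for sid, chunk in frames:
        out[sid].append(chunk)
    return {sid: b"".join(chunks) for sid, chunks in out.items()}

# Two requests travel interleaved over the same connection,
# so neither has to wait for a new handshake.
streams = {0: [b"GET /", b"index"], 4: [b"GET /", b"logo"]}
wire = multiplex(streams)
assert demultiplex(wire) == {0: b"GET /index", 4: b"GET /logo"}
```

In real QUIC the streams are also independently flow-controlled and loss on one stream does not stall the others, which is the key advantage over TCP’s single ordered byte stream.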

QUIC streams provide reliable delivery, explains Lucas Pardue from Cloudflare, meaning packets are recovered when they are lost. Moreover, says Lucas, this is done in an extensible way: the standard specifies a basic model which can then be built upon. Indeed, the benefit of basing this technology on UDP is that changes can be made programmatically in user space, in lieu of the kernel changes typically needed to improve handling of TCP, on which HTTP/1.1, for example, is based.

QUIC hailed from a project of the same name created by Google. It has since been taken in by the IETF and, in the open community, honed and rounded into the QUIC we are hearing about today, which is notably different from the original but maintains the improvements proved in the first release. HTTP/3 is the syntax which develops on from HTTP/2 and uses the QUIC transport protocol underneath or, as Lucas would say, “HTTP/3 is the HTTP application mapping to the QUIC transport layer.” Lucas is heavily involved in the IETF effort to standardise HTTP/3 and QUIC, so he continues in this talk to explain how QUIC streams are managed, identified and used.

It’s clear that QUIC and HTTP/3 are being carefully created to be tools for future, unforeseen applications, with clear knowledge that they have wide applicability. For that reason, we are already seeing projects to add datagrams and RTP into the mix, and to add multiparty or multicast delivery, in many ways mimicking what we already have in our local networks. Putting these capabilities on QUIC can enable them to work on the internet and open up new ways of delivering streamed video.

The talk finishes with a nod to the fact that SRT and RIST also deliver many of the things QUIC delivers and Lucas leaves open the question of which will prosper in which segments of the broadcast market.

The Broadcast Knowledge has well over 500 talks/videos on many topics so to delve further into anything discussed above, just type into the search bar on the right. Or, for those who like URLs, just add your search query to the end of this URL https://thebroadcastknowledge.com/tag/.

Lucas has already written in detail about his work and what HTTP/3 is in a post on his Cloudflare blog.

Watch now!
Speaker

Lucas Pardue
Senior Software Engineer,
Cloudflare

Video: 2019 What did I miss? – Introducing Reliable Internet Streaming Transport

By far the most visited video of 2019 was Merrick Ackermans’ review of the first release of RIST. RIST, the Reliable Internet Stream Transport protocol, aims to be an interoperable protocol allowing even lossy networks to be used for mission-critical broadcast contribution. Using RIST can turn a bad internet link into a reliable circuit for live programme material, so it’s quite a game changer in terms of the cost of links.

An increasing amount of broadcast video is travelling over the public internet which is currently enabled by SRT, Zixi and other protocols. Here, Merrick Ackermans explains the new RIST specification which aims to allow interoperable internet-based video contribution. RIST, which stands for Reliable Internet Stream Transport, ensures reliable transmission of video and other data over lossy networks. This enables broadcast-grade contribution at a much lower cost as well as a number of other benefits.

Many of the protocols which do similar jobs are based on ARQ (Automatic Repeat-reQuest) which, as you can read on Wikipedia, allows for recovery of lost data. This is the core functionality needed to bring unreliable or lossy connections into the realm of the usable for broadcast contribution. Indeed, RIST is an interesting merging of technologies from around the industry. Many people use Zixi, SRT and VideoFlow, all of which can allow safe contribution of media – safe meaning it gets to the other end intact and uncorrupted. However, if your encoder only supports Zixi and you use it to deliver to a decoder which only supports SRT, it’s not going to work out. The industry has accepted that these formats should be reconciled into a shared standard. This is RIST.
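The ARQ principle mentioned above is simple to demonstrate: the receiver spots gaps in the sequence numbers and asks only for the missing packets to be resent. The following toy simulation is a sketch of that idea under assumed loss rates and packet contents – it says nothing about RIST’s actual packet format or timers.

```python
# Toy ARQ simulation: detect gaps in sequence numbers, NACK the
# missing packets, and have the sender retransmit just those.

import random

def transmit(packets, loss_rate, rng):
    """Deliver packets over a simulated lossy link as {seq: payload}."""
    return {seq: data for seq, data in packets.items()
            if rng.random() > loss_rate}

def arq_receive(packets, loss_rate, rng, max_retries=50):
    """Receive with ARQ-style recovery until no gaps remain
    (or the retry budget is exhausted)."""
    received = transmit(packets, loss_rate, rng)
    for _ in range(max_retries):
        missing = set(packets) - set(received)  # gaps = lost packets
        if not missing:
            break
        # NACK: request retransmission of only the missing packets
        resent = transmit({s: packets[s] for s in missing},
                          loss_rate, rng)
        received.update(resent)
    return received

rng = random.Random(1)  # fixed seed so the run is repeatable
packets = {seq: f"frame-{seq}".encode() for seq in range(20)}
recovered = arq_receive(packets, loss_rate=0.3, rng=rng)
assert recovered == packets  # every packet recovered despite 30% loss
```

The cost of this reliability is latency: each round of retransmission takes at least one network round trip, which is why ARQ-based contribution protocols run with a bounded buffer rather than retrying forever.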

File-based workflows are mainly based on TCP (Transmission Control Protocol) although, notably, some file transfer services such as Aspera are based on UDP, where packet recovery, not unlike RIST’s, is managed as part of the protocol. TCP is also how websites deliver all their data, with an acknowledgement sent for each packet which arrives. Whilst this is great for ensuring files are uncorrupted, it can impact arrival times, which can lead to live media being disrupted.

RIST is being created by the VSF – the Video Services Forum – who were key in introducing TR-03 and TR-04 into the AIMS group, on which SMPTE ST 2110 was then based. So their move now into a specification for reliable transmission of media over the internet has many anticipating great things. At the point this talk was given, the simple profile had been formed. Whilst Merrick gives the details, it’s worth pointing out that this doesn’t include intrinsic encryption. It can, of course, be delivered over a separately encrypted tunnel but, by contrast, an intrinsic part of SRT is the security provided from within the protocol.

Despite Zixi, a proprietary solution, and Haivision’s open source SRT being in competition, they are both part of the VSF working group creating RIST along with VideoFlow. This is because they see the benefit of having a widely accepted, interoperable method of exchanging media data. This can’t be achieved by any single company alone but can benefit all players in the market.

This talk remains true for the simple profile, which just aims to recover packets. The main profile, as opposed to ‘simple’, has since been released and you can hear about it in a separate video here. It adds FEC, encryption and other aspects. Those who are familiar with the basics may wish to start there.

Speaker

Merrick Ackermans
Chair,
VSF RIST Activity Group