Moving video production to IP has been ongoing for over five years using both SMPTE ST 2022-6 and, now, ST 2110, but we’re still in the ‘Early Adopter’ phase, explains Willem Vermost speaking at SMPTE 2019. Willem is the EBU topic lead for the transition to IP-based studios and is tracking upcoming projects with public broadcasters.
Willem talks about what’s motivating these Early Adopters. In general, he explains, they have a building move project and are faced, as CBC (Canadian Broadcasting Corporation) was, with a choice: be among the last to install an extensive SDI infrastructure – and be stuck with it for seven, ten or more years to come – or be one of the first to use IP. Increasingly, they can’t justify the SDI workflow, and IP, for all its risks and uncertainties, is the way forward.
CBC/Radio-Canada needs to be on air in 2020, so they put in place a risk-mitigation plan to test all the equipment before installing it. Willem outlines what this test plan looks like and what it covers: AES67, ST 2110-20, -30 and -40, ST 2022-7, EBU R148, security etc. Testing was also brought up by the BBC’s Mark Patrick when he discussed his work in bringing the BBC’s Cardiff Square building on-air. They found that automated testing was key to project delivery, keeping tests quick and consistent and ensuring that software/firmware patches were correctly accepted into the project.
Willem talks us through the EBU’s famous Technology Pyramid, which shows to what extent each of the technologies that media-over-IP requires has been defined and adopted by the industry. It shows that while the media aspects have been successfully deployed, there is still a lot to do in, for example, security.
Difficulties arose due to differing interpretations of the standards. To aid diagnosis of such issues, the LIST project, created within the EBU, has produced a ST 2110 analysis tool and other related tools. Willem highlights some key parts of what it does, shows how it connects with the automated test programs and explains the underlying structure of how the software is built.
The talk finishes with mention of the JT-NM test plan, a summary and questions led by Arista’s Gerard Phillips.
RIST solves a problem by transforming unmanaged networks into reliable paths for video contribution in an interoperable way. RIST not only improves reliability by re-requesting missing packets, but also comes with a range of features and tools, not least of which is tunnelling. Cobalt Digital’s EVP of Engineering, Ciro Noronha, explains how the protocol works and what’s next on the roadmap.
Ciro starts with a look at the RIST Simple Profile, covering the ARQ negative acknowledgement (NACK) mechanism, link bonding and seamless switching. He then moves on to examine the missing features such as content encryption, authentication, simpler firewall configurations, in-band control, high bitrates and NULL packet extraction. These features define RIST’s Main Profile.
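To make the ARQ idea concrete, here is a minimal sketch of the receiver-side logic behind a NACK-based scheme. This is illustrative only: real RIST carries NACKs in RTCP packets and handles sequence-number wrap-around, which this toy version ignores.

```python
# Hedged sketch of gap detection in the spirit of RIST's ARQ (Simple Profile).
# We only model which RTP sequence numbers a receiver would re-request;
# the real protocol sends these as RTCP NACK messages.

def detect_missing(received_seqs, highest_seen):
    """Return the sequence numbers the receiver should NACK.

    received_seqs: set of RTP sequence numbers seen so far
    highest_seen:  highest sequence number received (no wrap handling here)
    """
    return [s for s in range(highest_seen + 1) if s not in received_seqs]

received = {0, 1, 2, 5, 6}        # packets 3 and 4 were lost in transit
nacks = detect_missing(received, 6)
print(nacks)                      # -> [3, 4]
```

The sender keeps recently transmitted packets in a buffer and retransmits any sequence number it is NACKed for, which is why the retransmission buffer depth bounds how much loss the link can recover from.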
Tunnelling and multiplexing combine Simple Profile flows into a bi-directional tunnel, providing simpler network and encryption configuration. Using a GRE (RFC 8086) tunnel, RIST provides both a full, protocol-agnostic tunnel and a UDP-only reduced-overhead mode which needs only around 0.6% data overhead. Ciro explains a number of setups, including one where the connection is initiated by the receiver – something the Simple Profile doesn’t allow.
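A quick back-of-the-envelope check shows where a figure of around 0.6% can come from. The payload size and per-packet header cost below are assumptions for illustration, not numbers from the talk: a typical TS-over-IP payload of seven 188-byte MPEG-TS packets, and roughly 8 bytes of extra header per packet once the inner IP/UDP headers are stripped in reduced-overhead mode.

```python
# Illustrative arithmetic only; the 1316-byte payload and 8-byte per-packet
# cost are assumptions, not taken from the RIST specification or the talk.

TS_PACKET = 188
payload = 7 * TS_PACKET          # 1316 bytes, the usual TS-over-IP payload
extra_header = 8                 # assumed per-packet cost in reduced mode

overhead_pct = 100 * extra_header / payload
print(f"{overhead_pct:.1f}%")    # -> 0.6%
```

A full GRE tunnel that also carries the inner IP and UDP headers would cost several times more per packet, which is the trade-off the reduced-overhead mode is designed to avoid.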
Authentication and encryption are covered next. DTLS is the UDP counterpart of TLS, the security mechanism used on secure websites. It secures the tunnel itself, so everything travelling through it is protected. Ciro also explains the pre-shared key (PSK) mechanism in the Main Profile.
The talk finishes by covering NULL packet removal, also known as ‘bandwidth optimisation’, the header extension which extends RTP’s sequence number to allow for more in-flight packets, and questions from the audience.
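The sequence-number extension matters because RTP’s native 16-bit counter wraps at 65,536, which limits how many packets can be unambiguously in flight on a high-bitrate link. The sketch below shows the general idea of widening the number with a rollover count; the actual wire format RIST uses differs, so treat this purely as a model of the concept.

```python
# Hedged sketch: map 16-bit RTP sequence numbers to 32-bit extended ones by
# tracking wrap-arounds. Not RIST's actual header-extension format.

class ExtendedSeq:
    def __init__(self):
        self.rollovers = 0
        self.last = None

    def extend(self, seq16):
        """Return a 32-bit extended sequence number for a 16-bit input."""
        if self.last is not None and seq16 < self.last and self.last - seq16 > 0x8000:
            self.rollovers += 1       # sequence number wrapped past 65535
        self.last = seq16
        return (self.rollovers << 16) | seq16

ext = ExtendedSeq()
for s in (65534, 65535, 0, 1):
    print(ext.extend(s))
# -> 65534, 65535, 65536, 65537
```

With 32 bits of effective sequence space, the retransmission buffer can hold far more packets without two outstanding packets ever sharing a number.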
Open source software can be found powering streaming solutions everywhere. Veterans of the industry on this panel at Streaming Media West give us their views on how to successfully use open source in on-air projects whilst minimising risk.
The Streaming Video Alliance’s Jason Thibeault starts by finding out how much the panellists and their companies use open source in their work and expands upon that to ask how much the support model matters. After all, some projects offer paid support for free software, whereas others have free community-provided support. The feeling is that it really depends on the community: is it large and is it active? Not least of the considerations is that, in a corporate setting, if the community is quick to accuse, is it right to ask your staff to wade through layers of ‘you’re a newbie’ and other types of pushback each time they need an answer?
Another key question is whether we should give back to the open source community and, if so, how. The panel discusses the difficulties in contributing code but also covers the importance of other ways of contributing – particularly when the maintainer is one individual. Contributing money is an obvious, but often forgotten, way to help, yet writing documentation is also really helpful, as is helping out on the support forums. This all makes for a vibrant community and increases the chances that other companies will adopt the project into their workflows…which then makes the community all the stronger.
With turn-key proprietary solutions ready to be deployed, Jason asks whether open source actually saves money on the occasions when you can, indeed, find a proprietary solution that fits your requirements.
Lastly, the panel talks about the difficulty of balancing adherence to standards with the speed at which open source communities can move. They can quickly deliver the full extent of the standard to date and then move on to fixing problems the developing standard hasn’t yet addressed. Whilst this is good, they risk implementing things in ways which may cause issues later, when the standard finally catches up.
The panel session finishes with questions from the audience.
There are two phases to reducing streaming latency. One is to optimise the system you already have; the other is to move to a new protocol. This talk looks at both approaches: achieving parity with traditional broadcast media through optimisation, and going ‘better than’ broadcast by using CMAF.
In this video from the Northern Waves 2019 conference, Koen van Benschop from Deutsche Telekom examines the significant, low-cost latency savings you can achieve by optimising your current HLS delivery. With Apple’s original recommendation of 10-second chunks, there are still many services out there starting from a very high latency, so there are savings to be had.
Koen explains how the total latency is made up, looking at the decode, encode, packaging and other latencies. We quickly see that the player buffer is one of the largest contributors, the second being the encode latency. We explore the pros and cons of reducing these and see that the overall latency can fall to, or even below, traditional broadcast latency depending, of course, on which type (and which country’s) you are comparing it to.
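The budgeting exercise Koen walks through can be sketched as a simple sum of per-stage delays. The component values below are made-up but plausible illustrations, not the talk’s actual figures; the point is that the player buffer dominates, which is why shrinking segment duration and buffer depth yields the biggest savings.

```python
# Illustrative HLS latency budget with assumed values (not from the talk).

budget = {
    "encode":        1.5,   # seconds, assumed
    "package":       2.0,   # one segment duration, assuming 2s segments
    "cdn/transport": 0.5,
    "player buffer": 6.0,   # e.g. three 2s segments buffered, assumed
}

total = sum(budget.values())
print(f"total = {total:.1f}s")   # -> total = 10.0s
largest = max(budget, key=budget.get)
print(largest)                   # -> player buffer
```

Halving the segment duration in this toy model cuts both the packaging stage and the buffer, which is exactly the lever low-latency HLS tuning pulls first.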
While optimising HLS/DASH gets you down to a few seconds, there’s a strong desire for some services to beat that. Whilst the broadcasters themselves may be reticent to do this, not wanting to deliver online services quicker than their over-the-air offerings, online sports services such as DAZN can make latency a USP and deliver better value to fans. After all, DAZN and similar services benefit from low-second latency as it helps bring them in line with social media which can have very low latency when it comes to key events such as goals and points being scored in live matches.
Stefan Arbanowski from Fraunhofer leads us through CMAF, covering what it is, the upcoming second edition and how it works. He covers its ability to use .m3u8 (from HLS) and .mpd (from DASH) playlist/manifest files and that it is based on fragmented MP4 (fMP4), itself built on ISO BMFF. One benefit inherited from DASH is its Common Encryption standard, which lets it work with PlayReady, FairPlay and other DRMs.
Stefan then takes a moment to consider WebRTC. Given it proposes latency of less than one second, it can sound like a much better idea. Stefan outlines concerns he has about the ability to scale above 200,000 users. He then turns his attention back to CMAF and outlines how the stream is composed and how the player logic works in order to successfully play at low latency.