Video: IBC2019 SRT Open Source Technical Panel

SRT allows unreliable networks like the Internet to be used for reliable, encrypted video contribution. Created by Haivision and now an open-source technology, SRT has a growing alliance of users as the technology continues to develop and add features. This panel, from IBC 2019, is an update on what’s new with SRT and how it’s being used daily in broadcast.

Marc Cymontkowski starts with an overview of the new features of SRT, mentioning its active GitHub repository and pointing to recent advances in the encryption available, upcoming FEC and the beginnings of SMPTE ST 2022-7-style redundancy. He also looks at how SRT fares against RTMP, the venerable incumbent technology for contribution of streams over the internet. Official support for RTMP will be coming to an end next year, so there is much interest in what may replace it. Marc makes the case that over the same link, SRT tends to have a half to a third of RTMP’s latency and also performs better at higher bitrates.

RTP, the Real-Time Transport Protocol, plays an important role when it comes to redundancy. Because RTP stamps each packet with a sequence number, the receiver can take two identical RTP streams – say, from two separate ISPs – and fill in packets missing from one stream with packets from the other. This is a very powerful way of ensuring reliability over the internet, and Marc makes the point that using SRT doesn’t stop you using RTP.
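
To make the mechanism concrete, here is a minimal sketch of the idea in Python – my own illustration of the SMPTE ST 2022-7 approach, not Haivision’s code. It assumes raw RTP packets and merges two copies of the same stream by the 16-bit sequence number in the RTP header; a real receiver would work packet by packet with a reorder buffer and handle sequence-number wrap-around.

```python
import struct

def rtp_seq(packet: bytes) -> int:
    """Extract the 16-bit big-endian sequence number (bytes 2-3 of the RTP header)."""
    return struct.unpack_from("!H", packet, 2)[0]

def merge_streams(path_a, path_b):
    """Merge two identical RTP streams, e.g. one arriving via each ISP.

    The first copy seen of each sequence number wins, so a packet lost
    on one path is filled in from the copy carried on the other path.
    """
    seen = {}
    for packet in list(path_a) + list(path_b):
        seen.setdefault(rtp_seq(packet), packet)
    # Emit in sequence order (wrap-around at 65536 ignored for brevity)
    return [seen[s] for s in sorted(seen)]
```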

Simen Frostad then takes to the stage to explain why Bridge Technologies has added SRT support and how the SRT Hub will be a very important step forward. Then it’s Leonardo Chaves’ turn; he explains how broadcaster Globo is using SRT to transform its video workflows and reduce OPEX to a third of its satellite costs.

Steve Russell from Red Bee talks about how they use SRT to create new, or lower-cost, circuits and services for their customers. They’re able to use the internet not only for contribution from events but also to safely get video in and out of the cloud.

With these use-cases in mind, the panel opens up to thirty minutes of wide-ranging technical and non-technical questions.

Watch Free Now!
Free registration required
Speakers

Brian Ring
SRT Evangelist,
Ring Digital
Simen Frostad
Chairman & Co-Founder,
Bridge Technologies
Steve Russell
Head of OTT & Media Management Portfolios,
Red Bee Media
Marc Cymontkowski
VP Engineering,
Haivision
Leonardo Chaves
Exec. Manager of New Transmission Technologies,
Globo

Video: Quantitative Evaluation and Attribute of Overall Brightness in a HDR World

HDR has long been heralded as a highly compelling and effective technology: high dynamic range can improve video of any resolution and much better mimics the natural world. HDR’s move into real-world use remains relatively slow, but it continues to show progress.

HDR is so compelling because it can feed our senses more light and it’s no secret that TV shops know we like nice, bright pictures on our TV sets. But the reality of production in HDR is that you have to contend with human eyes which have a great ability to see dark and bright images – but not at the same time. The total ability of the eye to simultaneously distinguish brightness is about 12 stops, which is only two thirds of its non-simultaneous total range.
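
As a rough worked example using the figures quoted above (illustrative arithmetic only; the exact values depend on viewing conditions):

```latex
\underbrace{2^{12} \approx 4\,000:1}_{\text{simultaneous range (12 stops)}}
\qquad
12 \div \tfrac{2}{3} = 18 \text{ stops}
\;\Rightarrow\;
\underbrace{2^{18} \approx 260\,000:1}_{\text{total adaptive range}}
```
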
The fact that our eyes constantly adapt and, let’s face it, interpret what they see makes understanding brightness in video tricky. Perceived brightness depends on the overall brightness of the picture at any one moment, the brightness seen in the recent past, the brightness of adjacent parts of the image, the ambient background and much more.

Stelios Ploumis steps into this world of varying brightness to create a way of quantitatively evaluating brightness for HDR. The starting point is the Average Picture Level (APL), which is what the SDR world uses to indicate brightness. With the greater dynamic range in HDR and the way this is implemented, it’s not clear that APL is up to the job.

Stelios explains his work in analysing APL in SDR and HDR and shows how simply taking the average of a picture can trick you into treating two images as practically the same, whereas the eye clearly sees one as brighter than the other. On the same track, he also explains ways in which we can differentiate signals better, for instance by taking into account the spread of the brightness values as opposed to APL’s normalised average of all pixels’ values.
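
To make the distinction concrete, here is a minimal sketch in Python – my own illustration, not Stelios’s method – of APL as a normalised mean alongside a simple spread measure. The two synthetic 10-bit ‘images’ below share the same APL yet clearly differ in how bright they appear.

```python
import numpy as np

def apl(image: np.ndarray, max_code: float = 1023.0) -> float:
    """Average Picture Level: the normalised mean of all pixel code values."""
    return float(image.mean() / max_code)

def spread(image: np.ndarray, max_code: float = 1023.0) -> float:
    """Standard deviation of normalised values - one way to keep the
    brightness distribution that APL's single average throws away."""
    return float(image.std() / max_code)

flat_grey = np.full((1080, 1920), 511.5)            # uniform mid-grey
half_half = np.concatenate([np.zeros((1080, 960)),  # black left half
                            np.full((1080, 960), 1023.0)], axis=1)  # peak-white right half

print(apl(flat_grey), apl(half_half))        # both 0.5: APL can't tell them apart
print(spread(flat_grey), spread(half_half))  # 0.0 vs 0.5: the spread can
```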

The talk wraps up with a description of how the testing was carried out and a summary of the proposals to improve the quantitative analysis of HDR video.

Watch now!
Speakers

Stelios Ploumis
PhD Research Candidate,
MTT Innovation Inc.

Video: Avoiding Traps and Pitfalls When Designing SMPTE 2059-2 Networks

As the industry gains more and more experience in implementing PTP timing systems – profiled for broadcast as SMPTE ST 2059-2 – it’s natural to share the experiences so we can all find the best way to get the job done.

Thomas Kernen is a staff architect at Mellanox with plenty of PTP experience under his belt, so he’s come to the IP Showcase at IBC 2019 to share it.

The talk starts by discussing what good timing actually is and acknowledging the enthusiasm everyone brings into a project for a well-designed, fully functioning system. But, importantly, Thomas then looks at a number of real-world restrictions that come into projects and compromise our ability to deliver a perfect system.

Next, Thomas looks at aspects of a timing strategy to be careful of. The timing strategy outlines how the timing of your system is going to work, whether that means message rates or the management of hierarchy, among many other possibilities; a sketch of how such choices surface in practice follows below.
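
For instance, here is what those choices can look like in a linuxptp ptp4l configuration. The values shown are the commonly quoted SMPTE ST 2059-2 profile numbers; treat them as illustrative and verify against the specification and your equipment.

```ini
[global]
# SMPTE ST 2059-2 commonly uses its own domain and faster message
# rates than the IEEE 1588 default profile.
domainNumber            127
logAnnounceInterval     -2    # 2^-2 s: 4 announce messages per second
logSyncInterval         -3    # 2^-3 s: 8 sync messages per second
logMinDelayReqInterval  -3    # delay requests at the sync rate
announceReceiptTimeout  3     # missed announces before the master is lost
priority1               128   # hierarchy: lower values win the BMCA
priority2               128
```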

The network design itself, of course, has an important impact on your system. This starts with the basic question of whether you build a network which is itself PTP aware. In general, Thomas says, it should be PTP aware; however, for smaller networks, it may be practical to manage without.

Security is examined next: using encrypted transports, access control lists and protected interfaces, with the aim of preventing unintended access, whether over the network or physically. Much of this is standard IT security, but it’s so often ignored that it’s important to point it out.

PTP is a system, not a signal like black and burst, so monitoring is important. How will you know the health of your PTP distribution? You need to monitor on the network side and from the point of view of the devices themselves, but also analyse the timing signals, for instance by comparing the timing between the main and reserve references.
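
As a sketch of that last idea – hypothetical numbers and names, not any particular product – compare offset measurements taken against the main and reserve grandmasters over the same window and alarm when they diverge:

```python
from statistics import mean

def divergence_ns(main_offsets, reserve_offsets):
    """Mean disagreement, in nanoseconds, between offsets measured
    against the main and reserve PTP references over the same window."""
    return abs(mean(main_offsets) - mean(reserve_offsets))

def check_references(main_offsets, reserve_offsets, limit_ns=1000):
    """Alarm if main and reserve disagree by more than limit_ns.

    ST 2059-2 is commonly quoted as targeting device alignment within a
    microsecond, so sustained divergence near that level suggests one of
    the references can no longer be trusted.
    """
    d = divergence_ns(main_offsets, reserve_offsets)
    if d > limit_ns:
        print(f"ALARM: main/reserve divergence of {d:.0f} ns exceeds {limit_ns} ns")
    return d

# e.g. check_references([120, 95, 110], [1180, 1240, 1210]) raises the alarm
```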

Finally, Thomas warns about designing redundancy systems, since “redundancy in PTP doesn’t exist”, and then finishes with some notes on properly completing a PTP project.

Watch now!

Speaker

Thomas Kernen
Staff Architect,
Mellanox Technologies

Video: WAVE (Web Application Video Ecosystem) Update

With a wide membership including Apple, Comcast, Google, Disney, Bitmovin, Akamai and many others, the WAVE interoperability effort is tackling web media encoding, playback and platform issues using global standards.

John Simmons from Microsoft takes us through the history of WAVE, looking at the changes in the industry since 2008 and WAVE’s involvement. CMAF represents an important recent milestone, one entwined with WAVE’s activity and backed by over 60 major companies.

The WAVE Content Specification is derived from the ISO/IEC standard, “Common media application format (CMAF) for segmented media”. CMAF is the container for the audio, video and other content. It’s not a protocol like DASH, HLS or RTMP; rather, it’s more like an MPEG-2 transport stream. There is a lot of interest in CMAF nowadays due to its ability to deliver very low-latency streaming of less than 4 seconds, but it’s also important because it represents a standardisation of fMP4 (fragmented MP4) practices.
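
Since CMAF is a container rather than a protocol, one way to build intuition is to walk the ISO BMFF box structure of a CMAF (fMP4) segment. This is a minimal sketch assuming the standard box layout – a 32-bit big-endian size plus a four-character type – and a hypothetical file name:

```python
import struct

def walk_boxes(data: bytes):
    """Yield (type, size) for each top-level ISO BMFF box in a segment.

    A box starts with a 32-bit big-endian size and a 4-character type;
    size == 1 means a 64-bit 'largesize' follows, size == 0 means the
    box extends to the end of the data.
    """
    offset, end = 0, len(data)
    while offset + 8 <= end:
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size == 1:
            size = struct.unpack_from(">Q", data, offset + 8)[0]
        elif size == 0:
            size = end - offset
        if size < 8:          # malformed box; stop rather than loop forever
            break
        yield box_type.decode("ascii", "replace"), size
        offset += size

with open("segment.cmfv", "rb") as f:      # hypothetical CMAF video segment
    for box, size in walk_boxes(f.read()):
        print(box, size)   # a CMAF fragment typically shows moof then mdat
```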

The idea of standardising on CMAF allows media profiles to be defined which specify how to encapsulate certain codecs (AV1, HEVC etc.) into the stream. Given it’s a published specification, other vendors are able to interoperate. Proof of the value of the WAVE project is the three amendments to the CMAF standard which John mentions MPEG has issued as a direct result of WAVE’s work in validating user requirements.

Whilst defining streaming is important in terms of helping in-cloud vendors work together and allowing broadcasters to more easily build systems, it’s vital that decoder devices are on board too, and much work goes into the decoder-device side of things.

On top of dealing with encoding and distribution, WAVE also specifies HTML5 API interoperability, with the aim of defining baseline web APIs to support media web apps and creating guidelines for media web app developers.

This talk was given at the Seattle Video Tech meetup.

Watch now!
Slides from the presentation
Check out the free CTA specs

Speaker

John Simmons
Media Platform Architect,
Microsoft