Video: Performance Measurement Study of RIST


RIST solves a problem by transforming unmanaged networks into reliable paths for video contribution. It comes amid increasing interest in using the public internet to contribute video and audio: partly because it is cheaper than dedicated data circuits, partly because the internet is accessible from so many locations that it is convenient, but also because, when feeding cloud-based streaming platforms, the internet is, by definition, part of the signal path.

Packet loss and packet delay are common on the internet and there are only two ways to compensate for them. One is Forward Error Correction (FEC), which permanently increases your bandwidth, by up to 25%, so that your receiver can calculate which packets went missing and re-insert them. The other is for your receiver to ask for the missing packets to be sent again.
RIST joins a number of other protocols in using the re-request method of adding resilience to streams, which has the benefit of only increasing the bandwidth needed when re-requests actually occur.
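The re-request idea can be sketched in a few lines. This is an illustrative model only – real RIST carries its loss reports in RTCP feedback packets, and the function here is hypothetical, not part of any RIST implementation:

```python
# Illustrative sketch of NACK-style re-requesting, the resilience
# method RIST uses (actual RIST signals losses via RTCP, not this API).
def find_missing(received_seqs, last_ok, newest):
    """Return RTP sequence numbers in (last_ok, newest] that never arrived."""
    return [s for s in range(last_ok + 1, newest + 1)
            if s not in received_seqs]

# The receiver saw packets 100, 101 and 104, so it would NACK 102 and
# 103, asking the sender to retransmit only those two packets -
# bandwidth is spent on recovery only when loss actually happens.
missing = find_missing({100, 101, 104}, last_ok=99, newest=104)
```

Contrast this with FEC, where the repair data is sent continuously whether or not any packets are lost.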

In this talk, Ciro Noronha from Cobalt Digital explains that RIST is an attempt to create an interoperable protocol for reliable live streaming – one which works with any RTP stream. Protocols like SRT and Zixi are, to one extent or another, proprietary – although it should be noted that SRT is open source and hence should have a base level of interoperability. RIST takes interoperability one stage further by creating a specification, the first of which is TR-06-1, also known as ‘Simple Profile’.

We then see the basics of how the protocol works and how it uses RTCP for signalling. Furthermore, RIST’s support for bonding is explored, along with the impact of packet reordering on stream performance.

The talk finishes with a look at what’s to come, in particular encryption – an important feature which SRT currently offers over and above reliable transport.
Watch now!

To dig into SRT, check out this talk from Chris Michaels
For more on RIST, have a look at Kieran Kunhya’s talk and Rick Ackerman’s introduction to RIST.

Speaker

Ciro Noronha
Director of Technology, Compression Systems,
Cobalt Digital

Video: AV1/VVC Update

AV1 and VVC are both new codecs on the scene. Codecs touch our lives every day, both at work and at home: they are the only way anyone receives audio and video online and on television. Altogether, then, they’re pretty important, and finding better ones generates a lot of opinion.

So what are AV1 and VVC? VVC is one of the newest codecs on the block and is undergoing standardisation in MPEG. VVC builds on the technologies standardised by HEVC but adds many new coding tools. The standard is likely to enter its draft phase before the end of 2019, resulting in it being officially standardised around a year later. For more info on VVC, check out Bitmovin’s VVC intro from Demuxed.

AV1 is a new but increasingly well-known codec, famous for being royalty-free and backed by Netflix, Apple and many other big hyperscale players. There have been reports that, though no royalty is levied on it, patent holders have still approached big manufacturers to discuss financial reimbursement, so its ‘free’ status is a matter of debate. Whilst there is a patent defence programme, it is not known whether it is sufficient to insulate larger players. Much further on than VVC, AV1 has already had a code freeze, and companies such as Bitmovin have been working hard to reduce its encode times – widely known to be very long – and create live services.

Here, Christian Feldmann from Bitmovin gives us the latest status of AV1 and VVC. Christian discusses AV1’s tools before moving on to VVC’s, pointing out the similarities between them. Whilst AV1 is already being supported in well-known browsers, VVC is only at the beginning of that journey.

There’s a look at the licensing status of each codec, followed by an introduction to EVC – Essential Video Coding – which has a royalty-free baseline profile and so is of interest to many. Christian also shares results from a Technicolor experiment.

Speakers

Christian Feldmann
Codec Engineer,
Bitmovin

Video: QUIC in Theory and Practice


Most online video streaming uses HTTP to deliver the video to the player in the same way web pages are delivered to the browser. So QUIC – the transport underpinning the next version of HTTP – will affect us both professionally and personally.

This video explains how HTTP works and takes us on the journey to seeing why QUIC (whose HTTP mapping will eventually be called HTTP/3) speeds up the process of requesting and delivering files. Simply put, there are ways to reduce the number of times messages have to pass between the player and the server, which reduces overall overhead. But one big win is the move away from TCP to UDP.
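A rough round-trip count shows where the saving comes from. The model below is an assumed simplification (fresh connection, no session resumption, TLS 1.2 for the TCP case), not measured figures from the talk:

```python
# Simplified setup-cost model: a TCP handshake (1 RTT) plus TLS 1.2
# negotiation (2 RTTs) happen before the first HTTP request can be
# sent, whereas QUIC folds transport and TLS 1.3 setup into a single
# round trip.
def time_to_first_byte_ms(setup_rtts, rtt_ms):
    """Setup round trips plus one more RTT for the request itself."""
    return (setup_rtts + 1) * rtt_ms

RTT = 50  # an assumed 50 ms network round trip
https_over_tcp = time_to_first_byte_ms(3, RTT)   # 200 ms before data
http3_over_quic = time_to_first_byte_ms(1, RTT)  # 100 ms before data
```

Fewer round trips before the first byte is exactly the kind of message-count reduction described above.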

Robin Marx delivers these explanations with reference to superheroes and very clear diagrams, making this low-level topic pleasantly accessible and interesting.

There are plenty of examples showing easy-to-see gains in website speed using QUIC over both HTTP/1.1 and HTTP/2, but QUIC’s worth in the realm of live streaming is not yet clear. There are studies showing it makes streaming worse, but also ones showing it helps. Video players contain a lot of logic and are the result of much analysis, so it wouldn’t surprise me at all to see the state of the art move forward, for players to optimise for QUIC delivery, and then for all tests to show an improvement with QUIC streaming.

QUIC is coming, one way or another, so find out more.
Watch now!

Speaker

Robin Marx
Web Performance Researcher,
Hasselt University

Video: Optimizing ABR Encode, Compute & Control for Performance & Quality

Adaptive bitrate (ABR) is vital for the effective delivery of video to the home, where bandwidth varies over time. It requires creating several different renditions of your content at various bitrates, resolutions and even frame rates. These multiple encodes put a computational burden on the transcode stage.

Lowell Winger explains ways of optimising ABR encodes to reduce the computation needed to create these different versions. He explains ways to take encoding decisions from one rendition and reuse them in the other encodes. This has the benefit that decisions made on high-resolution versions – where there is plenty of detail to inform them – can guide the low-resolution encodes, where the same decisions would otherwise be made with far less information.
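As a hedged sketch of the general idea (the helper and its scaling rule are illustrative, not the actual encoder logic from the talk), a motion vector found at high resolution can seed the search at a lower resolution:

```python
# Reuse a motion vector from a 1080p encode as a search hint for a
# 540p rendition, avoiding a full motion search at the lower resolution.
def scale_motion_vector(mv, src_res, dst_res):
    """Scale an (x, y) motion vector between resolutions given as (w, h)."""
    return (mv[0] * dst_res[0] / src_res[0],
            mv[1] * dst_res[1] / src_res[1])

# A vector of (16, -8) found at 1920x1080 maps to (8, -4) at 960x540;
# the low-res encoder need only refine around this point.
hint = scale_motion_vector((16, -8), (1920, 1080), (960, 540))
```

The saving comes from replacing an expensive search with a cheap refinement around a decision already paid for once.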

This talk is the type of deep dive into encoding techniques that you would expect from the Video Engineering Summit which happens at Streaming Media East.

Watch now!

Speaker

Lowell Winger
Former Senior Director of Engineering,
IDT Inc.

Video: Understanding Video Performance: QoE is not QoS

Mux’s Justin Sanford explains the difference between Quality of Service and Quality of Experience, the latter being about the entire viewer experience. Justin looks at ‘startup time’, showing that it’s a combination of a number of factors – which can include loading a web page – demonstrating your player’s dependence on the whole ecosystem.

Justin discusses rebuffering and what ‘quality’ means when we talk about streaming. Quality is a combination of encoding quality and resolution, but also of whether the playback judders.

“Not every optimisation is a tradeoff, however startup time vs. rebuffering is a canonical tradeoff.”

Justin Sanford, Mux

Finally, we look at ways of dealing with all this, including gathering analytics, standards for measuring quality of experience, and understanding the types of issues your viewers care most about.
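Two of the metrics such analytics typically track can be computed straightforwardly from player events. The event names and timestamps below are hypothetical, not Mux's actual schema:

```python
# Startup time and rebuffering ratio from a (made-up) player event log.
events = [
    ("play_requested", 1.2),
    ("first_frame", 3.0),
    ("rebuffer_start", 40.0),
    ("rebuffer_end", 42.5),
    ("session_end", 120.0),
]
t = dict(events)  # safe here because each event occurs exactly once

# Startup time: how long the viewer waited for the first frame (~1.8 s).
startup_time = t["first_frame"] - t["play_requested"]

# Rebuffering ratio: stalled time as a fraction of watch time (~2.1%).
rebuffer_ratio = (t["rebuffer_end"] - t["rebuffer_start"]) / (
    t["session_end"] - t["first_frame"])
```

The canonical tradeoff in the quote above is visible here: buffering more before `first_frame` raises `startup_time` but tends to lower `rebuffer_ratio`.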

From San Francisco Video Tech.

Watch now!

Speaker

Justin Sanford
Product Manager,
Mux

Video: Building a Large OB Truck Using SMPTE ST 2110

OB vans have been notable early adopters of video over IP, both in the form of SMPTE ST 2110 and ST 2022-6. The reasons are simple: all new vans are ‘green field’ sites, weight and space are at a premium, and many need more week-to-week flexibility than SDI has been giving them.

In this case study, Hartmut Opfermann discusses design considerations for all-IP large OB trucks dedicated to sports, music and entertainment production, and explores the decisions made for ORF’s new FU22 OB truck, including the drivers behind switching to IP technology and SMPTE ST 2110 for media transport.

It’s interesting to note the proportion of SDI vs IP in new IP installations. BBC Cardiff, for instance, has a minimum quota for IP-enabled endpoints but isn’t assuming it can reach 100%. There are few IP installations which are 100% IP.

In ORF’s truck we also see that, although the truck is fully based on IP technology, SDI-IP gateways have been provided to keep compatibility with existing baseband infrastructure. Keeping all internal processing in the IP domain simplifies cabling and reduces cable weight but, importantly, enables the use of flexible FPGA-based processing platforms – functionality thus depends on software and can be changed on the fly.

The broadcast control system provides a single point of control over the truck’s complex infrastructure and a seamless experience for operators used to working in the SDI domain. However, configuration and troubleshooting of IP systems require a very different skillset, so training had to be provided to ORF’s engineering team.

Other points discussed in this video include audio channel management, failover of PTP and B&B synchronisation, and IP address management using the JT-NM’s TR 1001-1, which has been covered here on The Broadcast Knowledge before.

Watch now!

Speaker

Hartmut Opfermann
Head of Division Broadcast IT,
BFE Studio und Medien Systeme GmbH

Video: Breaking Barriers – How can the TV industry encourage more women into technology jobs?

Breaking Barriers

To mark the launch, today, of a new section of The Broadcast Knowledge highlighting what the industry is doing to promote a better gender balance in the broadcast industry, we have a panel discussion from the RTS about that very topic.

I’ve said it before, and again I implore everyone to take it upon yourself to do just one thing, big or small, to improve gender diversity. The numbers clearly show a large imbalance in technology and, according to Rise director Carrie Wootten, research shows that “having a more gender balanced structure leads to additional ideas, creativity, business development and crucially income generation.”

With experienced voices from UKTV, TeenTech, space scientist Dr Maggie Aderin-Pocock, NEP sound engineer Anna Patching and the deputy chair of Women in Film and Television, we hear questions and answers about how companies can find female candidates and how individuals can advance their careers.

The message is that there are things people throughout a company can do to address gender balance, so watch to find out more.

Watch now!

Speakers

Chair: Maggie Philbin
CEO,
TeenTech
Sinead Greenaway
Chief Technology and Operations Officer,
UKTV
Dr Maggie Aderin-Pocock
Space Scientist, Science Educator & Presenter
Anna Patching
Sound Engineer & STEM ambassador
NEP
Sara Putt
Deputy Chair,
Women in Film & Television (UK)

Webinar: Talking to the TV: Transforming the viewing experience with voice control

Thursday 16th May 2019, 16:00 BST / 11am EDT / 8am PDT

Controlling services by voice is on the rise. Recently we have seen Google move all their Nest hardware control into Google Assistant and the abilities of Alexa and Siri continue to grow. All of these smart speakers and voice-controlled AI assistants have seen rapid adoption in homes, the UK being the biggest adopter with voice assistant devices now used in more than a quarter of all households.

With a shift away from the on-screen EPG and clunky remote controls to a world where any content is a voice command away, who owns the voice interface with the consumer and the vast amount of valuable data it creates? Does this put more power in the hands of the Silicon Valley tech giants as their voice assistants and AI algorithms become a new gatekeeper? And how should content owners respond?

This webinar explores the value of voice control for content, and finds the best strategies for broadcasters and platform operators to develop voice interfaces and maintain control of the user experience.

Register now!

Speakers

Patrick Byrden
Senior Director of Customer Solutions,
TiVo
Ashley Grossman
Senior Manager, Personalisation & Discovery,
Liberty Global
Morvarid Kashanipour
Head of Product Design,
Com Hem

Video: Sub-Second Live Streaming: Changing How Online Audiences Experience Live Events

There are two main modern approaches to low-latency live streaming. One is CMAF, which uses fragmented MP4s to allow frame-by-frame delivery of chunks of data. Being similar to HLS, this is becoming a common ‘next step’ for companies already using HLS. Keeping the chunk size down reduces latency, but it remains doubtful whether sub-second streaming is practical with it in real-world situations.
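A back-of-envelope model suggests why chunked delivery struggles to go sub-second. All numbers here are assumptions for illustration, not measurements from the talk:

```python
# End-to-end latency roughly scales with chunk duration times the
# number of chunks buffered along the chain, plus fixed costs for
# encoding and network transit (all values assumed).
def chunked_latency_s(chunk_s, buffered_chunks, encode_s, network_s):
    return chunk_s * buffered_chunks + encode_s + network_s

typical = chunked_latency_s(1.0, 3, 0.3, 0.2)     # 3.5 s glass-to-glass
aggressive = chunked_latency_s(0.2, 3, 0.3, 0.2)  # still ~1.1 s
# Even tiny chunks leave the chain above a second, which is why
# sub-second targets push people towards streamed protocols like WebRTC.
```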

Steve Miller-Jones from Limelight explains the WebRTC solution to this problem. Being a protocol which streams directly from source to destination, WebRTC is capable of sub-second latency and seems a better fit. Limelight differentiates itself by offering a scalable WebRTC streaming service with adaptive bitrate (ABR). ABR is traditionally not available with WebRTC, and Steve uses this as an example of where Limelight is helping the technology achieve its true potential.

Comparing and contrasting Limelight’s solution with HLS and CMAF, we see the benefits of WebRTC and that it’s equally capable of supporting features like encryption, geo-blocking and the like.

Ultimately, the importance of latency and the scalability you require may be the biggest factor in deciding which way to go with your sub-second live streaming.

Watch now!

Speakers

Steve Miller-Jones
VP Product Strategy,
Limelight Networks

Video: Securing NMOS Apps

The still-growing NMOS suite of specifications from AMWA defines ways in which your IP network can find and register new devices plugged into it (e.g. cameras, microphones etc.), manage their connections and control them. They fit neatly alongside the SMPTE ST 2110 suite of standards, which defines the way that the essences (video, audio, metadata) are sent over networks intended for professional media.

As such, they are core to a network, and as the market for uncompressed media products matures, attention turns to details such as scalability and security.

In this talk, Simon Rankine from BBC R&D starts by explaining the objectives, which means looking at the different aspects of security. These split into three: securing the data transfer, ensuring data goes to the right place, and ensuring only authorised people can act.

TLS, standing for Transport Layer Security, is the same protocol used for secure websites – those which start with https://. It is also often referred to by the name of the protocol it replaced, SSL. Given that the NMOS APIs are carried over HTTP, TLS is a perfect match for the use case. TLS provides not only the ability to encrypt the connection but also the basis for certificate exchange, which allows us to trust that the data is being sent to the right place. Simon then covers ciphers and TLS versions before talking about certificate management.
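Because the NMOS APIs are plain HTTP APIs, securing a call to one is standard HTTPS client work. Here is a minimal sketch using Python's standard library; the registry URL in the comment is hypothetical:

```python
import ssl

# Client-side TLS settings in the spirit of the talk: verify the
# server's certificate and require a modern protocol version.
ctx = ssl.create_default_context()  # certificate verification on by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.check_hostname = True           # the cert must match the host we dialled

# An IS-04 query against a (hypothetical) registry would then be, e.g.:
# urllib.request.urlopen(
#     "https://registry.example/x-nmos/query/v1.3/nodes", context=ctx)
```

Certificate verification is what delivers the "data goes to the right place" objective; encryption alone does not.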

This talk was given at the IP Showcase at NAB 2019.

Watch now!

Speaker

Simon Rankine
Research Engineer,
BBC R&D