Video: Current Status of ST 2110 over 25 GbE

IT still has catching up to do. The promise of video over IP and ST 2110 is to benefit from the IT industry’s scale and products, but when it comes to bandwidth, the IT market doesn’t always offer what broadcast needs. This talk looks at 25 gigabit (25GbE) network interfaces to see how well they work and whether they’ve arrived on the broadcast market.

Koji Oyama from M3L Inc. explains why the move from 10GbE to 25GbE makes sense: it allows more scalability with fewer cables. He then looks at the physical characteristics of the signals, both as standalone 25GbE links and as lanes aggregated into a 100GbE path.
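
To put the scalability argument in numbers, here is some rough arithmetic of my own (not figures from the talk), assuming uncompressed HD carried as ST 2110-20 and ignoring RTP/IP and ancillary overhead:

# Rough streams-per-port arithmetic for uncompressed ST 2110-20 video.
# Illustrative assumptions: active picture payload only, no RTP/IP
# overhead, no audio or ancillary essence.

def active_video_bps(width, height, fps, bits_per_pixel):
    """Payload bit rate of the active picture in bits per second."""
    return width * height * fps * bits_per_pixel

# 1080p59.94 with 4:2:2 10-bit sampling = 20 bits per pixel
hd = active_video_bps(1920, 1080, 60000 / 1001, 20)

for name, link_bps in [("10GbE", 10e9), ("25GbE", 25e9)]:
    print(f"{name}: ~{int(link_bps // hd)} HD streams per port "
          f"at {hd / 1e9:.2f} Gb/s each")

Real-world capacity is lower once packet overhead and headroom are included, but the ratio between the two link speeds is the point.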

We see that the connectors and adapters are highly similar and then look at a cost analysis. What’s actually available on the market now and what is the price difference? Koji also shows us that FPGAs are available with enough capacity to manage several ports per chip.

So if the cost seems achievable, perhaps the decision should come down to reliability. Fortunately, Koji has examined the bit error rates and shows data indicating that Reed-Solomon protection, known as RS-FEC, is needed. Reed-Solomon is a well-proven protection scheme which has been used in CDs, satellite transmission and many other places where a lightweight algorithm for error recovery is needed. Koji goes into some detail here explaining RS-FEC for 25GbE.
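
For context, and in more detail than the talk itself gives: the RS-FEC defined for 25GbE in IEEE 802.3 (Clause 108) is an RS(528,514) code over 10-bit symbols. A quick sketch of what those parameters mean:

# RS-FEC for 25GbE (IEEE 802.3 Clause 108): RS(528,514) with
# 10-bit symbols. Arithmetic sketch of what the code buys you.

n, k, symbol_bits = 528, 514, 10

parity = n - k                # 14 parity symbols per codeword
correctable = parity // 2     # corrects up to 7 bad symbols
overhead = parity / k         # added redundancy

print(f"codeword: {n * symbol_bits} bits "
      f"({k * symbol_bits} data + {parity * symbol_bits} parity)")
print(f"corrects up to {correctable} of {n} symbols per codeword")
print(f"overhead: {overhead:.1%}")

A codeword is 5,280 bits, of which just 140 are parity, yet that is enough to correct any seven corrupted symbols per codeword at under 3% overhead.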

Koji has also looked into timing, covering synchronisation as well as jitter and wander. He presents the results of monitoring these parameters in 10GbE and 25GbE scenarios.

Finishing up by highlighting the physical advantages of moving to 25GbE, such as density and streams-per-port, Koji takes a moment to point out the many 25GbE products on show at NAB as final proof that 25GbE is increasingly available for use today.

Watch now!

Copy of the presentation

Speaker

Koji Oyama
Director,
M3L

Video: Making Live Streaming More ‘Live’ with LL-CMAF

Squeezing streaming latency down to just a few seconds is possible with CMAF. Bitmovin guides us through what’s possible now and what’s yet to come.

CMAF represents an evolution of the tried and tested technologies HLS and DASH. With massive scalability and built upon the well-worn tenets of HTTP, Netflix and a whole industry were born and are thriving on these still-evolving technologies. But the push to reduce latency further and further has resulted in CMAF, which can be used to deliver streams with five to ten times lower latencies.

Paul MacDougall is a Solutions Architect with Bitmovin, so is well placed to explain the application of CMAF. Starting with a look at what we mean by low latency, he shows that it’s still quite possible to find HLS latencies of up to a minute, though latencies closer to 30 seconds are now more common. Five seconds is the golden target, matching many broadcast mechanisms including digital terrestrial television, so it’s no surprise that this is where low-latency CMAF is aimed.

CMAF itself is simply a format which unites HLS and DASH under one standard. It doesn’t, in and of itself, mean your stream will be low latency. In fact, CMAF was born out of MPEG’s MP4 standard, officially called ISO BMFF. But you can use CMAF in a low-latency mode, which is what this talk focusses on.
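
To make the format concrete, here’s a minimal sketch of my own (not from the talk) that lists the top-level ISO BMFF boxes in a CMAF segment. In a low-latency segment you would see repeating moof/mdat pairs, one per chunk, each of which can be shipped to the player as soon as it’s written:

import struct

def list_boxes(path):
    """Print the top-level ISO BMFF box types and sizes in a file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size == 1:  # 64-bit extended size follows the header
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)  # skip the rest of the box body
            else:
                f.seek(size - 8, 1)
            print(box_type.decode("ascii", "replace"), size)

list_boxes("segment.cmfv")  # hypothetical segment file name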

Paul looks at what makes up the latency of a typical feed, discussing encoding times, playback latency and the other key contributors. With this groundwork laid, it’s time to look at the way CMAF is chunked and formatted, showing that the smaller chunk sizes allow the encoder and player to be more flexible, reducing several types of latency down to only a few seconds.
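
The player’s buffer alone shows why chunk size matters; a back-of-envelope comparison with illustrative numbers of my own, not Paul’s:

# Back-of-envelope player buffer comparison (illustrative numbers,
# not figures from the talk).

# Classic HLS: the player typically buffers a few whole segments.
segment_s, buffered_segments = 6, 3
classic_buffer = segment_s * buffered_segments   # 18 s before other delays

# LL-CMAF: segments arrive as smaller chunks, so the player can hold
# a few chunks instead of whole segments.
chunk_s, buffered_chunks = 0.5, 4
ll_buffer = chunk_s * buffered_chunks            # 2 s

print(f"classic HLS buffer: ~{classic_buffer} s")
print(f"LL-CMAF buffer:     ~{ll_buffer} s")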

In order to take full advantage of CMAF, the player needs to understand it, and Paul explains these adaptations before moving on to the limitations and challenges of using CMAF today. One important change, for instance, concerns adaptive bitrate (ABR) switching. Segment-based streaming players (such as HLS players) have always timed the download of each segment to get a feel for whether bandwidth was plentiful (the download was quicker than the time taken to play the segment) or constrained (the segment arrived slower than real-time). Based on this, the player could choose to increase or decrease the bandwidth of the stream it was accessing which, in HLS, means requesting segments from a different playlist. But with small chunks delivered by real-time transfer techniques such as HTTP/1.1 Chunked Transfer, each chunk arrives at the speed it is produced rather than the speed the link could sustain. This makes it very hard to make ABR work for LL-CMAF, though there are approaches being tested and trialled that aren’t mentioned in the talk.
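
A small sketch of my own (illustrative numbers) shows why the traditional measurement stops working:

# Why segment-timed bandwidth estimation breaks with chunked transfer.
# Illustrative numbers only.

link_mbps, encode_mbps, segment_s = 50, 5, 6
segment_bits = encode_mbps * 1e6 * segment_s

# Classic HLS: the whole segment already exists on the server,
# so it downloads as fast as the link allows.
download_s = segment_bits / (link_mbps * 1e6)
print(f"whole segment: {segment_bits / 1e6 / download_s:.0f} Mb/s measured")

# LL-CMAF at the live edge: chunks are sent as the encoder emits them,
# so arrival is paced by the encoding rate, not the link rate.
arrival_s = segment_s
print(f"chunked, live edge: {segment_bits / 1e6 / arrival_s:.0f} Mb/s measured")

The first measurement reveals the full 50 Mb/s of capacity; the second reads 5 Mb/s, the encoding bitrate, telling the player nothing about whether it could safely switch up.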

Watch now!

Speaker

Paul MacDougall
Solutions Architect,
Bitmovin

Video: AV1 in video collaboration

AV1 is famous for its promise to deliver better compression than HEVC, but also for being far from real-time. This talk includes a demonstration of the world’s first real-time AV1 video call, showing that speed improvements are on the way and, indeed, some have arrived.

Encoding is built from ‘tools’: where you might hear of ‘H.264’ or ‘MPEG-2’, these are names for whole sets of different ways of looking at, and squeezing down, a picture. They also encompass the rules for how those tools should act together to form a cohesive encoding mechanism. (To an extent, such codecs tend to define only how the decode should happen, leaving encoding open to innovation.) AV1 contains many tools, a number of them complex enough to require a lot of time even from today’s fast computers.

Cisco’s Thomas Davies, who created the BBC’s Dirac codec, now standardised as SMPTE’s VC-2, points out that whilst these tools are complex, AV1 has a lot of them, and this diversity of choice is actually a benefit for speed, in particular for the speed of software codecs.

After demonstrating the latency and bandwidth benefits of their live, bi-directional AV1 implementation against AVC, Thomas looks at the deployment possibilities of AV1. The talk finishes with a summary of the benefits AV1 brings, summing up why this new effort from the Alliance for Open Media is worth it.

Watch now!

Speaker

Thomas Davies
Principal Engineer,
Cisco Media Engineering, UK

Video: Broadcast and OTT monitoring: The challenge of multiple platforms

Is it possible to monitor OTT services to the same standard as traditional broadcast services? How can they be visualised, what are the challenges and what makes monitoring streaming services different?

As with traditional broadcast, some broadcasters outsource the distribution of their streaming services to third parties. Whilst this can work well, a channel would be missing a huge opportunity if it didn’t also gather analytics from viewers using its streaming service, so to some extent a broadcaster always wants to look at the whole chain. Even when distribution is not outsourced and the OTT system has been developed and is run by the broadcaster, at some point a third party will have to be involved, typically the CDN and/or edge network. A broadcaster would do well to monitor the video at all points through the chain, right up to the edge.

The reason for monitoring is to keep viewers happy and, by doing so, reduce churn. When analytics from a player tell you something isn’t right, it’s only natural to want to find out what went wrong, and to know that, you need monitoring in your distribution chain. With that monitoring in place, you can be much more proactive in resolving issues and improve your service overall.

Jeff Herzog from Verizon Digital Media Services explains ways to achieve this and the benefits it can bring. After a primer on HLS streaming, he looks at how to monitor the video itself and also how to monitor everything but the video, as a light-touch monitoring solution.

Jeff explains that because HLS is based on playlists and on files being available, you can learn a lot about your service just by monitoring these small text files: parsing them and checking that all the files they mention are available with minimal wait times. By doing this and other tricks, you can successfully gauge how well your service is working without the difficulty of dealing with large volumes of video data. The talk finishes with some examples of what this monitoring can look like in action.
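
As a flavour of the playlist approach, here’s a minimal sketch of the idea (my own illustration, not Verizon’s implementation), using a hypothetical playlist URL:

import time
import urllib.request
from urllib.parse import urljoin

def check_media_playlist(url):
    """Fetch an HLS media playlist and confirm every segment it
    references responds, without downloading any video data."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as resp:
        playlist = resp.read().decode("utf-8")
    print(f"playlist fetched in {time.monotonic() - start:.3f} s")

    for line in playlist.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip tags and blank lines; bare lines are segment URIs
        seg_url = urljoin(url, line)
        request = urllib.request.Request(seg_url, method="HEAD")
        try:
            with urllib.request.urlopen(request, timeout=5) as seg:
                print("OK  ", seg.status, seg_url)
        except Exception as err:
            print("FAIL", seg_url, err)

check_media_playlist("https://example.com/live/stream.m3u8")  # hypothetical URL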

This talk was given at the SMPTE Annual Technical Conference 2018.
For more OTT videos, check out The Broadcast Knowledge’s YouTube OTT playlist.
Speaker

Jeff Herzog
Senior Product Manager, Video Monitoring & Compliance,
Verizon Digital Media Services