Video: Timing Tails & Buffers

Timing and synchronisation have always been fundamental to TV, and as we move to IP we see that timing is just as important. Whilst some digital workflows don’t need to be synchronised against each other, many, such as studio productions, do. However, as we see in this talk from The Broadcast Bridge’s Tony Orme, IP networks make timing all the more variable, and accounting for this is key to success.

To start with, Tony looks at the way OBs, also known as REMIs, are moving to IP and need a timing plane across all of the different parts of the production. We see how synchronisation has traditionally been achieved and the effect of timing problems: not only missed data but also, since all essences are sent separately, synchronisation problems between them can easily creep in.

When it comes to IP timing itself, Tony explains how PTP is used to record the capture time of the media/essences and distribute it through the system. Looking at the data on the wire, the interval between each packet and the last shows a distribution of, hopefully, only a few microseconds’ variation. This variation gives rise to jitter, a varying delay in data arrival; the larger the spread, the more difficult it is to recover the data. To examine this more closely, Tony looks at the reasons for, and the impacts of, congestion, jitter and the reordering of data.
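To make the idea concrete, here is a minimal sketch, not from the talk, of how you might measure inter-arrival jitter from a list of packet arrival timestamps; the timestamps and the nominal 100-microsecond spacing are purely illustrative.

```python
# A minimal sketch (not from the talk) of measuring inter-packet arrival
# jitter: given packet arrival timestamps in seconds, compute the
# inter-arrival intervals and their spread in microseconds.

def interarrival_jitter_us(arrival_times):
    """Return (mean, spread) of inter-arrival intervals in microseconds."""
    intervals = [
        (b - a) * 1e6 for a, b in zip(arrival_times, arrival_times[1:])
    ]
    mean = sum(intervals) / len(intervals)
    spread = max(intervals) - min(intervals)
    return mean, spread

# Hypothetical arrivals: nominally 100 us apart, with a few us of jitter.
arrivals = [0.0, 0.000101, 0.000199, 0.000302, 0.000400]
mean_us, spread_us = interarrival_jitter_us(arrivals)
print(f"mean interval: {mean_us:.1f} us, spread: {spread_us:.1f} us")
```

The spread here is only a few microseconds; the wider it gets, the harder data recovery becomes.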

Bursting, to take one of these as an example, is a much-overlooked issue on networks. While it can occur in many scenarios without undue problems, microbursting can be a major issue and one you have to actively look for. It comes down to how you decide that a data flow is, say, 500Mbps. If an encoder sent data at 1Gbps for 5 minutes and no data for 5 minutes, then over the 10-minute window the average bitrate would be 500Mbps. This clearly isn’t a 500Mbps encoder, but how narrow does your measurement window need to be before you are happy it is, indeed, 500Mbps by all reasonable definitions? Do you need to measure over 1 second? 1 millisecond? Behind microbursting is the tendency of computers to send whatever data they have as quickly as possible; if a computer has a 10GbE NIC, it will send at 10Gbps. What video receivers actually need is well-spaced packets that always arrive a set time apart.
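The measurement-window point can be sketched in a few lines. This toy model, with a hypothetical source that alternates between 1Gbps and silence every millisecond, shows how the reported rate depends entirely on the window you average over.

```python
# A sketch (illustrative, not from the talk) of why the measurement
# window matters: a source alternating 1 Gbps on / off every millisecond
# averages 500 Mbps over long windows but peaks at 1 Gbps over short ones.

def bits_sent(t_ms):
    """Hypothetical bursty source: sends at 1 Gbps in odd milliseconds only."""
    return 1_000_000 if t_ms % 2 == 1 else 0  # 1 Gbps = 1,000,000 bits per ms

def avg_rate_mbps(start_ms, window_ms):
    """Average bitrate in Mbps over a window of whole milliseconds."""
    total = sum(bits_sent(t) for t in range(start_ms, start_ms + window_ms))
    return total / (window_ms * 1000)

print(avg_rate_mbps(0, 10))  # 10 ms window: looks like a 500 Mbps flow
print(avg_rate_mbps(1, 1))   # 1 ms window: the 1 Gbps microburst is visible
```

The same flow reads 500Mbps or 1Gbps depending purely on how narrowly you look.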

Buffers are necessary for IP transmission; in fact, within a computer there are many buffers, so using and understanding them is very important. Tony takes us through the thought process of considering what buffers are and why we need them. With this groundwork laid, their use and potential problems are easier to understand and well illustrated in this talk. For instance, since there are buffers in many parts of the chain between an application sending data to a NIC and its arrival at the destination, the best way to maximise the chances of a deterministic delay in the Tx path is to insert PTP information almost at the point of egress in the NIC rather than in the application itself.
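As an illustration of why a receive buffer helps, here is a minimal playout-buffer sketch, an assumption-laden toy rather than broadcast code: jittered arrivals are re-timed onto a fixed 100-microsecond grid by adding a constant playout delay that must exceed the worst-case jitter.

```python
# A minimal receive-buffer sketch (an illustration, not broadcast code):
# packets arrive with jitter, and a playout buffer re-times them onto a
# fixed 100 us grid by adding a constant playout delay.

PLAYOUT_DELAY_US = 200   # must exceed the worst-case arrival jitter
INTERVAL_US = 100        # the well-spaced cadence receivers want

def playout_times(arrivals_us):
    """Map jittered arrival times onto a fixed playout grid."""
    base = arrivals_us[0] + PLAYOUT_DELAY_US
    return [base + i * INTERVAL_US for i in range(len(arrivals_us))]

arrivals = [0, 104, 197, 309, 398]   # hypothetical jittered arrivals
for arr, play in zip(arrivals, playout_times(arrivals)):
    # a packet is only playable if it arrived before its playout slot
    assert arr <= play, "buffer underrun: increase PLAYOUT_DELAY_US"
    print(f"arrived {arr:4d} us -> played {play:4d} us")
```

The trade-off is clear even in the toy: a larger playout delay absorbs more jitter, at the cost of added latency.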

The talk concludes by looking at buffer fill models and the problems that come with streaming using TCP/IP rather than UDP/IP (or RTP), the latter being the most common.

Watch now!
Download the presentations!

Speakers

Tony Orme
Editor,
The Broadcast Bridge

Video: QoE Impact from Router Buffer sizing and Active Queue Management

Netflix take to the stage at Demuxed to tell us about the work they’ve been doing to understand and reduce latency by looking at the queue management of their managed switches. As Tony Orme mentioned yesterday, we need buffers in IP systems to allow synchronous parts to interact. Here, we’re looking at how the core network fabric’s buffers can get in the way of the main video flows.

Te-Yuan Huang from Netflix explains their work in investigating buffers and how best to use them. She talks about the flows that result from the buffer model of standard switches, i.e. waiting until the buffer is full and then dropping everything else that comes in until the buffer has emptied. There is an alternative, Active Queue Management (AQM); one such scheme, FQ-CoDel, drops packets probabilistically before the buffer is full. By carefully choosing the drop probability, you can actually improve buffer handling and the impact it has on latency.
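To show the flavour of probabilistic early dropping, here is a toy sketch. Note the hedge: this linear ramp is closer to classic RED than to real FQ-CoDel, which keys off how long packets sit in the queue, but it illustrates the principle of dropping before the buffer overflows. The threshold and limit are arbitrary.

```python
import random

# A toy probabilistic early-drop sketch in the spirit of the AQM idea
# described above (closer to RED than to real FQ-CoDel, which uses
# packet sojourn time rather than fill level).

BUFFER_LIMIT = 100

def drop_probability(queue_len):
    """Drop nothing while the queue is short, then ramp up linearly."""
    threshold = BUFFER_LIMIT // 2
    if queue_len < threshold:
        return 0.0
    return (queue_len - threshold) / (BUFFER_LIMIT - threshold)

def enqueue(queue, packet, rng=random.random):
    """Admit a packet, or drop it early with probability drop_probability."""
    if len(queue) >= BUFFER_LIMIT or rng() < drop_probability(len(queue)):
        return False          # dropped before the buffer overflows
    queue.append(packet)
    return True
```

Because drops start early and gradually, senders that back off on loss (like TCP) slow down before the queue, and hence the latency, reaches its maximum.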

Te-Yuan shows us results from tests her team has run which show that the FQ-CoDel specification does, indeed, reduce latency. After showing us the data, she summarises, saying that FQ-CoDel improves playback and QoE.

Watch now!
Speaker

Te-Yuan Huang
Engineering Manager (Adaptive Streaming),
Netflix

Video: IP Fundamentals For Broadcast Seminar Part III

‘IP’ is such a frequently used term that its real meaning and context easily get lost. As we saw from Wayne’s first and second seminars, IP sits on top of Ethernet and the cabling needed to support the whole network stack. But as we see from the subtitle, this is where we get to virtual addressing which, as an abstraction layer, offers us a lot of flexibility. IP, the Internet Protocol, is where much of what we refer to as ‘networking’ happens, so it’s important to understand.

Wayne Pecena, long-standing staff member at Texas A&M University, goes straight into IPv4 packet types. In the world of SMPTE ST 2110 and SMPTE ST 2022, this is important as much media traffic is sent multicast, which is different to unicast and broadcast traffic. These three methods of sending data each have pros and cons. Unicast is the most well known, whereby packets are sent directly from the sender to a specific receiving device. Broadcast is, as the term suggests, a way of sending from one computer to all computers. This is great when you’re shouting out to another device to find out some key information about the network, but it can lead to disaster if all senders are doing this. For media use, multicast is where it’s at, allowing a sender to transmit to a group of receiving devices, each of which opts in to the stream, just as you can opt in to a newsletter.

Wayne digs into how an IPv4 packet is constructed, looking at all parts of the header including the source and destination IP addresses. This leads us into how an IP address itself is constructed. The trick with IP addresses and moving data from one network to another, we learn, is in knowing which machines are on your local network (in which case you can use layer 2 Ethernet to send them data) and which aren’t (in which case you need to use IP to pass your message on to the other network). This is done using subnets, which are explained along with classes of addresses and class-less notation.
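The local-or-not decision can be sketched with Python’s standard `ipaddress` module, using class-less (CIDR) notation as discussed above; the addresses are illustrative RFC 1918 examples, not from the seminar.

```python
import ipaddress

# Deciding "is this destination on my local network?" using CIDR
# notation. Addresses are illustrative private (RFC 1918) examples.

local_net = ipaddress.ip_network("192.168.10.0/24")

for dst in ("192.168.10.42", "192.168.20.7"):
    if ipaddress.ip_address(dst) in local_net:
        print(f"{dst}: same subnet - deliver via layer 2 (Ethernet)")
    else:
        print(f"{dst}: different network - send to the gateway (layer 3)")
```

The /24 mask is what tells the host that the first address is local and the second needs routing.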

Once you know how to tell which network an address is in, the need to pass information from one network to another opens up the topic of Network Address Translation (NAT). The typical example of NAT is a message arriving at a public IP address on port 3000 which is then sent on to a defined address on the internal network on port 80. Wayne explains how this works and runs through examples.
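That port-forwarding example can be reduced to a lookup table. This is a toy model of destination NAT, not how a real router implements it, and the addresses are illustrative documentation/private ranges.

```python
# A toy port-forwarding (destination NAT) table matching the example
# above: traffic to the public address on port 3000 is rewritten to an
# internal host on port 80. Addresses are illustrative.

nat_table = {
    ("203.0.113.5", 3000): ("192.168.1.20", 80),
}

def translate(dst_ip, dst_port):
    """Rewrite a public (ip, port) pair to its internal mapping, if any."""
    return nat_table.get((dst_ip, dst_port), (dst_ip, dst_port))

print(translate("203.0.113.5", 3000))  # forwarded to the internal web server
```

A real NAT device also tracks connection state and rewrites the reverse direction, which this sketch omits.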

For a network to keep track of which physical interfaces are where and which IP addresses they hold requires an ARP table, which has been mentioned in previous seminars because it bridges layers 2 and 3. Now we’re at layer 3, it’s time for another look, ahead of examining how DHCP works, how it assigns addresses (including DNS server addresses) and how DNS itself works.

The next section steps into the world of diagnosis with ping and the ICMP protocol on which it is based. This leads into an explanation of how traceroute works, based on changing the TTL of the packet. The TTL, or Time To Live, is one way a network knows it can drop a packet; it exists to protect networks from packets which live forever, constantly circling the network. However, the TTL can also be used to probe information about the network. Wayne explains the pros and cons of ping and traceroute.
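The TTL trick can be shown with a small simulation, a sketch over a hypothetical four-hop path rather than real raw-socket traceroute code (which needs ICMP access and network privileges).

```python
# A simple simulation of how traceroute uses TTL: each router decrements
# the TTL, and when it hits zero the router discards the packet and (in
# a real network) returns an ICMP Time Exceeded message naming itself.

path = ["router-a", "router-b", "router-c", "destination"]  # hypothetical

def probe(ttl):
    """Return the hop at which a packet with this TTL expires, or None."""
    for hop in path:
        ttl -= 1
        if ttl == 0 and hop != path[-1]:
            return hop        # this router reports Time Exceeded
    return None               # the TTL survived all the way

for ttl in range(1, 5):
    expired_at = probe(ttl)
    print(f"TTL={ttl}: {'expired at ' + expired_at if expired_at else 'reached destination'}")
```

Probing with TTL 1, 2, 3… surfaces each router in turn, which is exactly how traceroute maps a path.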

The seminar finishes with a look at routers, routing tables and routing protocols such as OSPF and EIGRP, along with the wider IGP and EGP families they belong to.

Watch now!
Speaker

Wayne Pecena
Director of Engineering, KAMU TV/FM at Texas A&M University
President, Society of Broadcast Engineers (SBE)

Video: Codecs, standards and UHD formats – where is the industry headed?

Now Available On Demand
UHD transmissions have been available for many years now and form a growing, albeit slowly growing, percentage of the channels available. The fact that major players such as Sky and BT Sport in the UK, and NBCUniversal and the ailing DirecTV in the US, see fit to broadcast sports in UHD shows that the technology is trusted and mature. But given the prevalence of 4K in films from Netflix and Apple TV+, streaming is actually the largest delivery mechanism for 4K/UHD video into the home.

Following on from last week’s DVB webinar, now available on demand, this webinar from the DVB Project replaces what would have been part of the DVB World 2020 conference and looks at the work that’s gone into getting UHD to where it is now in terms of developing HEVC (also known as H.265), integrating it into broadcast standards and getting manufacturer support. It then finishes by looking at the successor to HEVC: VVC (Versatile Video Codec).

The host, Ben Schwarz from the Ultra HD Forum, first introduces Ralf Schaefer, who explores the work done to make UHD distribution a reality, looking at the specifications and standards created to get us where we are today before looking ahead to what may come next.

Yvonne Thomas from the UK’s Digital TV Group is next, following on from Ben by looking at codecs for video and audio. HEVC is seen as the go-to codec for UHD distribution. As the uncompressed bitrate for UHD is often around 12Gbps, HEVC’s higher compression ratio compared to AVC, and its relatively wide adoption, make it a good choice for wide dissemination of a signal. But UHD is more than just video. With UHD and 4K services usually carrying sports or films, ‘next generation audio’ is really important. Yvonne looks at the video and audio aspects of delivering HEVC and the devices that need to receive it.
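A quick back-of-envelope calculation shows where the ~12Gbps figure comes from; the assumptions here (2160p60, 10-bit, 4:2:2 subsampling, hence 20 bits per pixel) are ours, chosen to match the common 12G-SDI carriage of UHD, not stated in the webinar.

```python
# A back-of-envelope check of the ~12 Gbps figure: 2160p60 video at
# 10 bits per sample with 4:2:2 subsampling carries 20 bits per pixel
# of active video (one luma sample plus one alternating chroma sample).

width, height, fps = 3840, 2160, 60
bits_per_pixel = 20

active_bps = width * height * fps * bits_per_pixel
print(f"active video: {active_bps / 1e9:.2f} Gbps")
# Once horizontal/vertical blanking is included, this is why UHD is
# carried on 12G-SDI links, whose line rate is nominally 11.88 Gbps.
```

Against roughly 10-12Gbps uncompressed, HEVC’s ability to deliver UHD in the tens of Mbps makes the compression requirement obvious.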

Finally, we look at VVC, also known as H.266, the successor to HEVC. ATEME’s Sassan Pejhan gives us a look into why VVC was created, where it currently sits within MPEG standardisation and what it aims to achieve in terms of compression. VVC has been covered previously on The Broadcast Knowledge in dedicated talks such as ‘VVC, EVC, LCEVC, WTF?’, ‘VVC Standard on the Final Stretch’ and ‘AV1/VVC Update’.

No Registration Necessary!

Watch now!
Speakers

Ben Schwarz
Communication Working Group Chair,
Ultra HD Forum
Ralf Schaefer
VP Standards R&I,
InterDigital Inc.
Yvonne Thomas
Strategic Technologist,
DTG (Digital TV Group)
Sassan Pejhan
VP Technology,
ATEME