Video: Hardware Transcoding Solutions For The Cloud

Hardware encoding is ever more pervasive, with Intel’s Quick Sync embedding encoding hardware inside its CPUs and NVIDIA GPUs offering NVENC encoding support, so how does it compare with software encoding? For HEVC, can Xilinx’s FPGA solution be a boost in terms of quality or cost compared to software encoding?

Jan Ozer has stepped up to the plate to put this all to the test, analysing how many real-time encodes are possible on various cloud computing instances, the cost implications and the quality of the output. Jan’s analytical and systematic approach brings us data rather than anecdotes, giving confidence in the outcomes and the ability to test it for yourself.

Over and above these elements, Jan also looks at the bitrate stability of the encodes, which can be important for systems that are sensitive to variations, such as services running at scale. We see that the hardware AVC solutions perform better than x264.

Jan takes us through the way he set up these tests whilst sharing the relevant ffmpeg commands. Finally, he shares BD plots and example images which exemplify the differences between the codecs.
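
The precise command lines are in the video and slides, but as a rough flavour of what a hardware encode looks like, here is a minimal sketch of driving ffmpeg’s NVENC encoder from a script. The file names, bitrate and rate-control settings are placeholders rather than Jan’s actual test parameters, and the flags assume an ffmpeg build with NVENC support.

```python
# Illustrative only: driving a hardware (NVENC) AVC encode with ffmpeg from Python.
# File names and bitrates are placeholders, not the settings used in Jan's tests.
import subprocess

cmd = [
    "ffmpeg", "-y",
    "-i", "input.mp4",        # placeholder source clip
    "-c:v", "h264_nvenc",     # NVIDIA's hardware AVC encoder
    "-b:v", "5M",             # placeholder target bitrate
    "-maxrate", "5M",
    "-bufsize", "10M",
    "-c:a", "copy",           # leave the audio untouched
    "output_nvenc.mp4",
]
subprocess.run(cmd, check=True)
```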

Watch now!
Download the slides
Speaker

Jan Ozer
Principal, Streaming Learning Center
Contributing Editor, Streaming Media

Video: Timing Tails & Buffers

Timing and synchronisation have always been a fundamental aspect of TV and as we move to IP, we see that timing is just as important. Whilst there are digital workflows that don’t need to be synchronised against each other, many do such as studio productions. However, as we see in this talk from The Broadcast Bridge’s Tony Orme, IP networks make timing all the more variable and accounting for this is key to success.

To start with, Tony looks at the way OBs, also known as REMIs, are moving to IP and need a timing plane across all the different parts of the production. We see why synchronisation has traditionally been needed and the effect of timing problems: not only missed data but, because all essences are sent separately, synchronisation problems between them can easily creep in.

When it comes to IP timing itself, Tony explains how PTP is used to record the capture time of the media/essences and distribute it through the system. Looking at the data on the wire and the interval between each packet and the last will show a distribution of, hopefully, only a few microseconds’ variation. This variation gives rise to jitter, which is a varying delay in data arrival. The larger the spread, the more difficult it will be to recover the data. To examine this more closely, Tony looks at the reasons for, and the impacts of, congestion, jitter and reordering of data.
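
As a way of picturing that measurement, here is a minimal sketch of calculating inter-arrival intervals and their spread from a list of packet arrival timestamps. The timestamps are invented; in practice they would come from a capture rather than a hard-coded list.

```python
# Minimal sketch: inter-arrival intervals and their spread (jitter) from packet timestamps.
# The arrival times below are invented values in seconds, purely for illustration.
from statistics import mean, pstdev

arrival_times = [0.000000, 0.000125, 0.000251, 0.000374, 0.000502, 0.000631]

# Interval between each packet and the previous one
intervals = [later - earlier for earlier, later in zip(arrival_times, arrival_times[1:])]

print(f"mean interval: {mean(intervals) * 1e6:.1f} us")
print(f"spread (std dev): {pstdev(intervals) * 1e6:.2f} us")
```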

To take one of these as an example, bursting is a much-overlooked issue on networks. While it can occur in many scenarios without any undue problems, microbursting can be a major issue and one that you need to actively look for to find. This centres on the question of how you decide that a data flow is, say, 500Mbps. If you had an encoder which sent data at 1Gbps for 5 minutes and then no data for 5 minutes, then over the 10-minute window the average bitrate would have been 500Mbps. This clearly isn’t a 500Mbps encoder, but how narrow does your measurement window need to be before you can be happy it is, indeed, 500Mbps by all reasonable definitions? Do you need to measure it over 1 second, or 1 millisecond? Behind microbursting is the tendency of computers to send whatever data they have as quickly as possible; if a computer has a 10GbE NIC, then it will send at 10Gbps. What video receivers actually need is well-spaced packets which always arrive a set time apart.
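
The arithmetic in that example is easy to sanity-check. The toy calculation below, with an assumed on/off sender, shows how the same traffic measures as 500Mbps over ten minutes but 1Gbps (or nothing at all) over a shorter window.

```python
# Toy illustration of why the measurement window matters for an on/off sender:
# a hypothetical encoder transmits at 1 Gbps for the first 300 s, then nothing for 300 s.
GBPS = 1e9
ON_UNTIL = 300.0   # seconds

def bits_sent(start_s: float, end_s: float) -> float:
    """Bits sent between start_s and end_s for the on/off source above."""
    on_time = max(0.0, min(end_s, ON_UNTIL) - min(start_s, ON_UNTIL))
    return on_time * GBPS

def average_mbps(start_s: float, end_s: float) -> float:
    return bits_sent(start_s, end_s) / (end_s - start_s) / 1e6

print(average_mbps(0, 600))     # 500.0  -> looks like a 500Mbps flow over 10 minutes
print(average_mbps(0, 1))       # 1000.0 -> a 1 second window reveals the 1Gbps burst
print(average_mbps(400, 401))   # 0.0    -> and the silent half of the cycle
```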

Buffers are necessary for IP transmission; in fact, within a computer there are many buffers, so using and understanding them is very important. Tony takes us through the thought process of considering what buffers are and why we need them. With this groundwork laid, understanding their use and potential problems is easier, and this is well illustrated in the talk. For instance, since there are buffers in many parts of the chain needed to send data from an application to a NIC and have it arrive at the destination, the best way to maximise the chances of having a deterministic delay in the Tx path is to insert PTP information almost at the point of egress, in the NIC, rather than in the application itself.
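
To make the buffer idea concrete, here is a toy playout model, with invented arrival times, showing how a small receive buffer trades a fixed amount of delay for protection against jittery arrivals. It is only a sketch of the concept, not anything taken from the talk.

```python
# Toy playout model: a receive buffer adds a fixed delay so that jittery arrivals
# can still be played out at a regular cadence. All numbers are invented.
arrivals_ms = [0.0, 1.3, 2.9, 3.1, 5.6, 5.8]   # jittery arrival times for packets 0..5
packet_interval_ms = 1.0                        # cadence the player wants

for buffer_delay_ms in (1.0, 2.0):              # a small buffer vs a slightly bigger one
    late = sum(
        1 for seq, arrived in enumerate(arrivals_ms)
        if arrived > buffer_delay_ms + seq * packet_interval_ms
    )
    print(f"{buffer_delay_ms} ms of buffer -> {late} late packet(s)")
```

With 1 ms of buffer one packet misses its playout deadline; adding a second millisecond of delay absorbs the same jitter completely, which is exactly the trade-off between latency and resilience that buffers embody.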

The talk concludes by looking at buffer fill models and the problems that come with streaming using TCP/IP rather than UDP/IP (or RTP), the latter being the most common.

Watch now!
Download the presentations!

Speakers

Tony Orme
Editor,
The Broadcast Bridge

Video: QoE Impact from Router Buffer sizing and Active Queue Management

Netflix take to the stage at Demuxed to tell us about the work they’ve been doing to understand and reduce latency by looking at the queue management of their managed switches. As Tony Orme mentioned yesterday, we need buffers in IP systems to allow synchronous parts to interact. Here, we’re looking at how the core network fabric’s buffers can get in the way of the main video flows.

Te-Yuan Huang from Netflix explains their work in investigating buffers and how best to use them. She talks about the flows that occur due to the buffer model of standard switches, i.e. waiting until the buffer is full and then dropping everything else that comes in until the buffer has emptied. There is an alternative method, an Active Queue Management (AQM) scheme called FQ-CoDel, which drops packets based on probability before the buffer is full. By carefully choosing the probability, you can actually improve buffer handling and reduce the impact it has on latency.
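
As a flavour of the idea, here is a deliberately simplified sketch of probability-based early dropping. Real FQ-CoDel works on per-flow queues and packet sojourn times rather than a simple fill-based probability, so treat this only as an illustration of dropping before the buffer overflows; the thresholds are invented.

```python
# Deliberately simplified sketch of probabilistic early drop (not the real FQ-CoDel
# algorithm, which uses per-flow queues and sojourn-time targets). The idea shown:
# start dropping a fraction of packets before the queue overflows, rather than
# waiting for it to fill and then tail-dropping everything.
import random

QUEUE_LIMIT = 100        # packets the buffer can hold
DROP_THRESHOLD = 0.5     # start considering early drops at 50% full (invented value)

def should_drop(queue_depth: int) -> bool:
    fill = queue_depth / QUEUE_LIMIT
    if fill >= 1.0:
        return True                    # buffer full: no choice but to drop
    if fill < DROP_THRESHOLD:
        return False                   # plenty of headroom: accept everything
    # Drop probability rises from 0 to 1 as the queue goes from half full to full
    return random.random() < (fill - DROP_THRESHOLD) / (1.0 - DROP_THRESHOLD)

# Example: at 80 packets queued, roughly 60% of new arrivals get dropped early,
# signalling senders to back off before the queue ever overflows.
print(sum(should_drop(80) for _ in range(10_000)) / 10_000)
```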

Te-Yuan shows us results from tests that her team has done which show that the FQ-CoDel specification does, indeed, reduce latency. After showing us the data, she summarises by saying that FQ-CoDel improves playback and QoE.

Watch now!
Speaker

Te-Yuan Huang
Engineering Manager (Adaptive Streaming),
Netflix

Video: IP Fundamentals For Broadcast Seminar Section 3

‘IP’ is such a frequently used term that its real meaning and context easily get lost. As we saw from Wayne’s first two seminars, IP sits on top of Ethernet and the cabling needed to support the whole network stack. But as we see from the subtitle, this is where we get to virtual addressing which, as an abstraction layer, offers us a lot of flexibility. IP, the Internet Protocol, is where much of what we refer to as ‘networking’ happens, so it’s important to understand.

Wayne Pecena, long-standing staff member at Texas A&M University, goes straight into IPv4 packet types. In the world of SMPTE ST-2110 and SMPTE ST-2022, this is important as much media traffic is sent as multicast, which is different to unicast and broadcast traffic. These three methods of sending data each have pros and cons. Unicast is the most well known, whereby packets are sent directly from the sender to a specific receiving device. Broadcast is, as the term suggests, a way of sending from one computer to all computers. This is great when you’re shouting out to another device to find out some key information about the network, but it can lead to disaster if all senders are doing this. For media use, multicast is where it’s at, allowing a sender to send to a group of receiving devices, each of which opts in to the stream, just like you can opt in to a newsletter.
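
For a feel of what ‘opting in’ means in practice, here is a minimal sketch of a receiver joining a multicast group with a plain UDP socket; the group address and port are made up for the example.

```python
# Minimal multicast receiver: the socket tells the network (via IGMP) that it wants
# to join a group, and traffic sent to that group is then delivered to it.
# The group address and port below are illustrative, not from any real system.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Opt in to the group on the default interface
membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

data, sender = sock.recvfrom(2048)
print(f"received {len(data)} bytes from {sender}")
```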

Wayne digs into how an IPv4 packet is constructed, looking at all parts of the header including the source and destination IP addresses. This leads us into looking at how an IP address is constructed. The trick with IP addresses and moving data from one network to another, we learn, is in understanding which machines are on your local network (in which case you can use layer 2 Ethernet to send them data) and which aren’t (in which case you need to use IP to pass on your message to the other network). This is done using subnets, which are explained along with classes of addresses and classless notation.
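
A minimal sketch of that ‘local or not?’ decision, using Python’s ipaddress module and made-up addresses:

```python
# Minimal sketch of the "is this destination on my subnet?" decision.
# The network and destination addresses are made up for illustration.
import ipaddress

local_network = ipaddress.ip_network("192.168.10.0/24")   # my subnet in classless notation

for destination in ("192.168.10.42", "10.0.0.7"):
    if ipaddress.ip_address(destination) in local_network:
        print(f"{destination}: same subnet -> deliver directly over layer 2 Ethernet")
    else:
        print(f"{destination}: different network -> hand the packet to the router/gateway")
```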

Once you know how to tell which network an address is in, this leads to the need to pass information from one network to another, opening up the topic of Network Address Translation (NAT). The typical example of NAT is that a message might come in to a public IP address on port 3000 and then be sent on to a defined address on port 80 on the internal network. Wayne explains how this works and runs through examples.
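
A toy model of that port-forwarding example is shown below; the public address is from the documentation range and the internal address is invented, but it captures the rewrite the NAT performs.

```python
# Toy model of destination NAT (port forwarding): traffic arriving at the public
# address on port 3000 is rewritten to an internal host on port 80.
# Addresses are illustrative (203.0.113.0/24 is a documentation range).
PUBLIC_IP = "203.0.113.10"

forwarding_table = {
    (PUBLIC_IP, 3000): ("192.168.1.20", 80),   # internal web server
}

def translate(dst_ip: str, dst_port: int):
    """Return the internal (ip, port) an incoming packet should be rewritten to, if any."""
    return forwarding_table.get((dst_ip, dst_port))

print(translate(PUBLIC_IP, 3000))   # ('192.168.1.20', 80)
print(translate(PUBLIC_IP, 4000))   # None -> no rule, so the packet is refused or dropped
```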

Keeping track of which physical interfaces are where, and which IP address each has, requires an ARP table, which has been mentioned in previous seminars because it bridges both layer 2 and layer 3. Now that we’re at layer 3, it’s time to go in for another look ahead of examining how DHCP works, how it assigns DNS addresses and how DNS itself works.
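
As a quick taste of the DNS step in that chain, resolving a name to its addresses from a script looks like this; the hostname is just an example.

```python
# Resolving a hostname to its IPv4/IPv6 addresses, i.e. the DNS lookup step.
# The hostname is only an example; any resolvable name will do.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 80, proto=socket.IPPROTO_TCP):
    version = "IPv4" if family == socket.AF_INET else "IPv6"
    print(version, sockaddr[0])
```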

The next section steps into the world of diagnosis with ping and the ICMP protocol on which it is based. This leads into explaining how traceroute works, based on changing the TTL of the packet. The TTL is the Time To Live, which is one way that a network knows it can drop a packet. This exists to protect networks from packets which live forever, constantly circling the network. However, the TTL can, in this situation, be used to probe information about the network. Wayne explains the pros and cons of ping and traceroute.
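
To show the TTL trick rather than just describe it, here is a bare-bones sketch of the traceroute idea: probes are sent with an increasing TTL and each router that expires a probe reveals itself with an ICMP ‘time exceeded’ message. It needs root privileges for the raw ICMP socket, the destination is just an example, and real traceroute adds timing and multiple probes per hop.

```python
# Bare-bones traceroute sketch: raise the TTL one hop at a time and listen for the
# ICMP "time exceeded" message from the router that dropped the probe.
# Requires root for the raw ICMP socket; destination and port are illustrative.
import socket

DESTINATION = "example.com"
PROBE_PORT = 33434                     # traditional traceroute probe port
MAX_HOPS = 20

dest_ip = socket.gethostbyname(DESTINATION)

for ttl in range(1, MAX_HOPS + 1):
    receiver = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    receiver.settimeout(2.0)
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    sender.sendto(b"", (dest_ip, PROBE_PORT))
    try:
        _, (hop_ip, _) = receiver.recvfrom(512)   # the router whose TTL check failed
        print(ttl, hop_ip)
    except socket.timeout:
        hop_ip = None
        print(ttl, "*")
    finally:
        sender.close()
        receiver.close()
    if hop_ip == dest_ip:
        break
```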

The seminar finishes with a look at routers, routing tables and routing protocols like IGP, EGP, OSPF, EIGRP and their peers.

Watch now!
Speaker

Wayne Pecena
Director of Engineering, KAMU TV/FM at Texas A&M University
President, Society of Broadcast Engineers (SBE)