Video: Things Developers Believe About Video Files (Proven Wrong by User Uploads)


For many transcoding workflows, efficiency or quality is the primary factor shaping how files are created. But when ingesting user-generated videos like those uploaded to Vimeo, the online video platform, life gets difficult. Dealing with the wide variety of formats uploaded, and the many edge cases in the way that otherwise normal AVC videos are delivered, means throwing out any assumptions you ever had and analysing every aspect of the file.

Senior video encoding engineer Derek Buitenhuis takes us through the many lessons he and his colleagues have learnt over the years. Don’t, he says, assume that properties don’t change between frames – sometimes they change in every single frame. Assuming that you have a single frame rate throughout the video is another ‘no-no’, as there are many variable-frame-rate videos.

Derek also looks at dealing with samples stamped with negative timestamps, the need for sample durations, the myriad issues with seeking through a file, the fun of frames that aren’t displayed, and multi-track videos.
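
Several of these surprises can be seen with nothing more than ffprobe. Below is a minimal sketch, assuming ffmpeg’s ffprobe is on the PATH, that dumps per-frame presentation timestamps and flags two of the behaviours Derek describes: negative timestamps and variable frame rate.

```python
# Sketch: inspect per-frame timestamps with ffprobe (assumes ffmpeg installed).
import subprocess
import sys

def frame_timestamps(path):
    """Return the presentation timestamp (seconds) of every video frame."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pts_time", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return [float(line) for line in out.splitlines() if line and line != "N/A"]

if __name__ == "__main__":
    pts = frame_timestamps(sys.argv[1])
    deltas = [b - a for a, b in zip(pts, pts[1:])]
    if any(t < 0 for t in pts):
        print("negative timestamps present")
    # If frame durations vary, treat the file as variable frame rate.
    if deltas and max(deltas) - min(deltas) > 1e-4:
        print(f"variable frame rate: durations range "
              f"{min(deltas):.6f}s to {max(deltas):.6f}s")
```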

Colour spaces, to no one’s surprise, cause handling difficulties, for example when the bitstream colour properties differ from those in the container. As the talk finishes, we’re left considering old MPEG-2 files that can have unavoidable banding, replicating looping MOV files, and dealing with QuickTime special-effects channels that animate a fire on the screen.
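
As a starting point for the colour problem, here is a hedged sketch that reads the colour properties ffprobe reports for the first video stream. Note that fully detecting a container-versus-bitstream mismatch needs a deeper parse than this; the sketch only surfaces what ffprobe exposes.

```python
# Sketch: read reported colour properties (assumes ffmpeg installed).
# Real pipelines must also check whether container-level values
# contradict what the codec bitstream itself signals.
import json
import subprocess
import sys

def colour_properties(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries",
         "stream=color_space,color_primaries,color_transfer,color_range",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["streams"][0]

if __name__ == "__main__":
    props = colour_properties(sys.argv[1])
    for key in ("color_space", "color_primaries", "color_transfer", "color_range"):
        # Missing/unknown values mean the player will guess, often wrongly.
        print(f"{key}: {props.get(key, 'unset')}")
```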

Watch now!
Speakers

Derek Buitenhuis
Senior Video Encoding Engineer,
Vimeo

Video: Using AMWA IS-06 for Flow Control on Professional Media Networks

In IP networks, multicast flow subscription is usually based on a combination of the IGMP (Internet Group Management Protocol) and PIM (Protocol Independent Multicast) protocols. While PIM allows for very efficient delivery of IP multicast data, it doesn’t provide bandwidth control or device authorisation.
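
The IGMP half of that pair is what a receiving device actually speaks. As a minimal sketch (the group address and port below are illustrative only), joining a multicast group from Python causes the operating system to send an IGMP membership report upstream, after which PIM builds the distribution tree between routers:

```python
# Sketch: subscribing to a multicast flow via an IGMP join.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004  # hypothetical ST 2110 media flow

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP on the default interface triggers the IGMP report.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, addr = sock.recvfrom(2048)  # first RTP packet of the flow
print(f"received {len(packet)} bytes from {addr}")
```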

To solve these issues on SMPTE ST 2110 professional media networks, the NMOS IS-06 specification has been developed. It relies on Software-Defined Networking, where the traffic-management application embedded in each individual switch or router is replaced by a centralised Network Controller. This controller manages and monitors the whole network environment, making it bandwidth aware.

The NMOS IS-06 specification provides a vendor-agnostic northbound interface from the Network Controller to the Broadcast Controller. IS-06, in conjunction with IS-04 (Discovery and Registration) and IS-05 (NMOS Device Connection Management), allows the Broadcast Controller to automatically set up media flows between endpoints on the network, reserve bandwidth for flows and enforce network security. The Broadcast Controller is also able to request network topology information from the Network Controller, which can be used to create a user-friendly graphical representation of the flows in the network.
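
To make the shape of that northbound interaction concrete, here is a sketch of how a Broadcast Controller might call the Network Controller’s IS-06 REST API. The base URL, resource paths and payload fields below are illustrative assumptions, not the spec’s actual resource model; consult the AMWA IS-06 specification for the real API.

```python
# Sketch: hypothetical IS-06 northbound calls from a Broadcast Controller.
import requests

NETWORK_CONTROLLER = "https://netctrl.example.com/x-nmos/netctrl/v1.0"

# Ask for the Network Controller's view of the topology, which could be
# rendered as a flow diagram for operators (path is an assumption).
topology = requests.get(f"{NETWORK_CONTROLLER}/network-topology", timeout=5)
topology.raise_for_status()
print(topology.json())

# Reserve bandwidth for a sender-to-receiver flow discovered via IS-04
# and connected via IS-05 (IDs and fields are hypothetical).
flow_request = {
    "sender_id": "a0a0a0a0-0000-0000-0000-000000000001",
    "receiver_id": "b1b1b1b1-0000-0000-0000-000000000002",
    "bandwidth_bps": 1_500_000_000,  # e.g. an ST 2110-20 HD video flow
}
resp = requests.post(f"{NETWORK_CONTROLLER}/flows", json=flow_request, timeout=5)
resp.raise_for_status()
```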

In this presentation, Rob Porter from Sony Europe explains the basics of NMOS IS-06, showing in detail how setting up media flows with this specification fits into the IS-04 / IS-05 workflow. Rob emphasises that all AMWA NMOS specifications are completely open and available to anyone, allowing for interoperability between broadcast and network devices from different manufacturers.

The next speaker, Sachin Vishwarupe from Cisco Systems, focuses on future work on IS-06, including provisioning feedback (such as insufficient bandwidth, no route available from sender to receiver, or no management connectivity), flow statistics, security and grouping (similar to a “salvo” in the SDI world).

There is also a discussion on extending the IS-06 specification for Network Address Translation (NAT), which would help resolve problems caused by address conflicts, e.g. when sharing resources between facilities.

You can find the slides here.

Watch now!

Speakers

Rob Porter
Project Manager – Advanced Technology Team
Sony Europe
Sachin Vishwarupe
Principal Engineer
Cisco Systems

Video: Three Roads to Jerusalem

With his usual entertaining vigour, Will Law explains the differences between the three approaches to low-latency streaming: DASH, LHLS and LL-HLS from Apple. Likening them partly to religions that all get you to the same end, we see how they differ and some of the reasons why.

Please note: Since this video was recorded, Apple has released a new draft of LL-HLS. As described in this great article from Mux, the update’s changes are

  • “Delivering shorter sub-segments of the video stream (Apple call these parts) more frequently (every 0.3 – 0.5s)
  • Using HTTP/2 PUSH to deliver these smaller parts, pushed in response to a blocking playlist request
  • Blocking playlist requests, eliminating the current speculative manifest request polling behaviour in HLS
  • Smaller, delta rendition playlists, which reduces playlist size, which is important since playlists are requested more frequently
  • Faster rendition switching, enabled by rendition reports, which allows clients to see what is happening in another playlist without requesting it in its entirety”[0]
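
The blocking-playlist behaviour in that list can be sketched with the spec’s delivery directives, which a client appends to the playlist URL. The URL below is illustrative; the `_HLS_msn`, `_HLS_part` and `_HLS_skip` query parameters come from Apple’s LL-HLS draft, though the exact values here are made up for the example.

```python
# Sketch: an LL-HLS blocking playlist request. The server holds the
# response until media sequence 1800, part 2 has been published, so the
# client never has to poll speculatively.
import requests

playlist_url = "https://example.com/live/video.m3u8"
params = {
    "_HLS_msn": 1800,    # block until this media sequence number exists
    "_HLS_part": 2,      # ...and this part within it
    "_HLS_skip": "YES",  # ask for a smaller delta playlist
}
resp = requests.get(playlist_url, params=params, timeout=10)
resp.raise_for_status()
print(resp.text)  # delta playlist containing the newly published part
```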

Read the full article for the details and implications, some of which address some points made in the talk.

Furthermore, THEOplayer have released this talk explaining the changes and discussing implementation.

Anyone who saw last year’s Chunky Monkey video will recognise Will’s near-Oscar-winning animation style as he sets the scene, explaining the contenders to the low-latency streaming crown.

We then look at a bullet list of features across each of the three low-latency technologies (note Apple’s recent update), which leads on to a discussion of chunked transfer delivery and the challenges of line-rate delivery. A simple view of the universe would say that the ideal way to deliver a live stream encoded at a constant bitrate is to stream it constantly at that bitrate to the receiver. Whilst this is, indeed, the best way to go, when we stream we’re also keeping one eye on whether we need to change the bitrate. If more bandwidth becomes available, it might be best to upgrade to a better quality; if we suddenly have contended, slow wifi, it might be time for an emergency drop down to the lowest-bitrate stream.
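
That switching decision is simple to sketch. The ladder bitrates and safety margin below are illustrative, not from the talk:

```python
# Sketch: pick the highest rung whose bitrate fits within a safety
# margin of the measured throughput.
LADDER_BPS = [400_000, 1_200_000, 3_000_000, 6_000_000]  # illustrative rungs

def choose_rung(estimated_bps, safety=0.8):
    """Return the highest sustainable bitrate, or the lowest as a floor."""
    usable = estimated_bps * safety
    candidates = [b for b in LADDER_BPS if b <= usable]
    return max(candidates) if candidates else LADDER_BPS[0]

print(choose_rung(2_500_000))  # -> 1200000: headroom forces a mid rung
```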

When a stream is delivered as individual files, you can measure how long each takes to download to estimate your available bandwidth. If the link can deliver 1Gbps, a file should arrive at 1Gbps; if it arrives more slowly, we know there is a bandwidth restriction and can make adjustments. Will explains that for streams delivered with chunked transfer, or in real time as in LL-HLS, this estimation no longer works, as the files simply are never available at 1Gbps. He then explains some of the work that has been undertaken to develop more nuanced ways of estimating available bandwidth. It’s well worth noting that the smaller the files you transfer, the less accurate the bandwidth estimation, as TCP takes time to ramp up to line rate, so small 320ms-length video segments are not ideal for maximising throughput.
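
A rough sketch of the naive estimator, and why chunked delivery defeats it (the URL is illustrative):

```python
# Sketch: segment-timing bandwidth estimation. For whole-file delivery,
# bytes over wall-clock time approximates the bottleneck link; with
# chunked/real-time transfer the same arithmetic returns roughly the
# encoder's bitrate, not the available bandwidth.
import time
import requests

def estimate_bps(url):
    start = time.monotonic()
    received = 0
    with requests.get(url, stream=True, timeout=10) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            received += len(chunk)
    elapsed = time.monotonic() - start
    # With chunked delivery, elapsed ~= segment duration, so this
    # collapses to the media bitrate regardless of link speed.
    return received * 8 / elapsed

print(f"{estimate_bps('https://example.com/seg_001.m4s') / 1e6:.1f} Mbps")
```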

Continuing to look at the differences, we next look at request rates, with DASH at 20 requests per second compared to LL-HLS at 720. This leads naturally to an analysis of the benefits of the HTTP/2 PUSH technology used in LL-HLS and the savings it can offer. Will explores the implications, and some of the problems, of last year’s version of the LL-HLS spec, some of which have been mitigated since.
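
One plausible reading of those figures is per client over a two-minute window; the segment and part durations below are my assumptions for illustration and may not match the talk’s exact workings:

```python
# Sketch: why part-based delivery multiplies request rate.
window_s = 120

# DASH: assume 6s segments, one request each.
dash_requests = window_s / 6                 # -> 20

# LL-HLS: assume 1/3s parts, each needing a blocking playlist
# request plus a part request.
llhls_requests = (window_s / (1 / 3)) * 2    # -> 720

print(f"DASH: {dash_requests:.0f}, LL-HLS: {llhls_requests:.0f} requests")
```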

The talk concludes with some work Akamai has done to try and establish a single, common workflow with examples and a GitHub repository. Will shows how this works and the limitations of the approach and finishes with a look at the commonalities in approaches.

[0] From “Low Latency HLS 2: Judgment Day” https://mux.com/blog/low-latency-hls-part-2/

Watch now!
Speakers

Will Law
Chief Architect,
Akamai

Video: Online Streaming Primer

A trip down memory lane for some, a great intro to the basics of streaming for others, this video from IET Media looks at the history of broadcasting and how it has moved over the years to online streaming, posing the question: with so many people watching online, is streaming now broad enough to be considered broadcast?

The first of a series of talks from IET Media, the video starts by highlighting that recording video only became practical some 20 years after the first television broadcasts, then looks at how television has moved on, adding colour, increasing resolution and going digital. The ability to record video is critical to almost all of our use of media today. Whilst film worked well as an archival medium, it didn’t work well, at scale, for recording live broadcasts. So in the beginning, broadcasting from one, or a few, transmitters was all there was.

Russell Trafford-Jones, from IET Media, then discusses the advent of streaming, from its roots in file-based music on portable players, through the rise of online radio, and how this naturally evolved into the urge to stream video in much the same way.

This being an IET video, Russell then looks at the technology behind getting video onto a network and over the internet. He talks about cutting the stream into chunks, i.e. small files, and how sending a sequence of files can create a seamless stream of data. One key advantage of this method is Adaptive BitRate (ABR): the ability to change from one quality level to another, typically by changing bitrate, to adapt to changing network conditions.
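
The file-by-file idea can be sketched in a few lines: a player fetches numbered segments in order and feeds them to the decoder back to back, and switching quality just means changing which rendition’s URL it requests. The URL pattern and decoder stand-in below are illustrative.

```python
# Sketch: a chain of small files becoming a seamless stream.
import requests

BASE = "https://example.com/stream/{quality}/segment_{n:05d}.ts"

def decode(data: bytes):
    print(f"decoding {len(data)} bytes")  # stand-in for a real decoder

def play(quality="720p", start=1, count=5):
    for n in range(start, start + count):
        seg = requests.get(BASE.format(quality=quality, n=n), timeout=10)
        seg.raise_for_status()
        decode(seg.content)  # hand each chunk straight to the decoder

play()
```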

Finishing by talking about the standards available for online streaming, this talk is a great introduction to streaming and an important part of anyone’s foundational understanding of broadcast and streaming.

Watch now!

This video was produced by IET Media, a technical network within the IET which runs events, talks and webinars for networking and education within the broadcast industry. More information

Speakers

Russell Trafford-Jones
Exec Member, IET Media
Manager, Support & Services, Techex
Editor, The Broadcast Knowledge