Whilst the encoding landscape is shifting, AVC (also known as H.264) still dominates many areas of video distribution so, for many, understanding what’s under the hood opens up a whole realm of diagnostics and fault finding that wouldn’t otherwise be possible. Whilst many understand that MPEG video is built around I, B and P frames, this short talk offers deeper detail which helps explain how AVC behaves both when it’s working well and when it isn’t.
Christian Timmerer, co-founder of Bitmovin, starts his lesson on AVC with a summary of its improvements over the basic MPEG-2 model people tend to learn as a foundation: variable block size motion compensation, multiple reference frames and improved adaptive entropy coding. We see that, as we would expect, the input can use 4:2:0 or 4:2:2 chroma sub-sampling as well as full 4:4:4 representation, with 16×16 macroblocks for luminance (8×8 for chroma in 4:2:0). AVC can split pictures into several slices, which are self-contained sequences of macroblocks, and slices themselves can then be grouped.
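To make those numbers concrete, here is a small sketch (not from the talk; function names are our own) of how the chroma sub-sampling schemes shrink the chroma planes, and how many 16×16 macroblocks cover a frame:

```python
# Illustrative sketch: chroma plane sizes under each sub-sampling
# scheme, and the 16x16 macroblock grid covering a frame.

def plane_dimensions(width, height, subsampling):
    """Return (luma, chroma) plane sizes for a given chroma scheme."""
    factors = {
        "4:4:4": (1, 1),  # chroma at full resolution
        "4:2:2": (2, 1),  # chroma halved horizontally
        "4:2:0": (2, 2),  # chroma halved in both dimensions
    }
    h, v = factors[subsampling]
    return (width, height), (width // h, height // v)

def macroblock_count(width, height, size=16):
    """Number of macroblocks covering the frame (rounded up)."""
    return ((width + size - 1) // size) * ((height + size - 1) // size)

luma, chroma = plane_dimensions(1920, 1080, "4:2:0")
print(luma, chroma)                  # (1920, 1080) (960, 540)
print(macroblock_count(1920, 1080))  # 8160 macroblocks for 1080p
```

Note how 4:2:0 carries only a quarter of the chroma samples of 4:4:4 before any actual compression has taken place.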
Intra-prediction is the next topic, whereby an algorithm uses information within the slice to predict a macroblock. This prediction is then subtracted from the actual block and the difference coded, thereby reducing the amount of data that needs to be transferred. The decoder can make the same prediction and reconstruct the full block from the residual it receives.
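The predict-subtract-reconstruct loop can be sketched in a few lines. This toy example uses a simple DC-style prediction (the average of the pixels above the block), which is one of several modes AVC offers; the function names and numbers are illustrative, not from the talk:

```python
import numpy as np

# Sketch of intra-prediction: predict a block from already-decoded
# neighbouring pixels, transmit only the residual, and let the
# decoder rebuild the block by making the same prediction.

def dc_predict(above_row):
    """Predict a flat 4x4 block from the row of pixels above it."""
    return np.full((4, 4), int(round(above_row.mean())))

above = np.array([100, 102, 101, 99])            # decoded neighbours
actual = np.array([[101, 100, 102, 100],
                   [100, 101, 100, 101],
                   [102, 100, 101, 100],
                   [100, 102, 100, 101]])

prediction = dc_predict(above)
residual = actual - prediction           # small numbers: cheap to code
reconstructed = prediction + residual    # decoder repeats the prediction
assert (reconstructed == actual).all()
```

The win is that the residual values cluster near zero, which the later transform and entropy-coding stages exploit.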
The next sections talk about motion prediction and the different macroblock partition sizes. A macroblock is a fixed area of the picture which can be described by a mixture of some basic patterns, but the more complex the texture in the block, the more patterns need to be combined to recreate it. By splitting up the 16×16 block, we can often find a simpler way to describe the 16×8, 8×16 or 8×8 shapes than if they had to encompass the whole 16×16 block.
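That trade-off — fewer motion vectors versus a better fit per sub-block — is what an encoder's mode decision weighs. A minimal sketch, assuming a sum-of-absolute-differences (SAD) cost and a made-up fixed penalty standing in for the extra motion-vector bits (all names here are illustrative):

```python
import numpy as np

# Sketch of the partitioning trade-off: compare the cost of coding a
# 16x16 block with one prediction versus four 8x8 predictions. A real
# encoder charges actual bits for the extra vectors; we approximate
# that with a fixed penalty.

def sad(block, prediction):
    """Sum of absolute differences between block and its prediction."""
    return int(np.abs(block.astype(int) - prediction.astype(int)).sum())

def best_partition(block, pred_16x16, preds_8x8, mv_bit_penalty=50):
    """Pick whole-block or split coding, penalising extra vectors."""
    cost_whole = sad(block, pred_16x16)
    cost_split = mv_bit_penalty * 3  # three extra motion vectors
    for i in range(2):
        for j in range(2):
            sub = block[8 * i:8 * i + 8, 8 * j:8 * j + 8]
            cost_split += sad(sub, preds_8x8[i][j])
    return "16x16" if cost_whole <= cost_split else "8x8 split"

flat = np.zeros((16, 16), dtype=int)
print(best_partition(flat, flat, [[np.zeros((8, 8), dtype=int)] * 2] * 2))
# 16x16: one vector describes the whole flat block
```

A block whose quadrants move differently would instead tip the decision towards the split, exactly the behaviour the talk describes for complex textures.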
B-frames are fairly well understood by many, but even if they are unfamiliar to you, Christian explains the concept: B-frames carry motion information for macroblocks referenced from frames both before and after. This allows macroblocks which barely change to be ‘moved around the screen’, so to speak, with minimal changes other than location. Whilst P- and I-frames provide new macroblocks, B-frames are intended just to provide this directional information. Christian also explains some of the nuances of B-frame encoding, including weighted prediction.
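The bi-directional blend, and the weighted variant Christian mentions, can be sketched as follows. The weights and rounding here are simplified relative to the H.264 specification (which uses integer weights and shifts); the function is illustrative only:

```python
import numpy as np

# Sketch of bi-directional prediction: a B-frame macroblock is built
# by blending motion-compensated blocks from a past and a future
# reference. Weighted prediction biases the blend, which helps
# during fades.

def bi_predict(past_block, future_block, w0=0.5, w1=0.5, offset=0):
    """Blend two motion-compensated reference blocks."""
    blended = w0 * past_block + w1 * future_block + offset
    return np.clip(np.rint(blended), 0, 255).astype(np.uint8)

past = np.full((4, 4), 200, dtype=np.uint8)    # brighter reference
future = np.full((4, 4), 100, dtype=np.uint8)  # dimmer reference
print(bi_predict(past, future)[0, 0])                     # 150: average
print(bi_predict(past, future, w0=0.25, w1=0.75)[0, 0])   # 125: mid-fade
```

During a fade, an unweighted average predicts the wrong brightness; shifting the weights towards the nearer-in-brightness reference cuts the residual the encoder has to send.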
Quantisation is one of the most important parts of the MPEG process, since it is the point at which information is removed and the codec becomes lossy. The way this happens, and the optimisations possible, are therefore key, and Christian covers them before explaining the deblocking filter. After the picture has been split into so many independently processed macroblocks, the edges between them can become apparent, so this filter smooths such artefacts to make them more pleasing to the eye. Christian finishes his look at AVC by exploring entropy encoding and considering how far AVC encoding can and can’t be improved by adding more memory and computation to the encoder.
Tomorrow, December 11th, 8 AM PST / 11 AM EST / 4 PM GMT
The important aspects of writing and developing streaming apps aren’t always clear to the beginner, and adding video to apps is high on the list for many companies. This can range from a very simple menu of videos to delivering premium content for paid subscribers. This webinar is perfect for web developers, independent coders, creative agencies, students and anyone who has a basic understanding of programming concepts but little-to-no knowledge of video development.
In this talk, Bitmovin Developer Evangelist Andrea Fassina and Technical Product Marketing Manager Sean McCarthy will share a variety of lessons learned on topics such as:
What are the most common video app requirements and why?
What are common beginner mistakes with video streaming?
What are the key components of a video streaming service?
How do you measure the quality of a streaming service?
What are some quick tips to improve video experience?
Bitmovin have brought together Jan Ozer from the Streaming Learning Center, their very own Sean McCarthy, and Carlos Bacquet from SSIMWAVE to discuss how best to assess video quality.
Fundamental to assessing video quality, of course, is what we mean by quality, which artefacts are most problematic and what drives its importance.
Quality of streaming, of course, is bound up with the quality of the experience in general. Considering an online streaming system as a whole, speed of playback start, smoothness in the player itself and rebuffering are all factors in perceived quality as much as the actual codec encoding quality, which is what is more traditionally measured.
The webinar brings together experience in measuring quality, monitoring systems and ways in which you can derive your own testing to lock on to the factors which matter to you and your business.
See the related posts below for more from Jan Ozer
Real-world solutions to real-world streaming latency in this panel from the Content Delivery Summit at Streaming Media East. With everyone chasing reductions in latency, many with the goal of matching traditional broadcast latencies, there are a heap of tricks and techniques at each stage of the distribution chain to get things done quicker.
The panel starts by surveying how these companies already serve video. Comcast, for example, are reducing latency by extending their network to edge CDNs. Anevia identified encoding as the number one introducer of latency, with packaging at number two.
Bitmovin’s Igor Oreper talks about Periscope’s work with low-latency HLS (LHLS) explaining how Bitmovin deployed their player with Twitter and worked closely with them to ensure LHLS worked seamlessly. Periscope’s LHLS is documented in this blog post.
The panel shares techniques for avoiding latency, such as keeping ABR ladders small to ensure CDNs cache all the segments. Damien from Anevia points out that low latency can quickly become pointless if you end up with a low-latency stream arriving on an iPhone before Android; relative latency is really important and can matter more than absolute latency.
The importance of HTTP, and which version is used, is next up for discussion. HTTP/1.1 is still widely used, but there’s increasing interest in HTTP/2 and QUIC, which both handle connections better and reduce overheads, thus reducing latency, though often only slightly.
The panel finishes with a Q&A after discussing how to operate in multi-CDN environments.