We’ve grown used to a world of near-universal AVC/H.264 support, but in our desire to deliver better services, we need new codecs. VVC is nearing completion and is attracting increasing attention with its ability to deliver better compression than HEVC in a range of situations.
Benjamin Bross from the Fraunhofer Institute talks at Mile High Video 2019 about what Versatile Video Coding (VVC) is and the different ways it achieves these results. Benjamin starts by introducing the codec, teasing us with details of the machine learning used for block prediction, and then explains the targets for the video codec.
Next we look at the bitrate curves showing how encoding has improved over the years and where we can expect VVC to fit in, before seeing test results for the codec as it exists today, which already show an improvement in compression. Encoding complexity and speed are also compared and, as expected, complexity has increased and speed has decreased. This is always a challenge at the beginning of a new codec standard, but it is typically solved in due course. Benjamin also looks at the effect of resolution and frame rate on compression efficiency.
Every codec has sets of tools which can be tuned and used in certain combinations to deal with different types of content so as to optimise performance. VVC is no exception and Benjamin looks at some of the highlights:
Screen Content Coding – specific tools to encode computer graphics rather than ‘natural’ video. With the sharp edges typical of computer screens, different techniques can produce better results.
Reference Picture Rescaling – allows resolution changes within the video stream. This can also be used to deliver multiple resolutions at the same time.
Independent Sub-Pictures – separate pictures available within the same raster. This allows, for instance, sending a large resolution and letting decoders decode only part of the picture.
Microservices split large applications into many small, simple, autonomous sections. This can be a boon, but this simplicity hides complexity. Chris Lennon looks at both sides to find the true value in microservices.
By splitting a program or service into many small blocks, each block becomes simpler, so testing each block becomes simpler too. Updating one block hardly affects the system as a whole, leading to quicker and more agile development and deployment. Indeed, microservices have many success stories attributed to them. Less vocal are those who have had failures or increased operational problems due to their use.
Like any technology, there are ‘right’ and ‘wrong’ times and places to deploy it. Chris, from MediAnswers, explains where he sees the break-even line between not deploying and deploying microservices, giving his reasons, which include hidden complexity and your teams’ ability to deal with these many services, and covering some of the fallacies at play which tend to act against you.
A group has started up within SMPTE which wants to reduce the friction in implementing microservices, covering general interoperability and also interoperability across OSes. This should reduce the work needed to get microservices from different vendors working together as one.
Chris explains the work to date and the plans for the future for this working group.
How can we overcome one of the last big problems in making CMAF generally available: making ABR work properly?
ABR, Adaptive Bitrate, is a technique which allows a video player to choose which bitrate of video to download from a menu of several options. Typically, the highest bitrate will have the highest quality and/or resolution, with the smallest files being low resolution.
The reason a player needs the flexibility to choose the bitrate of the video is mainly changing network conditions. If someone else on your network starts watching video, you may no longer be able to download video quickly enough to keep watching in full-quality HD, so you may need to switch down. If they stop, you want your player to switch up again to make the most of the bitrate available.
Traditionally this is done fairly simply by measuring how long each chunk of the video takes to download. Simply put, if you download a file, it will come to you as quickly as it can. So measuring how long each video chunk takes to arrive gives you an idea of how much bandwidth is available; if it arrives very slowly, you know you are close to running out of bandwidth. But in low-latency streaming, you are receiving video as quickly as it is produced, so it’s very hard to see any difference in download times, and this breaks the ABR estimation.
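This traditional throughput rule can be sketched in a few lines of Python. This is a minimal illustration only, not any particular player’s algorithm; the rendition ladder and the 0.8 safety margin are assumptions for the example:

```python
def estimate_throughput_kbps(chunk_size_bytes, download_time_s):
    """Classic ABR estimate: bits received divided by the time taken."""
    return (chunk_size_bytes * 8) / download_time_s / 1000

def pick_rendition(renditions_kbps, measured_kbps, safety=0.8):
    """Choose the highest bitrate that fits within a safety margin of the
    measured throughput; fall back to the lowest rendition otherwise."""
    affordable = [r for r in sorted(renditions_kbps)
                  if r <= measured_kbps * safety]
    return affordable[-1] if affordable else min(renditions_kbps)

# A 2 MB chunk arriving in 1.6 s suggests ~10,000 kbps of capacity,
# so with a 0.8 margin the player picks the 8,000 kbps rendition.
ladder = [400, 1200, 2500, 5000, 8000]
kbps = estimate_throughput_kbps(2_000_000, 1.6)  # 10000.0
print(pick_rendition(ladder, kbps))  # 8000
```

In low-latency streaming this breaks down because `download_time_s` reflects the encoder’s production rate rather than the network, so the estimate collapses towards the stream’s own bitrate.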
Ali starts by explaining how players currently behave with low-latency ABR, showing how they miss out on changing to higher or lower renditions. He then looks at the differences, on the server and for the player, between non-low-latency and low-latency streams. This lays the foundation to discuss ACTE – ABR for Chunked Transfer Encoding.
ACTE is a method of analysing bandwidth with the assumption that some chunks will be delivered as fast as the network allows and some won’t be. The trick is detecting which chunks actually show the network speed and Ali explains how this is done and shows the results of their evaluation.
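The filtering idea at the heart of this can be sketched as follows. This is a simplified illustration, not ACTE’s actual algorithm: the tuple format and the `pace_ratio` threshold are assumptions, and the real method’s detection and bandwidth prediction are more sophisticated:

```python
def usable_samples(chunks, pace_ratio=0.8):
    """Keep only chunks that arrived noticeably faster than real time.

    Each chunk is (media_duration_s, download_time_s, size_bytes).
    When download_time is close to media_duration, the transfer was
    paced by the encoder, so it says nothing about available bandwidth.
    """
    return [c for c in chunks if c[1] < c[0] * pace_ratio]

def bandwidth_estimate_kbps(chunks):
    """Average the download rate over bandwidth-limited chunks only."""
    samples = usable_samples(chunks)
    if not samples:
        return None  # no bandwidth-limited chunks were observed
    rates = [(size * 8) / (t * 1000) for _, t, size in samples]
    return sum(rates) / len(rates)

# The first chunk (0.95 s for 1 s of media) was encoder-paced and is
# discarded; the second (0.2 s) genuinely measures the network.
chunks = [(1.0, 0.95, 500_000), (1.0, 0.2, 500_000)]
print(bandwidth_estimate_kbps(chunks))  # 20000.0
```

The design point is that discarding paced samples trades measurement frequency for measurement accuracy, which is exactly the balance the talk’s evaluation examines.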
IMF, the Interoperable Master Format, is an interchange format designed for the versioning requirements of post-production and studios. It reduces the storage required for multi-version projects and also provides a standard way of exchanging metadata between companies.
Annie Chang briefly covers the history of IMF, showing what it was aiming to achieve. IMF has been standardised through SMPTE as ST 2067 and has gained traction within the industry, hence the continued interest in extending the standard. As with all modern standards, it has been created to be extensible, so Annie gives details on what is being added to it and where these endeavours have got to.