Microservices are a way of splitting large programs and systems into many smaller parts. Building complex workflows from these single-function modules has many benefits, including simpler programming and testing, seamless upgrades with no downtime, scalability and suitability for the cloud. Microservices were featured last week on The Broadcast Knowledge. They do present challenges, however, such as orchestrating hundreds of processes into a coherent media workflow.
The EBU is working with SMPTE and the Open Services Alliance for Media on a cloud-agnostic open source project called MCMA, the Media Cloud Microservice Architecture. MCMA isn’t a specification; rather, it’s a set of software providing tools to enable a move to microservices. We hear from Alexandre Rouxel from the EBU and Loïc Barbou from Bloomberg that this project started out of broadcasters’ need to create a scalable infrastructure that could sit on a variety of cloud platforms.
What is a service? MCMA creates a standardised idea of a service with standard operations. Part of the project is a set of libraries for Node.js and .NET which handle the code needed time and time again, such as logging, data repositories and security. Joost Rovers explains how the Job Processor and Service Registry work together to orchestrate media workflows, ensuring there’s a list of every available microservice and how to communicate with it. MCMA places shims in front of cloud services on GCP, AWS, Azure etc. so that each service looks the same. Joost outlines the libraries and modules available for MCMA and how they can be used.
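To make the Service Registry and Job Processor relationship concrete, here is a minimal sketch in Python. Note this is purely illustrative: MCMA’s real libraries are for Node.js and .NET, and all class, method and endpoint names below are invented, not MCMA’s actual API.

```python
# Hypothetical sketch of the Service Registry / Job Processor pattern.
# Names and endpoints are illustrative only; MCMA's real libraries differ.

class ServiceRegistry:
    """Keeps a list of every available microservice and how to reach it."""
    def __init__(self):
        self._services = {}

    def register(self, job_type, endpoint):
        self._services.setdefault(job_type, []).append(endpoint)

    def find(self, job_type):
        return self._services.get(job_type, [])


class JobProcessor:
    """Looks up a capable service in the registry and dispatches the job."""
    def __init__(self, registry):
        self.registry = registry

    def run(self, job_type, payload):
        endpoints = self.registry.find(job_type)
        if not endpoints:
            raise LookupError(f"no service registered for {job_type!r}")
        # In a real deployment this would be an HTTP call to the shim in
        # front of the cloud service, so AWS, Azure and GCP look the same.
        return f"dispatched {job_type} job to {endpoints[0]}"


registry = ServiceRegistry()
registry.register("TranscodeJob", "https://transcode.example.com/jobs")
processor = JobProcessor(registry)
result = processor.run("TranscodeJob", {"input": "s3://bucket/master.mxf"})
print(result)
```

The point of the pattern is that the processor never needs to know which cloud a service runs on; it only ever consults the registry and talks to a uniform endpoint.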
As the amount of video consumed on the internet continues to grow, technologies that unify over-the-air broadcast with internet delivery become increasingly important. Done well, this allows a seamless mix in which viewers can choose a service without knowing how it’s arriving at their TV, mobile device or laptop. This is the principle behind DVB-I and HbbTV.
In this webinar, Peter MacAvock and Peter Lanigan join moderator Dr. Jörn Krieger to answer questions about how DVB-I works and how the two organisations work together. To set the scene, Peter Lanigan explains what DVB-I is and where it sits within DVB’s other technologies.
Famous for the widespread DVB-T, -S and -C technologies which underpin much of the world’s broadcasting, DVB have recently developed a broadcast-focused version of MPEG DASH called DVB-DASH, on which DVB-I is built. Where the -T in DVB-T stands for terrestrial broadcast and the -S in DVB-S for satellite, the -I in DVB-I stands for internet. Built upon the DVB-DASH standard, DVB-I delivers services over the internet to devices with broadband access, whether that’s the open internet or operator-managed networks. Most importantly, this isn’t just about TVs, but any device.
DVB-I aims to offer a way to unify over-the-air broadcast with internet delivery. The apps used to deliver services to smartphones, tablets and desktops tend to create segregation, as each provider delivers its own app. Removing the need for each broadcaster to maintain an app on every platform is a clear benefit. By unifying delivery, DVB-I also makes life easier for manufacturers, who can deliver a single, consistent experience. Finally, it opens up a market for more general apps which deliver a TV experience without being tied to one broadcaster, opening up more business models and a route to independent innovation.
‘Service Lists’ are the fundamental currency of DVB-I, so service discovery is a critical aspect of the specification, which was first defined in 2019 and updated in 2020. Service discovery is a technical, commercial and legal problem, all of which are addressed in the DVB-I Service Discovery and Programme Metadata standard, which provides ways for clients to access Service Lists and Service List Registries.
Another important aspect of delivery is targeted advertising, since advertising underpins the business model of many broadcasters. DVB-TA defines targeted advertising for linear TV and is now being updated to include DVB-I. With DVB-TA, adverts are delivered to the receiver/device over IP based on various criteria and then triggered at the appropriate time as specified by the A178-1 signalling spec.
Ahead of the Q&A, Peter MacAvock introduces the HbbTV organisation, explaining how and why it works closely with DVB to generate specifications that drive hybrid TV forward. Also a member organisation, HbbTV shares many interests with DVB, but where DVB’s remit within broadcast is wider than HbbTV’s device-centric scope, HbbTV’s scope also extends beyond broadcasting, since STBs and other devices are in use elsewhere, for instance in retail. Importantly, HbbTV has replaced MHP as DVB’s hybrid TV solution. DVB and HbbTV are sharing the task of making DVB-DASH content and validation tools available to their members.
The Q&A covers controlling the quality of delivery and working around the internet’s different reliability compared to RF. They also address scalability with reference to DVB-ABR Multicast. There’s a question on keeping illegal channels out of service lists, which both Peters acknowledge is a conversation ‘in progress’: the technical means exist, but specifically how to implement them is still under discussion, much of which surrounds ways to establish trust between the device and the service list registries.
The Q&A finishes by discussing whether telcos/ISPs are interested in adopting DVB-ABR Multicast, compatibility between DVB-I and HbbTV, as well as 5G broadcast mode.
Published last year, High-Throughput JPEG 2000 (HTJ2K) is an update to the J2K we know well in the broadcast industry, making it much faster. Whilst JPEG 2000 has found a home in low-latency broadcast contribution, it’s also part of the Archive eXchange Format (AXF) because, unlike most codecs, JPEG 2000 has a mathematically lossless mode. HTJ2K takes JPEG 2000 and replaces part of the compression with a much faster algorithm, allowing decoding that is 10 to 28 times faster in many circumstances.
The codec market seems to be waking up to the fact that multiple types of codec are needed to support the thousands of use cases in media and entertainment and beyond. It’s generally well known that codecs live in a world where they optimise bitrate at the expense of latency and quality. But the advent of MPEG-5 Part 2, also known as LCEVC, shows that there is also value in optimising to reduce the complexity of encoding. In some ways this is similar to reducing latency, but in the LCEVC example the aim is to allow low-power or low-complexity equipment to deal with HD or UHD video where that might otherwise not have been possible. With HTJ2K we have a similar situation, where 10x more throughput when managing and processing your archive is worth around 5% more bitrate.
This talk from the EBU’s Network Technology Seminar hears from Pierre-Anthony Lemieux and Michael Smith, who explain the need for this codec and its advantages. One important fact is that the encoding itself hasn’t been changed, just some of the maths around it. This means that you can take previously encoded files and process them into HTJ2K without changing any of the video data. This allows lossy J2K files to be converted without any degradation due to re-encoding and minimises conversion time for lossless files. Another motivator for this codec is cloud workflows, where speed of compression is important to reduce costs. Michael Smith also explores the similarities and differences of High-Throughput J2K with JPEG XS.
Andy Bechtolsheim from Arista Networks gives us an in-depth look at the stats surrounding online streaming before looking closer to home at uncompressed SMPTE ST 2110 productions on broadcaster premises. Andy tracks the ascent of online streaming, with video now making up over 60% of internet traffic. The number of consumer devices incorporating streaming functions, whether a YouTube/Netflix app or some form of live game streaming, has only continued to grow. Within 5 years, it’s estimated that the average US household will be paying for over three and a quarter SVOD subscriptions.
SARS-CoV-2 has had its effect on streaming, with Netflix already achieving their 2023 subscriber targets and the 8-month-old Disney+ already having over 50 million subscribers across the 15 territories they had launched in by May; it’s currently forecast that there will be 1.1 billion SVOD subscriptions globally in 2025.
The television still retains pride of place in the US, both in terms of linear TV share and as the place to consume video in general, but Andy shows that the number of households with a subscription to linear TV has dropped over 17% and will likely be below 25% by 2023. As he draws his analysis to a close, he points out how significant an effect age has on viewing. Two years ago, viewing of TV by over-65s in the US had increased by 8%, whereas that of under-24s had fallen by half.
An example of the incredible density available using IP to route video.
The second part of Andy’s keynote talk at the 2020 EBU Network Technology Seminar covers The Future of IP Networking. In it, he summarises coming developments in network infrastructure, IP production and remote production. Looking at the datacentre, Andy shows that 2017 was the inflexion point where 100G networking overtook 40G in deployed numbers. The next big step, 400G, has just started to take off, though it may not reach 100G’s deployment numbers for a while. 800-gigabit links are forecast to become available in 2022. This is enabled, asserts Andy, by the exponential growth in the speed of the underlying chips within switches.
Andy shows us an example of a 1U switch with a throughput of over 1024 UHD streams. If we compare this with a top-end SDI router, a system that can switch 1125×1125 3G HD signals takes two 26RU racks. Taking 4 signals per UHD signal, the 1U switch has 3.6 times the throughput of the 52U SDI system. He then gives a short primer on 400G standards for fibre, copper etc., along with the distances they reach.
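The throughput comparison above is easy to check with some quick arithmetic, using the figures quoted in the talk:

```python
# Throughput comparison: 1U IP switch vs 52U SDI router (figures from the talk)
uhd_streams_ip = 1024          # UHD streams through the 1U switch
signals_per_uhd = 4            # a UHD signal carried as quad-link 3G-SDI
sdi_router_signals = 1125      # 1125x1125 3G router occupying two 26RU racks

ratio = (uhd_streams_ip * signals_per_uhd) / sdi_router_signals
print(f"IP switch carries {ratio:.1f}x the 3G signals of the SDI router")
```

1024 UHD streams equate to 4096 3G signals, and 4096 ÷ 1125 ≈ 3.6, matching the figure quoted, in a fifty-second of the rack space.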
Now looking towards The New IP Television Studio, Andy lays out how many SDI streams you can fit into 100G and 400G links: for standard 3G HD, 128 will fit into 400G. Andy discusses the reduction in size of routers and of cabling before covering examples such as CBC. Finally, he points out that with fibre, the round trip time over 1000km can be as low as 10ms, meaning any European event can be covered by remote production using uncompressed video, as with the FIS World Ski Championships. We’ve seen here on The Broadcast Knowledge that even if you can’t use uncompressed video, JPEG XS is a great, low-latency way of linking 2110 workflows and achieving remote production.
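Both figures in this paragraph are quick sanity checks. A 3G-SDI stream carried as ST 2110 needs roughly 3 Gb/s plus some IP overhead (the 3.1 Gb/s figure below is an approximation, not a quoted number), and light travels through fibre at roughly 200,000 km/s:

```python
# Link capacity: 128 x 3G-SDI streams over ST 2110 in a 400G pipe
link_gbps = 400
stream_gbps = 3.1              # ~3 Gb/s video plus RTP/IP overhead (approximate)
streams = 128
used_gbps = streams * stream_gbps
print(f"{used_gbps:.1f} Gb/s of {link_gbps} Gb/s used")  # fits with headroom

# Fibre round trip over 1000 km (light in fibre ~200,000 km/s)
distance_km = 1000
speed_km_per_s = 200_000
rtt_ms = 2 * distance_km / speed_km_per_s * 1000
print(f"{rtt_ms:.0f} ms round trip")
```

The round trip works out at exactly 10ms for the idealised case; real paths add switching and routing latency, so the quoted figure is a best case.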
Views and opinions expressed on this website are those of the author(s) and do not necessarily reflect those of SMPTE or SMPTE Members.
This website is presented for informational purposes only. Any reference to specific companies, products or services does not represent promotion, recommendation, or endorsement by SMPTE