Andy Bechtolsheim from Arista Networks gives us an in-depth look at the stats surrounding online streaming before looking closer to home at uncompressed SMPTE ST 2110 productions on the broadcaster's premises. Andy tracks the ascent of online streaming, with video now making up over 60% of internet traffic. The number of consumer devices incorporating streaming functions, whether a YouTube/Netflix app or some form of live game streaming, has only continued to grow. Within five years, it’s estimated that each US household will, on average, be paying for more than three and a quarter SVOD subscriptions.
SARS-CoV-2 has had its effect on streaming too, with Netflix already hitting its 2023 subscriber targets and the eight-month-old Disney+ passing 50 million subscribers across the 15 territories it had launched in by May; it’s currently forecast that there will be 1.1 billion SVOD subscriptions globally in 2025.
The television still retains pride of place in the US, both for linear TV share and as the place to consume video in general, but Andy shows that the number of households with a subscription to linear TV has dropped over 17% and will likely be below 25% by 2023. As he draws his analysis to a close, he points out how significant an effect age has on viewing: two years ago, viewing of TV by over-65s in the US had increased by 8% whereas that of under-24s had fallen by half.
An example of the incredible density available using IP to route video.
The second part of Andy’s keynote talk at the 2020 EBU Network Technology Seminar covers The Future of IP Networking, summarising coming developments in network infrastructure, IP production and remote production. Looking at the datacentre, Andy shows that 2017 was the inflexion point where 100G networking overtook 40G in deployed numbers. The next big step, 400G, has just started to take off but is still early and may not reach 100G’s numbers for a while. 800G links are forecast to start becoming available in 2022. This is enabled, asserts Andy, by the exponential growth in the speed of the underlying chips within switches.
Andy shows us an example of a 1U switch which can handle over 1024 UHD streams. If we compare this with a top-end SDI router, a system that can switch 1125×1125 3G HD signals takes two 26RU racks. Taking four 3G signals per UHD signal, the 1U switch has 3.6 times the throughput of the 52RU SDI system. He then gives a short primer on the 400G standards for fibre, copper and other media, along with the distances they reach.
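That density claim can be sanity-checked with some back-of-envelope arithmetic. This is just a sketch using the figures quoted in the talk; the exact switch and router models are not identified here, and real port counts will vary.

```python
# Rough density comparison from the talk's figures (illustrative only).
UHD_STREAMS_1U_SWITCH = 1024   # UHD streams through the 1U IP switch
SDI_ROUTER_3G_PORTS = 1125     # 3G HD signals in the two-rack SDI router
SDI_PER_UHD = 4                # quad-link: four 3G signals per UHD signal

switch_3g_equiv = UHD_STREAMS_1U_SWITCH * SDI_PER_UHD   # 4096
ratio = switch_3g_equiv / SDI_ROUTER_3G_PORTS           # ~3.6

print(f"1U switch: {switch_3g_equiv} 3G-equivalents, "
      f"{ratio:.1f}x the 52RU SDI router")
```

Running the numbers gives 4096 3G-equivalent signals in 1U against 1125 in 52RU, which is where the 3.6× figure comes from.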
Now looking towards the new IP television studio, Andy lays out how many SDI streams you can fit into 100G and 400G links: for standard 3G HD, 128 will fit into 400G. Andy discusses the reduction in the size of routers and in cabling before talking through examples such as CBC. Finally, he points out that with fibre, the round-trip time over 1000km can be as low as 10ms, meaning any European event can be covered by remote production using uncompressed video, as was done for the FIS World Ski Championships. We’ve seen here on The Broadcast Knowledge that even if you can’t use uncompressed video, JPEG XS is a great, low-latency way of linking ST 2110 workflows and achieving remote production.
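Both figures fall out of simple arithmetic. A sketch, assuming a nominal 3 Gbit/s per 3G HD stream and roughly 200km per millisecond for light in fibre; these are round numbers, not exact ST 2110-20 payload rates:

```python
# Link capacity: how many nominal 3 Gbit/s HD streams fit in 400G.
LINK_GBPS = 400
HD_3G_GBPS = 3.0
streams = int(LINK_GBPS // HD_3G_GBPS)   # 133 in theory; 128 quoted
                                         # once overhead is accounted for

# Round-trip latency over 1000km of fibre.
C_FIBRE_KM_PER_MS = 200                  # ~2/3 the speed of light in vacuum
distance_km = 1000
rtt_ms = 2 * distance_km / C_FIBRE_KM_PER_MS   # 10 ms round trip
```

The gap between the theoretical 133 and the quoted 128 is consistent with packet and protocol overheads eating into raw link capacity.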
It’s never been easy building a large OB van. Keeping within axle weight limits, getting enough technology in and working to a tight project timeline, not to mention keeping the expanding sections cool and watertight, is no easy task. Add social distancing thanks to SARS-CoV-2 and life gets particularly tricky.
This project was intriguing even before Covid-19 because it called for two identical SMPTE ST 2110 IP trucks to be built, explains Geert Thoelen from NEP Belgium. Both are 16-camera trucks with three EVS machines each. The idea is that a crew could walk into truck A on Saturday and do a show, then walk into truck B on Sunday and work on exactly the same show but a different match. Being identical, when the trucks are delivered to Belgian public broadcaster RTBF, production crews won’t need to worry about getting a better or worse truck than other programmes. An added benefit is that weight is reduced compared to SDI baseband. The trucks come loaded with Sony cameras, Arista switches, Lawo audio, EVS replays and Riedel intercoms. They’re ready to take a software upgrade for UHD and offer 32 frame-synchronised and colour-corrected inputs plus 32 outputs.
Broadcast Solutions have worked with NEP Belgium for many years, a close relationship which became a key asset in a project that had to be completed under social-distancing rules. Working open book, with existing trust between the parties, was, we hear, important in completing the project on time. Broadcast Solutions set up separate internet access for the truck as it was being built, giving vendors 24/7 remote access.
Axel Kühlem from Broadcast Solutions addresses a question from the audience about the benefits of 2110. He confirms that weight is reduced by about half compared to SDI, comparing like-for-like equipment, and says power consumption is also reduced. The aim of having two identical trucks is to allow them to be occasionally joined for large events or even connected into RTBF’s studio infrastructure for those times when you just don’t have enough facilities. Geert points out that IP on its own is still more expensive than baseband, but you are paying for the ability to scale in the future. Once you count the flexibility it affords both the productions and the broadcaster, it may well turn out cheaper over its lifetime.
The Broadcast Knowledge has documented over 100 videos and webinars on SMPTE ST 2110. It’s a great suite of standards but it’s not always simple to implement. For smaller systems, many of the complications and nuances don’t occur so a lot of the deeper dives into ST 2110 and its associated specifications such as NMOS from AMWA focus on the work done in large systems in tier-1 broadcasters such as the BBC, tpc and FIS Skiing for SVT.
ProAV, the professional end of the AV market, is a different world. Very few companies have a large AV department, if they have one at all, so the ProAV market needs technologies which are much more ‘plug and play’, particularly on the events side of the market. To date, the ProAV market has adopted IP technology successfully, with quick deployments, by using heavily proprietary solutions such as ZeeVee, SDVoE and NDI to name a few. These achieve interoperability by having the same software or hardware in each and every implementation.
IPMX aims to change this by bringing together a mix of standards and open specifications: SMPTE ST 2110, NMOS specs and AES. Any individual or company can gain access and develop a service or product to meet them.
Andreas gives a brief history of IP to date, outlining how AES67, ST 2110, ST 2059 and the IS specifications came about, his point being that the work is not yet done. ProAV has needs beyond, though complementary to, those of broadcast.
AES67 is already the answer to a previous interoperability challenge, explains Andreas: the world of audio over IP was once a fragmented collection of proprietary solutions with no, or limited, interoperability. AES67 defined a way for them to interoperate and has now become the main way audio is moved in SMPTE ST 2110, under ST 2110-30 (ST 2110-31 allows for AES3). Andreas explains the basics of 2110 and AES67 as well as the NMOS specifications, then shows how they fit together in a layered design.
Andreas brings the talk to a close by looking at some of the extensions that are needed. He highlights the ability to be more flexible with the quality-bandwidth-latency trade-off: some ProAV applications require pixel perfection, while others are dictated by lower bandwidth. The current ecosystem, even counting ST 2110-22’s ability to carry JPEG XS instead of uncompressed video, allows only very coarse control of this. HDMI is naturally of great importance for ProAV, given how many HDMI interfaces are in play, as is the wide variety of resolutions and framerates found outside of broadcast. Work is ongoing to enable HDCP-protected content to be carried, suitably encrypted, in these systems. Finally, there is a plan to specify a way to relax the highly strict PTP requirements.
Good timing is essential in production for AES67 audio and SMPTE ST 2110. Delivering timing is no longer a matter of distributing a signal throughout your facility; over IP, timing is bidirectional and forms a system which should be monitored and managed. Timing distribution has always needed design and architecture, but the detail and understanding now required are much greater. At the beginning of this talk, Andreas Hildebrand explains why we need to bother with such complexity; after all, we got along very well for many years without it! Non-IP timing signals are distributed on their own cables as part of their own system. Some parts of the chain can get away without timing signals, but where they are needed, they arrive on a separate cable. With IP, having a separate network just for timing doesn’t make sense, so whether your timing signal is analogue or digital, it needs to move into the IP domain. But how much timing accuracy do you need? Network devices already widely use NTP, which can achieve an accuracy of better than a millisecond, but Andreas explains that this isn’t enough for professional audio. At 48kHz, AES samples need an accuracy of plus or minus 10 microseconds, with 192kHz going down to 2.5 microseconds. As your timing signal has to be more accurate than the tolerance you are trying to meet, this means achieving nanosecond precision.
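The tolerances quoted make sense once you look at the sample periods themselves. A quick sketch of the arithmetic, noting that the quoted figures work out to roughly half a sample period:

```python
# One sample period at professional audio rates, in microseconds.
def sample_period_us(rate_hz: float) -> float:
    return 1e6 / rate_hz

# 48 kHz -> ~20.8 us per sample; the talk quotes a ±10 us tolerance.
# 192 kHz -> ~5.2 us per sample; the talk quotes ±2.5 us.
for rate_hz, quoted_tolerance_us in [(48_000, 10), (192_000, 2.5)]:
    period = sample_period_us(rate_hz)
    print(f"{rate_hz} Hz: period {period:.1f} us, "
          f"quoted tolerance ±{quoted_tolerance_us} us")
```

With tolerances measured in single-digit microseconds, the timing reference feeding the system has to be an order of magnitude or more better, hence the jump to nanosecond-class precision.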
Daniel Boldt from timing specialists Meinberg is the focus of this talk, explaining how we achieve this nanosecond precision. Enter PTP, the Precision Time Protocol. This is a cross-industry IEEE standard used in telecoms, power, finance and many other sectors wherever a network and its devices need a shared understanding of time. It’s not a static standard, Daniel explains, and it’s just about to see its third revision which, like the last, adds features.
Before finding out about the latest changes, Daniel explains how PTP works in the first place: how is it possible to accurately derive time down to the nanosecond over a network with variable propagation times? We see how timestamps are inserted by the network interface controller (NIC) at the last moment, allowing them to be created in hardware, which removes some of the variable delay typical of software. This happens, Daniel shows, in switches as well as in server network cards. This article will refer to the primary clock, also known as the grandmaster. Daniel steps us through the messages exchanged between the primary and secondary clocks, the interaction at the heart of the protocol. The key is that after the primary has sent a timestamp, the secondary sends its own timestamp to the primary, which replies with the time at which it received the secondary’s message. The secondary ends up with four timestamps that it can combine to determine its offset from the primary’s time and the network delay. Applying this information allows it to correct its clock very accurately.
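The four-timestamp arithmetic Daniel describes fits in a few lines. This is a minimal sketch of the standard PTP calculation, assuming a symmetric network path, which is the protocol's core assumption:

```python
# t1: primary sends Sync; t2: secondary receives it.
# t3: secondary sends Delay_Req; t4: primary receives it.
# All times here are in nanoseconds for illustration.
def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # secondary clock minus primary
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay
    return offset, delay

# Example: secondary runs 100 ns ahead, path delay is 500 ns.
offset, delay = ptp_offset_and_delay(t1=0, t2=600, t3=1000, t4=1400)
# offset = 100.0, delay = 500.0
```

Note that any asymmetry between the forward and return paths ends up hidden inside the offset estimate, which is one reason hardware timestamping and PTP-aware switches matter so much.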
Most broadcasters would prefer to have more than one grandmaster clock, but with multiple clocks, how do you choose which to sync from? Timing systems have long used strata whereby clocks are rated based on accuracy, either their internal accuracy & stability or that of the source they are synched to. This is also true for PTP and is part of the considerations in the ‘Best Master Clock Algorithm’ (BMCA). The BMCA starts by allowing a time source to assess its own accuracy and then search for better options on the network. Clocks announce themselves to the network and, by listening to other announcements, a clock can decide to become a primary clock if, for instance, it hears no announce messages at all. Devices which should never become the grandmaster can be forced never to take that role; this is a requisite for audio devices participating in ST 2110-3x.
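The heart of the BMCA is an ordered, field-by-field comparison of the datasets each clock announces. A simplified sketch of that ordering idea (the real IEEE 1588 algorithm also handles topology and per-port foreign-master records, which are omitted here):

```python
# Each announcing clock advertises a dataset; the numerically lowest
# tuple wins, compared field by field in this fixed order.
def bmca_key(clock):
    return (clock["priority1"], clock["clock_class"],
            clock["accuracy"], clock["variance"],
            clock["priority2"], clock["identity"])

announcements = [
    # A GPS-disciplined grandmaster candidate (low clock_class wins).
    {"priority1": 128, "clock_class": 6, "accuracy": 0x21,
     "variance": 0x4E5D, "priority2": 128, "identity": "aa:aa"},
    # A free-running device that should lose the election.
    {"priority1": 128, "clock_class": 248, "accuracy": 0xFE,
     "variance": 0xFFFF, "priority2": 128, "identity": "bb:bb"},
]
best = min(announcements, key=bmca_key)
```

Setting `priority1` to its maximum value is the usual way to guarantee a device never wins the election, which is how the "never become grandmaster" behaviour mentioned above is typically configured.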
Passing PTP around the network takes some care and is most easily done by using switches which understand PTP. These switches either run a ‘boundary clock’ or act as ‘transparent clocks’. Daniel explores both scenarios, explaining how a boundary-clock switch runs multiple primary and secondary clocks depending on what is connected to each interface. We also see the work transparent-mode switches do behind the scenes to maintain timing precision. In summary, Daniel characterises boundary clocks as good for hierarchical systems, scaling well but requiring continuous monitoring, whereas transparent clocks are simpler to deploy and need minimal monitoring. The main issue with transparent clocks is that they don’t scale well, as all your timing messages still go back to one main clock, which could become overwhelmed.
SMPTE ST 2022-7 has been a very successful standard, as its reliance on nothing more than RTP has allowed it to be widely applied to compressed and uncompressed IP flows. It is often used in 2110 networks, too, where two separate networks are run and brought together at the receiving device. That device, on a packet-by-packet basis, is free to derive its audio/video stream from either network. This requires, however, exactly the same timing on both networks, so Daniel looks at an example diagram showing how PTP is shared between them.
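The packet-by-packet reconstruction can be illustrated with a toy model: the receiver keeps the first copy of each RTP sequence number to arrive from either network and discards duplicates. This sketch is purely illustrative; real ST 2022-7 receivers do this in hardware within a bounded skew window between the two paths.

```python
# Toy hitless merge: packets are (rtp_sequence_number, payload) pairs.
def merge_2022_7(red_path, blue_path):
    seen, out = set(), []
    # treat the concatenation as arrival order across both networks
    for seq, payload in red_path + blue_path:
        if seq not in seen:
            seen.add(seq)
            out.append((seq, payload))
    return sorted(out)

red = [(1, "a"), (3, "c")]                 # packet 2 lost on the red network
blue = [(1, "a"), (2, "b"), (3, "c")]      # blue network intact
assert merge_2022_7(red, blue) == [(1, "a"), (2, "b"), (3, "c")]
```

The reason identical timing on both networks matters is visible even in this toy: the duplicates are only interchangeable if both paths carry the same stream with the same timestamps.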
PTP’s still evolving and in this next section, Daniel takes us through some of the coming improvements which are also outlined at Meinberg’s blog. These are profile isolation, multi-domain clocks, security improvements and more.
Andreas takes the final section of the webinar to explain how we use PTP in media networks. All receivers will have the same clock, which could be derived from GPS, removing the need to distribute PTP between sites. ST 2110 is based on RTP, which requires a timestamp to be added to every packet delivered to the network; the RTP header wraps the media payload and carries a timestamp derived from the media clock counter.
Andreas looks at how accurate RTP delivery is achieved: dealing with offset values, populating the timestamp from the PTP clock for real-time streams, and how the playout delay is calculated from the link offset. Finally, he shows the relatively simple process of synchronisation at the playout device. With all the timestamps in the system, synchronising playback of audio, video and metadata using buffers can be achieved fairly easily. Unfortunately, timestamps are easily destroyed by secondary processing (for instance, loudness adjustment of an audio stream), and if this happens, synchronisation at the receiver is broken. Whilst this will be addressed by out-of-band messaging in future standards, for now it is managed by a broadcast controller which can take delay information from processing stages and distribute it to receivers.
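The buffer-based alignment can be pictured with a small sketch. This is a simplification of the mechanism described, assuming all streams share the PTP timeline and using an arbitrary illustrative link offset:

```python
# Each essence packet carries an RTP-derived timestamp (seconds, on the
# shared PTP timeline). Playout schedules every packet at
# timestamp + link_offset, so audio, video and metadata re-align
# without ever being compared to each other directly.
def playout_time(rtp_timestamp_s: float, link_offset_s: float) -> float:
    return rtp_timestamp_s + link_offset_s

video = [(0.00, "V0"), (0.02, "V1")]
audio = [(0.00, "A0"), (0.02, "A1")]
LINK_OFFSET = 0.05   # illustrative: must cover network + buffering delay

schedule = sorted(
    (playout_time(ts, LINK_OFFSET), name) for ts, name in video + audio
)
# V0/A0 play together at ~0.05 s, V1/A1 together at ~0.07 s.
```

This also makes clear why a processor that rewrites or drops timestamps breaks synchronisation: the shared timeline is the only thing holding the essences together.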