Video: Live Media Production – The Ultimate End Game

A lot of our time on this website is devoted to understanding the changes we are going through now, but we don’t adopt technology for the sake of it. Where’s this leading, and what work is going on now to forge our path? Whilst SMPTE ST 2110 and the associated specifications aren’t yet a mature technology in the sense that SDI is, we’re past the early-adopter phase and we can see which of the industry’s needs aren’t yet met.

Andy Rayner from Nevion is here to help us navigate the current technology space and understand the future he and Nevion envision. The beginning of the video shows the big change in process from the workflows of the 90s, when the TV station moved to sports events, to now, when we bring the event to the broadcaster: a light connectivity truck turns up and deploys cameras at the event, leaving most people either at home or back at base doing the production there. Andy has been involved in a number of implementations enabling this, such as at Discovery’s Eurosport, where the media processing is done in two locations separate from the production rooms around Europe.


Generalising from the Discovery case study, Andy shows a vision of how many companies will evolve their workflows, which includes using 5G, public and private clouds as appropriate, and control surfaces being at home. To get there, Andy lays out the work within AMWA and SMPTE creating the specifications and standards we need. He then shows how, with the increasing use of IT in live production, the already IT-based NLE workflows are able to integrate much better.

Looking to the future, Andy explains the ongoing work to specify a standard way of getting video into and out of the cloud, including specifying a way of carrying ST 2110 on the WAN, helping RIST and formalising the use of JPEG XS. Andy anticipates a more standardised future in which a best-of-breed system is possible down to individual logical components, so functions like ‘video keyer’ and ‘logo insertion’ could be provided by separate software yet integrate seamlessly. Lastly, Andy promises us that work is underway to improve timing within ST 2110 and its associated workflows.

Watch now!
Speaker

Andy Rayner
Chief Technologist
Nevion

Video: Insight into Current Trends of IP Production & Cloud Integration

When we look at the parts of our workflows that work well, we usually find standards underneath. SDI is pretty much a solved problem and has been delivering video since before the 90s, albeit with better reliability as time has gone on. MPEG Transport Streams are another great example of a standard that has achieved widespread interoperability. These are just two examples given by John Mailhot from Imagine Communications as he outlines the standards which have built the broadcast industry to what it is today, or perhaps to what it was in 2005. By looking at past successes, John seeks to describe the work that the industry should be doing now and into the future as technology and workflows evolve at a pace.

John’s point is that in the past we had some wildly successful standards in video and video transport. For logging, we relied on IT-based standards like SNMP and Syslog, and for control protocols the wild west was still in force, with some de facto standards such as Pro-Bel’s SW-P-08 router protocol and the TSL UMD protocol dominating their niches.


The industry is now undergoing a number of transformations simultaneously. We are adopting IP-based transport, both compressed and uncompressed (though John quickly points out SDI is still perfectly viable for many). We are moving many workloads to the cloud, and we are slowly starting to increase our supported resolutions along with moving some production to HDR. All of this work, to be successful, should be based on standards, John says. And there are successes in there, such as AMWA’s NMOS specifications, which are the first multi-vendor, industry-wide control protocol. Technically NMOS is not a standard, but in this case the effect is much the same. John feels that the growth of our industry depends on us standardising more control protocols in the future.

John spends some time looking at how the moves to IP, UHD, HDR and cloud have played into the live production and linear playout parts of the broadcast chain. Live production, as we’ve heard previously, is starting to embrace IP now, lagging playout deployments. Playout, in turn, usually lags production in UHD and HDR support, since it’s more important to acquire video in UHD & HDR now, even if you can’t yet transmit it, to maximise its long-term value.

John finishes by pointing out that Moore’s law’s continuation may not be so clear in CPUs, but it’s certainly in effect within optics and network switches and routers. Over the last decade, switches have gone from 10 gig to 50, to 100 and now to 400 gig. This long-term cost reduction should be baked into the long-term planning of companies embarking on an IP transformation project.

Watch now!
Speaker

John Mailhot
CTO,
Imagine Communications

Video: AES67 Beyond the LAN

It can be tempting to treat a good-quality WAN connection like a LAN, but even if it has a low ping time and doesn’t drop packets, when it comes to professional audio like AES67 you can’t help but uncover the differences. AES67 was designed for transmission over short distances, assuming extremely low latency and low jitter. However, there are ways to deal with this.

Nicolas Sturmel from Merging Technologies is working as part of the AES SC-02-12M working group, which since the summer has been defining the best ways of working to enable easy use of AES67 on the WAN. The aims of the group are to define what you should expect to work with AES67, how you can improve your network connection, and to give guidance to manufacturers on further features needed.

WANs come in a number of flavours: a fully managed WAN, as many larger broadcasters have, which is completely under their control; WANs operated under SLA by third parties, which offer less control but may reduce operating costs; and, cheapest of all, the internet.

He starts by outlining the fact that AES67 was written to expect short links on a private network that you completely control, which causes problems when using the WAN or internet, with long-distance links on which your bandwidth or choice of protocols can be limited. If you’re contributing into the cloud, then you have an extra layer of complication on top of the WAN, and virtualised computers can be another place where jitter and uncertain timing creep in.


The good news is that you may not need to use AES67 over the WAN at all. Unless you need precise timing (for lip-sync, for example) with PCM quality and latencies from 250ms down to as little as 5ms, do you really need AES67 rather than another protocol such as ACIP, he asks. The problem is that any ping on the internet, even to somewhere fairly close, can easily have a round-trip time varying between, say, 16 and 40ms. This means you’re guaranteed 8ms of one-way delay, but any one packet could be as late as 20ms. This variation in timing is known as Packet Delay Variation (PDV).
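To make that arithmetic concrete, here’s a small sketch (my own illustration, not from the talk) that derives one-way delay bounds and PDV from measured round-trip times, assuming a symmetric path:

```python
# Estimate one-way delay bounds and Packet Delay Variation (PDV)
# from round-trip ping times, assuming the path is symmetric so
# one-way delay is roughly RTT / 2.
def delay_stats(rtt_ms):
    one_way = [r / 2 for r in rtt_ms]
    lo, hi = min(one_way), max(one_way)
    return lo, hi, hi - lo  # min delay, max delay, PDV

# RTT samples spanning the 16-40ms range mentioned above:
lo, hi, pdv = delay_stats([16, 22, 31, 40, 18])
print(lo, hi, pdv)  # 8.0 20.0 12.0
```

A receive buffer has to absorb at least the PDV, so the safe playout delay here is the 20ms worst case, not the 8ms best case.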

Not only do we need to find a way to transmit AES67, but also PTP. The Precision Time Protocol has ways of coping with jitter and delay, but these don’t work well on WAN links, where the delay in one direction may differ from the delay in the other. PTP also isn’t built to deal with the higher delay and jitter involved. PTP over the WAN can be done and is a way to deliver a service, but using a GPS receiver at each location is a much better solution, hampered only by cost and one’s ability to see enough of the sky.
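The asymmetry problem can be illustrated with a sketch (my own, using the standard two-way time-transfer formula rather than anything from the talk): PTP estimates the slave’s clock offset as ((t2 − t1) − (t4 − t3)) / 2, which is only correct when the forward and reverse delays are equal.

```python
# PTP's offset estimate assumes symmetric path delay; on an
# asymmetric WAN path, half the delay difference shows up as a
# clock error even when the slave clock is perfect.
def ptp_offset(t1, t2, t3, t4):
    # t1: master sends Sync, t2: slave receives it,
    # t3: slave sends Delay_Req, t4: master receives it.
    return ((t2 - t1) - (t4 - t3)) / 2

d_fwd, d_rev = 0.030, 0.010   # 30ms out, 10ms back (assumed figures)
t1 = 0.0
t2 = t1 + d_fwd               # slave clock is actually perfect
t3 = t2 + 0.001
t4 = t3 + d_rev

error = ptp_offset(t1, t2, t3, t4)
print(error)  # ~0.01: a 10ms clock error, i.e. (d_fwd - d_rev) / 2
```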

The internet can lose packets. Given a few hours, the internet will nearly always lose packets. To get around this problem, Nicolas looks at using FEC, whereby you constantly send redundant data. FEC can send up to around 25% extra data so that, if any is lost, the extra information can be leveraged to determine the missing values and reconstruct the stream. Whilst this is a solid approach, computing the FEC adds delay, and the extra data being constantly sent adds a fixed uplift to your bandwidth need. For circuits that have very few issues this can seem wasteful, but a fixed percentage can also be advantageous for circuits where a predictable bitrate is much more important. Nicolas also highlights that RIST, SRT or ST 2022-7 are other methods that can also work well. He talks about these at greater length in his talk with Andreas Hildebrand.
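The parity idea behind FEC can be seen in a minimal sketch (my own illustration; real schemes such as SMPTE ST 2022-5 are more sophisticated): XOR a group of packets into one extra parity packet, and any single lost packet in the group can be rebuilt. One parity packet per four data packets is the fixed 25% overhead mentioned above.

```python
from functools import reduce

# XOR-parity FEC sketch: one redundant packet per group lets the
# receiver rebuild any single lost packet in that group.
def xor_parity(packets):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

group = [b"aud1", b"aud2", b"aud3", b"aud4"]   # 4 data packets
fec = xor_parity(group)                        # 1 parity packet = 25% extra

# Packet index 2 is lost in transit; XOR the survivors with the
# parity packet to recover it.
survivors = [group[0], group[1], group[3]]
rebuilt = xor_parity(survivors + [fec])
print(rebuilt)  # b'aud3'
```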

Nicolas finishes by summarising: your solution will need to be sent over unicast IP, possibly in a tunnel, with each end locked to GNSS and buffers deep enough to cope with the jitter, and, perhaps most importantly, it should be the output of a workflow analysis to find out which tools you need to deploy to meet your actual needs.

Watch now!
Speaker

Nicolas Sturmel
Network Specialist,
Merging Technologies

Video: Native Processing of Transport Streams to/from Uncompressed IP

As much as the move to IP hasn’t been trivial for end-users, it’s been all the harder for vendors who have had to learn all the same lessons as end-users, but also press the technology into action. Whilst broadcast is building on the expertise, success and scale of the IT industry, we are also pushing said technology to its limits and, in some cases, in ways not yet seen by the IT industry at large.

Kieran Kunhya from encoder and decoder vendor Open Broadcast Systems explains to us the problems faced in making this work for software-based systems. As we heard earlier this week on The Broadcast Knowledge, one benefit of moving functions away from bespoke hardware is the ability to move your workflows more easily into data centres or even the cloud. Indeed, flexibility is one important factor for OBS, which is why they are a software-first company. Broadcast workflows have traditionally been static and equipment still, today, tends to do only one thing, so a move to software removes the dependence on specific, custom chips.

The move to IP has many benefits, as Kieran outlines next. In today’s pandemic, a big benefit is simply not needing a person to go and move an SDI cable. But freeing ourselves from SDI, we hear, is about more than just that. Kieran acknowledges that SDI achieves ultra-low delay, in the realm of microseconds, to move gigabits of video, but this comes at a high price. Each cable carries only one signal and only in one direction, and, more critically, routers top out at 1152×1152 in size. Whilst this does seem like a large number, larger operators are finding it simply isn’t enough as they continue both to expand their offerings and to merge (compare Comcast’s NBC and Sky businesses).

The industry has solved many of these problems by looking towards higher-bandwidth, more scalable technologies for video. The routing capacity of IT switches can be in the terabits, with each port running at 100 or 400Gbps. Each cable works bidirectionally and typically carries multiple signals. This not only leaves the infrastructure future-proofed against moves to, say, 8K video but enables much denser routing of signals, well above 1152×1152. The result of Kieran’s work is 64-channel encoding/decoding in 2U, which can replace up to a full rack of traditional equipment.
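As a rough illustration of that density (my own back-of-the-envelope numbers, not Kieran’s): an uncompressed HD ST 2110-20 flow needs on the order of 1.5Gbps, so a single 100GbE port can carry dozens of flows where an SDI cable carries exactly one.

```python
# Rough flows-per-port arithmetic for uncompressed HD over IP.
GBPS = 1e9
hd_flow_bps = 1.5 * GBPS    # ~1.5Gb/s per uncompressed HD flow (assumed)
port_bps = 100 * GBPS       # one 100GbE port
headroom = 0.8              # keep 20% spare capacity (assumed policy)

flows_per_port = int(port_bps * headroom // hd_flow_bps)
print(flows_per_port)  # 53 flows on one cable, versus 1 per SDI cable
```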

This success hasn’t come without a lot of work. The timings are very tight, and getting standard servers to deliver 100% of packets onto a network within 20 microseconds takes hard-won knowledge. Kieran explains that two of the keys to success are kernel bypass techniques, where he’s able to write directly into the memory space the NIC uses rather than taking the data via the Linux kernel, and using SIMD CPU instructions directly. The latter can speed up code by up to twenty times compared to plain C and only needs to be done once per CPU generation.

Once these techniques are harnessed, OBS still has to deal with a variety of unusual pixel formats, the difficulty of reference counting with many small buffers, and uncompressed audio, which has a low bitrate but short 125-microsecond packets. Couple that with other equipment that doesn’t verify checksums, doesn’t use timestamps and doesn’t necessarily handle 16-channel flows, and making this work is tough, but Kieran’s very clear that the benefits of uncompressed IP video are worth it.
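The audio point is worth unpacking with a quick sketch (my own figures, assuming AES67/ST 2110-30 style 125-microsecond packets of 24-bit, 48kHz PCM): the bitrate is tiny, but the packet rate is punishing.

```python
# Why uncompressed audio is awkward: 125us packets mean 8000 tiny
# packets per second per flow, each carrying just a few samples.
SAMPLE_RATE = 48_000        # Hz
CHANNELS = 16
BYTES_PER_SAMPLE = 3        # 24-bit PCM
PACKET_TIME = 125e-6        # seconds per packet

samples_per_packet = round(SAMPLE_RATE * PACKET_TIME)
payload_bytes = samples_per_packet * CHANNELS * BYTES_PER_SAMPLE
packets_per_second = round(1 / PACKET_TIME)
print(samples_per_packet, payload_bytes, packets_per_second)  # 6 288 8000
```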

Watch now!
Speaker

Kieran Kunhya
Founder & CEO
Open Broadcast Systems