Video: Reinventing Intercom with SMPTE ST 2110-30

Intercom systems form the backbone of any broadcast production environment. There have been great strides made in the advancement of these systems, and matrix intercoms are a very mature solution now, with partylines, IFBs and groups, a wide range of connectivity options and easy signal monitoring. However, they have flaws as well. The initial cost is high and there is a lack of flexibility, as system size is limited by the matrix port count. It is possible to trunk multiple frames, but it is difficult, expensive and takes up rack space. Moreover, everything cables back to a central matrix, which can be a single point of failure.

In this presentation, Martin Dyster from The Telos Alliance looks at the parallels between the emergence of Audio over IP (AoIP) standards and the development of products in the intercom market. He starts with a short history of Audio over IP protocols, including Telos Livewire (2003), Audinate Dante (2006), Wheatstone WheatNet (2008) and ALC NetworX Ravenna (2010). With all these protocols available, the question of interoperability arose: if you try to connect equipment using two different AoIP protocols, it simply won’t work.

In 2010 the Audio Engineering Society formed the X192 Working Group, which was the driving force behind AES67. This standard was ratified in 2013 and allows audio equipment from different vendors to be interconnected. In 2017 SMPTE adopted AES67 as the audio format for the ST 2110 suite of standards.

Audio over IP replaces the idea of connecting all devices “point-to-point” with multicast IP flows – all devices are connected via a common fabric, and audio routes are simply messages that go from one device to another. Martin explains how Telos were inspired by this approach to move away from matrix-based intercoms and create a distributed system in which there is no central core and the DSP processing is built into the intercom panels. Each panel contains audio mix engines and a set of AES67 receivers and transmitters which use multicast IP flows. Any ST 2110-30 / AES67 compatible device present on the network can connect with the intercom panels without an external interface; analogue and other baseband audio needs to be converted to ST 2110-30 / AES67 first.
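
To make the idea concrete, here is a minimal sketch in Python of what one of those AES67-style transmitters does on the wire: 24-bit PCM wrapped in RTP and sent to a multicast group. The group address, port, payload type and channel count are illustrative assumptions, not details from the talk.

    import socket
    import struct
    import time

    MCAST_GRP = "239.69.0.1"   # hypothetical multicast group
    MCAST_PORT = 5004          # port commonly used for RTP audio
    PAYLOAD_TYPE = 97          # dynamic RTP payload type, assumed
    SAMPLE_RATE = 48000
    SAMPLES_PER_PACKET = 48    # 1 ms packet time at 48 kHz
    CHANNELS = 2

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)

    seq, ts, ssrc = 0, 0, 0x12345678
    for _ in range(1000):      # one second of audio
        # 12-byte RTP header: version 2, no padding/extension/CSRC
        header = struct.pack("!BBHII", 0x80, PAYLOAD_TYPE,
                             seq & 0xFFFF, ts & 0xFFFFFFFF, ssrc)
        # Silence as placeholder audio: 3 bytes (24 bits) per sample per channel
        payload = bytes(SAMPLES_PER_PACKET * CHANNELS * 3)
        sock.sendto(header + payload, (MCAST_GRP, MCAST_PORT))
        seq += 1
        ts += SAMPLES_PER_PACKET
        time.sleep(SAMPLES_PER_PACKET / SAMPLE_RATE)  # crude pacing, not PTP-locked

A receiving panel simply joins the multicast groups it wants, so an audio “route” is nothing more than a subscription – which is what removes the need for a central matrix.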

Martin finishes his presentation by highlighting the advantages of AoIP intercom systems, including lower entry and maintenance costs, easy expansion (multi-studio or even multi-site) and resilient operation (no single point of failure). Moreover, the adoption of multicast IP audio flows removes the need for DAs, patch bays and centralised routers, which reduces cabling and saves rack space.

Watch now!

Download the slides.

If you want to refresh your knowledge of AES67 and ST 2110-30, we recommend the Video: Deep Dive into SMPTE ST 2110-30, 31 & AES 67 Audio presentation by Leigh Whitcomb.

Speaker

Martin Dyster
VP Business Development
The Telos Alliance

Video: NMOS and ST 2110 Pro AV Roadmap

ProAV and Broadcast should be best buddies, but relatively few companies sell into both. This is because there are legitimate differences in what the two industries need. That being said, interoperability is a helpful end goal for any industry. Whilst proprietary solutions can help kickstart new technologies and be a positive disruptive force, standardisation is almost always beneficial to the industry in the medium to long term.

Whilst broadcast is happy to live with 4:2:2 colour subsampling in much of its workflow, then deliver in 4:2:0, this is often not an option for ProAV, which needs to take full 4:4:4 4K at 60fps and throw it on a monitor. Whilst 4:4:4 has technically been possible over SDI for a while, adoption even in the broadcast market has been small.

There are many opportunities for both industries to learn from each other, but it’s hard to overstate the difference between the SMPTE ST 2110 and NMOS approach to media over IP and the SDVoE model. The former relies on detailed documentation, published for anyone willing to buy the standards and implement them in any way they see fit, be that in software or hardware. The latter specifies a chip with a documented API that does all of the heavy lifting, with no option for self-implementation. The fact that the same chip is used everywhere provides the guarantee of interoperability.

One technology which has bridged the gap between ProAV and broadcast is NDI from Vizrt’s NewTek, which uses the same binary software implementation wherever it’s deployed, thus providing, as with SDVoE, the interoperability required. The same is true of SRT, although its developers have just released their first draft for IETF standardisation.

In this talk, PESA CTO Scott Barella surveys the many existing standards within ProAV and examines the industry’s needs, such as HDCP. Whilst HDCP, the High-bandwidth Digital Content Protection mechanism, has often been grappled with by broadcasters, it is at least a standard, and one that any vendor will have to deal with if they want their equipment to be widely used in the industry. Similarly, full-frame-rate, full-colour UHD is not something many boxes can simply handle.

The use of PTP within SMPTE’s ST 2110 suite works perfectly in the studio, is arguably not necessary in much of ‘the cloud’ and is widely considered too complex for a ProAV environment. Scott explains his thoughts on how to simplify it, making it more practical whilst taking the different use cases into account.

Secondary interfaces are crucial in much of ProAV, where USB, RS-232 serial and GPI/GPO need to be transported along with the media. And whilst security and encryption are increasingly important for the broadcast industry as it comes to grips with the fact that all broadcasters are vulnerable to hacking attempts, its requirements are not as stringent as the military’s, which drives a notable part of the ProAV market. All of these aspects are being considered as part of the ongoing work that Scott is involved with.

Watch now!

Download the presentation.
Speaker

Scott Barella
CTO, PESA
AIMS co-chair.

Video: CPAC Case Study – Replacement of a CWDM System with an IP System

For a long time now, broadcasters have been using dark fibre and CWDM (Coarse Wavelength Division Multiplexing) for the transmission of multiple SDI feeds to and from remote sites. As an analogue process, WDM is based on a concept called Frequency Division Multiplexing (FDM): the bandwidth of a fibre is divided into multiple channels, each occupying a part of the overall frequency spectrum and therefore operating at a different optical wavelength. All these wavelengths (i.e. colours) of laser light are combined and separated again using passive prisms and optical filters.
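
For reference, the standard CWDM grid (ITU-T G.694.2) defines 18 channels spaced 20 nm apart; a short Python sketch of the arithmetic:

    # ITU-T G.694.2 CWDM grid: 18 channels, 20 nm apart, 1271-1611 nm.
    # Each SDI feed is carried on its own wavelength.
    for n in range(18):
        print(f"channel {n + 1}: {1271 + 20 * n} nm")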

In this presentation Roy Folkman from Embrionix shows the advantages that can be achieved by moving from CWDM technology to a real-time media-over-IP system, using a recent project for CPAC (Cable Public Affairs Channel) in Canada as an example. The scope of this project was to replace an aging CWDM system connecting government buildings and the CPAC studios, which could carry 8 SDI signals in each direction over a single dark fibre pair. The first idea was to use a newer CWDM system which would allow up to 18 SDI signals, but it quickly became apparent that an IP system could be implemented at a similar cost.

As this was an SDI replacement, SMPTE ST 2022-6 was used in this project, with an upgrade path to ST 2110 possible. Roy explains that, from CPAC’s point of view, using ST 2022-6 was a comfortable first step into real-time media over IP which allowed for cost reduction and simplification (no PTP generation and distribution required, and re-use of existing SDI frame syncs and routing with audio breakaway capability). The benefits of using IP were increased capacity, integrated routing (in-band control) and ease of future expansion.

A single 1RU 48-port switch on each side and a single dark fibre pair gave the system a capacity of 48 HD-SDI signals in each direction. SFP gateways in small Embrionix enclosures were used to convert the cameras’ SDI outputs to IP over fibre, which also allowed the distance between the cameras and the switch to be extended beyond the SDI cabling limit of around 100 metres. SFP gateway modules converting IP back to SDI were installed directly in the switches at both sites.
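
A back-of-envelope check of that capacity claim; the per-flow figure below is a commonly quoted approximation for ST 2022-6, not a number from the talk:

    # ST 2022-6 wraps the full 1.485 Gb/s HD-SDI signal plus RTP/UDP/IP
    # and Ethernet overhead, commonly quoted at roughly 1.57 Gb/s per flow.
    ST2022_6_GBPS = 1.57
    FLOWS = 48

    aggregate = FLOWS * ST2022_6_GBPS
    print(f"aggregate per direction: {aggregate:.1f} Gb/s")  # ~75 Gb/s

Around 75 Gb/s per direction is an amount a single 100GbE uplink optic could carry, for example.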

Roy finishes his presentation with possible future expansions of the system, such as migration to ST 2110 (a firmware upgrade for the SFP modules), increased capacity (by adding dark fibres and switches), integration of SDI and IP routing under a unified control system (NMOS), remote camera control, and the addition of processing functions to the SFP modules (multiviewers, up/down/cross-conversion, compression).

Watch now!

Download the slides.

Speaker

Roy Folkman 
VP of Sales
Embrionix

Video: The Basics of SMPTE ST 2110 in 60 Minutes

SMPTE ST 2110 is a growing suite of standards detailing uncompressed media transport over networks. Now at 8 documents, it’s far more than just ‘video over IP’. This talk looks at the new ways that video can be transported, deals with PTP timing and the creation of ‘SDPs’, and takes a thorough look at all the documents.

Building on this talk from Ed Calverley, which explains how we can use networks to carry uncompressed video, Wes Simpson goes through all the parts of the ST 2110 suite, explaining how they work and interoperate, as part of the IP Showcase at NAB 2019.

Wes starts by highlighting the new parts of 2110: the overview document, which gives a high-level view of all the documents in the suite; the addition of constant bit-rate compressed video carriage; and the recommended practice for splitting a single video and sending it over multiple links. The latter two are detailed later in the talk.

As highlighted next, SMPTE ST 2110 is fundamentally different in that it splits up the separate parts of the signal (i.e. video, audio and metadata) so they can be transferred and processed independently. This is a great advantage: metadata can be read without having to ingest large amounts of video, so the networking and processing requirements are much lighter than they would otherwise be. However, once the essences are separated, putting them back together without synchronisation issues is tricky.

ST 2110-10 deals with timing – knowing which packets of one essence are associated with which packets of another essence at any particular point in time. It does this with PTP, which is detailed in IEEE 1588 and also in SMPTE ST 2059-2. Two standards are needed because IEEE defined how to derive and carry timing over the network, and SMPTE then detailed how to match PTP times to the phases of the media. Wes highlights that care is needed when using PTP with AES67, as the audio standard requires specific timing parameters.
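
A sketch of the idea behind that split of responsibilities: PTP supplies a common wall clock, and SMPTE defines every media clock to be zero at the PTP epoch, so a receiver can compute the expected RTP timestamp of any essence straight from PTP time. The time value below is hypothetical.

    # Expected 32-bit RTP timestamp of an essence at a given PTP time,
    # given media clocks aligned to the PTP epoch (ST 2059-1/-2).
    def rtp_timestamp(ptp_seconds: float, clock_rate: int) -> int:
        return int(ptp_seconds * clock_rate) % (1 << 32)

    now = 1_700_000_000.0              # hypothetical PTP time in seconds
    print(rtp_timestamp(now, 90_000))  # ST 2110-20 video uses a 90 kHz clock
    print(rtp_timestamp(now, 48_000))  # ST 2110-30 audio uses the 48 kHz sample clock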

The next section moves into the video portion of 2110, dealing with video encapsulation on the network, pixel grouping and the headers needed for the packets. Wes then spends some time walking us through calculating the bitrate of a stream. Whilst for most people a look-up table of standard formats would suffice, understanding how to calculate the throughput develops a very good understanding of the way 2110 is carried on the wire, as you have to take note not only of the video format itself (4:2:2 10-bit, for instance) but also of the pixel groupings and the RTP, UDP and IP headers.
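
As a worked example of the kind of calculation Wes walks through, here is the on-wire rate of a 1080p59.94, 4:2:2, 10-bit stream; the packet payload size and header layout are simplified assumptions for illustration.

    WIDTH, HEIGHT = 1920, 1080
    FPS = 60000 / 1001                  # 59.94 Hz
    PGROUP_BYTES, PGROUP_PIXELS = 5, 2  # 4:2:2 10-bit: 2 pixels in 5 bytes

    frame_bytes = WIDTH * HEIGHT // PGROUP_PIXELS * PGROUP_BYTES  # 5,184,000
    payload_per_packet = 1200           # assumed video payload per packet
    per_packet_overhead = 12 + 8 + 20 + 14  # RTP + UDP + IPv4 + Ethernet headers

    packets_per_frame = -(-frame_bytes // payload_per_packet)     # ceiling division
    wire_bytes = frame_bytes + packets_per_frame * per_packet_overhead
    print(f"{wire_bytes * 8 * FPS / 1e9:.2f} Gb/s")               # roughly 2.6 Gb/s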

Timing of packets on the wire isn’t anything new – it matters for compressed applications too – and here it is just as important to ensure that packets are properly paced on the wire. That is to say, if you need to send 10 packets, you send them one at a time with equal time between them, not all at once right next to each other. Such ‘micro-bursting’ can cause problems not only for the receiver, which then needs more buffering, but also, when the stream is mixed with others on the network, for the efficiency of the routers and switches, leading to jitter and possibly dropped packets. ST 2110-21 sets the standards governing this network pacing for the whole of the 2110 suite.
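
The arithmetic of even pacing is simple. This sketch corresponds to 2110-21’s ‘linear’ sender model, which spreads packets across the whole frame period; the packet count is taken from the bitrate example above.

    FPS = 60000 / 1001
    PACKETS_PER_FRAME = 4320        # 1080p59.94 4:2:2 10-bit, ~1200-byte payloads

    frame_period = 1 / FPS          # ~16.68 ms
    gap = frame_period / PACKETS_PER_FRAME
    print(f"one packet every {gap * 1e6:.2f} microseconds")  # ~3.86 us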

Referring back to his earlier warning regarding timing and AES67, Wes now goes into detail on the 2110-30 standard, which describes the use of audio in these uncompressed workflows. He explains how sample rates and packet times determine how many audio channels can be carried, with some configurations allowing 64 channels in one stream rather than the typical 8.
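
The trade-off comes down to fitting everything in a standard Ethernet frame. A sketch of the arithmetic: the limits below are raw MTU maths, while the conformance levels in 2110-30 cap the actual counts at 8 channels for 1 ms packets and 64 channels for 125 µs packets.

    # 24-bit (L24) PCM at 48 kHz costs 3 bytes per sample per channel,
    # and the whole packet must fit a 1500-byte Ethernet payload.
    MTU_PAYLOAD = 1500 - (20 + 8 + 12)   # minus IPv4 + UDP + RTP headers
    BYTES_PER_SAMPLE = 3

    for packet_time_us, samples in ((1000, 48), (125, 6)):
        max_ch = MTU_PAYLOAD // (samples * BYTES_PER_SAMPLE)
        print(f"{packet_time_us} us packets: up to {max_ch} channels fit")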

‘Essences’, rather than ‘media’, is a word often heard when talking about 2110. This is an acknowledgement that metadata is just as important as the video and audio; it is sent separately, as described by 2110-40. Wes explains how captions/subtitles, ad triggers, timecode and more can be encapsulated in the stream as ancillary ‘ANC’ packets.

2110-22 is an exciting new addition as it enables the use of compressed video such as VC-2 and JPEG XS, ultra-low-latency codecs that can reduce the video bitrate to a half, a quarter or less. As described in this talk, the ability to create workflows on a single IP infrastructure that move seamlessly into and out of compressed video is enabling remote production across countries, allowing equipment to be centralised with people and control surfaces elsewhere.

Noted as ‘forthcoming’ by Wes, but since published, RP 2110-23 adds back a feature that was lost in the move from 2022-6 to 2110: the ability to send a UHD feed as 4x HD feeds. This can be useful when UHD is the production format but multiviewers only need to work in HD mode for monitoring. Wes explains the different modes available, illustrated in the sketch below. The talk finishes by looking at RTP timestamps and SDPs.
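
The two splitting schemes usually discussed are quadrant division and two-sample interleave; a sketch of the difference (the mode names and details here are descriptive, not quoted from the RP):

    def quadrant(x, y, w=3840, h=2160):
        """Which of the 4 HD streams carries UHD pixel (x, y) in quadrant mode."""
        return (y >= h // 2) * 2 + (x >= w // 2)

    def interleave(x, y):
        """Which stream carries pixel (x, y) in 2x2 sample-interleave mode."""
        return (y % 2) * 2 + (x % 2)

    # In quadrant mode an HD monitor shows one corner of the picture;
    # in interleave mode each stream is a complete half-resolution image.
    print(quadrant(3839, 0))    # top-right corner -> stream 1
    print(interleave(3839, 0))  # odd column, even row -> stream 1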

Watch now!
The slides for this talk are available here
Speakers

Wes Simpson
President,
Telecom Product Consulting