Video: What is NMOS? with a Secure Control Case Study

Once you’ve implemented SMPTE ST 2110’s suite of standards on your network, you’ve still got all your work ahead of you to implement large-scale workflows. How are you going to discover new devices? How will you make or change connections between devices? How will you associate audio streams with the video? Creating a functioning system requires a whole ecosystem of control protocols and information exchange, which is exactly what AMWA, the Advanced Media Workflow Association, has been working on for many years now.

Jed Deame from Nextera introduces the main specifications that have been developed to work hand-in-hand with uncompressed workflows. All prefixed with IS-, which stands for ‘Interface Specification’, they are IS-04, IS-05, IS-08, IS-09 and IS-10. Between them they allow you to discover new devices, create connections between them, manage the association of audio with video and manage system-wide information. Jed goes through each of these in turn. The only relevant ones which are skipped are IS-06, which allows devices to communicate northbound to an SDN controller, and IS-07, which manages GPI and tally information.

Jed sets the scene by describing an example ST-2110 setup with devices able to join a network, register their presence and be quickly involved in routing events. He then looks at the first specification in today’s talk, NMOS IS-04. IS-04’s job is to provide an API for nodes (cameras, monitors, etc.) to use when they start up to talk to a central registry and lodge some details for further communication. The registry contains a GUID for every resource, which covers nodes, devices, sources, flows, senders and receivers. IS-04 also provides a query API for controllers (for instance a control panel).
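As a rough illustration of the registration step Jed describes, here's a minimal sketch of the payload a node might lodge with the registry. The envelope shape and endpoint paths follow the IS-04 Registration API, but the node details and registry address are hypothetical placeholders.

```python
# Sketch of an IS-04 node registration payload. The {"type", "data"}
# envelope and the endpoint paths in the comments follow the IS-04
# Registration API; the node details below are illustrative only.
import uuid

def registration_payload(resource_type, resource_data):
    """Wrap a resource in the envelope the Registration API expects."""
    return {"type": resource_type, "data": resource_data}

node_id = str(uuid.uuid4())  # every resource is identified by a GUID
node = {
    "id": node_id,
    "version": "1623674545:000000000",  # TAI seconds:nanoseconds
    "label": "Camera 1",
    "href": "http://camera1.example/x-nmos/node/v1.3/",  # placeholder
    "api": {"versions": ["v1.3"]},
}

payload = registration_payload("node", node)
# An HTTP client would POST this to
#   {registry}/x-nmos/registration/v1.3/resource
# and then keep the registration alive with periodic heartbeats to
#   {registry}/x-nmos/registration/v1.3/health/nodes/{node_id}
```

The same envelope is used for devices, sources, flows, senders and receivers, which is what lets the registry hold the whole resource hierarchy behind one API.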

While IS-04 started off very basic, as it’s moved to version 1.4, it’s added HTTPS transport, paged queries and support for connection management with IS-05 and IS-06. IS-04 is a foundational part of the system, allowing each element to have an identity, tracking when entities are changed and updating clients accordingly.

IS-05 manages connections between senders and receivers allowing changes to be immediate or set for the future. It allows, for example, querying of a sender to get the multicast settings and provides for sending that to a receiver. Naturally, when a change has been made, it will update the IS-04 registry.
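To make the sender-to-receiver flow concrete, here's a hedged sketch of the body a controller might PATCH to a receiver's staged endpoint, following the IS-05 Connection API. The sender ID and SDP text are placeholders; the activation modes shown are the immediate and scheduled options the talk mentions.

```python
# Sketch of an IS-05 connection: a controller fetches the sender's SDP
# (its transport file) and PATCHes it to the receiver's /staged endpoint,
# either activating immediately or scheduling the change for the future.
# Sender ID and SDP below are placeholders.
def stage_connection(sender_id, sdp_text, immediate=True):
    """Build the JSON body PATCHed to
    /x-nmos/connection/v1.1/single/receivers/{receiver_id}/staged"""
    if immediate:
        activation = {"mode": "activate_immediate"}
    else:
        activation = {
            "mode": "activate_scheduled_absolute",
            "requested_time": "1623674545:0",  # TAI seconds:nanoseconds
        }
    return {
        "sender_id": sender_id,
        "master_enable": True,
        "transport_file": {"data": sdp_text, "type": "application/sdp"},
        "activation": activation,
    }

example_sdp = "v=0\r\nm=video 5004 RTP/AVP 96\r\n"  # placeholder SDP
body = stage_connection("hypothetical-sender-guid", example_sdp)
```

Because the SDP carries the multicast settings, the receiver gets everything it needs to join the stream from this one PATCH, and the resulting state change is then reflected back into the IS-04 registry.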

IS-08 helps manage the complexity that comes from allowing all the audio to flow separately from the video. Whilst this is a boon for flexibility and removes much unnecessary processing (in extracting and recombining audio), it also adds the burden of tracking which audio should be used where. IS-08 is AMWA’s answer to managing this complexity. It can be used in association with BCP-002 (Best Current Practice), which allows essences in the IS-04 registry to be tagged to show how they were grouped when they were created.

Jed looks next at IS-09 which he explains provides a way for global facts of the system to be distributed to all devices. Examples of this would be whether HTTPS is in use in the facility, syslog servers, the registration server address and NMOS versions supported.
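To give a feel for what those global facts might look like, here's a hedged sketch of the kind of document an IS-09 System API could hand out. The field names below are illustrative, not the exact IS-09 schema.

```python
# Hedged sketch of system-wide parameters a device might fetch from an
# IS-09 System API (served at /x-nmos/system/v1.0/global). Field names
# and values here are illustrative, not the exact IS-09 schema.
system_global = {
    "label": "Main Facility",
    "syslog": {"hostname": "syslog.example", "port": 514},
    "is04": {"heartbeat_interval": 5},   # seconds between registry heartbeats
}
```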

Security is the topic of the last part of the talk. As we’ve seen, IS-04 already allows for encrypted API traffic, and this is mandated in the EBU’s TR-1001. However, BCP 003 and IS-10 have also been created to improve on this further. IS-10 deals with authorisation, making sure that only intended controllers, senders and receivers are allowed access to the system. It’s the difference between encryption (confidentiality) and authorisation which Jed looks at next.

It’s no accident that the security implementations in the AMWA specifications share a lot in common with widely deployed security practices already in use elsewhere. In fact, in security, if you can at all avoid developing your own system, you should. In use here are the PKI system and TLS encryption we use on every secure website. Jed steps through how this works and the importance of the cipher suite which lives under TLS.
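Since this is the same machinery as any secure website, it can be sketched with the standard library alone. The snippet below builds a client-side TLS context that verifies the server's certificate against the PKI chain; the negotiated cipher suite Jed highlights is what `cipher()` would report on a wrapped socket.

```python
# Minimal sketch of the PKI/TLS machinery described above, using Python's
# standard ssl module: a client context that verifies the server's
# certificate chain and insists on a modern protocol version.
import ssl

ctx = ssl.create_default_context()           # loads the trusted CA store
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# After wrapping a socket with
#   tls_sock = ctx.wrap_socket(sock, server_hostname="registry.example")
# tls_sock.cipher() returns (suite_name, protocol, secret_bits) for the
# cipher suite negotiated underneath TLS.
```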

The final part of this talk is a case study where a customer required encrypted control, an authorisation server, 4K video over 1GbE, essence encryption, a unified routing interface and KVM capabilities. Jed explains how this can all be achieved with the existing specifications or an extension on top of them. Extending the API encryption methods to the essences allowed them to meet the encryption requirements, and adding some other calls on top of the existing NMOS APIs provided a unified routing interface which allowed setting modes on equipment.

Watch now!
For more information, download these slides from a SMPTE UK Section meeting on NMOS
Speakers

Jed Deame
CEO,
Nextera Video

Video: Case Study FIS Ski World Championship

There’s a lot to learn when it comes to implementing video over IP, so it’s healthy to stand back from the details and see a working system in use to understand how the theory becomes reality. There’s been a clear change in the tone of conversation at the IP Showcase over the years as we’ve shifted from ‘trust us, this could work’ to ‘this is what it looks like!’ That’s not to say there isn’t plenty still to be done, but this talk about an uncompressed 2110 remote production workflow is a great example of how the benefits of IP are being realised by broadcasters.

Robert Erickson of Grass Valley specialises in sports such as the FIS Alpine World Ski Championships, which were held in Åre in Sweden, some 600km from Stockholm where Sweden’s public broadcaster SVT is based. With 80 cameras at the championships to be remotely controlled over an uncompressed network, this was no small project. Robert explains that the two locations were linked by a backbone of two 100Gbps circuits.

The principle behind SVT’s project was to implement a system which could be redeployed, wouldn’t alter the viewers’ experience and would reduce staff and equipment on site. Interestingly, the director wanted to be on-site, meaning the production was split between Åre and Stockholm, where much of the staff and, of course, most of the equipment were based. The cameras were natively IP, so no converters were needed in the field.

Centralisation was the name of the game, based in Stockholm, producing an end-to-end IP chain. Network switching was provided by Arista, which aggregated the feeds of the cameras and brought them to Stockholm where the CCUs were located. Robert highlights the benefits of this approach, which include the use of COTS switches, scalability and indifference as to the circuits in use. We then have a look inside the DirectIP connection, which is a 10gig ‘pipe’ carrying 2022-6 camera and return feeds along with control and talkback, replicating the functionality of a SMPTE fibre in IP.

To finish up, Robert talks about the return visions, including multiviewers, which were sent back to Åre. A Nimbra setup was used to take advantage of a lower-bandwidth circuit, using JPEG 2000 to send the vision back. In addition, it carried the data to connect the vision mixer/switcher at Åre with the switch at Stockholm. This was the only point at which noticeable latency was introduced, to the tune of around 4 frames.

Watch now!
Download the presentation
Speakers

Robert Erickson
Strategic Account Manager Sports and Venues,
Grass Valley

Video: AES67 & ST 2110 Deeper Dive – The Audio Files

A deeper dive here, in the continuing series of videos looking at AES67, SMPTE ST 2110 and Ravenna. Andreas Hildebrand from ALC NetworX is back to investigate the next level down of how AES67 and ST 2110 operate and how they can be configured. The talk, however, remains accessible throughout and starts with a reminder of what AES67 is and why it exists. This was also covered in his first talk.

After explaining that AES67 was created as a way for multiple audio-over-IP standards to interoperate, Andreas looks at the stack, stepping through it to explain each element. The first topic is timing. He explains that every device on the AES67 network is not only governed by PTP, but also runs its own clock, called the Local Clock. From the Local Clock, the device then creates a Media Clock which is based on the Local Clock time but is used to create any frequency needed for the media (48KHz, for instance). Finally, an RTP clock is kept for transmission over the network.
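The clock chain above can be sketched in a few lines: the Media Clock ticks at the sample rate derived from the PTP-disciplined Local Clock, and the RTP timestamp is that tick count truncated to 32 bits. This is a simplified illustration (the RTP clock offset is assumed to be zero, as ST 2110 later mandates), not a full implementation.

```python
# Minimal sketch of deriving an RTP timestamp from PTP time in an AES67
# device: the Media Clock ticks at the audio sample rate, and the RTP
# timestamp is the 32-bit truncation of that tick count. RTP clock
# offset is assumed zero for simplicity.
SAMPLE_RATE = 48_000  # Hz

def rtp_timestamp(ptp_time_seconds, rate=SAMPLE_RATE, offset=0):
    media_clock_ticks = int(ptp_time_seconds * rate)  # Media Clock ticks
    return (media_clock_ticks + offset) % 2**32        # 32-bit RTP field

# One second of PTP time advances the RTP timestamp by 48 000 ticks.
```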

The next item on the stack is encoding. AES67 is based on linear audio, also known as PCM. AES67 ensures that 48KHz, 16 & 24-bit audio is supported on all devices and allows up to 8 channels per stream. Importantly, Andreas explains the different packet times which are supported, 1ms being mandatory, which allows 48 samples of 48KHz audio into each IP packet.
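The arithmetic behind that mandatory packet time is worth seeing once: at 48KHz, 1ms is exactly 48 samples, and with 24-bit (3-byte) PCM and the full 8 channels that fixes the audio payload per packet.

```python
# The packet-time arithmetic for AES67 linear audio: samples per packet
# follows from sample rate and packet time, and the RTP audio payload
# size follows from channel count and bytes per sample.
def samples_per_packet(rate_hz, packet_time_ms):
    return int(rate_hz * packet_time_ms / 1000)

def payload_bytes(rate_hz, packet_time_ms, channels, bytes_per_sample):
    return (samples_per_packet(rate_hz, packet_time_ms)
            * channels * bytes_per_sample)

# 1 ms of 48 kHz audio is 48 samples; at 24-bit, 8 channels that is
# 48 * 8 * 3 = 1152 bytes of audio per packet.
```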

SDP – Session Description Protocol – is next, which describes in a simple text file what’s in the AES67 stream, giving its configuration. Then Andreas looks at what Link Offset is and examines its role in determining latency and the types of latency it’s been made to compensate for. He then talks you through working out what latency setting you need to use, including taking into account the number of switches in the network and your frame size.
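The link-offset reasoning can be sketched as a simple sum: the receiver's playout delay has to cover at least the packet time plus the forwarding delay of each switch in the path plus propagation. The per-switch and propagation figures below are assumptions for illustration, not values from the talk.

```python
# Hedged sketch of a link-offset estimate: playout delay must cover the
# packet time, per-switch forwarding delay and network propagation.
# The per-switch and propagation figures are illustrative assumptions.
def link_offset_ms(packet_time_ms, num_switches,
                   per_switch_ms=0.01, propagation_ms=0.1):
    return (packet_time_ms
            + num_switches * per_switch_ms
            + propagation_ms)

# e.g. a 1 ms packet time across 3 switches gives roughly 1.13 ms,
# which you would round up to the next latency setting your gear offers.
```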

SMPTE ST 2110 is the focus for the last part of the talk. This, Andreas explains, is a way of moving, typically uncompressed, professional media (also known as essences) around a network for live production with very low latency. It sends audio separately from the video and uses AES67 to do so. This is defined in standard ST 2110-30. However, there are some important configurations of AES67 which are mandated in order to be compatible, which Andreas explains. One example is forcing all devices to be slave-only; another is setting the RTP clock offset to zero. Andreas finishes the talk by summarising which parts of ST 2110 and AES67 overlap, including discussing the frame sizes supported.
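Those constraints are visible in the stream's SDP. The sketch below shows the clock-related attributes (per RFC 7273 signalling) for a 24-bit, 8-channel, 48KHz stream: the reference clock is PTP and `mediaclk:direct=0` declares the zero RTP clock offset ST 2110-30 requires. The port, payload type and grandmaster ID are placeholders.

```python
# Sketch of the SDP attributes that express the ST 2110-30 constraints:
# the stream is locked to a PTP grandmaster (ts-refclk) and the RTP
# clock offset is zero (mediaclk:direct=0). Port, payload type and
# grandmaster ID below are placeholders.
sdp = "\r\n".join([
    "m=audio 5004 RTP/AVP 96",
    "a=rtpmap:96 L24/48000/8",                          # 24-bit, 48 kHz, 8ch
    "a=ts-refclk:ptp=IEEE1588-2008:00-11-22-FF-FE-33-44-55:0",
    "a=mediaclk:direct=0",
])

def rtp_offset_is_zero(sdp_text):
    """Check the zero-offset media clock declaration a receiver expects."""
    return "a=mediaclk:direct=0" in sdp_text
```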

Watch now!
Download the presentation
Speaker

Andreas Hildebrand
Senior Product Manager,
ALC NetworX GmbH

Video: Building Television Systems in a Time of Multiple Technology Transitions

Major technology transitions can be hard to keep up with, and when you have a project requiring you to decide which one to go with, it can seem unmanageable. This panel, put together by SMPTE New York, gives the view from systems integrators on how to make this work and covers their experience with a wide range of new technologies.

John Turner kicked off explaining the reasoning for using SDI over SMPTE ST 2110 in some circumstances. For that project, his client had a fixed space so wouldn’t see the benefits of 2110 in terms of expansion. Their workflow already worked well in SDI and, at the time, the costs of 2110 would have been higher. Overall, the project went with SDI, was successful and they are a happy customer. Karl Paulsen agreed that new technology shouldn’t be adopted ‘for the sake of it’ and added that whilst individual products with a new technology may be stable, that’s not certain to be the case when interoperating within a whole system. As such, this puts the implementation time up, meaning the incumbent technologies do tend to get chosen when time is at a premium.

Turning to 5G, Karl answered the question “what are the transformational technologies?”. For some applications, for instance back-of-camera RF in a stadium, 5G is a major leap compared to microwave packs, but early in a technology’s life, as we are with 5G, it’s a matter of working out where it does and where it doesn’t work well. In time, it will probably adapt to some of those other use cases it wasn’t suited to initially. John Turner highlighted the elements that ATSC 3.0 transforms in a big way: from an RF perspective, its modulation is much stronger and more flexible, so it’s able to drive new business models.

John Mailhot’s view on the transformational challenge is ‘the people’. He puts forward the idea that the technical constraints of router size and max cable length, to name two examples, embedded themselves into the routines, assumptions and architectures that people embody in their work. With SMPTE ST-2110, most of these constraints are removed. This means you are a lot freer to work out the workflows the business wants. The challenge here is to have the imagination and fortitude to forge the right workflow without getting paralysed by choice.

“SMPTE ST 2110 is an entire paradigm shift”, John Humphrey

After responding to the moderator’s question on how much turmoil these transitions are causing, Mark Schubin summarises the situation by saying we need to work out which of the technologies is like a fridge (replacing previous technologies), which like a microwave (used alongside a conventional oven) and which like an induction cooker (requires a change in cookware, little adoption). John Humphrey adds that ST 2110 is a technology which viewers don’t notice since the visual quality is the same; HDR is the opposite, so they need different approaches.

During the last 45 minutes, the panel took questions from the audience covering how to hire talent, the perspective of younger people on technology, programming specifically made for smartphones, ATSC 3.0 implementation, reliability of home internet, PTP and more.

Watch now!
Speakers

Mark Schubin
Consultant & Explainer
John Humphrey
VP, Business Development,
Hitachi Kokusai Electric America Ltd.
Karl Paulsen
CTO,
Diversified
John Turner
Principal Engineer
Turner Engineering Inc.
John Mailhot
Systems Architect for IP Convergence
Imagine Communications