Video: AES67 & ST 2110 Deeper Dive – The Audio Files

A deeper dive here, in the continuing series of videos looking at AES67, SMPTE ST 2110 and Ravenna. Andreas Hildebrand from ALC NetworX is back to investigate the next level down of how AES67 and ST 2110 operate and how they can be configured. The talk remains accessible throughout, however, and starts with a reminder of what AES67 is and why it exists, which was also covered in his first talk.

After explaining that AES67 was created as a way for multiple audio-over-IP standards to interoperate, Andreas looks at the stack, stepping through it to explain each element. The first topic is timing. He explains that every device on the AES67 network is not only governed by PTP, but also runs its own clock, called the Local Clock. From the Local Clock, the device then derives a Media Clock, which is based on the Local Clock time but is used to create any frequency needed for the media (48kHz, for instance). Finally, an RTP clock is kept for transmission over the network.
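
To make the relationship concrete, here's a minimal sketch, in Python, of how the three clocks relate; the function names and the nanosecond PTP input are my own illustrative assumptions, not anything from the talk:

```
RATE_HZ = 48_000     # media clock frequency, e.g. 48kHz audio
RTP_WRAP = 2 ** 32   # RTP timestamps are 32-bit and wrap around

def media_clock_ticks(ptp_time_ns: int) -> int:
    """Media Clock: counts samples since the PTP epoch at the media rate."""
    return ptp_time_ns * RATE_HZ // 1_000_000_000

def rtp_timestamp(ptp_time_ns: int, clock_offset: int = 0) -> int:
    """RTP clock: the media clock plus a per-stream offset, modulo 2^32.
    (ST 2110, covered later, mandates clock_offset = 0.)"""
    return (media_clock_ticks(ptp_time_ns) + clock_offset) % RTP_WRAP

# One second of PTP time advances the RTP timestamp by 48,000 ticks.
assert rtp_timestamp(2_000_000_000) - rtp_timestamp(1_000_000_000) == RATE_HZ
```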

The next item on the stack is encoding. AES67 is based on linear audio, also known as PCM. AES67 ensures that 48kHz, 16 and 24-bit audio is supported on all devices and allows up to 8 channels per stream. Importantly, Andreas explains the different packet times which are supported, 1ms being mandatory, which puts 48 samples of 48kHz audio into each IP packet.
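
The arithmetic is simple enough to sketch as a back-of-envelope calculation using the figures above:

```
def samples_per_packet(sample_rate_hz: int, packet_time_s: float) -> int:
    return round(sample_rate_hz * packet_time_s)

def payload_bytes(sample_rate_hz: int, packet_time_s: float,
                  channels: int, bit_depth: int) -> int:
    return samples_per_packet(sample_rate_hz, packet_time_s) * channels * bit_depth // 8

# The mandatory 1ms packet time at 48kHz gives 48 samples per packet;
# with 8 channels of 24-bit audio that's 48 * 8 * 3 = 1152 payload bytes.
print(samples_per_packet(48_000, 0.001))                        # -> 48
print(payload_bytes(48_000, 0.001, channels=8, bit_depth=24))   # -> 1152
```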

SDP – Session Description Protocol – is next; this describes in a simple text file what's in the AES67 stream, giving its configuration. Then Andreas looks at what Link Offset is and examines its role in determining latency and the types of latency it's been made to compensate for. He then talks you through working out what latency setting you need to use, including taking into account the number of switches in a network and your frame size.
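
As a flavour of what such a file looks like, below is a representative AES67-style SDP; the addresses, session IDs and PTP grandmaster ID are made up for illustration:

```
v=0
o=- 1423986 1423994 IN IP4 192.0.2.1
s=AES67 example stream
c=IN IP4 239.69.1.1/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/8
a=ptime:1
a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-12-34-56:0
a=mediaclk:direct=0
a=recvonly
```

The rtpmap line declares 24-bit linear audio at 48kHz with 8 channels, ptime gives the 1ms packet time, and the last two attributes tie the stream to its PTP reference and RTP clock offset.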

SMPTE ST 2110 is the focus of the last part of the talk. This, Andreas explains, is a way of moving, typically uncompressed, professional media (also known as essences) around a network for live production with very low latency. It sends audio separately from the video, using AES67 to do so as defined in standard ST 2110-30. However, there are some important AES67 configurations which are mandated for compatibility, which Andreas explains. One example is forcing all devices to be PTP slave-only; another is setting the RTP clock offset to zero. Andreas finishes the talk by summarising which parts of ST 2110 and AES67 overlap, including discussing the frame sizes supported.
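
As a rough illustration of those mandated settings, here's a small Python sketch of a compatibility check over an already-parsed SDP; this is my illustration rather than anything from the talk, the dict keys are hypothetical, and note that slave-only PTP is a device setting rather than something carried in the SDP:

```
def is_st2110_30_compatible(stream: dict) -> bool:
    return (
        stream.get("sample_rate_hz") == 48_000    # 48kHz audio required
        and stream.get("ptime_ms") == 1           # 1ms packet time required
        and stream.get("rtp_clock_offset") == 0   # RTP clock offset forced to zero
    )

# The stream described in the SDP example above would pass.
print(is_st2110_30_compatible(
    {"sample_rate_hz": 48_000, "ptime_ms": 1, "rtp_clock_offset": 0}))  # -> True
```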

Watch now!
Download the presentation
Speaker

Andreas Hildebrand
Senior Product Manager,
ALC NetworX GmbH.

Video: Building Television Systems in a Time of Multiple Technology Transitions

Major technology transitions can be hard to keep up with, and when you have a project requiring you to decide which one to go with, it can seem unmanageable. This panel, put together by SMPTE New York, gives the view from Systems Integrators on how to make this work and covers their experience with a wide range of new technologies.

John Turner kicked off by explaining the reasoning for using SDI over SMPTE ST 2110 in some circumstances. For that project, his client had a fixed space, so wouldn't see the benefits of 2110 in terms of expansion. Their workflow already worked well in SDI and, at the time, the costs of 2110 would have been higher. Overall, the project went with SDI, was successful and they are a happy customer. Karl Paulsen agreed that new technology shouldn't be adopted 'for the sake of it' and added that whilst individual products built on a new technology may be stable, that's not certain to be the case when they interoperate within a whole system. This pushes the implementation time up, meaning incumbent technologies tend to get chosen when time is at a premium.

Turning to 5G, Karl answered the question "what are the transformational technologies?". For some applications, for instance back-of-camera RF in a stadium, 5G is a major leap compared to microwave packs, but early in a technology's life, as we are with 5G, it's a matter of working out where it does and doesn't work well. In time, it will probably adapt to some of the other use cases it wasn't suited for initially. John Turner highlighted the elements that ATSC 3.0 transforms in a big way: from an RF perspective, its modulation is so much stronger and more flexible that it's able to drive new business models.

John Mailhot's view on the transformational challenge is 'the people'. He puts forward the idea that technical constraints such as router size and maximum cable length, to name two examples, have embedded themselves into the routines, assumptions and architectures that people embody in their work. With SMPTE ST 2110, most of these constraints are removed, meaning you are much freer to work out the workflows the business wants. The challenge is to have the imagination and fortitude to forge the right workflow without getting paralysed by choice.

“SMPTE ST 2110 is an entire paradigm shift”, John Humphrey

After responding to the moderator's question on how much turmoil these transitions are causing, Mark Schubin summarises the situation by saying we need to work out which of the technologies is like a fridge (replacing previous technologies), which a microwave (used alongside a conventional oven) and which an induction cooker (requires a change in cookware, little adoption). John Humphrey adds that ST 2110 is a technology which viewers don't notice since the visual quality is the same; HDR is the opposite, so the two need different approaches.

During the last 45 minutes, the panel took questions from the audience covering how to hire talent, the perspective of younger people on technology, programming specifically made for smartphones, ATSC 3.0 implementation, reliability of home internet, PTP and more.

Watch now!
Speakers

Mark Schubin
Consultant & Explainer
John Humphrey
VP, Business Development,
Hitachi Kokusai Electric America Ltd.
Karl Paulsen
CTO,
Diversified
John Turner
Principal Engineer,
Turner Engineering Inc.
John Mailhot
Systems Architect for IP Convergence,
Imagine Communications

Video: How to Build an SRT Streaming Flow from Encoder to Edge

SRT is an enabler for contribution over the internet – whether point to point, or cloud egress/ingress. In recent weeks here on The Broadcast Knowledge we have seen different takes on how SRT, short for Secure Reliable Transport, and RIST can be used, including from Open Broadcast Systems.

Here, Karel Boek, CEO of Raskenlund, a Norwegian streaming consultancy, explains SRT and builds a workflow as a live demo, showing how you can implement it quickly and easily. He starts by explaining where SRT sits and what it's for. SRT makes contribution over the internet possible because it has a very light-touch way of recovering the missing packets which are inevitable on internet links. Karel covers Haivision's creation of SRT and the SRT Alliance that has grown out of it, which now boasts 350 members. The protocol being open source – and now an IETF draft – means that a lot of companies have been happy to adopt it. There are frequent plugfests, one of which has just concluded, where vendors test compatibility with the increasing set of features offered in SRT.

‘Secure’ is the ‘S’ in SRT’s name because the stream can be easily encrypted as part of the protocol. This is an important aspect in enabling sports and enterprise contribution in the cloud, giving confidence that no-one can watch the feed before it gets to its destination.

‘Reliable’ is SRT’s key offer, as unreliability is the number one problem with the internet and other networks where not all packets get delivered. TCP/IP is a great protocol on which most webpages are delivered. It’s fantastic for file delivery since every single packet gets acknowledged and there really isn’t any way for a file to get to the other end without being completely intact. Live streams can’t afford the overhead of counting in and counting out every packet, so SRT’s ability to request only the missing packets is very important. It should be noted that this retransmission approach, known as ARQ, is also found in Zixi and RIST.
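
As a toy illustration of the idea – not SRT's actual implementation – a receiver only has to track which sequence numbers arrived and ask again for the gaps:

```
def find_missing(received_seqs: set[int], highest_seen: int) -> list[int]:
    """Return the sequence numbers to request again: every gap below the
    highest sequence number seen so far."""
    return [s for s in range(highest_seen + 1) if s not in received_seqs]

# Packets 3, 6 and 7 were lost in transit, so only those are re-requested,
# rather than acknowledging every packet as TCP would.
received = {0, 1, 2, 4, 5, 8}
print(find_missing(received, max(received)))  # -> [3, 6, 7]
```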

Karel compares SRT with other protocols including RTMP and MPEG-2 Transport Streams, amongst others. He is careful to separate out HLS, MPEG-DASH and WebRTC as ‘last-mile protocols’, differentiating those which content providers use to move video around as part of production from those which are used for distribution. RTMP’s use is still notable but diminishing, particularly in the European and American markets, while MPEG-TS over UDP is still seen as the best way to deliver within a building. Outside the building, you would want to protect the stream at least with FEC, with SMPTE 2022-7 or, better, with a protocol such as RIST or SRT. Karel touches on RIST’s Simple Profile which, by design, omits some of these features; we’ve heard here on The Broadcast Knowledge that they have since been delivered, as planned, in the Main Profile.

In the final part of this talk, Karel builds, live, an example workflow which combines Wowza and SRTHub to create an end-to-end chain. This is a great demonstration of how quickly you can create a workflow with SRT. There are plenty of SRT-enabled encoders and senders, which is one way to judge the success of the SRT Alliance. Similarly, whilst Haivision’s SRTHub is a useful product which brings things together in the cloud or on-prem, Techex’s MWEdge and Videoflow’s DVG can do similar or more, each with their own advantages.
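
As a rough command-line analogue of such a workflow – my sketch, assuming ffmpeg has been built with libsrt and that srt-live-transmit from the open-source SRT package is installed, with made-up addresses throughout:

```
# Take an MPEG-TS arriving on local UDP port 1234 and send it over SRT
# to a listener at 203.0.113.10 port 9000
srt-live-transmit udp://:1234 srt://203.0.113.10:9000

# Or push a stream directly from ffmpeg as MPEG-TS over SRT in caller mode
ffmpeg -re -i input.ts -c copy -f mpegts "srt://203.0.113.10:9000?mode=caller"
```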

Overall, the takeaway from this talk from Raskenlund is that internet contribution is a solved problem; it’s now for you to choose how to do it and with whom. To that end, the talk ends with a Q&A from people wondering exactly that.

Watch now!
Speaker

Karel Boek
CEO,
Raskenlund

Video: ATSC 3.0 Seminar Part III

ATSC 3.0 is the US-developed set of transmission standards which fully embraces IP technology, both over the air and for internet-delivered content. This talk follows on from the previous two, which looked at the physical and transmission layers. Here we see how using IP throughout has benefits in terms of broadening choice and seamlessly moving between on-demand and live channels.

Richard Chernock is back as our Explainer in Chief for this session. He starts by explaining the driver for the all-IP adoption, which focusses on the internet being the source of much media and data. The traditional ATSC 1.0 MPEG Transport Stream island worked well for digital broadcasting but has proven tricky to integrate with internet delivery, though not without some success if you consider HbbTV. Realistically, though, ATSC sees that as a stepping stone to the inevitable use of IP everywhere, and if we look at DVB-I from the DVB Project, we see that the other side of the Atlantic also sees the advantages.

But seamlessly mixing a broadcaster’s on-demand services with their linear channels is only one benefit. Richard highlights multilingual markets where the two main languages can be transmitted (for the US, usually English and Spanish) but other languages can be made available via the internet. This is a win in both directions: with lower-popularity languages, the internet delivery costs are not overburdening and, for the same reason, those languages wouldn’t warrant being included in the main transmission.

Richard introduces ISO BMFF and MPEG DASH, which are the foundational technologies for delivering video and audio over ATSC 3.0 and, to Richard’s point, over any internet streaming service.
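
For a flavour of the manifest side, here is a minimal, hypothetical MPD (the DASH manifest) with one video and one audio representation packaged as ISO BMFF segments; every identifier, codec string and bitrate is made up for illustration:

```
<?xml version="1.0" encoding="UTF-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011"
     profiles="urn:mpeg:dash:profile:isoff-live:2011"
     type="static" mediaPresentationDuration="PT30S" minBufferTime="PT2S">
  <Period>
    <!-- Video: HEVC in ISO BMFF segments, addressed by a number template -->
    <AdaptationSet mimeType="video/mp4" codecs="hev1.2.4.L120.B0">
      <SegmentTemplate initialization="video_init.mp4"
                       media="video_$Number$.m4s" duration="2" startNumber="1"/>
      <Representation id="video-720p" bandwidth="3000000" width="1280" height="720"/>
    </AdaptationSet>
    <!-- Audio: a second language, deliverable over broadcast or broadband -->
    <AdaptationSet mimeType="audio/mp4" codecs="mp4a.40.2" lang="es">
      <SegmentTemplate initialization="audio_es_init.mp4"
                       media="audio_es_$Number$.m4s" duration="2" startNumber="1"/>
      <Representation id="audio-es" bandwidth="128000"/>
    </AdaptationSet>
  </Period>
</MPD>
```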

We get an overview of the protocol stack to see where these fit together. Richard explains both MPEG DASH and the ROUTE protocol, which allows delivery of data using IP over uni-directional links and is based on FLUTE.

The use of MPEG DASH allows advertising to become more targeted for the broadcaster. Cable companies, Richard points out, have long been able to swap out an advert in a local area for another and increase their revenue. In recent years, companies like Sky in the UK (now part of Comcast) have developed technologies like AdSmart which, even with MPEG-TS satellite transmissions, can receive internet-delivered targeted ads and play them over the top of the transmitted ads – even when the programme is replayed from disk. Any adopter of ATSC 3.0 can achieve the same, which could be part of a business case to make the move.

Another part of the business case is that ATSC 3.0 not only supports 4K, unlike ATSC 1.0, but also ‘better pixels’. ‘Better pixels’ has long been the way to remind people that TV isn’t just about resolution; it includes ‘next generation audio’ (NGA), HDR, Wide Colour Gamut (WCG) and even higher frame rates. The choice of HEVC Main 10 Profile should allow all of these technologies to be used. Richard makes the point that if you balance the additional bitrate requirement against the likely impact on viewers, UHD doesn’t make sense compared to, say, enabling HDR.

Richard moves his focus to audio next, unpacking the term NGA and talking about surround sound and object-based audio. He notes that renderers are very advanced now and can analyse a room to deliver a surround sound experience without speakers having to be placed in the exact spots normally needed. Offering options, rather than just a single 5.1 surround track, is very important in terms of personalisation, which isn’t just about choosing a language but also covers commentary, audio description and more. Richard says that audio could be delivered in a separate pipe (PLP – discussed previously) such that even after the video has cut out due to bad reception, the audio continues.

The talk finishes by looking at accessibility, such as picture-in-picture signing and SMPTE Timed Text captions (IMSC1), as well as security and the ATSC 3.0 standards stack.

Watch now!
Speaker

Richard Chernock
Former CSO,
Triveni Digital