Video: 5G Technology

5G seems to offer so much, but there is a lot of nuance under the headlines. Which of the features will telcos actually provide? When will the spectrum become available? How will we cope with the new levels of complexity? Whilst for many 5G will simply ‘work’, when broadcasters look to use it for delivering programming, they need to look a few levels deeper.

In this wide-ranging video from the SMPTE Toronto Section, four speakers take us through the technologies at play and the ways they can be implemented, cutting through the hype to help us understand what could actually be achieved, in time, using 5G technology.

First up is Michael J Martin, who covers topics such as spectrum use, modulation, types of cells, beam forming and security. Regarding spectrum, Michael explains that 5G uses three frequency bands: the sub-1GHz spectrum that’s been in use for many years, a 3GHz range and a millimetre-wave range at 26GHz.

“It’s going to be at least a decade until we get 5G as wonderful as 4G is today.”

Michael J Martin
Note that some countries already use other frequencies, such as 1.8GHz, which will also be available. The important issue is that the 26GHz spectrum will typically not be available for over a year, so 5G roll-out starts in some of the existing bands or in the 3.4GHz spectrum. A recurring theme in digital RF is the use of OFDM, which has long been used by DVB and has been adopted by ATSC 3.0 as its modulation, too. OFDM allows different levels of robustness so you can optimise for reach and bandwidth.
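To see what OFDM actually does, here’s a minimal sketch of one OFDM symbol in NumPy: data is spread across many parallel subcarriers via an inverse FFT, and a cyclic prefix absorbs multipath echoes. The subcarrier count, prefix length and QPSK-style data are illustrative, not the parameters of any real DVB, ATSC 3.0 or 5G profile.

```python
import numpy as np

def ofdm_symbol(subcarriers, cp_len):
    """One OFDM symbol: map per-subcarrier complex values to the time
    domain with an IFFT, then prepend a cyclic prefix (a copy of the
    symbol's tail) so echoes up to cp_len samples don't corrupt it."""
    time = np.fft.ifft(subcarriers)
    return np.concatenate([time[-cp_len:], time])

def ofdm_demod(symbol, cp_len):
    """Receiver side: drop the cyclic prefix and FFT back to the
    per-subcarrier values."""
    return np.fft.fft(symbol[cp_len:])

# 64 subcarriers of random QPSK-style data, 16-sample cyclic prefix
rng = np.random.default_rng(0)
data = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
tx = ofdm_symbol(data, 16)
rx = ofdm_demod(tx, 16)
```

Robustness trade-offs come from choosing denser or sparser constellations per subcarrier: fewer bits per symbol reach further, more bits per symbol carry more bandwidth.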

Michael highlights a problem faced in upgrading infrastructure to 5G: the number of towers/sites and the availability of engineers. It’s simply going to take a long time to upgrade them all, even in a small, dense environment. That deals with upgrading the existing large sites, but 5G also provides for smaller cells (micro, pico and femto cells). These small cells are very important in delivering the millimetre-wave part of the spectrum.

Network Slicing
Source: Michael J. Martin, MICAN Communications

We look at MIMO and beam forming next. MIMO is an important technology as it, effectively, collects reflected versions of the transmitted signals and processes them to create stronger reception. 5G uses MIMO in combination with beam forming, where the transmitter electronically manipulates its antenna array to focus the transmission and localise it to a specific receiver or group of receivers.
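The “electronic manipulation” in beam forming boils down to applying a phase offset per antenna element so the emissions add up constructively in one direction. A toy sketch for a uniform linear array (array size, spacing and steering angle are arbitrary illustrative values):

```python
import numpy as np

def steering_phases(n_elements, spacing_wl, angle_deg):
    """Per-element phase offsets (radians) that point a uniform linear
    array's main beam angle_deg away from boresight. spacing_wl is the
    element spacing in wavelengths (0.5 is a common choice)."""
    theta = np.deg2rad(angle_deg)
    return -2 * np.pi * spacing_wl * np.arange(n_elements) * np.sin(theta)

# An 8-element, half-wavelength-spaced array steered 30 degrees off boresight
phases = steering_phases(8, 0.5, 30)
weights = np.exp(1j * phases)   # complex weights applied to each element's feed
```

Because the weights are just numbers applied to each element’s feed, the beam can be re-steered towards a different receiver in microseconds, with no moving parts.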

Lastly, Michael talks about Network Slicing, possibly the feature of 5G most anticipated by the broadcast community. The idea is that a broadcaster can reserve its own slice of spectrum so that, when sharing an environment with 30,000 other receivers, it will still have the bandwidth it needs.

Our next speaker, Craig Snow from Huawei, outlines how secondary networks can be created for companies’ private use which, interestingly, partly use frequencies separate from the public network. Network slicing can then divide your enterprise 5G network into separate networks for production, IT support etc. Craig then looks at the whole broadcast chain and shows where 5G can be used, and we quickly see that there are many uses in live production as well as in distribution. This can also mean that remote production becomes more practical for some use cases.

Craig moves on to look at physical transmitter options showing a range of sub 1Kg transmitters, many of which have in-built Wi-Fi, and then shows how external microwave backhaul might look for a number of your buildings in a local area connecting back to a central tower.

Next is Sayan Sivanathan, who works for Bell Mobility and goes into more detail on the wider range of use cases for 5G. Starting by comparing it to 4G, highlighting the increased data rates, improved spectrum efficiency and greater connection density of devices, he paints a rosy picture of the future. All of these factors support use cases such as remote control and telemetry for automated vehicles (whether in industrial or public settings). Sayan then looks at the deployment status in the US, Europe and Korea, shows the timeline for spectrum auctions in Canada and talks through photos of 5G transmitters in the real world.

Global Mobile Data Traffic (Exabytes per month)
Source: Ericsson Mobility Report, Nov 2019

Finishing off today’s session is Tony Jones from MediaKind, who focuses on which 5G features are going to be useful for Media and Entertainment. One is ‘better video on mobile’. Tony picks up on a topic mentioned by Michael at the beginning of the video: processing at the edge. Edge processing, meaning having compute power at the point of the network closest to your end user, allows you to deliver customised manifests and deal with rights management with minimal latency.
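What ‘customised manifests’ might mean in practice: an edge node can rewrite a streaming playlist per user before handing it out. A minimal sketch using an HLS master playlist, where the playlist text and the bandwidth-cap rule are invented purely for illustration:

```python
def filter_master_playlist(m3u8_text, max_bandwidth):
    """Return a copy of an HLS master playlist keeping only the variant
    streams at or below max_bandwidth (bits/sec). Each variant is an
    #EXT-X-STREAM-INF line immediately followed by its URI line."""
    out, skip_next = [], False
    for line in m3u8_text.splitlines():
        if line.startswith("#EXT-X-STREAM-INF"):
            bw = int(line.split("BANDWIDTH=")[1].split(",")[0])
            skip_next = bw > max_bandwidth
            if not skip_next:
                out.append(line)
        elif skip_next and not line.startswith("#"):
            skip_next = False   # drop the URI belonging to a filtered variant
        else:
            out.append(line)
    return "\n".join(out)

# A hypothetical two-rendition master playlist, capped at 2 Mbit/s
master = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
hd.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1200000,RESOLUTION=640x360
sd.m3u8"""
trimmed = filter_master_playlist(master, 2_000_000)
```

Running this at the edge rather than at the origin keeps the per-user decision (entitlements, device limits, ad insertion) a single hop from the viewer.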

Tony explains how MediaKind worked with Intel and Ericsson to deliver 5G remote production for the 2018 US Open. 5G is often seen as a great way to make covering golf cheaper, more aesthetically pleasing and also quicker to rig.

The session ends with a Q&A

Watch now!
Speakers

Michael J Martin
MICAN Communications
Blog: vividcomm.com
Tony Jones
Principal Technologist
MediaKind Global
Craig Snow
Enterprise Accounts Director,
Huawei
Sayan Sivanathan
Senior Manager – IoT, Smart Cities & 5G Business Development
Bell Mobility

Video: What is NMOS? with a Secure Control Case Study

Once you’ve implemented SMPTE ST 2110‘s suite of standards on your network, you’ve still got all your work ahead of you in order to implement large-scale workflows. How are you going to discover new devices? How will you make or change connections between devices? How will you associate audios to the video? Creating a functioning system requires a whole ecosystem of control protocols and information exchange, which is exactly what AMWA, the Advanced Media Workflow Association, has been working on for many years now.

Jed Deame from Nextera introduces the main specifications that have been developed to work hand-in-hand with uncompressed workflows. All prefixed with IS-, which stands for ‘Interface Specification’, they are IS-04, IS-05, IS-08, IS-09 and IS-10. Between them they allow you to discover new devices, create connections between them, manage the association of audio with video and manage system-wide information. Jed goes through each of these in turn. The only relevant ones skipped are IS-06, which allows devices to communicate northbound to an SDN controller, and IS-07, which manages GPI and tally information.

Jed sets the scene by describing an example ST-2110 setup with devices able to join a network, register their presence and be quickly involved in routing events. He then looks at the first specification in today’s talk, NMOS IS-04. IS-04’s job is to provide an API for nodes (cameras, monitors etc.) to use when they start up to talk to a central registry and lodge some details for further communication. The registry contains a GUID for every resource which covers nodes, devices, sources, flows, senders and receivers. IS-04 also provides a query API for controllers (for instance a control panel).
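To make IS-04’s registration step concrete, here’s a sketch of the kind of JSON body a node might POST to the registry’s Registration API on start-up. The field set is trimmed for illustration (the real schema carries clocks, interfaces, tags and more), and the registry address is a placeholder:

```python
import json
import uuid

def node_registration_body(label, href):
    """Build a minimal IS-04-style registration payload for a Node
    resource. The 'id' is the GUID the registry tracks the resource
    by; 'href' points back at the node's own Node API."""
    return {
        "type": "node",
        "data": {
            "id": str(uuid.uuid4()),
            "version": "0:0",          # <seconds>:<nanoseconds> of last change
            "label": label,
            "href": href,
            "caps": {},
            "api": {"versions": ["v1.3"], "endpoints": []},
            "services": [],
        },
    }

body = node_registration_body("camera-1", "http://192.168.1.10:12345/")
payload = json.dumps(body)
# POSTed as JSON to the registry's Registration API, along the lines of
#   POST http://<registry>/x-nmos/registration/v1.3/resource
```

Devices, sources, flows, senders and receivers are registered with the same call, each carrying its own GUID, which is what lets controllers query the registry for a coherent picture of the whole system.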

While IS-04 started off very basic, by version 1.4 it had added HTTPS transport, paged queries and support for connection management with IS-05 and IS-06. IS-04 is a foundational part of the system, allowing each element to have an identity, tracking when entities change and updating clients accordingly.

IS-05 manages connections between senders and receivers allowing changes to be immediate or set for the future. It allows, for example, querying of a sender to get the multicast settings and provides for sending that to a receiver. Naturally, when a change has been made, it will update the IS-04 registry.
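The connection itself is made by PATCHing a ‘staged’ body to the receiver, carrying the sender’s SDP as a transport file. A sketch of that body, with the sender ID and SDP text obviously being placeholders:

```python
def staged_patch(sender_id, sdp_text):
    """IS-05-style body PATCHed to a receiver's /staged endpoint:
    enable the receiver, hand it the sender's SDP as a transport
    file, and ask for immediate (rather than scheduled) activation."""
    return {
        "master_enable": True,
        "sender_id": sender_id,
        "activation": {"mode": "activate_immediate"},
        "transport_file": {"type": "application/sdp", "data": sdp_text},
    }

patch = staged_patch("hypothetical-sender-guid", "v=0\r\n...")
# PATCHed along the lines of
#   PATCH http://<node>/x-nmos/connection/v1.1/single/receivers/<id>/staged
```

Swapping the activation mode to a scheduled one is what allows changes to be queued for a future time rather than taking effect immediately.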

IS-08 helps manage the complexity which is wrought by allowing all audios to flow separately from the video. Whilst this is a boon for flexibility and reduces much unnecessary processing (in extracting and recombining audio) it also adds a burden of tracking which audios should be used where. IS-08 is the answer from AMWA on how to manage this complexity. This can be used in association with BCP-002 (Best Current Practice) which allows for essences in the IS-04 registry to be tagged showing how they were grouped when they were created.

Jed looks next at IS-09 which he explains provides a way for global facts of the system to be distributed to all devices. Examples of this would be whether HTTPS is in use in the facility, syslog servers, the registration server address and NMOS versions supported.

Security is the topic of the last part of the talk. As we’ve seen, IS-04 already allows for encrypted API traffic, and this is mandated in the EBU’s TR-1001. However, BCP 003 and IS-10 have also been created to improve this further. IS-10 deals with authorisation to make sure that only intended controllers, senders and receivers are allowed access to the system. And it’s the difference between encryption (confidentiality) and authorisation which Jed looks at next.

It’s no accident that the security implementations in AMWA specifications share a lot in common with widely deployed security practices already in use elsewhere. In fact, in security, if you can at all avoid developing your own system, you should. In use here are the PKI system and TLS encryption we use on every secure website. Jed steps through how this works and the importance of the cipher suite which sits under TLS.
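As a small illustration of that same machinery (Python’s standard library here; nothing NMOS-specific), a client can pin its TLS floor before making any API call, with certificate verification against the system’s trusted CAs providing the PKI part:

```python
import ssl

# Build a client-side context that verifies the server's certificate
# chain against the system's trusted CAs (the PKI part) and refuses
# anything older than TLS 1.2, within which the negotiated cipher
# suite (key exchange, bulk cipher, MAC) does the actual work.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# ctx would then wrap the socket used for the HTTPS API traffic, e.g.
#   ctx.wrap_socket(sock, server_hostname="registry.example.com")
```

Reusing this off-the-shelf stack is exactly the ‘don’t roll your own’ principle Jed describes: the broadcast system inherits years of hardening from the web.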

The final part of this talk is a case study where a customer required encrypted control, an authorisation server, 4K video over 1GbE, essence encryption, a unified routing interface and KVM capabilities. Jed explains how this can all be achieved with the existing specifications or an extension on top of them. Extending the API’s encryption methods to the essences allowed them to meet the encryption requirements, and adding some calls on top of the existing NMOS APIs provided a unified routing interface which allowed setting modes on equipment.

Watch now!
For more information, download these slides from a SMPTE UK Section meeting on NMOS
Speakers

Jed Deame
CEO,
Nextera Video

Video: Building A Studio

The fundamentals of building a studio are the same whether for TV or radio. You want to keep sound out…and in. This has forever been a challenge, and it doesn’t stop when the room’s built. Before it’s pressed into use, you have to lay it out correctly, considering the equipment and acoustic treatments, and keep it cool.

Fortunately, experts from the BBC and Global are here to talk us through it at this Masterclass from Radio TechCon. Dave Walters from the BBC kicks off by explaining the aim of isolating your studio from physical vibration, both through the structure and through gaps in the walls, floor or ceiling. Once isolated from the outside, the task is to manage the sound in the room, and that calls for acoustic treatment. Dave goes through the options for lining the ceiling and walls, showing that there’s acoustic treatment at all budgets. Dave finishes by highlighting that the aim is to dissipate sound and not let it bounce around. This means reflective surfaces such as glass windows need to be angled so they don’t directly point at any other hard surface.

With a deadened acoustic and a quiet atmosphere, your studio is ready to be occupied. Stephen Clarke from Global talks through laying out the studio, taking into account what people do and don’t want to see. The presenter, for instance, will want to see through to the control room for visual cues during the programme, but it’s best to keep guests pointed away from distractions. This can also extend to the placement of TVs, computers and other equipment. Equipment, of course, is a concern in itself. As it generates heat and, often, noise, it’s best to minimise in-studio equipment, which can be done with a KVM system. Stephen talks us through a photo of the Today studio to see these principles in action.

To finish up, Global’s Simon Price talks about making holes in the studio that Dave managed to isolate. The inconvenient truth is that people need oxygen, generate heat and generate odour. Any one of those three is a good reason to put air con into the studio so Simon explains the use of baffles in ducting used to introduce the air. This absorbs sound from the air’s movement and also any external sounds that happen to come in. Simon concludes by explaining safe electrical distribution for studios keeping wiring to a minimum and reducing fire risk.

Before leaving, the team have just enough time to answer a question about studios with large amounts of glass, and to discuss how ‘dead’ you want the reverb in the studio to be, asking whether you can go too far in minimising sound.

Watch now!
Speakers

Dave Walters
Head of Systems and Services: TV, Radio & Archive
BBC
Stephen Clarke
Broadcast Engineer,
Global Radio
Simon Price
Broadcast Engineering Manager,
Global Radio