Video: Network Design for Live Production

The benefits of IP sound great, but many are held back by real-life concerns: Can we afford it? Can we plug the training gap? And how do we even do it? This video looks at the last of these: how do you deploy a network good enough for uncompressed video, audio and metadata? The network needs to deal with a large number of flows, many of which are high bandwidth. If you’re putting it to air, you need reliability and redundancy. You also need to distribute PTP timing, and to control and maintain the whole thing.

Gerard Phillips from Arista talks to IET Media about the choices you need to make when designing your network. Gerard starts by reminding us of the benefits of moving to IP, the most tangible of which is the switching density possible. SDI routers can use a whole rack to switch over one thousand sources, but with IP, Gerard says, you can achieve a 4000×4000 router within just 7U. With increasingly complicated workflows and the increasing scale of some broadcasters, this density is a major motivating factor in the move. Doubling down on the density message, Gerard then compares the connectivity available: an SDI cable carries one signal per cable, whereas a 400Gb link can carry 65 UHD signals.

Audio has long been ahead of video when it comes to IP transitions, so there are many established audio-over-IP protocols, many of which work at Layer 2 of the network stack. Using Layer 2 has great benefits because there is no routing, which means that discovering everything on the network is as simple as broadcasting a question and waiting for answers. This simple discovery is one reason for the ‘plug and play’ ease of NDI: being a Layer 2 protocol, it can use mDNS or similar to query the network and display the sources and destinations available within seconds. Layer 3-based protocols don’t have this luxury, as some resources can be on a separate network which won’t receive a discovery request that’s simply broadcast on the local network.
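To see why Layer 2 discovery is so simple, here is a rough sketch in Python of the kind of query mDNS sends: a single DNS PTR question multicast to every host on the local segment (the `_ndi._tcp.local` service name is the one NDI advertises under; the packet layout is standard DNS, simplified here).

```python
import struct

def mdns_ptr_query(service: str) -> bytes:
    """Build a minimal mDNS PTR query packet for the given service name.

    Sent to the well-known multicast group 224.0.0.251:5353, every host
    on the local Layer 2 segment sees it and can answer -- which is why
    discovery is trivial there, and why a plain broadcast query fails
    across a routed (Layer 3) boundary.
    """
    # DNS header: ID=0, flags=0 (standard query), 1 question, no other records
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # Question name: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in service.split(".")
    ) + b"\x00"
    # QTYPE=12 (PTR), QCLASS=1 (IN)
    return header + qname + struct.pack("!HH", 12, 1)

packet = mdns_ptr_query("_ndi._tcp.local")
```

Any source on the segment that receives this replies with its name and address, which is all a receiver needs to populate its list of available sources.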

Gerard examines the benefits of Layer 2 and explains how IGMP multicast works, detailing the need for an IGMP querier to sit in one location and receive all the traffic. This is a limiting factor in scaling a network, particularly with high-bandwidth flows. Layer 3, we hear, is the solution to this scaling problem, bringing with it more control over the size of ‘failure domains’ – how much of your network breaks if there’s a problem.
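The centralisation Gerard describes falls out of how the querier is chosen. A minimal sketch of the IGMPv2 election rule (RFC 2236: the candidate with the lowest IP address wins):

```python
from ipaddress import IPv4Address

def elect_querier(candidate_ips):
    """IGMPv2 querier election: when several devices on a segment could
    act as querier, the one with the numerically lowest IP address wins
    (RFC 2236). Every group-membership report on the segment then flows
    towards that single device -- the centralisation that limits how far
    a high-bandwidth Layer 2 network can scale.
    """
    return min(candidate_ips, key=IPv4Address)

# Example addresses are made up; 10.0.0.1 wins the election
assert elect_querier(["10.0.0.3", "10.0.0.1", "10.0.0.2"]) == "10.0.0.1"
```

Moving to Layer 3 with PIM distributes the multicast routing decision across routers instead of funnelling everything through one elected querier.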

The next section of the video gets down to the meat of network design and explains the three main types of architecture: monolithic, hub-and-spoke, and leaf-spine. Gerard takes time to discuss the validity of all these architectures before discussing coloured networks. Two identical networks dubbed ‘Red’ and ‘Blue’ are often used to provide redundancy in SMPTE ST 2110 and similar uncompressed networks: the source generates two identical streams and feeds them over the two networks, and the receiver takes in both, using SMPTE ST 2022-7 to seamlessly deal with packet loss. Gerard then introduces ‘purple’ networks, where all switch infrastructure is in the same network and the network orchestrator ensures that the two essence flows from the source take separate routes through the infrastructure. This means that each flow still has a ‘red’ and a ‘blue’ route, but each switch carries a mixture of ‘red’ and ‘blue’ traffic.
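The receive-side behaviour of ST 2022-7 can be sketched as a toy model: two copies of the same RTP stream arrive, and the receiver keeps the first copy of each sequence number it sees, so a packet lost on one network is filled in from the other (real receivers also bound the reordering window in time, omitted here).

```python
def st2022_7_merge(red, blue):
    """Toy model of SMPTE ST 2022-7 seamless protection switching.

    red and blue are lists of (rtp_sequence_number, payload) pairs from
    the two redundant networks. The first copy of each sequence number
    wins; the output is the deduplicated stream in sequence order.
    """
    seen = {}
    for seq, payload in list(red) + list(blue):
        seen.setdefault(seq, payload)  # keep whichever copy arrived first
    return [seen[seq] for seq in sorted(seen)]

# 'red' loses packet 2, 'blue' loses packet 3 -- the merge recovers both
red = [(1, b"a"), (3, b"c"), (4, b"d")]
blue = [(1, b"a"), (2, b"b"), (4, b"d")]
assert b"".join(st2022_7_merge(red, blue)) == b"abcd"
```

Whether the two copies travelled over physically separate ‘Red’/‘Blue’ networks or disjoint paths through one ‘purple’ network is invisible to this logic; only path diversity matters.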

The beauty of using IGMP/PIM for managing traffic over your networks is that the network itself decides how the flows move over the infrastructure, which makes for a low-footprint, simple installation. However, the network can’t take into account individual link capacity, the overall capacity of the network, the bitrate of individual flows or the overall topology. You therefore have very little control over where your traffic goes, which makes maintenance and fault-finding hard; more generally, what’s the right decision for one small part of the network is not necessarily the right decision for the flow or for the network as a whole. Gerard explains how Software-Defined Networking (SDN) addresses this and gives absolute control over the path your flows take.
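The core of what an SDN orchestrator adds can be sketched in a few lines: unlike IGMP/PIM, it knows the topology, the spare capacity of every link and the bitrate of the new flow, so it only routes over links that can actually carry it. This is a simplified illustration (node names and capacities are invented), not any particular controller’s algorithm.

```python
import heapq

def place_flow(links, src, dst, flow_bw):
    """Bandwidth-aware shortest path, as a sketch of SDN flow placement.

    links: {(a, b): spare_gbps} for each directed link. Links without
    enough spare capacity for the new flow are excluded up front, then
    the fewest-hop route through the remainder is chosen (Dijkstra with
    unit link cost). Returns the node path, or None if nothing fits.
    """
    adj = {}
    for (a, b), spare in links.items():
        if spare >= flow_bw:  # skip links the flow would oversubscribe
            adj.setdefault(a, []).append(b)
    best = {src: 0}
    queue = [(0, src, [src])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        for nxt in adj.get(node, []):
            if nxt not in best or cost + 1 < best[nxt]:
                best[nxt] = cost + 1
                heapq.heappush(queue, (cost + 1, nxt, path + [nxt]))
    return None

# One spine path is nearly full: a new 12Gb/s UHD flow is steered around it
links = {("leaf1", "spine1"): 5,  ("spine1", "leaf2"): 50,
         ("leaf1", "spine2"): 50, ("spine2", "leaf2"): 50}
assert place_flow(links, "leaf1", "leaf2", 12) == ["leaf1", "spine2", "leaf2"]
```

With IGMP/PIM the flow would take whichever path the local protocol state produced, regardless of how full it was; here the orchestrator decides, and can report exactly where every flow is.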

Lastly, Gerard looks at PTP, the Precision Time Protocol. ST 2110 relies on PTP timestamps carried with the essence flows, allowing separately-routed audio and video to keep good lip-sync and avoiding phase errors when audio is mixed together (a domain where PTP has been used for some time). We see different architectures which include two grandmaster clocks (GMs), discuss whether boundary clocks (BCs) or transparent clocks (TCs) are the way to go, and examine the little security that is available to stop rogue endpoints taking charge and becoming grandmaster themselves.
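The basic arithmetic of a PTP exchange is worth seeing, because it motivates the BC/TC discussion: the offset calculation assumes a symmetric path, and boundary and transparent clocks exist largely to correct for the per-hop queueing delay that breaks that assumption. The timestamps below are invented units for illustration.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Basic IEEE 1588 delay request-response arithmetic.

    t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes the path delay is the same in both directions.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# Slave clock 5 units ahead of the master, 3 units of symmetric path delay:
# Sync sent at t1=100 arrives at t2=108 (+3 delay, +5 offset);
# Delay_Req sent at t3=120 arrives at t4=118 (+3 delay, -5 offset).
assert ptp_offset_and_delay(100, 108, 120, 118) == (5.0, 3.0)
```

Any asymmetry the switches introduce lands directly in the computed offset, which is why unaided PTP through deep, busy switch queues degrades and why on-switch clock support matters.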

Watch now!
Speakers

Gerard Phillips
Systems Engineer,
Arista

On-Demand Webinar: How to Prove Value with AI and Machine Learning

This webinar is now available online.

We’ve seen AI entering our lives in many ways over the past few years and we know that this will continue. Artificial Intelligence and Machine Learning are techniques so widely applicable that they will touch all aspects of our lives before too many more years have passed. So it’s natural for us to look at the broadcast industry and ask “How will AI help us?” We’ve already seen machine learning entering codecs and video processing, showing that up/downscaling can be done better by machine learning than with traditional ‘static’ algorithms such as bicubic, Lanczos and nearest neighbour. This webinar examines the other side of things: how can we use the data available within our supply chains and from our viewers to drive efficiencies and opportunities for better monetisation?

There isn’t a strong consensus on the difference between AI and Machine Learning. One view is that Artificial Intelligence is the broader term for smart computing; others say that AI has a more real-time feedback mechanism compared to Machine Learning (ML). ML is the process of giving a large set of data to a computer, along with some basic abilities, so that it can learn for itself. A great example is the AI network monitoring services that look at all the traffic flowing through your organisation and learn how people use it, then watch for unusual activity and alert you. Doing this without fixed thresholds (which really wouldn’t work for network use) is not feasible for humans, but computers are up to the task.
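The idea of a learned baseline rather than a fixed threshold can be illustrated with a deliberately tiny sketch (real products use far richer models; the traffic figures here are invented): flag a reading only when it sits far outside what recent history says is normal.

```python
from statistics import mean, stdev

def flag_anomaly(history, current, z_threshold=3.0):
    """Toy learned-baseline monitor: flag a reading whose z-score
    against recent history exceeds the threshold, instead of comparing
    it to a fixed limit that would need hand-tuning per network.

    history: recent per-interval byte counts; current: the new reading.
    """
    mu, sigma = mean(history), stdev(history)
    z = (current - mu) / sigma if sigma else float("inf")
    return abs(z) > z_threshold

baseline = [100, 105, 98, 102, 97, 103, 101, 99]  # quiet, stable traffic
assert flag_anomaly(baseline, 101) is False       # normal reading
assert flag_anomaly(baseline, 400) is True        # sudden surge is flagged
```

Because the baseline is learned from the traffic itself, the same code adapts to a quiet branch office or a busy playout centre without reconfiguration, which is the property the fixed-threshold approach lacks.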

For conversations such as this, it usually doesn’t matter how the computer achieves it – AI, ML or otherwise. The point is: how can you simplify content production? How can you get better insights into the data you have? How can you speed up manual tasks?

David Short from IET Media moderates this session with Steve Callanan, whose company WIREWAX is working to revolutionise video creation, asset management and interactive video services. He is joined by Hanna Lukashevich from Fraunhofer IDMT (Institute for Digital Media Technology), who uses machine learning to understand and create music and sound. Grant Franklin Totten completes the panel with his experience at Al Jazeera, who have been working on using AI in broadcast since 2018 as a way to help maintain editorial and creative compliance as well as detecting fake news and checking for bias.

Watch now!
Speakers

Moderator: David Short
Vice Chair,
IET Media Technical Network
Steve Callanan
Founder,
WIREWAX
Hanna Lukashevich
Head of Semantic Music Technologies,
Fraunhofer IDMT
Grant Franklin Totten
Head of Media & Emerging Platforms,
Al Jazeera Media Network

Meeting: IBC 2018 Review

Date: Wednesday 10th October, 2018.  18:00 for 18:30 start.
Location: IET, Savoy Place, London, WC2R 0BL

If you were unable to make it to IBC this year, this RTS event will bring you up to speed on the highlights of the Exhibition and the Conference.

The panel of experts will guide you through the most exciting exhibitors and give you an overview of the hottest sessions and timely topics featured in the Conference, which this year had more than 400 speakers over five days.

Register now!

Speakers:

Chair – Muki Kulhan
Executive Digital Producer/Managing Director, Muki-International
Keith Underwood
Chief Operating Officer, Channel 4
David W A Short
Vice Chair of the IET Multimedia Communications Network
Aradhna Tayal
Director Radio TechCon/IBC’s ‘What Caught My Eye’ Social Media speaker
James Lovell
Territory Account Manager for UK Media, Cisco

Register now!

Meeting: The IET President’s Address – A story of unseen engineering: digital TV compression

11th October 2018, 18:00 BST
Location: IET London, Savoy Place

Whilst many of us in the broadcast industry know current technology well, we would be wrong to overlook learning from the past, and few of us can say we remember it all. This talk by Mike Carr, former BT Chief Science Officer and current President of the IET, promises to be a great reminder of the achievements of the past and why, for better or for worse, they have given us the technological landscape we work in today.

This Presidential Address will cover the highlights and evolution of video compression engineering, starting with the relatively simple schemes of the late 1970s through to the latest sophisticated techniques, demonstrating how digital compression has played such a key part in enabling video as we use it today.

The talk is free to attend at Savoy Place, near Embankment, Central London; to register, you need only sign up for a free IET account. An optional paid dinner follows the talk.

Speaker:

Mike Carr is the former Chief Science Officer for BT, responsible for the company’s world-leading research and commercial exploitation unit, including patent licensing and corporate venturing activities.

During his first 15 years at BT’s Labs, his career focused on the research, development and practical design of real-time audio/visual and multimedia communications systems.

He has several patents to his name in the field of video compression and is the holder of two prestigious BT awards: the Martlesham Medal for R&D (1992) and the BT Gold Medal (1994) for leading multimedia product developments.

In 1998 he was elected President of the Digital Audio-Visual Council (DAVIC), a non-profit association based in Switzerland representing 160 companies in more than 25 countries, focused on developing specifications for audio-visual systems. From 1999 Mike was based in Silicon Valley, California, where he established BT’s US Technology office and Corporate Venturing activity.

Mike is a Fellow of the Royal Academy of Engineering. He received an OBE for “services to innovation” in 2017.