Video: IP Systems – The Big Picture

Early adopters of IP are benefiting from at least one of the technology's key promises: density, flexibility and scalability. For OB vans, the ability to switch hundreds of feeds within only a couple of rack units is incredibly useful; for others, being able to quickly reconfigure a room is very valuable. So whilst IP isn't yet right for everyone, those who have adopted it are getting benefits from it which SDI can't deliver. Unfortunately, there are aspects of IP which are more complex than the older technology. A playback machine plugged into an SDI router needed no configuration, although the router and control system would need to be updated manually to say that a certain input was now a VT machine. In the IP world, the control system can discover the new device itself, reducing manual intervention, as the sketch below shows. The machine also needs an IP configuration, which can be done manually or automatically. If manual, this is more work than before; if automatic, this is another service that needs to be maintained and understood.
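
As a concrete illustration of what automatic discovery can look like, here is a minimal sketch using AMWA's NMOS IS-04 Query API, one such discovery mechanism (NMOS comes up again later in this article). The registry address is hypothetical and error handling is omitted.

    # Minimal sketch: asking a hypothetical NMOS IS-04 registry which devices
    # and senders it currently knows about. Paths follow the IS-04 Query API.
    import requests

    REGISTRY = "http://registry.example.com"  # hypothetical registry address

    devices = requests.get(f"{REGISTRY}/x-nmos/query/v1.3/devices").json()
    for device in devices:
        print(device["id"], device["label"])

    senders = requests.get(f"{REGISTRY}/x-nmos/query/v1.3/senders").json()
    for sender in senders:
        print(sender["label"], sender["flow_id"])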

Just like the IT world, a modern broadcast workflow is built on layers of protocols, standards and specifications. Just as the OSI model breaks networking down into easy-to-understand, independent layers – cabling (layer 1), point-to-point data links (layer 2), the network layer (layer 3) and so on – it's useful to understand IP systems in a similar way, as this helps reduce complexity. The 'Networked Media System Big Picture' is aimed at showing how a professional IP media system is put together and how the different parts of it are linked – and how they are not. It gives a high-level view to explain the concepts and lets you add detail to show how each protocol, standard and specification is used and what its scope is. The hope is that this diagram will help everyone in your organisation speak a common language and support conversations with vendors and other partners to avoid misunderstandings.

Brad Gilmer takes us through the JT-NM's diagram, which shows that security is the bottom layer for the whole system, meaning that security is all-encompassing and important to everything. Above the security layer is the monitoring layer. Naturally, if you can't measure how the rest of your system is behaving, it's very hard to understand what's wrong. For larger systems, you'll want to aggregate the data and look for trends that may point to worsening performance. Brad explains that next come the control layer and the media & infrastructure layer. The media & infrastructure layer contains the tools and infrastructure needed to create and transport professional media.

Towards the end of the video, Brad shows how the diagram can be filled in and highlighted to show, for instance, the work that AMWA has done with NMOS, including work in progress. He also shows the parts of the system that are within the scope of the JT-NM TR-1001 document. These are just two examples of how the diagram can frame and focus discussions, demonstrating the value of the work undertaken.

Watch now!
Speaker

Brad Gilmer
Executive Director, Video Services Forum
Executive Director, Advanced Media Workflow Association (AMWA)
Moderator: Wes Simpson
LearnIPVideo.com

Video: It all started with a Reddit post…

A lively conversation today on updating workflows, upskilling staff, when to embrace the cloud…and when not to. Sparked by a discussion on Reddit, the conversation brings together Sasha Zivanovic, CEO of Canadian service provider Nextologies, and Robert Nagy, co-founder of Nxtedition. The discussion, hosted by Adam Leah, starts by tackling the question of how to deal with legacy workflows. The initial disagreement comes from their two approaches. Robert's pragmatic approach acknowledges that legacy workflows can be functional or dysfunctional, and the decision on whether to start again or transition lies in whether your current workflow works without constant human intervention. Sasha agrees that dysfunctional workflows, ones that fall apart if key people are away, need to be dismantled and reworked at the earliest opportunity. Otherwise, he feels that education is key: teaching people how to use the new technologies available and how to create good, robust workflows on which you can really base your future business.

Indeed, for Sasha, education is the key because, in his words, 'there is no 1-800 Amazon'. Being progressive and moving your workflow into the cloud may be the right way forward, but the cloud providers are only providing infrastructure, so if any little thing doesn't work, you will need your own staff to understand and resolve the problem. Even big players who may have access to named engineers will still have far too many smaller issues to deal with themselves in order to let their named resources at the cloud provider work on the higher-priority problems and designs being discussed. Moreover, a lack of education is more likely to lead people simply to go with what's easy, namely making something work using free/low-cost hardware and software. Sasha's point isn't that free things are bad, but that solutions based on getting OBS up and running are often not robust and may accept more compromises, such as latency or image quality, than needed.

Robert and Sasha go on to discuss the question of what quality is good enough, advising against superfluous quality as much as recommending against workflows that under-spec the stream. Quality needs to come down to your brand, the video's context and the technical capability of the workflow. To speak to the latter, both Robert and Sasha point out the folly in demanding that archive and contribution happen in a 'house format' such as 25Mbps. Such bitrates may make a lot of sense on-prem, but for streaming or some cloud workflows they are counterproductive and don't deliver a better result to the viewer (see the rough numbers below). Your brand does need to be considered in order to set a quality floor for the content but usually, Robert and Sasha agree, the venue of your video matters more: a YouTube Story warrants a different quality from a Vimeo post or a long-form OTT asset.
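
To put some rough numbers on that, here is the per-hour data volume at a few bitrates – simple arithmetic only, not any provider's pricing, but it shows why a 25Mbps house format quickly becomes painful for cloud storage and egress:

    # Per-hour data volume at different video bitrates: one reason a 25 Mbps
    # 'house format' can be counterproductive for streaming and cloud work.
    for mbps in (25, 8, 3):
        gbytes = mbps * 1e6 * 3600 / 8 / 1e9   # bits/s * seconds -> gigabytes
        print(f"{mbps} Mbps -> {gbytes:.2f} GB per hour")
    # 25 Mbps -> 11.25 GB, 8 Mbps -> 3.60 GB, 3 Mbps -> 1.35 GB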

The larger concern raised in this conversation is the 'bifurcation' of the market. Looking at this from a service provider's point of view, Sasha sees that the tech companies have increased the size of the market, which is positive. But with that come problems. The ease of access to the cloud increases the ability of small players to participate, but there is still a high-end part of the market where tier-1 broadcasters play. They do benefit from the cloud, but it still requires a high investment in time and design along with high opex. This doesn't mean there is no overall cost-benefit to those broadcasters; often there is, and sometimes it's not cost they are optimising for. But it's the gap that concerns Sasha, where those not operating like tier-1 broadcasters tend to gravitate to the bottom end of the market, which has much lower revenues than before. Whilst The Broadcast Knowledge would suggest this is where Sasha can prove the worth of his company, anchoring the bottom of the market at a low cost does reduce the opportunities for companies such as Nextologies to charge a sufficient amount to cover costs and remain competitive. Robert and Sasha both agree that success with clients nowadays is achieved by partnering with them and following, helping and encouraging them on their journey. The value of such a long-term design or product partner is worth more than any single workflow.

Watch now!
Speakers

Sasha Zivanovic
CEO
Nextologies
Robert Nagy
Lead Developer & Co-founder,
nxtedition
Moderator: Adam Leah
Creative Director,
nxtedition

Video: Insight into Current Trends of IP Production & Cloud Integration

When we look at the parts of our workflows that work well, we usually find standards underneath. SDI is pretty much a solved problem and has been delivering video since before the 90s, with ever better reliability as time has gone on. MPEG Transport Streams are another great example of a standard that has achieved widespread interoperability. These are just two examples given by John Mailhot from Imagine Communications as he outlines the standards which have built the broadcast industry into what it is today – or perhaps into what it was in 2005. By looking at past successes, John seeks to describe the work the industry should be doing now and into the future as technology and workflows evolve apace.

John's point is that in the past we had some wildly successful standards in video and video transport. For logging, we relied on IT-based standards like SNMP and syslog, while for control protocols the wild west was still in force, with de facto standards such as Pro-Bel's SW-P-08 router protocol and the TSL UMD protocol dominating their niches.
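
To give a flavour of these de facto protocols, here is a minimal sketch of a TSL UMD v3.1 message: a header byte of 0x80 plus the display address, a control byte carrying the tally and brightness bits, then exactly 16 ASCII characters of display text. The IP address and UDP port are assumptions for illustration; many installations carry this over serial instead.

    # Build and send a TSL UMD v3.1 tally/label packet (18 bytes).
    import socket

    def tsl_v31_packet(address: int, text: str, tally1: bool = False,
                       tally2: bool = False, brightness: int = 3) -> bytes:
        header = 0x80 + (address & 0x7F)   # display address 0-126, high bit set
        control = (tally1 << 0) | (tally2 << 1) | ((brightness & 0x03) << 4)
        display = text.ljust(16)[:16].encode("ascii")  # pad/trim to 16 chars
        return bytes([header, control]) + display

    # Set UMD 5 to "CAM 5" with tally 1 lit; destination is illustrative.
    packet = tsl_v31_packet(5, "CAM 5", tally1=True)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, ("10.0.0.20", 40001))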

The industry is now undergoing a number of transformations simultaneously. We are adopting IP-based transport, both compressed and uncompressed (though John quickly points out that SDI is still perfectly viable for many). We are moving many workloads to the cloud, we are slowly starting to increase the resolutions we support, and we are moving some production to HDR. All of this work, to be successful, should be based on standards, John says. And there are successes in there, such as AMWA's NMOS specifications, which are the first multi-vendor, industry-wide control protocol. Technically NMOS is not a standard, but in this case the effect is close to the same. John feels that the growth of our industry depends on us standardising more control protocols in the future.

John spends some time looking at how the moves to IP, UHD, HDR and cloud have played into the live production and linear playout parts of the broadcast chain. Live production, as we've heard previously, is starting to embrace IP now, lagging behind playout deployments. Playout, on the other hand, usually lags production in UHD and HDR support, since it's more important to acquire video in UHD & HDR now, even if you can't yet transmit it, to maximise its long-term value.

John finishes by pointing out that the continuation of Moore's law may not be so clear in CPUs, but it's certainly in effect in optics and in network switches and routers. Over the last decade, switch ports have gone from 10 gig to 50 to 100 and now to 400 gig. This long-term cost reduction should be baked into the long-term planning of companies embarking on an IP transformation project.
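
A back-of-the-envelope calculation shows why this matters for IP video. Using approximate active-video rates for uncompressed 10-bit 4:2:2 (as carried by SMPTE ST 2110-20) and ignoring packet overhead:

    # How many uncompressed flows fit on a port as switch speeds grow.
    hd_1080p50 = 1920 * 1080 * 20 * 50 / 1e9    # ~2.1 Gb/s active video
    uhd_2160p50 = 3840 * 2160 * 20 * 50 / 1e9   # ~8.3 Gb/s active video

    for port_gbps in (10, 100, 400):
        print(f"{port_gbps}G port: ~{int(port_gbps / hd_1080p50)} HD flows, "
              f"~{int(port_gbps / uhd_2160p50)} UHD flows")
    # 10G: ~4 HD / ~1 UHD; 100G: ~48 HD / ~12 UHD; 400G: ~192 HD / ~48 UHD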

Watch now!
Speaker

John Mailhot
CTO,
Imagine Communications

Video: The Future of Live HDR Production

HDR has long been hailed as the best way to improve the image delivered to viewers because it packs a punch whatever the resolution. Usually combined with a wider colour gamut, it brings brighter highlights and more colours, with the ability for them to be more saturated. Whilst the technology has been in TVs for a long time now, it has continued to evolve, and it turns out that a full, top-tier production in HDR isn't trivial, so broadcasters have been working for a number of years to understand the best way to deliver HDR material for live sports.

Leader has brought together a panel of people who have all cut their teeth implementing HDR in their own productions and 'writing the book' on HDR production. The conversation starts with the feeling that HDR has now 'arrived': massive shows as well as regular weekly matches are routinely produced in HDR.

Pablo Garcia Soriano from CROMORAMA introduces us to light theory, talking about our eyes' non-linear perception of brightness. This leads to a discussion of what 'scene-referred' vs 'display-referred' HDR means, which is a way of saying whether you interpret the video as describing the brightness your display should be generating or the brightness of the light that went into the camera. For more on colour theory, check out this detailed video from CVP or this one from SMPTE.
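
To make that non-linearity concrete, here is the HLG OETF from ITU-R BT.2100 – the camera-side curve that turns scene-referred linear light into a signal, spending far more code values on the shadows, where our eyes are most sensitive:

    # HLG OETF (ITU-R BT.2100): scene light E in 0..1 -> non-linear signal E'.
    import math

    A, B, C = 0.17883277, 0.28466892, 0.55991073

    def hlg_oetf(e: float) -> float:
        if e <= 1.0 / 12.0:
            return math.sqrt(3.0 * e)
        return A * math.log(12.0 * e - B) + C

    print(hlg_oetf(1.0 / 12.0))  # 0.5: half the signal range covers the
                                 # darkest twelfth of the scene light
    print(hlg_oetf(1.0))         # 1.0: full scale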

Pablo finishes by explaining that when you have four different deliverables – SDR, S-Log3, HLG and PQ – the only way to make this work, in his opinion, is by using scene-referred video.

Next to present is Prin Boon from PHABRIX, who relates his experiences working on live football and rugby in 2019. These shows had 2160p50 HDR and 1080i25 SDR deliverables for the main BT programme and the world feed, plus feeds for third parties like the jumbotron, VAR, BT Sport's studio and the EPL.

2019, Prin explains, was a good year for HDR, as TVs and tablets were properly available in the market and, behind the scenes, Steadicam rigs were now HDR-compatible, radio links could be 10-bit and replay servers also ran in 10-bit. In order to produce an HDR programme, it's important to look at all the elements: if only your main stadium cameras are HDR, you soon find that much of the programme is actually SDR-originated. It's vital to get HDR into each camera and replay machine.

Prin found that 'closed-loop SDR shading' was the only workable approach that allowed them to produce a top-quality SDR product which, as Kevin Salvidge reminds us, is still the one that earns the most money. Prin explains what this looks like, but in summary, all monitoring is done in SDR even though it's based on the HDR video.

In terms of tips and tricks, Prin warns about being careful with nomenclature, not only in your own operation but also in vendor-specified products, giving the example of 'gain', which can be applied either as a percentage or in dB, in either the light or the code space – all permutations giving different results, as the sketch below illustrates. Additionally, he cautions that multiple trips to and from HDR/SDR will lead to quantisation artefacts and should be avoided when not necessary.
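
A small sketch of why that matters, assuming a simple 2.4 power-law gamma for illustration: a nominal +6dB is a doubling, but doubling the linear light is not the same as doubling the gamma-encoded code value.

    # '+6 dB' (x2) applied in the light domain vs the code domain.
    linear = 0.10                              # linear scene light
    code = linear ** (1 / 2.4)                 # gamma-encoded signal, ~0.383

    light_domain = (2 * linear) ** (1 / 2.4)   # double the light, then encode
    code_domain = 2 * code                     # double the code value
    print(round(light_domain, 3), round(code_domain, 3))  # ~0.511 vs ~0.766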

The last presentation is from Chris Seeger and Michael Drazin of NBC Universal, who talk about the upcoming Tokyo Olympics, where they're taking the view that SDR should look the 'same' as HDR. To this end, they've done a lot of work creating LUTs (look-up tables) which allow conversion between formats. Created in collaboration with the BBC and other organisations, these LUTs are now being made available to the industry at large.
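
For readers new to LUTs, here is a toy sketch of the underlying idea: sample the conversion curve at fixed points, then index and interpolate. Conversions like the ones described here are 3D (RGB in, RGB out) and far denser, but the principle is the same.

    # A tiny 1D LUT: output values sampled at evenly spaced inputs in 0..1.
    def apply_lut_1d(lut, x):
        pos = x * (len(lut) - 1)
        i = min(int(pos), len(lut) - 2)
        frac = pos - i
        return lut[i] * (1 - frac) + lut[i + 1] * frac

    # A coarse 5-point LUT approximating a gamma 2.2 encode:
    lut = [v ** (1 / 2.2) for v in (0.0, 0.25, 0.5, 0.75, 1.0)]
    print(round(apply_lut_1d(lut, 0.18), 3))  # ~0.383 vs the true ~0.459:
                                              # five points is far too coarse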

They use HLG as their interchange format, with camera inputs being scene-referred, but delivery to the home is display-referred PQ. They explain that this actually allows them to maintain more than 1,000 nits of HDR detail. Their shaders work in HDR, unlike the UK-based work discussed earlier; NBC found that the HDR and SDR out of the CCU didn't match, so the HDR is converted to SDR using the NBC LUTs. They caution to watch out for the different primaries of BT.709 and BT.2020: some software doesn't convert the primaries, and the colours are therefore shifted.
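
To illustrate the primaries issue: converting linear BT.709 RGB to linear BT.2020 RGB requires a matrix multiply (the coefficients below are from ITU-R BT.2087). Simply relabelling the signal without converting it shifts every colour.

    # Linear BT.709 RGB -> linear BT.2020 RGB (per ITU-R BT.2087).
    M = [[0.6274, 0.3293, 0.0433],
         [0.0691, 0.9195, 0.0114],
         [0.0164, 0.0880, 0.8956]]

    def rgb709_to_rgb2020(rgb):
        return [sum(M[r][c] * rgb[c] for c in range(3)) for r in range(3)]

    # Pure BT.709 red sits inside the wider BT.2020 gamut:
    print(rgb709_to_rgb2020([1.0, 0.0, 0.0]))  # ~[0.627, 0.069, 0.016]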

NBC Universal put a lot of time into creating their own objective visualisation and measurement system to be able to fully analyse the colours of the video as part of their goal to preserve colour intent even going as far as to create their own test card.

The video ends with an extensive Q&A session.

Watch now!
Speakers

Chris Seeger
Office of the CTO, Director, Advanced Content Production Technology
NBC Universal
Michael Drazin
Director Production Engineering and Technology,
NBC Olympics
Pablo Garcia Soriano
Colour Supervisor, Managing Director
CROMORAMA
Prinyar Boon
Product Manager, SMPTE Fellow
PHABRIX
Moderator: Ken Kerschbaumer
Editorial Director,
Sports Video Group
Kevin Salvidge
European Regional Development Manager,
Leader