JPEG XS is a brand-new, ultra-low-latency standard delivering JPEG 2000 quality at around 1000x lower latency: microseconds instead of milliseconds. This mezzanine compression standard promises compression ratios of up to 10:1, resolutions of up to 8K plus HDR, and frame rates from 24 to 120 fps.
Jean-Baptiste Lorent from intoPIX shows how JPEG XS can be used with the SMPTE ST 2110 stack. Part 22 of ST 2110 allows for the transport of compressed video essence as an alternative to uncompressed essence – all the other elementary streams stay the same; just the video RTP payload changes. This approach saves a lot of bandwidth while keeping all the existing advantages of moving from SDI to IP.
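To get a feel for the bandwidth saving, here is a rough, payload-only back-of-the-envelope calculation. The format parameters (UHD 2160p60, 4:2:2, 10-bit) are assumed example figures for illustration, not numbers from the talk or the spec:

```python
# Rough payload-only bandwidth estimate for one UHD stream.
# Assumed example figures: 2160p60, 4:2:2 sampling, 10 bits per sample.
width, height, fps = 3840, 2160, 60
bits_per_pixel = 20                 # 4:2:2 -> 2 samples/pixel x 10 bits

uncompressed_bps = width * height * fps * bits_per_pixel
jpeg_xs_bps = uncompressed_bps / 10  # the 10:1 mezzanine ratio quoted above

print(f"Uncompressed: {uncompressed_bps / 1e9:.2f} Gb/s")
print(f"JPEG XS 10:1: {jpeg_xs_bps / 1e9:.2f} Gb/s")
```

At 10:1 an almost-10 Gb/s uncompressed UHD essence fits comfortably inside a 10 GbE link with room to spare, which is exactly the appeal of ST 2110-22 for existing plant.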
JPEG XS builds on TICO, which arrived in products four or more years ago and allowed HD products to support UHD workflows. Like TICO, JPEG XS was designed for visually lossless quality and for maintaining that quality over multiple re-encoding stages. The combination of very low, microsecond latency and relatively low bandwidth makes it ideal for remote production of live events.
There continues to be fervent activity in codec development and it’s widely expected that there won’t be a single successor to AVC (H.264). Vying for a spot are AV1 and MPEG’s VVC.
In this talk at SMPTE 2018, Julien Le Tanou from MediaKind compares the coding tools used by VVC and AV1 and explains the methodology he uses to compare the two codecs. We see the increase in decoding time required for VVC compared to HEVC, as well as for the famously slow AV1. We also see the bitrate savings, with VVC performing better.
Julien also presents subjective results which do not correlate with the objective results, and explains the reasons for this.
Delivering an all-IP truck is no mean feat. tpc explains what they learnt, what went well and how they succeeded in delivering a truck which takes no longer to fire up than a traditional SDI truck.
Common questions among people considering a move to IP are ‘do I need to?’ and ‘how can I get ready?’. Here at The Broadcast Knowledge we always say ‘find a small project, get it working, learn what goes wrong and then plan the one you really wanted to do.’ The Swiss broadcasting service provider ‘Technology and Production Centre’, known as ‘tpc’, has done just that.
tpc is currently working on the Metechno project – a large all-IP news, sports and technology center for Swiss radio and television. In order to acquire necessary experience with the SMPTE ST 2110 standard, tpc designed the UHD1 OB van ahead of time which has been used in TV production for 6 months now. In this video, Andreas Lattmann shares the vision of the Metechno Project and, critically, his experiences related to the design and use of the truck.
The UHD1 is a 24-camera OB van with an all-IP core based on Arista switches in a non-blocking architecture. It is the equivalent of a 184-square UHD SDI system; however, it can be expanded by adding additional line cards to the network switches. The truck is format agnostic, supporting both HD and UHD in HDR and SDR, and IP gateways are incorporated for SDI equipment.
The SMPTE ST 2110 specification separates video and audio into discrete essence streams which boosts efficiency and flexibility, but we hear in this talk that more attention to latency (lip sync) is required compared to SDI systems. Andreas talks about the flexibility this truck provides with up-/down-conversion, color-correction for any video plus how IP has enabled full flexibility in what can be routed to the multiviewer screens.
Andreas spends some time discussing redundancy: how IP enables full redundancy – an improvement over many SDI infrastructures – and how SMPTE’s ST 2022-7 standard makes this possible.
The main GUI is based on a Lawo VSM control system which aims to deliver a familiar experience for operators used to working in the SDI domain. Network training has been provided for all operators because troubleshooting has changed significantly with the introduction of essences over IP. This is not least because the NMOS IS-04 and IS-05 standards were not mature enough during the design of the truck, so all IP connections had to be managed manually. With more than 50 thousand IP addresses in this system, AMWA’s NMOS IS-04, which manages discovery and registration, and IS-05, which manages the setup and take-down of connections, would have helped significantly in the lean management of the truck.
Lattmann emphasises the importance of using open standards like SMPTE ST 2110 instead of proprietary solutions. That allows you to choose the best components and not rely on a single manufacturer.
The learnings Andreas presents involve difficulties with PTP, IP training and the benefits of flexibility. From a video point of view, Andreas shares his experiences with HDR-to-SDR workflows, focussing on HDR and UHD.
Lars Borg explains to us what problems the SMPTE ST 2094 standard sets out to solve. Looking at the different types of HDR and Wide Colour Gamut (WCG) we quickly see how many permutations there are and how many ways there are to get it wrong.
ST 2094 carries the metadata needed to manage the colour, dynamic range and related data. In order to understand what’s needed, Lars takes us through the details of the HDR implementations, touching on workflows and explaining how the ability of your display affects the video.
We then look at midtones and dynamic metadata before a Q&A.
This talk is very valuable in understanding the whole HDR, WCG ecosystem as much as it is ST 2094.
This talk is part of a series of talks on ATSC 3.0 we’re featuring here on The Broadcast Knowledge. ATSC 3.0 is a big change in terrestrial television transmission because even over the air, the signal is IP.
In this talk, Joe Seccia from GatesAir, a company famed for its transmission systems, talks us through where the US (and Seoul) is on its way to deploying this technology.
With major US broadcasters having pledged to be on air with ATSC 3.0 by the end of 2020, trials are turning into deployments and this is a report back on what’s been going on.
Joe covers the history of previous tests and trials before taking us through the architecture of a typical system. After explaining the significance of the move to IP, Joe also covers other improvements such as using OFDM modulation and thus being able to use a single frequency network (SFN). This combination of technologies improves reception and coverage over the 8VSB transmissions which went before it.
We also hear about the difference between home and broadcast gateways in the system as well as the Early Alert System Augmentation features which allow a broadcaster to ‘wake up’ TVs and other devices when disasters strike or are predicted.
Display technology has always been deeply intertwined with broadcasting. After all, when John Logie Baird first demonstrated his working television, he had to invent both the camera and the display device, then known as the televisor. He himself worked tirelessly on improving television and, less than 20 years after his black and white debut, was working on a colour television which used two CRTs (cathode ray tubes) to produce its picture, culminating in the world’s first demonstration of a colour TV in 1944 – incidentally discovering, demonstrating and patenting 3D TV along the way!
So it is today that the displays define what we can show to viewers. Is there any point in mastering a video to show at 10,000 nits if there is no display that can show something so bright? Pushing all of Europe and the US’s television programmes to 8K resolution is of limited benefit when 8K TVs are in limited supply and in few homes.
This talk looks at the state of the art of display technology, seeing where it’s being used and how. Digital signage is covered too; this is, of course, where high-brightness technology for outdoor signs is developed, some of which could influence the more conventional TVs on which we want to watch HDR (High Dynamic Range) video.
When OLED technology first came along it was quickly slated as a great option for TVs, and yet all these years later we see that its adoption in large panels is low. This shows the difficulty, sometimes, in dealing with the technical challenges of great technologies. We now see OLEDs in wearable devices and smaller screens. The number of these screens is quickly increasing as IoT devices, watches and other electronics start to adopt full screens instead of just flashing LEDs. This increase in manufacturing should lead to renewed investment in this field, potentially allowing OLEDs to be incorporated into full-sized, large TVs.
The talk finishes with a look at the TV market, covering quantum dots and what people really mean when they mention ‘LED TVs’.
This webinar is from the Society for Information Display and is produced in partnership with SMPTE.
With the SMPTE 2110 suite of standards largely published and the related AMWA IS-04 and -05 specifications stable, people’s minds are turning to how to implement all these standards bringing them together into a complete working system.
The JT-NM TR-1001-1 is a technical recommendation document which describes a way of documenting how the system will work – for instance how do new devices on the network start up? How do they know what PTP domain is in use on the network?
John Mailhot starts by giving an overview of the standards and documents available, showing which ones are published and which are still in progress. He then looks at each of them in turn to summarise its use on the network and how it fits into the system as a whole.
Once the groundwork is laid, we see how the JT-NM working group have looked at 5 major behaviours and what they have recommended for making them work in a scalable way. These cover things like DNS discovery, automated multicast address allocation and other considerations.
At The Broadcast Knowledge, we’re continuing to cut through the hype and get to the bottom of blockchain. Now part of the NAB drinking game along with words like AI and 5G, it’s similarly not going away. The principle of blockchain is useful – just not useful everywhere.
So what can broadcasters do with Blockchain, and – given this is a SMPTE talk – what can film studios do with it? It’s doubtless that blockchain really makes secure, trusted systems possible so the mind immediately jumps to using it to ensure all the files needed to create films are distributed securely and with an audit trail.
Here, Steve Wong looks at this but also explores the new possibilities this creates. He starts with the basics on what blockchain is and how it works, but soon moves into how this could work for Hollywood, explaining what could exist and what already does.
‘Better Pixels’ is the continuing refrain from the large number of people who are dissatisfied with simply increasing resolution to 4K or even 8K. Why can’t we have a higher frame-rate instead? Why not give us a wider colour gamut (WCG)? And why not give us a higher dynamic range (HDR)? Often, they would prefer any of these 3 options over higher resolution.
Dynamic range is the term used to describe how much of a difference there is between the smallest possible signal and the strongest possible signal. In audio, it’s the quietest thing that can be heard versus the loudest thing that can be heard (without distortion). In video, it’s the difference between black and white – after all, can your TV fully simulate the brightness and power of our sun? No. What about your car’s headlights? Probably not. Can your TV go as bright as your phone’s flashlight? Well, now that’s realistic.
So let’s say your TV can go from a very dark black to being as bright as a medium-power flashlight; what about the video that you send your TV? When there’s a white frame, do you want your TV blasting as bright as it can? HDR allows producers to control the brightness of your display device so that something that is genuinely very bright – like a star, a bright light or an explosion – can be represented very brightly, whereas something which is simply white can have the right colour but a medium brightness. With video which is Standard Dynamic Range (SDR), there isn’t this level of control.
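One intuitive way to quantify this difference is in ‘stops’ – doublings of light between the darkest and brightest levels a display can produce. The nit figures below are assumed, illustrative values, not numbers from any of the talks:

```python
import math

# Dynamic range expressed in stops (doublings of light).
# The nit figures are assumed example values for illustration only.
def stops(max_nits, min_nits):
    return math.log2(max_nits / min_nits)

sdr = stops(100, 0.1)      # a typical SDR grading range
hdr = stops(1000, 0.005)   # a 1000-nit HDR display with deep blacks

print(f"SDR: ~{sdr:.1f} stops, HDR: ~{hdr:.1f} stops")
```

Even a modest 1000-nit HDR display with good blacks covers several more stops than SDR, which is where that extra creative control over ‘how bright is white’ comes from.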
For films, HDR is extremely useful, but for sports too – who’s not seen a football game where the sun leaves half the pitch in shadow and half in bright sunlight? With SDR, there’s no choice but to have one half either very dark or very bright (mostly white), so you can’t actually see the game there. HDR enables the production crew to let HDR TVs show detail in both areas of the pitch.
HLG, which stands for Hybrid Log-Gamma, is the name of a way of delivering HDR video. It’s been pioneered, famously, by Japan’s NHK with the UK’s BBC and has been standardised as ARIB STD-B67. In this talk, NHK’s Yuji Nagata helps us navigate working with multiple formats: HDR HLG -> SDR, plus converting from HLG to Dolby’s HDR format called PQ.
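The ‘hybrid’ in Hybrid Log-Gamma is visible in its transfer function: a square-root (gamma-like) segment for dark scene light and a logarithmic segment for highlights. A minimal sketch of the HLG OETF, using the constants published in ARIB STD-B67 / ITU-R BT.2100 (this is an illustration of the curve, not NHK’s conversion code from the talk):

```python
import math

# HLG OETF constants from ARIB STD-B67 / ITU-R BT.2100.
a = 0.17883277
b = 1 - 4 * a                   # 0.28466892
c = 0.5 - a * math.log(4 * a)   # 0.55991073

def hlg_oetf(E):
    """Map normalised linear scene light E in [0, 1] to an HLG signal value."""
    if E <= 1 / 12:
        return math.sqrt(3 * E)          # gamma-like segment for shadows
    return a * math.log(12 * E - b) + c  # log segment for highlights

print(hlg_oetf(1 / 12))  # crossover between the two segments: 0.5
print(hlg_oetf(1.0))     # peak scene light maps to ~1.0
```

The log segment is what lets HLG carry highlight detail far above SDR white while the lower half of the signal remains broadly compatible with SDR displays.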
The reality of broadcasting is that anyone who is producing a programme in HDR will have to create an SDR version at some point. The question is how to do that and when. For live, some broadcasters may need to fully automate this. In this talk, we look at a semi-automated way of doing this.
HDR is usually delivered in a Wide Colour Gamut signal such as the ITU’s BT.2020. Converting between this colour space and the more common BT.709 colour space, which is part of the HD video standards, is also needed on top of the dynamic range conversion. So listen to Yuji Nagata’s talk to find out NHK’s approach to this.
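At its simplest, the primaries part of that conversion is a 3x3 matrix applied to linear-light RGB. A hedged sketch using the BT.2020-to-BT.709 matrix published in ITU-R BT.2087 – note this is only the colour-primaries step, not the transfer-function or tone-mapping handling that NHK’s full workflow covers:

```python
# Linear-light BT.2020 RGB -> BT.709 RGB, 3x3 matrix from ITU-R BT.2087.
# Applies to linear light only; gamma/HLG/PQ decoding must happen first.
M = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def bt2020_to_bt709(rgb):
    return [sum(M[r][c] * rgb[c] for c in range(3)) for r in range(3)]

# White maps to white: each matrix row sums to ~1.
print(bt2020_to_bt709([1.0, 1.0, 1.0]))
```

The negative coefficients show why this isn’t trivial in practice: saturated BT.2020 colours can land outside BT.709, producing negative values that then need gamut mapping or clipping.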
NHK has pushed very hard for many years to make 8K broadcasts feasible and has in recent times focussed on tooling up in time for the 2020 Olympics. This talk was given at the SMPTE 2017 technical conference, but is all the more relevant now as NHK ups the number of 8K broadcasts approaching the opening ceremony. This work on HDR and WCG is part of making sure that the 8K format really delivers an impressive and immersive experience for those that are lucky enough to experience it. This work on the video goes hand in hand with their tireless work on audio, which can deliver 22.2 multichannel surround.
SMPTE Time Code started off in the 1970s and has evolved, yet in some ways remained unchanged. It is key to electronic video editing, and has found application in many other fields and industries including live events. This is Armin van Buuren explaining how he uses SMPTE timecode in his live DJ sets.
The more we push technology, the more we demand from timecode, so now there is the TLX project (Time Label, eXtensible) which seeks to define a new labelling system.
The webinar will provide an overview of the emerging design, and is intended to provide a preview for potential users, and to encourage participation by those with expertise to offer.
Peter Symes, the host, covers:
The history of timing
What SMPTE ST 12 is and its evolution
The concept of TLX
Use of PTP & provision for extensibility
What types of data can TLX convey
Watch now to hone your knowledge of the SMPTE timecode that already exists and to get ready to understand TLX.