Video: High-Throughput JPEG 2000 (HTJ2K) for Content Workflows

Published last year, High Throughput JPEG 2000 (HTJ2K) is an update to the J2K we know well in the broadcast industry which makes it much faster. Whilst JPEG 2000 has found a home in low-latency broadcast contribution, it’s also part of the Archive eXchange Format (AXF) because, unlike most codecs, JPEG 2000 has a mathematically lossless mode. HTJ2K takes JPEG 2000 and replaces part of the compression with a much faster algorithm, allowing decoding that is 10 to 28 times faster in many circumstances.

The codec market seems to be waking up to the fact that multiple types of codec are needed to support the thousands of use cases we have in Media & Entertainment and beyond. It’s generally well known that codecs live in a world where they are optimising bitrate at the expense of latency and quality. But the advent of MPEG-5 Part 2, also known as LCEVC, shows that there is also value in optimising to reduce the complexity of encoding. In some ways this is similar to reducing latency, but in the LCEVC example the aim is to allow low-power or low-complexity equipment to deal with HD or UHD video where otherwise that might not have been possible. With HTJ2K we have a similar situation, where it’s worth gaining 10x more throughput when managing and processing your archive at the expense of around 5% more bitrate.
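
To see why that trade can be worthwhile, here is a back-of-the-envelope sketch. Every price and job size below is an assumption for illustration only; the 10x speed-up and ~5% bitrate figures are the ones quoted above.

```typescript
// Hypothetical archive-processing costs: all numbers here are assumptions.
const archiveTB = 500;                  // archive size in TB (assumed)
const storagePerTBMonth = 20;           // $/TB/month object storage (assumed)
const cpuPerHour = 0.05;                // $/vCPU-hour (assumed)
const j2kDecodeHours = 100_000;         // vCPU-hours to read the archive once (assumed)
const speedup = 10;                     // low end of the 10-28x HTJ2K figure
const bitrateOverhead = 0.05;           // ~5% more bitrate for HTJ2K

const j2kComputePerPass = j2kDecodeHours * cpuPerHour;               // $5,000
const htj2kComputePerPass = (j2kDecodeHours / speedup) * cpuPerHour; // $500
const extraStoragePerMonth = archiveTB * bitrateOverhead * storagePerTBMonth; // $500

console.log(`Compute saved per archive pass: $${j2kComputePerPass - htj2kComputePerPass}`);
console.log(`Extra storage cost per month:   $${extraStoragePerMonth}`);
```

Under these made-up numbers, one full processing pass per month saves $4,500 in compute against $500 of extra storage; an archive that is written once and rarely read would see the balance tip the other way.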

This talk from the EBU’s Network Technology Seminar hears from Pierre-Anthony Lemieux and Michael Smith, who explain the need for this codec and its advantages. One important fact is that the underlying wavelet transform and quantisation haven’t been changed, only the block coding around them. This means that you can take previously encoded files and process them into HTJ2K without changing any of the image data. This allows lossy J2K files to be converted without any degradation due to re-encoding and minimises conversion time for lossless files. Another motivator for this codec is cloud workflows, where speed of compression is important to reduce costs. Michael Smith also explores the similarities and differences of High-Throughput J2K with JPEG XS.
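
In pseudo-TypeScript, that reversible conversion looks something like the sketch below. Every function here is a hypothetical stand-in, not a real codec API; the point it illustrates is that only the entropy-coded code-blocks are rewritten, while the wavelet coefficients they represent are carried across untouched.

```typescript
// Conceptual only: these declarations are hypothetical, not a real library.
interface CodeBlock { data: Uint8Array }

declare function parseJ2KCodestream(file: Uint8Array): CodeBlock[];      // hypothetical
declare function decodeMQCodeBlock(block: CodeBlock): Int32Array;        // Part 1 block decoder (hypothetical)
declare function encodeHTCodeBlock(coeffs: Int32Array): Uint8Array;      // HT block coder (hypothetical)
declare function writeHTJ2KCodestream(blocks: Uint8Array[]): Uint8Array; // hypothetical

function transcodeToHTJ2K(j2kFile: Uint8Array): Uint8Array {
  const htBlocks = parseJ2KCodestream(j2kFile).map((block) => {
    const coeffs = decodeMQCodeBlock(block); // undo only the entropy coding
    return encodeHTCodeBlock(coeffs);        // re-code the very same coefficients
  });
  // No inverse wavelet transform and no re-quantisation happen anywhere,
  // which is why the conversion introduces no generational loss.
  return writeHTJ2KCodestream(htBlocks);
}
```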

Watch now!
Speakers

Pierre-Anthony Lemieux
Sandflow Consulting
Michael Smith
Wavelet Consulting

Video: Esports for Broadcasters – Part III

In the last of three sessions on esports, the RTS Thames Valley looks at how vendors serving the traditional sports market can adapt to this quickly growing one.

Guillaume Neveux from EVS sets the scene by talking about current viewing figures (44 million concurrent peak viewers for League of Legends) and revenue predictions of a 40% increase over the next three years. This is built on sponsorship which, like on TV, takes the form of ad insertion and programme sponsorship (i.e. a logo on screen), to name but two options. Esports has an advantage in that the whole world the sport takes place in can be controlled. This means that advertising signs can be placed, live, on objects in the stream so that they are seen by the viewers but not by the players, something which has been attempted in traditional sports but has yet to become common.

Guillaume also looks at how Twitch and YouTube Gaming work, commenting that one of their big differences from traditional sports is the chat room which scrolls next to the game itself. This lends a significant feeling of community to the game which is seldom replicated in traditional sports broadcasting. In general, esports is free to watch. Freemium subscriptions allow you to reduce the number of adverts seen and also improve the chat options.

The next part of the talk spotlights some of the roles unique to esports. The Caster is analogous to a commentator. They are there to weave a story, to explain what’s happening on screen and to add colour to the event by explaining more about what’s happening, about the people and about the game itself. Streamers are individuals who stream themselves playing computer games and who, like YouTube personalities, can have extremely large audiences. An Observer is someone who moves around the game world but is invisible to the players; they are analogous to camera operators in that they can control their own view of the world and are also responsible for choosing which views from the players are seen. Essentially they are like a sub vision mixer, feeding specific shots into the main programme as well as, in some circumstances, creating dedicated streams of shots for secondary outputs. Graphics operators are just as important as in other types of programme, although aspect ratios are all the more tricky and the work also involves integration with the game engines.

Guillaume also covers the equipment used by esports broadcasters. EVS is a premium brand with products honed to a very specific market. Guillaume explains that although the equipment may seem expensive, the efficiencies derived from buying equipment designed for your workflow are notable compared to creating similar workflows out of other equipment, typically due to the added complexity, maintenance and poorer workflow fit of the latter. At the end of the day, much of what traditional sports and esports need is similar – slow motion, replays, graphics insertion – so only some modifications were needed to the EVS products to make them fit into the required workflows.

Watch now!
Speakers

Guillaume Neveux
Business Development Manager EMEA,
EVS

Video: Hybrid SDI/ST 2110 Workflows

It’s no secret that SDI is still the way to go for some new installations. For all the valid interest in SMPTE’s ST 2110, the cost savings are only realised either on a large scale or when a system needs continuous flexibility (such as an OB truck) or future scalability. Even those installations which have gone IP still have some SDI lying around somewhere. Currently, there are few situations with an absolute ‘no SDI’ policy because there are few business cases which can afford one.

Looking at the current deployments of broadcast 2110, we have large, often public, broadcasters who are undergoing a tech refresh for a building and can’t justify such a massive investment in SDI, or who are aiming to achieve specific savings, such as Discovery’s Eurosport Transformation Project, an inspirational, international project to do remote production for whole buildings. We also have OB trucks, which benefit significantly from reduced cabling, higher-density routing and flexibility. For a more detailed view on 2110 in trucks, watch this video from NEP. In these scenarios, there is nearly always SDI still involved. Some equipment doesn’t yet work fully in 2110, some doesn’t yet work at all, and while there are IP versions of some products, the freelance community still needs to learn how to use the new products and work in the new workflows. If you have a big enough project, you’ll hit the ‘vendor not yet ready’ problem; if you have an OB truck or similar, you are likely to have to deal with the freelance experience issue. Both problems are diminishing, but they are still real and need to be dealt with.

Kevin Salvidge from Leader joins the VSF’s Wes Simpson to share his experience of these mixed SDI/IP workflows, many of which are in OB trucks and so also include mixed HDR workflows. He starts by talking about PTP and GPS, discussing how timing needs to be synced between locations. He then takes a closer look at the job of the camera shaders, who make sure all the cameras have the same colour, exposure etc. Kevin talks about how live production in HDR and SDR works, touching on the problem of ‘immediacy’. Shaders need to swap between cameras quickly and are used to the instantaneous switch that SDI can provide. IP can’t offer quite the same immediacy, so, Kevin says, some providers have added delays into the SDI switches to match the IP switch times within the same truck. This sets expectations and stops operators pressing two or more times to get a switch made.

Kevin finishes his talk on the topic of synchronising analogue timing signals with PTP. Kevin shows us the different tools you can use to monitor these signals, such as a display of PTP timing against black-and-burst (B&B) timing, a readout of BMCA (Best Master Clock Algorithm) data from the PTP grandmasters to check that the algorithm is working correctly, PTP delay time, packet inter-arrival time, path delay and traffic-shaping monitoring. He then closes with a Q&A covering the continued prevalence of SDI, what ‘eye patterns’ are in the IP world and increasing HDR roll-outs.
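
For context on two of those readouts, the offset and path-delay figures a PTP monitor displays fall out of the standard IEEE 1588 delay request-response exchange. A minimal sketch of that arithmetic, with made-up timestamps:

```typescript
// PTP offset and mean path delay from the four IEEE 1588 timestamps:
// t1 = master sends Sync, t2 = slave receives Sync,
// t3 = slave sends Delay_Req, t4 = master receives Delay_Req.
// Timestamps in nanoseconds; the example values are illustrative only.
function ptpMetrics(t1: bigint, t2: bigint, t3: bigint, t4: bigint) {
  const masterToSlave = t2 - t1;
  const slaveToMaster = t4 - t3;
  return {
    // Assumes a symmetric path; any asymmetry shows up as apparent offset.
    meanPathDelay: (masterToSlave + slaveToMaster) / 2n,
    offsetFromMaster: (masterToSlave - slaveToMaster) / 2n,
  };
}

// Slave clock 1500 ns ahead of the master over a 2000 ns path:
const { meanPathDelay, offsetFromMaster } =
  ptpMetrics(1_000_000n, 1_003_500n, 1_010_000n, 1_010_500n);
console.log(meanPathDelay, offsetFromMaster); // 2000n, 1500n
```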

Watch now!
Speaker

Kevin Salvidge
European Regional Development Manager
Leader Europe Ltd.
Moderator: Wes Simpson
President, Telecom Product Consulting
Owner, LearnIPVideo.com

Video: Low Latency, Real-Time Streaming & WebRTC

Can a stream ever be too low-latency? For some, matching broadcast latency is all they need. But for others, particularly gaming, gambling or more interactive services, sub-second latency is a must and they are happy to swap out parts of their technology stack to make that happen. WebRTC is often seen as the best choice for anyone wanting to achieve an almost instant stream. Started by Google in 2011 for video conferencing applications, WebRTC hit a 1.0 release in 2018 and has been adopted by a number of companies catering to the broadcast market.

WebRTC stands out among the plethora of streaming protocols because it is an actual stream of data and not a series of files transferred just in time. Traditionally, buffers have been heavily used in streaming because it was so hard to get data to the player when the mainstream internet was starting out in the 90s, and again as the mobile internet was establishing itself ten years later. Whilst those buffers are very helpful in dealing with delayed data, they are a big setback when delivering a low-latency stream. With WebRTC there is very little buffering, so when using the protocol you have to accept that you may not get all your data delivered, and where packets are missing, glitches will be seen. This is one significant difference from MPEG-DASH and HLS, which will show you either a blank screen or a perfect rendition of the chunk that was sent, thanks to TCP. This is an example of the compromise of going sub-second: there are no second chances to get a packet again. And whilst this compromise may be a great exchange for an auction site or betting service, for other streaming services it may be better to use CMAF with three-second latency.
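
The buffer arithmetic explains the gap. A rough latency-floor calculation, using assumed but typical figures rather than numbers from the talk:

```typescript
// Latency floor of chunked delivery: the player holds N pieces of media
// before it dares to play. All figures below are illustrative assumptions.
const latencyFloor = (pieceSeconds: number, piecesBuffered: number) =>
  pieceSeconds * piecesBuffered;

console.log(latencyFloor(6, 3));    // classic HLS: 6 s segments x 3 buffered = 18 s
console.log(latencyFloor(1, 3));    // low-latency CMAF chunks: the ~3 s mentioned above
console.log(latencyFloor(0.05, 1)); // WebRTC-style jitter buffer: ~50 ms (assumed)
// With only ~50 ms of buffer there is no time to retransmit a lost packet,
// hence glitches instead of TCP's stall-or-perfect behaviour.
```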

In this talk, Limelight Networks Video Architect Andrew Crowe introduces WebRTC and explains how it can be deployed. He starts by talking about the video codecs it contains. VP9 has recently been added to the options; for a long time it was a VP8 technology. Andrew explains how the codecs it carries have a knock-on effect on its compatibility with browsers. UDP is the technology underlying all low-latency streaming, since the bureaucracy of TCP/IP gets in the way of real-time media. Andrew also explains how security pervades WebRTC, from its use of DTLS (which is like HTTPS/TLS for UDP) to Secure RTP (SRTP) and SRTCP.
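
You can see the browser-compatibility point for yourself with the standard WebRTC API: ask the browser which video codecs its stack can send. This snippet uses the real RTCRtpSender.getCapabilities() call and runs in any modern browser console; the output varies by browser, which is exactly the knock-on effect Andrew describes.

```typescript
// List the video codecs the local WebRTC stack can negotiate for sending.
const caps = RTCRtpSender.getCapabilities("video");
const codecs = new Set(
  (caps?.codecs ?? []).map((c) => c.mimeType) // e.g. "video/VP8", "video/VP9"
);
console.log("VP8 supported:", codecs.has("video/VP8"));
console.log("VP9 supported:", codecs.has("video/VP9"));
console.log("H.264 supported:", codecs.has("video/H264"));
```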

The last part of the talk discusses the architectures that the Limelight CDN uses to enable large-scale WebRTC streams, including the need to get through firewalls. Andrew discusses how some features of the technology suit small-scale events but can’t be used with thousands of viewers. He also discusses how adaptive bitrate streams can be delivered: although ABR is not part of WebRTC itself, there is enough information available to implement it on top of the standard stream.
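
As a sketch of what ‘enough information’ means, a player could watch packet-loss statistics from the real getStats() API and ask the server for a different rendition. The switchRendition() signalling call below is hypothetical and app-specific, since WebRTC itself defines no such mechanism.

```typescript
// ABR on top of WebRTC: getStats() is the real API; the rendition switch is
// application-level signalling that WebRTC does not define (hypothetical).
declare function switchRendition(bitrateBps: number): void; // hypothetical

async function abrTick(pc: RTCPeerConnection, renditionsBps: number[]) {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    if (report.type === "inbound-rtp" && report.kind === "video") {
      const total = report.packetsReceived + report.packetsLost;
      const loss = total > 0 ? report.packetsLost / total : 0;
      // Crude illustrative policy: fall to the lowest rendition above 5% loss.
      if (loss > 0.05) switchRendition(Math.min(...renditionsBps));
    }
  });
}
// e.g. setInterval(() => abrTick(pc, [250_000, 1_000_000, 4_000_000]), 2_000);
```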

Watch now!
Speakers

Andrew Crowe
Video Architect,
Limelight Networks