Video: Live production: Delivering a richer viewing experience

How can large sports events keep an increasingly sophisticated audience entertained and fully engaged? The technology of sports coverage has driven broadcasting forward for many years and that shows no sign of changing. More than ever there is a convergence of technologies, both at the event and in delivery to viewers, which is explored in this video.

First up is Michael Cole, a veteran of live sports coverage, now working for the PGA European Tour and Ryder Cup Europe. As event organisers hosting 42 golfing events throughout the year, they are responsible not just for coverage of the golf, but also for a whole host of supporting services. Michael explains that they have to deliver live stats and scores to on-air, on-line and on-course screens, produce a whole TV service for the event-goers, deliver an event app and, of course, run a TV compound.

One important aspect of golf coverage is the sheer distance that video needs to travel. Formerly that was done primarily with microwave links, and whilst RF still plays an important part in coverage with wireless cameras, the long distances are now covered by fibre. However, as fibre takes time to deploy for each event and is hard to conceal on otherwise impeccably presented courses, 5G is attracting a lot of interest as organisers validate its ability to cut rigging time and costs whilst keeping the course tidier for spectators.

Michael also talks about the role of remote production. Many would see this as an obvious way to go, but remote production has taken many years to be adopted. Each broadcaster has different needs, so getting the right level of technology in place to meet everyone's needs is still a work in progress. For golfing events with tens of trucks and many cameras, Michael confirms that remote production and the cloud are a clear way forward, at the right time.

Next to talk is Remo Ziegler from Vizrt, who explains how Vizrt serves the live sports community. Looking more at the delivery aspect, their tools allow branding to be delivered to multiple platforms with different aspect ratios whilst maintaining a consistent look. Whilst branding, when done well, isn't noticed by viewers, more obvious examples are real-time, photo-realistic rendering of in-studio 3D graphics. Remo then talks about Augmented Reality (AR), which places moving 3D objects into the video so that they look part of the picture, annotating the footage to help explain what's happening and to tell a story. This can be done in real time with camera tracking technology, which takes into account the telemetry from the camera, such as angle of tilt and zoom level, to render the objects realistically.
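As a rough sketch of the camera-tracking idea, the toy pinhole-camera model below shows how pan and tilt telemetry plus a zoom-dependent focal length can place a 3D point at the correct pixel. This is an illustrative simplification only, not Vizrt's actual rendering pipeline; all the function and parameter names are invented for the example.

```python
import math

def project_point(world_pt, pan_deg, tilt_deg, focal_px, cx, cy):
    """Project a 3D point (camera at the origin) into pixel coordinates
    using camera telemetry: pan/tilt angles and a focal length in pixels
    (the focal length is where the zoom level comes in)."""
    x, y, z = world_pt
    # Pan: rotate about the vertical (y) axis
    p = math.radians(pan_deg)
    x, z = x * math.cos(p) - z * math.sin(p), x * math.sin(p) + z * math.cos(p)
    # Tilt: rotate about the horizontal (x) axis
    t = math.radians(tilt_deg)
    y, z = y * math.cos(t) - z * math.sin(t), y * math.sin(t) + z * math.cos(t)
    if z <= 0:
        return None  # point is behind the camera, nothing to draw
    # Pinhole projection onto the sensor, centred on (cx, cy)
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return (u, v)
```

An AR graphic anchored at a world position is redrawn with this projection every frame, so as the telemetry changes the object stays "glued" to the scene.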

The final speaker is Chris Evans of Futuresource Consulting, who explains how viewing habits are changing. Whilst we all have a sense that the younger generation watches less live TV, Chris has the stats showing the change: for people aged 66+, 'traditional content' comprises 82% of their viewing, whereas for 16-18 year olds it is only 28%, with the majority of the remainder made up of SVOD and 'YouTube etc.'.

Chris talks about how newer cameras have improved coverage, both by raising the technical capability of 'lower tier' productions and, for top-tier content, by adding cameras in locations that would otherwise not have been possible. He then shows there is an increase in purchases of HDR-capable cameras which, even when not used to broadcast HDR, are valued for their ability to capture the best image possible. Finally, Chris returns to remote production, explaining broadcasters' motivations such as reduced cost, improved work-life balance and more environmentally friendly coverage.

The video finishes with questions from the webinar audience.

Watch now!
Speakers

Michael Cole
Chief Technology Officer,
PGA European Tour & Ryder Cup Europe
Remo Ziegler
Vice President, Product Management, Sports,
Vizrt
Chris Evans
Senior Market Analyst,
Futuresource Consulting

Video: Beam hopping in DVB-S2X

Beam hopping is the relatively new ability of a satellite to move its beam so that it's transmitting to a different geographical area every few milliseconds. This has been made possible by the advance of a number of technologies inside satellites which make this fast, constant switching possible. DVB is harnessing this new capability to deliver bandwidth to different areas more efficiently.

This talk starts with a brief history of the move from DVB-S2 to DVB-S2X and the successes of that move. We then see that, geographically within a wide beam, two factors come into play: the satellite's throughput is limited by its amplifiers and TWTs, and certain areas within the beam need more throughput than others. By dynamically pointing a more focused beam using ferrite switches and steerable antennas, to name but two of the technologies at play, we see that up to 20% of unmet demand could be addressed.

The talk continues with ESA's Nader Alagha explaining some of the basics of directing beams: cells and clusters (geographical areas), and 'dwell times', the amount of time the beam spends at each cell. He then gives an overview of the R&D work underway and the expected benefits.

Much as older CRT televisions needed a blanking interval while the electron beam repositioned, a small gap must be put into the signal to cover the time the satellite beam is moving. In analogue television this is called 'blanking'; for satellites it is an 'idle sequence'. Each time a beam hops, the carrier frequency, bandwidth and number of carriers can change.
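The scheduling idea behind dwell times can be sketched as a toy model: share a hopping window across cells in proportion to their traffic demand, after reserving an idle sequence for every beam switch. The window and idle durations below are illustrative placeholders, not values from the DVB-S2X specification.

```python
def hopping_plan(demand, window_ms=256.0, idle_ms=0.1):
    """Toy beam-hopping scheduler: split a hopping window across cells in
    proportion to traffic demand, after subtracting one idle sequence per
    beam switch. Returns a dict of dwell time (ms) per cell."""
    # Time actually available for transmission once switching gaps are removed
    usable = window_ms - idle_ms * len(demand)
    total = sum(demand.values())
    return {cell: usable * d / total for cell, d in demand.items()}

# Three cells with unequal demand: the busiest cell gets the longest dwell
plan = hopping_plan({"cell_A": 5, "cell_B": 3, "cell_C": 2})
```

A real system would also respect minimum dwell times and revisit deadlines per cell; the point here is simply that dwell time is the knob that matches capacity to geographic demand.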

Further topics explored include implementing this on geostationary and non-geostationary satellites, connecting the hopping strategy to traffic, and the channel models that can be created around beam hopping. The idea of 'superframes' is detailed: for frames that need to be decoded at a very low SNR, the information is spread out and duplicated a number of times. This is supported in beam hopping with some modifications, which require preambles and an understanding of how these frames are fragmented.

The talk closes with a look at future work and answers to questions from the webinar attendees.

Watch now!
Speakers

Nader Alagha
Senior Communications Engineer,
ESA ESTEC (European Space Research and Technology Centre)
Peter Nayler
Business Manager,
EASii IC
Avi Freedman
Director of System Engineering,
Satixfy

Video: S-Frame in AV1: Enabling better compression for low latency live streaming.

Streaming is such a success because it manages to deliver video even as your network capacity varies while you are watching, a technique called ABR (adaptive bitrate) streaming. This short talk asks how we can allow low-latency streams to nimbly adapt to network conditions whilst keeping the bitrate low in the new AV1 codec.

Tarek Amara from Twitch explains AV1's introduction of S-Frames, sometimes called 'switch frames', which take on the role of the more traditional I or IDR frames. If a frame is marked as an IDR frame, the decoder knows it can start decoding from that frame without worrying that it references any data sent earlier. Marking frames this way provides frequent points at which a decoder can enter a stream. IDR frames are typically I frames, which are the highest-bandwidth frames by a large margin, because they are a complete rendition of a frame without any of the predictions found in P and B frames.

Because IDR frames are so large, keeping the overall bandwidth down means reducing how often they are sent. However, fewer IDR frames means fewer 'in points' for the stream, so a decoder has to wait longer before it can start displaying the stream to the viewer. An S-Frame brings the benefit of an IDR in that it still marks a place in the stream where the decoder can join, free of dependencies on previously sent data, but it takes up much less space.
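The trade-off can be illustrated with some back-of-envelope arithmetic. The frame sizes below are purely illustrative assumptions (an IDR taken as roughly 10x an inter frame, an S-Frame assumed around 3x), not measured AV1 figures.

```python
def stream_bitrate_kbps(fps, join_interval_s, keyframe_bits, inter_bits):
    """Average bitrate when one 'join point' frame is sent every
    join_interval_s seconds and every other frame is an inter frame."""
    frames = fps * join_interval_s
    total_bits = keyframe_bits + (frames - 1) * inter_bits
    return total_bits / join_interval_s / 1000

# 30 fps, a join point every 2 seconds, 40 kbit inter frames (all assumed):
idr_stream = stream_bitrate_kbps(30, 2, 400_000, 40_000)     # IDR ~10x inter
sframe_stream = stream_bitrate_kbps(30, 2, 120_000, 40_000)  # S-Frame ~3x inter
```

Under these assumptions the S-Frame stream saves around 10% of the bitrate while keeping the same two-second join interval; alternatively, the saving could be spent on more frequent join points for lower channel-change latency.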

Tarek looks at how an S-Frame is created, the constraints it needs to obey and how the frames are signalled. To finish, he presents the results of tests demonstrating the bitrate improvements.
Watch now!
Speaker

Tarek Amara
Engineering Manager, Video Encoding,
Twitch

Webinar: RAVENNA and its Relationship to AES67 and SMPTE ST 2110


This webinar is now available on-demand

The first in a series of webinars, this one has a broad scope, covering the history of audio networking, the development of RAVENNA and the consequent development of AES67 and ST 2110. Whether you're new to RAVENNA, AES67 and ST 2110 or already familiar with them, you'll benefit from this webinar, either as revision or as an excellent starting point for understanding the landscape of audio-over-IP standards and technologies.

This webinar is presented by Andreas Hildebrand, who has previously appeared on The Broadcast Knowledge giving insight into The Audio Parts of ST 2110, ST 2110-30 and NMOS IS-08 — Audio Transport and Routing, amongst others.

This talk looks at how audio over IP works and the benefits of using an IP system. Since the invention of RAVENNA, the AES and SMPTE have also moved to audio over IP, so the talk examines how RAVENNA and the SMPTE standards place the audio data on the network and get it to the decoders. Interoperability between systems is important but can only happen if certain parameters match, something Andreas touches on here and which will also be a subject of future webinars.
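As a concrete sketch of what placing audio data on the network involves, the toy calculation below works out the audio payload carried in each RTP packet. The defaults (48 kHz, 24-bit samples, 1 ms packet time) correspond to AES67's baseline settings, though the helper function itself is invented for illustration and is not part of any standard's API.

```python
def audio_packet_shape(sample_rate=48000, packet_time_ms=1.0,
                       channels=2, bytes_per_sample=3):
    """Return (samples per packet, payload bytes per packet, packets per
    second) for a linear-PCM audio-over-IP stream such as AES67."""
    # How many sample periods fit into one packet time
    samples = int(sample_rate * packet_time_ms / 1000)
    # Interleaved channels, e.g. 24-bit (3-byte) samples for L24
    payload = samples * channels * bytes_per_sample
    packets_per_second = int(1000 / packet_time_ms)
    return samples, payload, packets_per_second

# AES67 baseline: 48 samples per packet, 288 payload bytes, 1000 packets/s
print(audio_packet_shape())
```

This is why interoperability hinges on matching parameters: a receiver expecting 1 ms packets of L24 audio cannot directly consume a stream packetised at, say, 125 µs without renegotiating these values.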

Whether you want to revise the basics from an expert or learn them for the first time, now’s the time to register.

Watch now!

Speaker

Andreas Hildebrand
Senior Product Manager,
ALC NetworX GmbH