Video: Live production: Delivering a richer viewing experience

How can large sports events keep an increasingly sophisticated audience entertained and fully engaged? The technology of sports coverage has pushed broadcasting forward for many years and that hasn't changed. More than ever there is a convergence of technologies, both at the event and in delivery to viewers, which this video explores.

First up is Michael Cole, a veteran of live sports coverage, now working for the PGA European Tour and Ryder Cup Europe. As the event organisers – hosting 42 golfing events throughout the year – they are responsible not just for coverage of the golf, but also for a whole host of supporting services. Michael explains that they have to deliver live stats and scores to on-air, online and on-course screens, produce a whole TV service for event-goers, deliver an event app and, of course, run a TV compound.

One important aspect of golf coverage is the sheer distance that video needs to travel. Formerly this was done primarily with microwave links, and whilst RF still plays an important part in coverage for wireless cameras, the long distances are now covered by fibre. However, as fibre takes time to deploy at each event and is hard to conceal on otherwise impeccably presented courses, 5G is attracting a lot of interest, with trials to validate its ability to cut rigging time and costs while keeping the course looking tidier for spectators.

Michael also talks about the role of remote production. Many would see this as an obvious way to go, but remote production has taken many years to be adopted. Each broadcaster has different needs, so getting the right level of technology in place to meet everyone's needs is still a work in progress. For golfing events with tens of trucks and cameras, Michael confirms that remote production and the cloud are a clear way forward, at the right time.

Next to talk is Remo Ziegler from Vizrt, who explains how Vizrt serves the live sports community. Looking more at the delivery aspect, they allow branding to be delivered to multiple platforms with different aspect ratios whilst maintaining a consistent look. Whilst branding, when done well, isn't noticed by viewers, more obvious examples are real-time, photo-realistic rendering of in-studio 3D graphics. Remo then talks about Augmented Reality (AR), where moving 3D objects are placed into the video so that they look like part of the picture, annotating the footage to help explain what's happening and to tell a story. This can be done in real time with camera-tracking technology, which takes into account telemetry from the camera, such as tilt angle and zoom level, to render the objects realistically.
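As a rough illustration of the underlying idea – not Vizrt's actual pipeline – the sketch below projects a hypothetical 3D world point into pixel coordinates using pan, tilt and focal-length (zoom) telemetry from a simplified pinhole camera model. All the positions, angles and pixel values are made up for the example; real camera-tracking systems also handle lens distortion, sensor offsets and calibration.

```python
import numpy as np

def rotation_pan_tilt(pan_deg: float, tilt_deg: float) -> np.ndarray:
    """Simplified world-to-camera rotation built from pan (yaw) and tilt (pitch)."""
    p, t = np.radians(pan_deg), np.radians(tilt_deg)
    pan = np.array([[ np.cos(p), 0, np.sin(p)],
                    [ 0,         1, 0        ],
                    [-np.sin(p), 0, np.cos(p)]])
    tilt = np.array([[1, 0,          0         ],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])
    return tilt @ pan

def project(point_world, cam_pos, pan_deg, tilt_deg, focal_px, centre_px):
    """Project a 3D world point into pixel coordinates with a pinhole model."""
    R = rotation_pan_tilt(pan_deg, tilt_deg)
    p_cam = R @ (np.asarray(point_world, float) - np.asarray(cam_pos, float))
    if p_cam[2] <= 0:
        return None                       # behind the camera, not visible
    u = centre_px[0] + focal_px * p_cam[0] / p_cam[2]
    v = centre_px[1] + focal_px * p_cam[1] / p_cam[2]
    return u, v

# A virtual marker roughly 40 m in front of a camera that is panned 5 degrees.
print(project(point_world=(2.0, 0.0, 40.0), cam_pos=(0.0, 1.5, 0.0),
              pan_deg=5, tilt_deg=-2, focal_px=2000, centre_px=(960, 540)))
```

Feeding live pan/tilt/zoom readings into a projection like this, frame by frame, is what keeps a rendered object visually "locked" to the course as the camera moves.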

The talk finishes with Chris explaining how viewing habits are changing. Whilst we all have a sense that the younger generation watches less live TV, Chris has the stats showing the change: for people aged 66+, 'traditional content' comprises 82% of their viewing, falling to only 28% for 16–18 year olds, with the majority of the remainder made up of SVOD and 'YouTube etc.'.

Chris then talks about newer cameras which have improved coverage, both by improving the technical ability of 'lower tier' productions and, for top-tier content, by adding cameras in locations that would otherwise not have been possible. He also shows there is an increase in HDR-capable cameras being purchased which, even when not used to broadcast HDR, are valued for their ability to capture the best image possible. Finally, Chris rounds back to remote production, explaining broadcasters' motivations such as reduced cost, improved work-life balance and more environmentally friendly coverage.

The video finishes with questions from the webinar audience.

Watch now!
Speakers

Michael Cole
Chief Technology Officer,
PGA European Tour & Ryder Cup Europe
Remo Ziegler
Vice President, Product Management, Sports,
Vizrt
Chris Evans
Senior Market Analyst,
Futuresource Consulting

Video: Video Compression Basics

Video compression is used everywhere we look. It is so rarely practical to use uncompressed video that everything delivered to consumers is compressed, so it pays to understand how this works, particularly if part of your job involves using video formats such as AVC (also known as H.264) or HEVC (also known as H.265).

Gisle Sælensminde from Vizrt takes us on this journey of creating compressed video. He starts by explaining why we need to compress video and then talks about containers such as MPEG-2 Transport Streams, MP4, MOV and others. He explains that the container's job is partly to hold metadata such as the framerate, resolution and timestamps, among a long list of other things.

Gisle takes some time to look at the timeline of past codecs in order to understand where we're going in light of what went before. As many use the same principles, Gisle looks at the different types of frames inside most compressed formats – I, P and B frames – which are used in set patterns known as GOPs, or Group(s) of Pictures. A GOP defines how many frames there are between I frames. In the talk we learn that I frames are required for a decoder to be able to tune in part way through a feed and still start seeing pictures; this is because the I frame holds a whole picture, whereas the other frame types don't.
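As a small illustration of that point – not code from the talk – the sketch below models a repeating GOP as a string of frame types and shows why a decoder joining mid-stream has to wait for the next I frame before it can display anything. The 12-frame GOP pattern is a hypothetical example.

```python
# Illustrative only: a repeating GOP pattern of I, P and B frames.
GOP = "IBBPBBPBBPBB"          # hypothetical 12-frame GOP (one I frame per GOP)
stream = GOP * 4              # a short stretch of the stream

def first_decodable(stream: str, join_index: int) -> int:
    """A decoder tuning in at join_index must wait for the next I frame,
    because only I frames carry a complete picture on their own."""
    for i in range(join_index, len(stream)):
        if stream[i] == "I":
            return i
    raise ValueError("no I frame after the join point")

join = 17                      # viewer joins part-way through the feed
start = first_decodable(stream, join)
print(f"Joined at frame {join}, first displayable frame is {start} "
      f"({start - join} frames of waiting)")
```

This is also why shorter GOPs make channel changes and stream joins faster, at the cost of spending more bits on I frames.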

Colours are important, so Gisle looks at the way colours are represented. Many people know about defining colours by the values of Red, Green and Blue, but fewer know about YUV. The talk covers both, so we understand conversion between the two representations.
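As a concrete, hedged example of that conversion, the snippet below uses the BT.709 luma coefficients to turn a full-range RGB pixel into Y'CbCr and back. Real systems also deal with bit depth, quantisation ranges and chroma siting, which are glossed over here.

```python
# BT.709 luma coefficients, working on full-range values in 0.0-1.0 for simplicity.
KR, KB = 0.2126, 0.0722
KG = 1.0 - KR - KB

def rgb_to_ycbcr(r, g, b):
    y  = KR * r + KG * g + KB * b
    cb = (b - y) / (2 * (1 - KB))      # scaled so Cb/Cr stay within +/-0.5
    cr = (r - y) / (2 * (1 - KR))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 2 * (1 - KR) * cr
    b = y + 2 * (1 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b

pixel = (0.8, 0.4, 0.1)                # an orange-ish pixel
print(rgb_to_ycbcr(*pixel))
print(tuple(round(v, 6) for v in ycbcr_to_rgb(*rgb_to_ycbcr(*pixel))))  # round-trips
```

Separating brightness (Y) from colour difference (Cb, Cr) is what makes the chroma subsampling discussed later possible.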

Almost synonymous with codecs such as HEVC and AVC are macroblocks. This is the name given to the parts of the raster which have been split up into squares, each of which is analysed independently. We look at how these macroblocks are used, and Gisle also spends some time looking to the future, as HEVC, VP9 and now AV1 all use variable-size macroblock analysis.
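A minimal sketch of that first step – illustrative, not taken from the talk – cutting a luma plane into fixed 16×16 macroblocks which an encoder would then analyse one at a time. Newer codecs replace this fixed grid with a tree of variable-size blocks.

```python
import numpy as np

def split_into_macroblocks(frame: np.ndarray, size: int = 16):
    """Yield each size-by-size macroblock of a luma plane.
    Assumes the frame has been padded to multiples of `size`,
    as real encoders do internally."""
    h, w = frame.shape
    for y in range(0, h, size):
        for x in range(0, w, size):
            yield frame[y:y + size, x:x + size]

# 1080p luma padded to 1088 rows (1080 is not a multiple of 16, so encoders pad it).
luma = np.random.randint(0, 256, (1088, 1920), dtype=np.uint8)
blocks = list(split_into_macroblocks(luma))
print(len(blocks), "macroblocks")       # 68 rows x 120 columns = 8160 blocks
```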

A process which happens throughout broadcast is chroma subsampling. This topic, whereby we keep more of the luminance channel than the colour channels, is explored before looking at DCTs – Discrete Cosine Transforms – which are foundational to most video codecs. We see that by analysing these macroblocks with DCTs, we can express the image in a different way and even discard some of the detail in order to reduce the bitrate.
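To make both ideas concrete, here is a small sketch (my own illustration, not from the talk) that 4:2:0-subsamples a chroma plane and then DCTs an 8×8 block, zeroing out the smallest coefficients as a crude stand-in for quantisation. It assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.fft import dctn, idctn   # assumes SciPy is installed

def subsample_420(plane: np.ndarray) -> np.ndarray:
    """Keep one chroma sample per 2x2 luma block (4:2:0) by simple averaging."""
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def compress_block(block: np.ndarray, keep: float = 0.1) -> np.ndarray:
    """DCT an 8x8 block, drop the smallest coefficients, then reconstruct."""
    coeffs = dctn(block - 128.0, norm="ortho")           # centre around zero first
    threshold = np.quantile(np.abs(coeffs), 1 - keep)    # keep only the largest 10%
    coeffs[np.abs(coeffs) < threshold] = 0.0              # crude stand-in for quantisation
    return idctn(coeffs, norm="ortho") + 128.0

cb = np.random.rand(16, 16)
print(cb.size, "->", subsample_420(cb).size, "chroma samples after 4:2:0")   # 256 -> 64

block = np.random.randint(0, 256, (8, 8)).astype(float)
restored = compress_block(block)
print("mean absolute error after dropping coefficients:", np.abs(block - restored).mean())
```

Real codecs divide each coefficient by a tuned quantisation matrix rather than simply thresholding, but the effect – spending fewer bits on detail the eye misses – is the same.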

Gisle then gives some very useful demos looking at the result of varying quantisation across a picture, the difference signal between the source and the encoded picture, plus deblocking technology to hide some of the artefacts which can arise from DCT-based codecs when they are pushed for bandwidth.

Gisle finishes this talk at Media City Bergen by taking a number of questions from the floor.

Watch now!
Speaker

Gisle Sælensminde
Senior Software Engineer,
Vizrt

Video: HTTP/2 – Abstraction, protocol design, and practical use

HTTP/2 is an evolution of what most people know as HTTP, with the aim of increasing the speed of websites by streamlining the request and delivery of resources. Apple have mandated the use of HTTP/2 for their LL-HLS protocol. Within a typical web page there can easily be 100 requests to the web server, so it's easy to see how increased efficiency could be a benefit. For low-latency streaming such as LL-HLS, there are many requests each second, so again, even small gains in efficiency can add up.

Rolf W. Rasmussen from Vizrt explains in this talk the benefits of HTTP/2, taking us through the differences from HTTP/1.1. He starts simply by looking at the HTTP/1.1 messages sent between the client and the server and shows how requests and responses are exchanged. Rolf then looks at how the messages are carried at each layer of the OSI model, and in doing so we discover that HTTP/2's messages are sent in binary rather than as text.
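To see the text-based framing Rolf starts from, here is a minimal sketch – the host is just an example – that sends a raw HTTP/1.1 request over a plain socket. Everything on the wire is human-readable text; HTTP/2 replaces this with binary frames.

```python
import socket

# A raw HTTP/1.1 exchange: both request and response headers are plain text.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n\r\n")[0].decode())   # status line and headers only
```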

Binary framing and header compression are two ways in which the amount of data to be sent is minimised. We also see that HTTP/2 multiplexes different streams over a single connection. Maintaining one connection for multiple data streams reduces the amount of negotiation needed, and multiplexing makes more efficient use of that connection. Unlike before, small requests are now cheap, whereas in HTTP/1.1 a lot of work has traditionally gone into reducing the number of small requests.
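As a hedged illustration of what that means in practice – using the third-party httpx library with its optional h2 extra, not anything from the talk – several requests can reuse one HTTP/2 connection as separate streams instead of opening or queueing on multiple HTTP/1.1 connections. The segment URLs are hypothetical.

```python
import httpx  # pip install "httpx[http2]" -- third-party library, assumed available

# One client reuses one connection; with HTTP/2 each request becomes a stream
# on that connection rather than a new TCP/TLS setup.
urls = [f"https://example.com/segment{i}.m4s" for i in range(5)]   # hypothetical URLs

with httpx.Client(http2=True) as client:
    for url in urls:
        response = client.get(url)
        print(url, response.http_version, response.status_code)
```

With the asynchronous client the streams can be in flight simultaneously, which is where multiplexing really pays off for the frequent, small requests of low-latency streaming.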

Server Push is another key improvement, whereby the server can push data into the open connection without a corresponding request. This was originally a requirement of the LL-HLS protocol but has since been made optional. For web pages, there are times when, if a page needs resource A, the server knows it will also require resource B later; it's in these situations that server push is used. Clearly, for online streaming it's known when the client will need certain chunks or playlist files, hence the potential use of server push.

Rolf concludes by looking at some practical examples of debugging with curl, using proxies and Wireshark, as well as dealing with encryption, and by taking questions from the floor.

Watch now!
Speaker

Rolf W. Rasmussen
Chief Software Architect,
Vizrt

Video: Where can SMPTE 2110 and NDI co-exist?

When are two video formats better than one? Broadcasters have long sought 'best of breed' systems, matching equipment as closely as possible to their ideal workflow. In this talk we look at getting the best of both compressed, low-latency video and uncompressed video. NDI, a lightly compressed, ultra-low-latency codec, allows full productions in visually lossless video with a field of latency. SMPTE's ST-2110 allows full productions with uncompressed video and almost zero latency.

The panel brings together the EBU's Willem Vermost, who paints a picture from the perspective of public broadcasters planning their moves into the IP realm; Marc Risby from UK distributor and integrator Boxer, who brings a more general view of the market's interest; and Will Waters, who spent many years at NewTek, the company that invented NDI. From them we hear how the two approaches, compressed and uncompressed, complement each other.

This panel took place just after the announcement that NewTek had been bought by Vizrt, the graphics vendor, which sees a lot of benefit in being able to work in both types of workflow for clients large and small, and which has made NewTek its own entity under the Vizrt umbrella to ensure continued focus.

A key differentiator of NDI is its focus on 1 gigabit networking. Its aim has always been to let 'normal' companies deploy IP video easily so they can quickly realise the advantages that IP workflows bring over SDI and other baseband video technologies. A keystone of this strategy is enabling everything to happen on the ordinary 1 Gbit switches prevalent in most companies today. Other key elements of the technology are a free software development kit, bi-directionality, resolution independence, audio sample-rate agnosticism, tally support, auto-discovery and more.
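Some rough back-of-envelope numbers (my own approximations, not figures from the panel) show why the 1 Gbit focus matters: uncompressed HD in the style of ST 2110-20 simply doesn't fit on a 1 GbE link, whereas a lightly compressed NDI stream fits several times over.

```python
# Rough, illustrative figures only.
width, height, fps = 1920, 1080, 60
bits_per_pixel = 20                      # 10-bit 4:2:2, as commonly carried uncompressed

uncompressed_bps = width * height * bits_per_pixel * fps
ndi_bps_approx = 150e6                   # ballpark figure often quoted for 1080p60 NDI

print(f"Uncompressed 1080p60 ~ {uncompressed_bps / 1e9:.2f} Gbit/s -> needs a 10 GbE port")
print(f"NDI 1080p60 ~ {ndi_bps_approx / 1e6:.0f} Mbit/s -> several streams per 1 GbE link")
```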

In the talk, we discuss the pros and cons of this approach, where interoperability is assured because everyone uses the same receive and transmit code, against having a standard such as SMPTE ST-2110. SMPTE ST-2110 has the benefit of being uncompressed, assuring the broadcaster that they have captured the best possible quality of video, and promises better management at scale, tighter integration into complex workflows, lower latency and the ability to treat the many different essences separately. Whilst we discuss many of the benefits of SMPTE ST-2110, you can get a more detailed overview from this presentation from the IP Showcase.

Watch now!

This panel was produced by IET Media, a technical network within the IET which runs events, talks and webinars for networking and education within the broadcast industry. More information

Speakers

Willem Vermost
Senior IP Media Technology Architect,
EBU
Marc Risby
CTO,
Boxer Group
Will Waters
Vice President of Worldwide Customer Success,
Vizrt
Moderator: Russell Trafford-Jones
Exec Member, IET Media
Manager, Support & Services, Techex
Editor, The Broadcast Knowledge