Video: Working remotely in a crisis

We’ve perhaps all seen the memes claiming that a company’s ‘digital transformation’ was driven not by ‘leadership vision’ or adapting to the competition, but by ‘COVID-19’. Whilst this is trite yet often true, there is value in understanding what broadcast companies have done to deal with the pandemic.

Robert Ambrose introduces and talks to our guests to find out how their companies have changed to accommodate remote working. First to speak is Jack Edney of The Farm Group, a post production company. They looked closely at the communication needed within the organisation, managing task priorities and maintaining safety and resources. Jack shows the stark difference between pre-lockdown and lockdown workflows, highlighting how much is now done remotely, and explains how engaged his technical teams have been in making this work so quickly.

Brian Leonard from IMG describes a similar shift as IMG have moved to remote working, going from around 300 people on site to around 3, with everyone else remote. Brian talks about how they had expanded into a local building to make life easier in the early days. He then considers the pros and cons of relying on a significant freelance workforce, one advantage being that freelancers can often use their pre-existing equipment at home. Finally we look at how their computer-based SimplyLive production software gives them the ability to produce video remotely at short notice.

OWNZONES is up next with Rick Phelps, who gives a real example of a customer’s on-premise workflow, showing before and after diagrams of how it moved to remote working. The workflow was extended into the cloud by, for example, generating proxies, editing against an EDL, and carrying out encoding and metadata work all in the cloud. Rick sees this as a short-term necessity, but suggests much of it will remain in place in the longer term.
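As a rough sketch of the proxy step such a cloud workflow relies on, the Python below (assuming ffmpeg is installed; the filenames and bitrates are purely illustrative, not taken from Rick’s example) creates a low-bitrate proxy that could be uploaded for EDL-based editing in the cloud, with the conform done later against the full-resolution media.

```python
# Illustrative sketch only: create a low-bitrate proxy of a mezzanine file so
# that editing decisions (e.g. via an EDL) can be made in the cloud against the
# proxy, with the conform/encode done later on the full-res media.
# Assumes ffmpeg is on the PATH; paths and settings are hypothetical.
import subprocess
from pathlib import Path

def make_proxy(source: Path, proxy_dir: Path, height: int = 360, video_kbps: int = 800) -> Path:
    """Transcode `source` to a small H.264 proxy suitable for remote/cloud editing."""
    proxy_dir.mkdir(parents=True, exist_ok=True)
    proxy = proxy_dir / f"{source.stem}_proxy.mp4"
    subprocess.run([
        "ffmpeg", "-y", "-i", str(source),
        "-vf", f"scale=-2:{height}",          # downscale, keep aspect ratio
        "-c:v", "libx264", "-b:v", f"{video_kbps}k",
        "-c:a", "aac", "-b:a", "96k",
        str(proxy),
    ], check=True)
    return proxy

if __name__ == "__main__":
    # Hypothetical usage: proxy every camera file before uploading to cloud storage.
    for clip in Path("camera_cards").glob("*.mxf"):
        print("Created", make_proxy(clip, Path("proxies")))
```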

Finally, Johan Sundström from Yle in Finland takes to the stand to give the point of view of a public broadcaster. He explains how they have created guest booths near their main entrance, connected to their news channels, to facilitate low-contact interviews. Plexiglass is being installed in control rooms and people are doing their own makeup. He also highlights some apps which allow remote contribution of audio. They are also using software-based mixers such as the TriCaster plus Skype TX to keep producers connected and involved in their programmes. The session concludes with a Q&A.

Watch now!
Speakers

Jack Edney
Operations Director,
The Farm Group
Johan Sundström
Head of Technology Vision,
Yle Finland
Rick Phelps
Chief Commercial Officer,
OWNZONES
Brian Leonard
Head of Engineering: Post and Workflows,
IMG
Robert Ambrose
Managing Consultant,
High Green Media

Video: Reducing peak bandwidth for OTT

‘Flattening the curve’ isn’t just about dealing with viruses, we learn from Will Law. It’s also one way to deal with the network congestion brought on by the rise in broadband use during the global lockdown. This, along with other key techniques such as per-title encoding and removing the top-bitrate tier, is explored in this video from Akamai and Bitmovin.

Will Law starts the talk by explaining why congestion happens in a world where ABR (adaptive bitrate) streaming is supposed to deal with it. With Akamai’s traffic up by around 300%, it’s perhaps no surprise there’s a contest for bandwidth. As not all traffic is a video stream, congestion will still happen when streams fight with other, static, data transfers. Deeper than that, however, even with two ABR streams the congestion-control algorithm in use has a big impact, as Will shows with a graph comparing Akamai’s FastTCP with BBR, where BBR takes all the bandwidth rather than ‘playing fair’.
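As a reminder of the mechanism under discussion, here is a minimal sketch of throughput-based ABR rendition selection; the ladder and safety margin are invented for illustration, and real players weigh many more factors such as buffer level and throughput variance.

```python
# Minimal, illustrative throughput-based ABR selection: choose the highest
# rendition whose bitrate fits within the measured throughput, leaving a
# safety margin. Real players also consider buffer level, variance, etc.
from dataclasses import dataclass

@dataclass
class Rendition:
    height: int
    bitrate_kbps: int

# Hypothetical bitrate ladder, ordered high to low.
LADDER = [
    Rendition(1080, 6000),
    Rendition(720, 3500),
    Rendition(540, 2000),
    Rendition(360, 800),
]

def select_rendition(measured_kbps: float, safety: float = 0.8) -> Rendition:
    """Return the best rendition that fits within `safety * measured throughput`."""
    budget = measured_kbps * safety
    for r in LADDER:
        if r.bitrate_kbps <= budget:
            return r
    return LADDER[-1]                     # fall back to the lowest rung

if __name__ == "__main__":
    print(select_rendition(5000))         # budget 4000 kbps -> the 720p rendition
```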

Using a webpage constructed for the video, Will shows us a baseline video playback and the metrics associated with it, such as data transferred and bitrate, which he uses to demonstrate the benefits of the different bitrate-reduction techniques. The first is covered by Bitmovin’s Sean McCarthy, who explains Bitmovin’s per-title encoding technology. This approach ensures that each asset has encoder settings tuned to get the best out of the content whilst reducing bandwidth, as opposed to simply setting your encoder to a fairly high, safe, static bitrate for all content no matter how complex it is. Will shows on the demo that the bitrate reduces by over 50%.
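To make the idea concrete, here is a hedged sketch of one way per-title encoding can be approximated (this is not Bitmovin’s actual algorithm): run a short constant-quality probe encode to gauge the content’s complexity, then cap the top rung of the ladder accordingly instead of using one static bitrate for everything. It assumes ffmpeg and ffprobe are installed; the CRF, sample length and default bitrate are illustrative.

```python
# Illustrative per-title idea (not Bitmovin's actual algorithm): encode a short
# sample at constant quality to see what bitrate the content "wants", then cap
# the top rung of the ladder at that figure instead of a fixed, safely-high
# bitrate for every asset. Assumes ffmpeg/ffprobe on the PATH.
import json
import subprocess
from pathlib import Path

def probe_complexity_kbps(source: Path, crf: int = 23, seconds: int = 60) -> float:
    """Encode a short sample at constant quality and report the resulting bitrate."""
    sample = source.parent / f"{source.stem}_probe.mp4"
    subprocess.run([
        "ffmpeg", "-y", "-t", str(seconds), "-i", str(source),
        "-c:v", "libx264", "-crf", str(crf), "-an", str(sample),
    ], check=True)
    out = subprocess.run([
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-show_format", str(sample),
    ], check=True, capture_output=True, text=True)
    bit_rate = int(json.loads(out.stdout)["format"]["bit_rate"])
    return bit_rate / 1000.0

def top_rung_kbps(source: Path, static_default: float = 6000.0) -> float:
    """Pick a per-title top bitrate, never exceeding the old one-size-fits-all figure."""
    return min(probe_complexity_kbps(source), static_default)
```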

Swapping codecs is an obvious way to reduce bandwidth. Unlike per-title encoding, which is transparent to the end user, using AV1, VP9 or HEVC requires support from the playback device. Whilst you could offer multiple versions of your assets to cover all your players despite the fragmentation, this has the downside of extra encoding cost and time.

Will then looks at three ways to reduce bandwidth by stopping the highest-bitrate rendition from being used. Method one is to manually modify the manifest file. Method two demonstrates how to do the same using the Bitmovin player API, and method three uses the CDN itself to manipulate the manifests. The advantage of doing this in the CDN is that it allows much more flexibility: you can use geolocation rules, for example, to deliver different manifests to different locations.
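As a concrete illustration of the manifest approach, the sketch below (plain text processing, not the Bitmovin or any CDN API) removes the highest-BANDWIDTH variant from an HLS master playlist; the same logic could equally run at the origin or in a CDN edge function.

```python
# Illustrative only: remove the highest-BANDWIDTH variant from an HLS master
# playlist. In practice this could be done at the origin, via a player API, or
# in a CDN edge function; this is generic text processing, not a vendor API.
import re

def drop_top_rendition(master_playlist: str) -> str:
    lines = master_playlist.splitlines()
    variants = []  # (index of the #EXT-X-STREAM-INF line, bandwidth)
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF"):
            m = re.search(r"BANDWIDTH=(\d+)", line)
            if m:
                variants.append((i, int(m.group(1))))
    if len(variants) < 2:
        return master_playlist               # nothing sensible to remove
    top_index, _ = max(variants, key=lambda v: v[1])
    # Drop the #EXT-X-STREAM-INF line and the variant URI line that follows it.
    kept = [line for j, line in enumerate(lines) if j not in (top_index, top_index + 1)]
    return "\n".join(kept) + "\n"
```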

The final method of reducing peak bandwidth is to use the CDN to throttle the download speed of the stream chunks. This means that while you may – if you are lucky – have the ability to download at 100Mbps, the CDN only delivers at 3 or 5 times the real-time bitrate. This goes a long way to smoothing out the peaks, which is better for the end user’s equipment and for the CDN. Seen in isolation, this does very little, as the video bitrate and the data transferred remain the same. However, delivering the video in this much more co-operative way is far less likely to cause knock-on problems for other traffic. It can, of course, be used in conjunction with the other techniques. The video concludes with a Q&A.
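The arithmetic behind such pacing is simple; the hypothetical example below shows how long a 6-second, 5Mbps segment would take to deliver when capped at 3x real time, compared with roughly 0.3 seconds on an unthrottled 100Mbps connection.

```python
# Illustrative arithmetic for paced delivery: instead of sending a segment as
# fast as the link allows, cap delivery at a multiple of the real-time bitrate.
def paced_delivery_seconds(segment_bytes: int, video_kbps: float, pace_factor: float = 3.0) -> float:
    """Time to deliver one segment when capped at `pace_factor` x the real-time bitrate."""
    delivery_kbps = video_kbps * pace_factor
    return (segment_bytes * 8 / 1000.0) / delivery_kbps

if __name__ == "__main__":
    # Hypothetical example: 6 s of 5000 kbps video is ~3.75 MB; at 3x real time
    # it takes ~2 s to deliver, versus ~0.3 s flat-out on a 100 Mbps connection.
    seg_bytes = int(5000 * 1000 / 8 * 6)
    print(round(paced_delivery_seconds(seg_bytes, 5000), 2), "seconds")
```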

Watch now!
Speakers

Will Law Will Law
Chief Architect,
Akamai
Sean McCarthy Sean McCarthy
Technical Product Marketing Manager,
Bitmovin

Video: ATSC 3.0 Basics, Performance and the Physical Layer

ATSC 3.0 is a revolutionary technology bringing IP into the realm of RF transmission. It is gaining traction in North America and is deployed in South Korea. Similar to DVB-I, ATSC 3.0 provides a way to unite the world of online streaming with that of ‘linear’ broadcast, giving audiences and broadcasters the best of both worlds. Looking beyond ‘IP’, the modulation schemes provided are much improved over ATSC 1.0, giving much better reception for the viewer and more flexibility for the broadcaster.

Richard Chernock, now retired, was the CSO of Triveni Digital when he gave this talk introducing the standard as part of a series of talks on the topic. ATSC, formed in 1982, brought the first wave of digital television to the States and elsewhere, explains Richard, as he looks at what ATSC 1.0 delivered and what, we now see, it lacked. For instance, its fixed 19.2Mbps bitrate hardly provides a flexible foundation for a modern distribution platform. We then look at the previously mentioned idea that ATSC 3.0 should glue together live TV, usually via broadcast, with online VoD/streaming.

The next segment of the talk looks at how the standard breaks down into separate documents. Most modern standards, like SMPTE’s 2022 and 2110, are actually suites of individual standards documents united under one name. Just as SMPTE 2110-10, -20, -30 and -40 come together to explain how timing, video, audio and metadata work to produce professional media over IP, so ATSC 3.0 has sections explaining how security, applications, the RF/physical layer and management work. Richard follows this up with a look at the protocol stack, which serves to explain which parts are carried over TCP, which over UDP, and how the work is split between broadcast and broadband.

The last section of the talk looks at the physical layer, that is to say how the signal is broadcast over RF and the resulting performance. Richard explains the newer techniques which improve the ability to receive the signal, but highlights that – as ever – it’s a balancing act between reception and bandwidth. ATSC 3.0’s benefit is that the broadcaster gets to choose where on that scale they want to broadcast, tuning for indoor reception, for high-bitrate reception or anywhere in between. With performance down to below -6dB SNR plus EAS wake-up, we’re left with the feeling that this is a large improvement over ATSC 1.0.

The talk finishes with two headline features of ATSC 3.0. The first is PLPs (Physical Layer Pipes), where separate channels can be created on the same RF channel, each with its own robustness-versus-bitrate trade-off, allowing a range of service types to be provided by one broadcaster. The other is Layered Division Multiplexing, which allows PLPs to be transmitted on top of each other for 100% utilisation of the available spectrum.

Watch now!
Speaker

Dr. Richard Chernock
Former CSO,
Triveni Digital