Video Case Study: How BT Sport de-centralised its football production

We’ve all changed the way we work during the pandemic, some more than others. There’s nothing better than a real-life case study to learn from and to put your own experience into perspective. In this video, BT Sport and their technology provider Timeline TV take us through what they have and haven’t done to adapt.

Jamie Hindhaugh, COO of BT Sport, explains that they didn’t see working at home as simply a decentralisation, but rather a centralisation of the technology to be used by a decentralised body of staff. This concept is similar to Discovery’s recent Eurosport IP transformation project, which has all participating countries working from equipment in two datacentres. BT Sport managed to move from a model of two to three hundred people in the office daily to producing a live football talk show from presenters’ homes, with broadcast staff also at home, in only 10 days. The workflow continued to improve over the following six weeks, at which point they felt they had migrated to an effective ‘at home’ workflow.

Speaking to the challenges, Dan McDonnell, CEO of Timeline TV, said that basic acquisition and distribution of equipment like laptops was tricky since everyone else was doing the same. But once the equipment was in staff homes, they soon discovered the problems of moving out of a generator-backed broadcast facility. UPSes were distributed to those who needed them, but Dan notes there was nothing they could do to help with the distraction of working around your children and/or pets.

Jamie comments that connectivity is very important and that they are moving forward with a strategy called ‘working smart’, which is about giving the right tools to the right people. It’s about ensuring people are connected wherever they are, and with BT Sport’s hubs around the country, they are actively looking to provide for a more diverse workforce.

BT Sport has a long history of using remote production, Dan points out, which has driven BT Sport’s recent decision to move to IP in Stratford. Premiership games have changed from being a main and backup feed to needing 20 cameras coming into the building, and this density of circuits in both HD and UHD has made SDI less and less practical. Jamie highlights the importance of their remote production heritage but adds that the pandemic took remote production well beyond the norm, since scheduling and media workflows, which would normally have stayed in the building, also had to become remote.

Dan says that the perspective has changed from seeing production as either a ‘studio’ or ‘remote OB’ production to allowing either type of production to pick and choose the best combination of on-site roles and remote roles. Dan quips that they’ve been forced to ‘try them all’ and so have a good sense of which work well and which benefit from on-site team working.

Watch now!
Speakers

Dan McDonnell
CEO,
Timeline TV
Jamie Hindhaugh
COO,
BT Sport
Moderator: Heather McLean
Editor,
SVG Europe

Video: CMAF Live Media Ingest Protocol Masterclass

We’ve heard before on The Broadcast Knowledge about CMAF’s success at bringing down the latency for live streaming to around 3 seconds. CMAF is standards-based and works with Apple devices, Android, Windows and much more. And while it’s gaining traction for delivery to the home, many are asking whether it could be a replacement technology for contribution into the cloud.

Rufael Mekuria from Unified Streaming has been working on bringing CMAF to encoders and packagers. All the work in the DASH Industry Forum has centred on two key points in the streaming architecture: the first is from the output of the encoder to the input of the packager, the second between the packager and the origin. This work has been ongoing for over a year and a half, so let’s pause to ask why we need a new protocol for ingest.

RTMP and Smooth Streaming have not been deprecated, but neither has been specified to carry the latest codecs, and while people have been trying to find alternatives, they have started to use fragmented MP4 and CMAF-style technologies for contribution in their own, non-interoperable ways. Push-based DASH and HLS are common but in need of standardisation, and in the same work, support for timed metadata such as splice information for ads could be addressed.

The result of the work is a method of using a separate TCP connection for each essence track: there is a POST command for each video, audio, subtitle and metadata stream. This can be done with fixed-length POSTs, but is better achieved with chunked transfer encoding.
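
As an illustration, the per-track push could be sketched like this in Python. The host name and track paths are hypothetical, and `encode_chunk` only shows the HTTP/1.1 chunked framing; a real ingest source would follow the full ingest specification.

```python
import http.client

def encode_chunk(fragment: bytes) -> bytes:
    """Frame one CMAF fragment as an HTTP/1.1 chunk:
    hex length, CRLF, payload, CRLF."""
    return f"{len(fragment):x}\r\n".encode("ascii") + fragment + b"\r\n"

def open_track_post(host: str, track_path: str) -> http.client.HTTPConnection:
    """Open one long-lived chunked POST for a single essence track;
    video, audio, subtitles and metadata each get their own connection."""
    conn = http.client.HTTPConnection(host)
    conn.putrequest("POST", track_path)
    conn.putheader("Transfer-Encoding", "chunked")
    conn.endheaders()
    return conn

# Hypothetical ingest point and track name, one TCP connection per track:
# video = open_track_post("origin.example.com", "/ingest/Streams(video.cmfv)")
# video.send(encode_chunk(fragment_bytes))
```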

Rufael next shows us an example of a CMAF track. Based on the ISO BMFF standard, CMAF specifies which ‘boxes’ can be used, and provides for optional boxes which would be used in the CMAF fragments. Time is important, so it is carried as a live base media decode time, a Unix-style timestamp which can be inserted into both the fragment and the CMAF header.
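
To make the container structure concrete, here is a minimal sketch of walking the top-level ISO BMFF boxes, each of which starts with a 4-byte big-endian size and a 4-byte type. It is illustrative only, ignoring 64-bit `largesize` boxes and other real-world details.

```python
import struct

def parse_boxes(data: bytes) -> list:
    """Return (type, size) for each top-level ISO BMFF box,
    e.g. a CMAF fragment's 'styp', 'moof' and 'mdat' boxes."""
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size < 8:  # malformed, or 64-bit size not handled in this sketch
            break
        boxes.append((box_type.decode("ascii"), size))
        offset += size
    return boxes
```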

With all media being sent separately, the standard provides a way to define groups of essences both implicitly and explicitly. Redundancy and hot failover have been provided for with multiple sources ingesting to multiple origins; using the timestamp synchronisation, identical fragments can be detected.
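
A sketch of how an origin receiving redundant ingest might use those synchronised timestamps: if two sources push fragments with the same track and decode time, the second copy can simply be discarded. The tuple layout here is an assumption for illustration.

```python
def deduplicate_fragments(fragments):
    """Keep the first copy of each (track_id, decode_time) pair, dropping
    identical fragments arriving from a redundant second source."""
    seen = set()
    unique = []
    for track_id, decode_time, payload in fragments:
        key = (track_id, decode_time)
        if key not in seen:
            seen.add(key)
            unique.append((track_id, decode_time, payload))
    return unique
```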

The additional timed metadata track is based on the ISO BMFF standard and can be fragmented just like other media. This work has extended the standard to allow the carrying of the DASH EventMessageBox in the timed metadata track in order to reuse existing specifications like ID3 and SCTE 214 for carrying SCTE 35 messages.

Rufael finishes by explaining how SCTE messages are inserted with reference to IDR frames and outlines how the DASH/HLS ingest interface between the packager and origin server works as well as showing a demo.

Watch now!
Speaker

Rufael Mekuria
Head of Research & Standardisation,
Unified Streaming

Video: How is Technology Shaping the Future of Streaming Services?

The streaming market is defined by technology, but as tech advances, it becomes more and more transparent to the user as it not only better facilitates delivery of media, but also makes the user experience better. This article looks at the current streaming market from the perspective of Disney, MediaKind, Dolby and Twitch to discover how far technology has brought us and the current challenges.

Sardjono Insani from Disney says that it’s very easy, now, to launch an OTT service, with the technical barrier much lower than before and many decent platforms offering good features. This allows new entrants a quick route to market. The challenges he sees are now more in the business domain, such as having the right content, retaining customers and meeting their expectations. Customers have exposure to the big streaming platforms which have good device integration and can invest heavily in the technology behind their services. Whilst off-the-shelf platforms can be very good and offer similar device integration, depending on your audience, you may have a few notable gaps between the service you can provide and the competition. Without a “smooth tech offering”, Sardjono suggests it will be harder to pick up and keep customers.

Sunita Kaur from Twitch sees customer engagement at the heart of the Twitch experience and one reason why it keeps customers and continues to grow. “What if TV could talk back to me?” is the question she uses to explain Twitch to those unfamiliar with the service, highlighting the fact that each video comes with a live chat feature allowing viewers to interact directly with the hosts, giving them an immediate connection with the audience. The future of her services will be around customer experience. Will viewers still tolerate a 5-second delay? What if a feature is more than a click away? Answering questions like these helps build the future of Twitch. Sunita also touches on ‘partnerships’, which are an important monetisation strategy for streamers, whether individuals or media giants. Partnerships, for example, allow microtransactions between viewers and streamers in the form of paid ‘super chats’. This voluntary donation route works well for younger audiences, who are no strangers to ad-blockers. Burkhard Leimbrock, Commercial Director for Twitch EMEA, phrases it like this: “In the era of ad blocking, content that is voluntarily engaged with and actively created by an audience – the majority of whom is aged 13 to 34 – in real-time creates a powerful and rare new opportunity for brands.”

Raul Aldrey from MediaKind talks about using technology to transform live events as we know them now into immersive experiences such as allowing fans to choose camera angles and even clip up their version and share on social media. However, having 25 live cameras able to be delivered to the viewer with frame accuracy is still a difficult challenge. Once you’ve worked out how to do that, the next question is how ad insertion works. Raul feels there is a lot of space for innovation in the user experience including creating hyper-personalised experiences using AI to follow specific players and also, linking in with Sunita’s points, using microtransactions much more during the event.

Pankaj Kedia from Dolby is focused on the world of mobile devices. In his region, he says, between 48 and 94% of consumers have upgraded or will upgrade in the coming year or so. This, he feels, means there is a lot of technical capability in the mobile device market, leaving a gap between what the available content exploits and what the devices can do. He sympathises with the need to maintain a consistent user experience where locally-generated content (such as Bollywood) sits next to international content which may look or sound better. But his point is that content creation has become democratised and tools are more accessible than before. Money is absolutely still a factor, but HDR has arrived in mainstream devices such as iPhones, so it’s not out of the question to have high-end technology in all levels of content.

Watch now!
Speakers

Pankaj Kedia
Managing Director, Emerging Markets,
Dolby Laboratories
Sardjono Insani
Director, Media & Entertainment Distribution,
Walt Disney Company
Sunita Kaur
Senior Vice President APAC,
Twitch
Raul Aldrey
Chief Product Officer,
MediaKind
Moderator: James Miner
CEO,
MinerLabs Group

Video: PTP Over WAN

Work is ongoing in the IPMX project to reduce SMPTE ST 2110’s reliance on PTP, but the reality is that PTP is currently necessary for digital audio systems as well as for most ST 2110 workflows. There are certainly challenges in deploying PTP from an architectural standpoint with some established best practices, but these are only useful when you have the PTP signal itself. For the times when you don’t have a local PTP clock, delivery over a WAN may be your only solution. With PTP’s standards not written with a WAN in mind, can this be done and what are the problems?

Meinberg’s Daniel Boldt describes the work he’s been involved with in testing PTP delivery over Wide Area Networks (WANs). WANs are known for having higher, more variable latency than Local Area Networks (LANs), which are usually better managed, have low latency, and can be interrogated by users to understand exactly how traffic is moving and configured to behave as needed. One aspect that Daniel focuses on today is Packet Delay Variation (PDV), a term that describes the difference in transit time between the packets which arrive the soonest and those that arrive last. For accurate timing, we would prefer overall latency to be very low and each packet to take the same amount of time to arrive. In real networks this isn’t what happens, as there are queuing delays in network equipment depending on how busy the device is, both in general and on the specific port being used for the traffic. These delays vary from second to second as well as throughout the day. Asymmetry can develop between send and receive paths, meaning packets in one direction take half the time to arrive compared with those in the other. Finally, path switching can create sudden step changes in path latency.
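
In code, PDV over a set of measured one-way delays is simply the spread between the fastest and slowest packets. This is a simplified illustration; real equipment measures continuously rather than over a fixed list.

```python
def packet_delay_variation(delays_us):
    """PDV as described above: the difference between the packets which
    arrive the soonest and those which arrive last (microseconds)."""
    return max(delays_us) - min(delays_us)
```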

Boundary Clocks and Transparent Clocks can resolve some of this as they take into account the delays through switches. Over the internet, however, these just don’t exist, so your options are either to build your own WAN using dark fibre or to deal with these problems at the remote site. If you are able to have a clock at the remote site, you could use the local GNSS-locked clock with the WAN as a backup feed to help when GNSS reception isn’t available. But when that’s not possible due to cost, space or the inability to mount an antenna, something more clever is needed.

Lucky Packet Filter
Source: Meinberg

The ‘lucky packet filter’ is a way of cleaning up the timing packets. Typically, PTP timing packets will arrive between 8 and 16 times a second, each one stamped with the time it was sent. When a packet is received, its propagation time can be easily calculated and put in a buffer. The filter can then look at the statistics and throw away any packets which took a long time to arrive, effectively selecting for those packets which had the least interference through the network. Packets which got held up for a long time are not useful for calculating the typical propagation time, so it makes sense to discard them. In a three-day-long test using a higher transmit rate of 64 packets per second, Meinberg saw the filter reduce jitter from 100 microseconds to an offset variation of 5 microseconds. When this was fed into a high-quality clock filter, the final jitter was only 300ns, well within the 500ns requirement of ST 2059-2 used for SMPTE ST 2110.
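
The idea can be sketched in a few lines of Python. The window size and the fraction of packets kept here are illustrative assumptions, not Meinberg’s actual parameters.

```python
from collections import deque

def lucky_packet_filter(delays_us, window=64, keep_fraction=0.1):
    """Estimate path delay from the 'lucky' packets: within each sliding
    window of measured propagation delays, average only the fastest
    fraction and discard packets that were held up in queues."""
    estimates = []
    buf = deque(maxlen=window)
    for delay in delays_us:
        buf.append(delay)
        if len(buf) == window:
            lucky = sorted(buf)[:max(1, int(window * keep_fraction))]
            estimates.append(sum(lucky) / len(lucky))
    return estimates
```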

Daniel concludes the video by showing the results of a test with WDR in which a PTP slave gateway device was fed 16 packets a second from a master PTP switch over the WAN. The lucky packet filter produced a timing signal within 500ns and, after going through an asymmetry step detection process in the clock, a signal with an accuracy of better than 100ns.

Watch now!
Speaker

Daniel Boldt
Meinberg