ATSC 3.0 has made the bold move of merging RF-delivered services with internet-delivered services. Branded as ‘NextGen TV’, the idea that viewers shouldn’t need to know which path their service arrives by is a welcome shift from the days of needing to select the right input on your TV. We’ve covered the technical details of ATSC 3.0 here before, but today we’re looking at the practical side of delivering such a service.
In this Streaming Media video, Nadine Krefetz hosts a conversation with Madeleine Noland from ATSC, Todd Achilles from Evoca TV, Jim DeChant from News Press & Gazette Broadcasting as well as Sassan Pejhan. They start by highlighting that one reason ATSC 3.0 was developed to succeed ATSC 1.0 is that it opens up the possibility of delivering HDR and/or UHD along with Dolby Atmos.
Given that ATSC 3.0 uses the same MPEG-DASH delivery that online streaming services use, one question is why use ATSC 3.0 at all. The benefit of the broadcast medium is in the name: there’s extreme efficiency in reaching thousands or millions of people with one transmitter, which ATSC 3.0 uses to its advantage. In ATSC 3.0’s case, transmitters typically reach 40 miles. The panel discusses the way in which you can split up your bandwidth to deliver different services with different levels of robustness. Doing this means you can have a service that targets reception on mobile devices whilst keeping a high-bandwidth, more delicately modulated channel for your main service intended for delivery to the home.
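To get a feel for the trade-off the panel describes, here’s a rough sketch in Python. The symbol rate, channel split, constellations and code rates below are assumed round numbers for illustration, not figures from the video or from the ATSC 3.0 specification; the point is only how robustness (fewer bits per symbol, heavier FEC) trades against bitrate when one channel is split into multiple pipes.

```python
# Illustrative sketch only: all figures are assumptions chosen to show
# the robustness-vs-bitrate trade-off when a broadcaster splits one RF
# channel between a rugged mobile service and a high-capacity home service.

def pipe_bitrate_mbps(symbol_rate_msps, bits_per_symbol, code_rate):
    """Approximate payload bitrate: symbols/s x bits/symbol x FEC code rate."""
    return symbol_rate_msps * bits_per_symbol * code_rate

SYMBOL_RATE = 5.0  # assumed ~5 Msym/s of usable symbols in the channel

# Robust pipe for mobile: QPSK (2 bits/symbol), heavy FEC, 20% of the channel.
mobile = pipe_bitrate_mbps(SYMBOL_RATE * 0.2, 2, 4 / 15)

# Delicate, high-capacity pipe for the home: 256QAM (8 bits/symbol),
# light FEC, 80% of the channel.
fixed = pipe_bitrate_mbps(SYMBOL_RATE * 0.8, 8, 11 / 15)

print(f"mobile pipe = {mobile:.2f} Mb/s, fixed pipe = {fixed:.2f} Mb/s")
```

Even with made-up numbers, the shape of the result holds: the rugged mobile pipe delivers well under a megabit per second while the delicately modulated home pipe carries tens of megabits, all from the same transmitter.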
Not unlike the existing technologies used by satellite and cable providers such as Sky Q in the UK, an internet connection can be used to deliver user-specific adverts, an important monetisation option needed to keep in step with the streaming services it can work in tandem with. Madeleine explains that ATSC has created APIs for apps to query TV-specific functions, like whether the TV is on or off, but these are the only ways in which app development for ATSC 3.0 differs from other web-based app development.
Finishing up the conversation, the panel discusses the similarities and differences to 5G.
Remote production is changing. Gone are the days when it meant leaving some of your team at base instead of sending them with the football OB. Now it can mean centralised remote production where, as Eurosport has recently shown, TV stations around Europe can remotely use equipment hosted in two private cloud locations. With the pandemic, it’s also started to mean distributed remote production, where the people are no longer together. This follows the remote production industry’s trend of moving the data all into one place and then moving the processing to the media. This means public or private clouds now hold your files or, in the case of live production like Eurosport’s, the media and the processing live there too. It’s up to you whether you then monitor that in centralised, MCR-style locations using multiviewers or with many people at home.
This webinar was hosted by EVS in association with the Broadcast Academy, an organisation started by HBS with the aim of creating consistency in live production throughout the industry by helping people build their skillsets and ensuring the inclusion of minorities. Joining moderator James Stellpflug from EVS are Gordon Castle from Eurosport, Mediapro’s Emili Planas, producer & director John Watts and Adobe’s Frederic Rolland.
Gordon Castle starts by talking through the background of Eurosport’s move to a centralised all-IP infrastructure, adopting SMPTE ST 2110 and associated technologies and putting all their equipment in two data centres in Europe to create a private cloud. This allows producers in Germany, Finland, Italy and at least 10 other markets to go to their office as normal but produce a programme using the same equipment pool as everyone else. This lets Eurosport maximise use of the equipment and reduce the time it lies idle. It’s this centralised remote production model which feeds into Gordon’s comment about wanting to produce more material at a higher quality, something he feels has been made achievable by the move to centralisation, along with giving people more flexibility over their location.
Much of the conversation revolved around the pandemic, which has been the number one forcing factor in the rise of decentralised remote production seen over the last two years, where the workforce is decentralised, often with the equipment centralised in a private or public cloud. The consensus in the panel is that the move to home working is often beneficial, but splitting up the team is a problem in the long term. A team that has worked together for a long time can survive on its previously gained knowledge of how people work, their strengths and the relationships forged in the heat of broadcast and over the dinner table. Bringing together a new team without close interpersonal contact raises the risk of transactional behaviour, not working as a team or simply not understanding how other people work. A strong point made is that an OB is like a sports team on the pitch. The players know where they are supposed to be, their role and what they are not supposed to do. They look out for each other and can predict what their teammates will want or will do. We see this behaviour all the time in galleries around the world as people produce live events, but the knowledge that rests on, as well as the verbal and visual cues needed to make it work, can be absent if the team has always worked remotely.
Economics plays a role in promoting remote production within companies. For Gordon, there’s a cost benefit in not having OBs on site, although he does acknowledge that depending on the country and the size of the OB, there are times when an on-site presence is still cheaper. When you don’t have all staff on site, people are more available for other work, meaning they can do multiple events a day, though John Watts cautions that two football matches are enough for one director if they want to keep their ‘edge’. The panel share a couple of examples of how they are keeping engagement between presenters despite not being in the same place, for instance, Eurosport’s The Cube.
On technical matters, the panel discusses the difficulty of ensuring connectivity to the home but is largely positive about the ability to maintain a working-from-home model for those who want it. There are certainly people whose home physically doesn’t accommodate work or whose surroundings, with young family members for instance, don’t match the need to concentrate for several hours on a live production. These problems affect individuals and can be managed and discussed in small teams. For large events, the panel considers remote working much more difficult. The overhead of pulling together multiple, large teams of people working at home is high, and whether this is realistic for events needing one hundred or more staff is a question yet to be answered.
As the video comes to a close, the panel also covers how software, once monolithic, is moving towards a federated ecosystem which allows broadcasters more flexibility and a greater opportunity to build a ‘best of breed’ system. It’s software that is unlocking the ability to work in the cloud and remotely, so it will be central to how the industry moves forward. They also cover remote editing, the use of AI/ML in the cloud to reduce repetitive tasks and the increased adoption of proxy files to protect high-quality content in the cloud while allowing easy access and editing at home. 5G comes under discussion with much positivity about its lower latency and higher bandwidth for both contribution and distribution. And finally, there’s a discussion about the different ways of working preferred by younger members of the workforce, who prefer computer screens to hardware surfaces.
Standards in media go back to the early days of cinema, when the sprocket holes in rolls of film were standardised with the intent of making it easier for the US Army to distribute training films. This standardisation work marked the beginning of SMPTE, though the acronym lacked a T at the time since television hadn’t yet been invented. There is a famous XKCD comic that mocks standards, or at least standards that promise to replace all that went before. This underlines why what standards don’t say is often more important than what they do. Giving the market room to evolve, advance and innovate is a vital aspect of good standards.
The broadcast industry is emerging from a time of great stability thanks to a number of standards that have been around for ages. SDI is a decades-old technology that is ubiquitous in the industry. Likewise, H.264 has become the default codec unless you have a specific use case for HEVC, AV1, VP9 etc., thanks to its almost universal presence in devices. Black and burst is now being replaced by PTP in IP installations. This counts as novel despite PTP’s upcoming twentieth birthday: however long a technology has existed, if its arrival in the broadcast sector is recent, support will be low.
This panel from SMPTE Hollywood features two members of SMPTE deeply involved with standardisation within the industry: Bruce Devlin, Standards Vice President, and Thomas Bause Mason, Director of Standards Development. They are joined by IP specialist JiNan Glasgow George and moderator Maureen O’Rourke from Disney.
In a sometimes frank discussion, we hear about standards bodies’ attempts to keep up with the industry-wide shift from hardware to software, the use of patents within standards, how standards bodies are financed and the cost of standards, software versus hardware patents, the standardisation of AI models, ensuring standards are realistic and useful through plugfests, and the differences between standards bodies such as ANSI, ISO and SMPTE.
IPMX is bringing a standards-based, software-deployable connectivity solution to ProAV. It stands in contrast to hardware-based IP technologies such as ZeeVee and SDVoE, which both aim to create de facto standards by building large alliances of companies around their chips. IPMX’s aim is to open up the market to a free-to-access technology that can be implemented in hardware and software alike. In this way, vendors have more freedom in implementation, with the hope of wider interoperability depending on IPMX adoption. These are amongst the business reasons behind IPMX covered in this talk by Matrox’s David Chiappini.
In today’s video, Matrox’s Jean Lapierre looks at the technical side of IPMX to answer some of the questions from those who have been following its progress. AIMS, the Alliance for IP Media Solutions, are upfront about the fact that IPMX is a work in progress with important parts of the project dealing with carriage of HDCP and USB still being worked on. However, much has already been agreed so it makes sense to start thinking about how this would work in real life when deployed. For a primer on the technical details of IPMX, check out this video from Andreas Hildebrand.
Jean starts by outlining the aims of the talk: to answer questions such as whether IPMX requires a new network, expensive switches and PTP. IPMX, he continues, is a collection of standards and specifications which enable transport of HD, 4K or 8K video in either an uncompressed form or a lightly compressed, visually lossless form with a latency under 1ms. Because you can choose to enable compression, IPMX is compatible with 1GbE, Cat 5e networks as well as multi-gigabit infrastructure. Moreover, there’s nothing to stop you mixing compressed and uncompressed signals on the same network. In fact, the technology is apt for carrying many streams, as each medium (known as an ‘essence’, a term which also covers metadata) is sent separately, which can lead to hundreds of separate streams on the network. The benefit of splitting everything up is that, in the past, if you wanted to read subtitles you would have to decode a 3Gbps signal to access a data stream better measured in bytes per second. Receiving just the data you need allows servers or hardware chips to minimise costs.
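The claim about 1GbE compatibility is easy to sanity-check with back-of-envelope arithmetic. The compression ratio below is an assumed figure for illustration, not one quoted in the talk:

```python
# Back-of-envelope check of why light compression lets IPMX run over
# 1GbE/Cat 5e: uncompressed HD needs an SDI-class link, but a visually
# lossless ratio in the single digits fits comfortably.

def video_bitrate_gbps(width, height, fps, bits_per_pixel):
    """Raw active-picture bitrate in Gb/s (ignores blanking and packet overhead)."""
    return width * height * fps * bits_per_pixel / 1e9

raw = video_bitrate_gbps(1920, 1080, 60, 20)  # 10-bit 4:2:2 => 20 bits/pixel
compressed = raw / 8                          # assumed ~8:1 light compression

print(f"uncompressed = {raw:.2f} Gb/s, compressed = {compressed * 1000:.0f} Mb/s")
```

Uncompressed 1080p60 comes out at roughly 2.5 Gb/s, which is why it traditionally rides a 3G-SDI link, while the lightly compressed version drops to a few hundred megabits and leaves room on a gigabit port for audio, metadata and more streams besides.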
Jean explains how multicast is used to deliver streams to multiple receivers and how receivers can subscribe to multiple streams. A lot of the time, video streams are used separately, such as from a computer to a projector, meaning exact timing isn’t needed. Even coming into a vision mixer/board doesn’t always need to be synchronised because, for many situations, having a frame synchroniser on all inputs works well. There are, however, times when frame-accurate sync is important and, for those times, PTP can be used. PTP stands for the Precision Time Protocol and if you’re unfamiliar, you can find out more here.
The upshot of using PTP with IPMX is that you can unlock perfect synchronisation for applications like video walls; any time you need to mix signals, really. IPMX relaxes some of the rules of PTP that SMPTE’s ST 2059 employs, to reduce the load on the grandmaster clocks. PTP is a very accurate timing mechanism, but it’s fundamentally different from black and burst because it’s a two-way technology that relies on an ongoing dialogue between the devices and the clock. This is why Jean says that for anything more than a small network, you are likely to need a switch that is PTP-aware and can answer the queries which would otherwise all go to the single, central grandmaster. In summary, then, Jean explains that for many IPMX implementations you don’t need a new network, a PTP grandmaster or PTP-aware switches. But for those wanting to mix signals with perfect sync, or those who have a large network, new investment would reap benefits.
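That two-way dialogue is the heart of what separates PTP from one-way black and burst. Each exchange yields four timestamps from which the follower works out both its clock offset and the path delay; the timestamps below are invented for illustration:

```python
# Sketch of the standard PTP offset/delay calculation. Timestamps:
#   t1 = master sends Sync, t2 = follower receives it,
#   t3 = follower sends Delay_Req, t4 = master receives it.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard PTP equations, which assume a symmetric network path."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # follower clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way mean path delay
    return offset, delay

# Invented example: follower clock runs 1.5 us ahead, 10 us path delay each way.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=11.5e-6, t3=50.0e-6, t4=58.5e-6)
print(f"offset = {offset * 1e6:.1f} us, delay = {delay * 1e6:.1f} us")
```

Because every follower keeps up this question-and-answer exchange, the device answering the queries can become a bottleneck, which is exactly why larger networks benefit from PTP-aware switches that respond on the grandmaster’s behalf.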
Views and opinions expressed on this website are those of the author(s) and do not necessarily reflect those of SMPTE or SMPTE Members.
This website is presented for informational purposes only. Any reference to specific companies, products or services does not represent promotion, recommendation, or endorsement by SMPTE.