Video: ATSC 3.0 Basics, Performance and the Physical Layer

ATSC 3.0 is a revolutionary technology bringing IP into the realm of RF transmission; it is gaining traction in North America and is already deployed in South Korea. Similar to DVB-I, ATSC 3.0 provides a way to unite the world of online streaming with that of ‘linear’ broadcast, giving audiences and broadcasters the best of both worlds. Looking beyond ‘IP’, the modulation schemes provided are much improved over ATSC 1.0, delivering much better reception for the viewer and flexibility for the broadcaster.

Richard Chernock, now retired, was the CSO of Triveni Digital when he gave this talk introducing the standard as part of a series of talks on the topic. ATSC, formed in 1982, brought the first wave of digital television to The States and elsewhere, explains Richard, as he looks at what ATSC 1.0 delivered and what, we now see, it lacked. For instance, its fixed 19.2Mbps bitrate hardly provides a flexible foundation for a modern distribution platform. We then return to the previously mentioned idea that ATSC 3.0 should glue together live TV, usually via broadcast, with online VoD/streaming.

The next segment of the talk looks at how the standard breaks down into separate documents. Most modern standards, like SMPTE’s 2022 and 2110, are actually a suite of individual standards documents united under one name. Just as SMPTE 2110-10, -20, -30 and -40 come together to explain how timing, video, audio and metadata combine to produce professional media over IP, ATSC 3.0 has sections explaining how security, applications, the RF/physical layer and management work. Richard follows this up with a look at the protocol stack, which explains which parts are served over TCP, which over UDP, and how the work is split between broadcast and broadband.

The last section of the talk looks at the physical layer, that is to say, how the signal is broadcast over RF and the resulting performance. Richard explains the newer techniques which improve the ability to receive the signal, but highlights that, as ever, it’s a balancing act between reception and bandwidth. ATSC 3.0’s benefit is that the broadcaster gets to choose where on that scale they want to broadcast, tuning for indoor reception, for high bit-rate reception or anywhere in between. With performance below -6dB SNR plus EAS wakeup, we’re left with the feeling that this is a large improvement over ATSC 1.0.
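The reception-vs-bitrate balancing act Richard describes can be illustrated with the Shannon limit: the lower the SNR a receiver must work at, the less throughput is achievable, whatever the modulation. A rough sketch (the 6 MHz channel width is the standard US broadcast channel; the SNR operating points are illustrative, not taken from the ATSC 3.0 mode tables):

```python
import math

def shannon_capacity_mbps(bandwidth_hz: float, snr_db: float) -> float:
    """Theoretical upper bound on throughput: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

# A 6 MHz broadcast channel at various operating points, from very
# robust (deep indoor/mobile) to high bit-rate (rooftop antenna):
for snr_db in (-6, 0, 15, 30):
    print(f"{snr_db:>4} dB SNR -> at most {shannon_capacity_mbps(6e6, snr_db):.1f} Mbps")
```

Even below 0 dB SNR (signal weaker than the noise) some capacity remains, which is what makes the sub ‑6dB operating modes useful for robust services.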

The talk finishes with two headlining features of ATSC 3.0. The first is PLPs, or Physical Layer Pipes, whereby separate channels can be created on the same RF channel. Each of these can have its own robustness vs bit rate tradeoff, which allows a range of types of services to be provided by one broadcaster. The other is Layered Division Multiplexing, which allows PLPs to be transmitted on top of each other, enabling 100% utilisation of the available spectrum.
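The per-PLP tradeoff can be sketched numerically: each pipe picks a modulation (bits per symbol) and a code rate (fraction of transmitted bits carrying payload), trading throughput against robustness. The configurations and symbol rate below are illustrative values, not figures from the ATSC 3.0 tables:

```python
# Hypothetical PLP configurations sharing one RF channel (illustrative).
# bits_per_symbol: 2 = QPSK (robust), 8 = 256QAM (high rate).
plps = {
    "mobile/handheld": {"bits_per_symbol": 2, "code_rate": 4 / 15},
    "fixed rooftop":   {"bits_per_symbol": 8, "code_rate": 11 / 15},
}

symbol_rate = 5e6  # symbols/second allotted to each pipe (assumed)

for name, cfg in plps.items():
    useful_mbps = symbol_rate * cfg["bits_per_symbol"] * cfg["code_rate"] / 1e6
    print(f"{name}: ~{useful_mbps:.1f} Mbps (lower rate = more robust)")
```

The point is that both services ride the same RF channel: one broadcaster can serve phones and rooftop antennas simultaneously by configuring each PLP differently.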

Watch now!
Speaker

Dr. Richard Chernock
Former CSO,
Triveni Digital

Video: An Introduction to fibre optic cabling

Many of us take fibre optics for granted, but how much about the basics do we actually know…or remember? You may be lucky enough to work in a company that only uses one type of fibre and connector, but in a job interview, it pays to know what happens in the wider world. Fortunately, Phil Crawley is here to explain fibre optics from scratch.

This introduction to fibre looks at the uses for fibre in broadcast. Simply put, fibre is used for high-speed networking and for long-distance cabling of baseband signals such as SDI, audio or RF. The meat of the topic is that there are two types of fibre, multi-mode and single-mode. It’s really important to know which one you’re going to be using; Phil explains why, showing the two different ways they keep light moving down the glass to the other end.

The talk looks at the history of multi-mode fibres, which have continued to improve over the years as recognised by the ‘OM’ number, currently stretching to OM5 (an advance on the OM4 which the talk considers). Since multi-mode has several different versions, it’s possible to have mismatches if you connect one type of fibre to another. Phil visits these scenarios, explaining how differences in the launch type (laser vs. LED) and core diameter affect the efficiency of moving light from one side of the junction to the other.
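The core-diameter mismatch Phil covers can be approximated with a simple area-ratio model: launching from a larger core into a smaller one loses the overfilled portion of the light. This is a simplified sketch that ignores numerical aperture, alignment and fill-condition effects; the core sizes are the standard multi-mode diameters:

```python
import math

def core_mismatch_loss_db(tx_core_um: float, rx_core_um: float) -> float:
    """Approximate coupling loss launching from tx core into rx core.
    Into a larger or equal core there is no area-mismatch loss."""
    if rx_core_um >= tx_core_um:
        return 0.0
    # Loss scales with the ratio of core areas (diameter squared).
    return -10 * math.log10((rx_core_um / tx_core_um) ** 2)

# e.g. an OM1 (62.5 um core) lead patched into OM3 (50 um core):
print(f"62.5 -> 50 um: ~{core_mismatch_loss_db(62.5, 50):.1f} dB loss")
```

Note the asymmetry: the same two fibres joined the other way round (50 um into 62.5 um) incur no area-mismatch loss, which is why the direction of a mismatched patch matters.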

On that note, connectors are of key importance as there’s nothing worse than turning up with a fibre patch lead with the wrong connectors on the end. Phil explains the differences then looks at how to splice fibres together and the issues that need to be taken care of to do it well along with easy ways to fault find. Phil finishes the talk explaining how single-mode differs and offers some resources to learn more.

This video was recorded at a Jigsaw24 Tech Breakfast while Phil Crawley was their Chief Engineer. Download the slides

Watch now!
Speaker

Phil Crawley
Lead Engineer, Media Engineers Ltd.
Former Chief Engineer, Jigsaw24

Video: Live production: Delivering a richer viewing experience

How can large sports events keep an increasingly sophisticated audience entertained and fully engaged? The technology of sports coverage has pushed broadcasting forwards for many years, and there’s no sign of that changing. More than ever there is a convergence of technologies, both at the event and in delivery to the customers, which is explored in this video.

First up is Michael Cole, a veteran of live sports coverage, now working for the PGA European Tour and Ryder Cup Europe. As the event organisers – who host 42 golfing events throughout the year – they are responsible for not just the coverage of the golf, but also a whole host of supporting services. Michael explains that they have to deliver live stats and scores to on-air, on-line and on-course screens, produce a whole TV service for the event-goers, deliver an event app and, of course, run a TV compound.

One important aspect of golfing coverage is the sheer distances that video needs to cover. Formerly that was done primarily with microwave links, and whilst RF still plays an important part with wireless cameras, the long distances are now covered by fibre. However, as fibre takes time to deploy for each event and is hard to conceal on otherwise impeccably presented courses, 5G is seeing a lot of interest as broadcasters validate its ability to cut rigging time and costs, as well as making the place look tidier in front of the spectators.

Michael also talks about the role of remote production. Many would see this as an obvious way to go, but remote production has taken many years to be adopted. Each broadcaster has different needs, so getting the right level of technology in place to meet everyone’s needs is still a work in progress. For golfing events with tens of trucks and cameras, Michael confirms that remote production and the cloud are a clear way forward, at the right time.

Next to talk is Remo Ziegler from Vizrt, who explains how Vizrt serves the live sports community. Looking more at the delivery aspect, they allow branding to be delivered to multiple platforms with different aspect ratios whilst maintaining a consistent look. Whilst branding is something that, when done well, isn’t noticed by viewers, more obvious examples are real-time, photo-realistic rendering of in-studio 3D graphics. Remo talks next about ‘Augmented Reality’ (AR), where moving 3D objects are placed into the video so they look part of the picture, annotating the footage to help explain what’s happening and tell a story. This can be done in real time with camera tracking technology, which takes into account telemetry from the camera, such as tilt angle and zoom level, to render the objects realistically.

Next, Chris Evans of Futuresource Consulting explains how viewing habits are changing. Whilst we all have a sense that the younger generation watch less live TV, Chris has the stats showing the change: from people aged 66+, for whom ‘traditional content’ comprises 82% of their viewing, down to 16-18 year olds, who watch only 28%, with the majority of the remainder made up of SVoD and ‘YouTube etc.’.

Chris talks about the newer cameras which have improved coverage, both by raising the technical capability of ‘lower tier’ productions and, for top-tier content, by adding cameras in locations that would otherwise not have been possible. He then shows there is an increase in HDR-capable cameras being purchased which, even when not used to broadcast HDR, are valued for their ability to capture the best image possible. Finally, Chris returns to remote production, explaining the motivations of the broadcasters, such as reduced cost, improved work-life balance and more environmentally friendly coverage.

The video finishes with questions from the webinar audience.

Watch now!
Speakers

Michael Cole
Chief Technology Officer,
PGA European Tour & Ryder Cup Europe
Remo Ziegler
Vice President, Product Management, Sports,
Vizrt
Chris Evans
Senior Market Analyst,
Futuresource Consulting

Video: Beam hopping in DVB-S2X

Beam hopping is the relatively new ability of a satellite to move its beam so that it’s transmitting to a different geographical area every few milliseconds. This has been made possible by advances in a number of technologies inside the satellite which make this fast, constant switching possible. DVB is harnessing this new capability to deliver bandwidth more efficiently to different areas.

This talk starts off with a brief history of the move from DVB-S2 to DVB-S2X and the successes of that move. But we then see that, geographically, within a wide beam two factors come into play: satellite throughput is limited by the amplifiers (TWTs), and certain areas within the beam need more throughput than others. By dynamically pointing a more focused beam using ferrite switches and steerable antennae – to name but two of the technologies at play – we see that up to 20% of unmet demand could be addressed.

The talk continues with ESA’s Nader Alagha explaining some of the basics of directing beams, covering cells and clusters (geographical areas) and ‘dwell time’, the amount of time the beam spends at each cell. He then gives an overview of the R&D work underway and the expected benefits.

A little like in older CRT televisions, a small gap needs to be put into the signal to cover the time the beam is moving. For analogue television this is called ‘blanking’; for satellites this is an ‘idle sequence’. Each time a beam hops, the carrier frequency, bandwidth and number of carriers can change.
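The cost of the idle sequence is easy to quantify: the fraction of time the beam actually spends transmitting to a cell, rather than moving, sets the usable throughput. The dwell and switching times below are purely illustrative, not figures from the DVB-S2X specification:

```python
# Illustrative beam-hopping duty cycle (all times assumed, in microseconds).
dwell_us = 2000   # time spent transmitting to one cell
idle_us = 10      # idle sequence while the beam switches cells

# Fraction of air time carrying payload rather than covering the hop.
efficiency = dwell_us / (dwell_us + idle_us)
print(f"Usable air time: {efficiency:.1%}")
```

The same arithmetic shows why fast switching hardware matters: halving the dwell time (hopping twice as often, for finer-grained capacity allocation) doubles the relative overhead of each idle sequence.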

Further topics explored are implementing this on geostationary and non-geostationary satellites, connecting hopping strategy to traffic, and the channel models that can be created around beam hopping. The idea of ‘superframes’ is detailed: for frames that need to be decoded at a very low SNR, the information is spread out and duplicated a number of times. This is supported in beam hopping with some modifications, which require preambles and an understanding of how these frames are fragmented.
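The benefit of duplicating information within a superframe can be sketched with an idealised repetition model: combining N identical copies of the same symbols raises the effective SNR by roughly 10·log10(N) dB, letting the receiver operate below the single-copy threshold. This assumes perfect coherent combining; the -2 dB threshold is an assumed figure, not one from the standard:

```python
import math

def repetition_gain_db(copies: int) -> float:
    """Ideal SNR gain from coherently combining N identical copies."""
    return 10 * math.log10(copies)

# How far below an assumed -2 dB single-copy decode threshold can a
# receiver operate, given N duplicated copies in the superframe?
threshold_db = -2.0
for n in (1, 2, 4, 8):
    floor_db = threshold_db - repetition_gain_db(n)
    print(f"{n} copies: decodes down to ~{floor_db:.1f} dB SNR")
```

Each doubling of the copy count buys about 3 dB, paid for with a proportional cut in throughput, which is exactly the kind of robustness-vs-rate dial the superframe spreading provides.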

The talk closes with a look at future work and answers to questions from the webinar attendees.

Watch now!
Speakers

Nader Alagha
Senior Communications Engineer,
ESA ESTEC (European Space Research and Technology Centre)
Peter Nayler
Business Manager,
EASii IC
Avi Freedman
Director of System Engineering,
Satixfy