Video: When USB meets Pay-TV – an overview of DVB CI Plus

Content protection needs to evolve not only in response to new attacks but also to the technology landscape around it. While the PCMCIA form factor has served CAMs well, it is an old technology that takes up a lot of space. This video looks at the move to USB interfaces and at feature updates to the DVB CI standards.

To lead us through, TP Vision’s Nicholas Frame joins DVB’s Emily Dubs and starts by explaining how all the different specifications and standards connect to provide the decryption ecosystem. This video centres on CI Plus 1.4 and CI Plus 2.0, which are standardised as ETSI TS 103 205 and ETSI TS 103 605 respectively.



CI Plus 1.4, Nicholas continues, introduces two main features. The first is a negotiation mechanism that lets the host and CAM list and select optional features, in much the same way as a browser and server negotiate capabilities when setting up a secure HTTPS connection using TLS. Nicholas walks us through the negotiation process and explains that the first of these optional features is Overt Watermarking.
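The negotiation described above can be sketched in miniature. This is purely illustrative: the feature names and the function shape are assumptions for the sketch, not the actual CI Plus 1.4 APDU encoding.

```python
# Illustrative sketch of feature negotiation in the style CI Plus 1.4
# introduces: the host advertises the optional features it supports and
# the two sides settle on the common subset, much as a TLS client and
# server settle on a shared cipher suite. Names here are hypothetical.

def negotiate(host_features: set, cam_features: set) -> set:
    """Return the optional features both host and CAM agree to use."""
    # Only features supported by BOTH sides are enabled.
    return host_features & cam_features

host = {"overt_watermarking", "another_optional_feature"}
cam = {"overt_watermarking"}
agreed = negotiate(host, cam)
print(agreed)  # {'overt_watermarking'}
```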

Watermarking is the practice of embedding data within a media stream to help trace its source, for use in copyright protection. It can be done covertly, with hidden data, or overtly. Overt watermarking works by defining a layer that is composited on top of the base video layer, not unlike the way the decoder shows the application GUI; the difference is that the watermark layer is controlled by the CAM, which decides when to show or hide the watermark. The protocol is kept simple, with the watermark itself comprising just ASCII text of a chosen colour at a defined position. Naturally, communication between the CAM and decoder is encrypted, and the decoder confirms back to the CAM when the watermark is shown, allowing the CAM to take action if it believes the watermark isn’t being respected.
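Based on that description, a watermark command carries little more than text, a colour, a position and a show/hide flag. The field names and the acknowledgement flow below are hypothetical, a sketch of the idea rather than the CI Plus 1.4 wire format.

```python
# Hypothetical sketch of an overt-watermark command and the decoder's
# confirmation back to the CAM. The real protocol is defined in
# ETSI TS 103 205; this only mirrors the description above.
from dataclasses import dataclass

@dataclass
class WatermarkCommand:
    text: str       # ASCII payload to composite over the video
    colour: str     # chosen colour, e.g. "#FFFFFF"
    x: int          # horizontal position of the overlay
    y: int          # vertical position of the overlay
    visible: bool   # the CAM toggles show/hide

def handle(cmd: WatermarkCommand) -> bool:
    """Decoder-side handler: render (or hide) the overlay, then
    confirm back to the CAM that the command was honoured."""
    if not cmd.text.isascii():
        return False   # the protocol restricts the payload to ASCII
    # ... composite cmd.text onto the watermark layer here ...
    return True        # confirmation the CAM uses to verify compliance

print(handle(WatermarkCommand("CH-1234", "#FFFFFF", 100, 50, True)))  # True
```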

Moving on to CI Plus 2.0, Nicholas explains that it’s an evolution, not a new standard. It’s based on the previous mature, trusted work in the CI Plus standard and adds additional functionality with a modern interface. There’s no loss of features nor change in signalling. It does change the interface, however, which brings with it a whole raft of improvements and possibilities.

USB-A is probably the most universally used physical interface, which means it’s well known to the public and is a tried-and-tested, robust connector. It can’t be inserted the wrong way round and there is no possibility of bent pins. In terms of manufacturing, space is saved on circuit boards, and building with USB components is very well understood. Nicholas sees this as opening up new possibilities such as decoders with different form factors or a move to virtualisation.

Although the lower layers defined by USB will change, the upper layers, which are specific to CI and DVB, won’t. Nicholas finishes the video by explaining how the USB interface (either 2.0 or 3.x) can use bulk transfer and will group MPEG TS packets into fragments for onward transmission.
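The grouping of TS packets into fragments can be sketched simply. The 188-byte TS packet size is standard MPEG; the 16-packets-per-fragment grouping factor here is an assumption for illustration, not a figure from the CI Plus 2.0 specification.

```python
# Sketch of grouping fixed-size MPEG TS packets (188 bytes each) into
# larger fragments for USB bulk transfer, as described above. The
# fragment size is an assumed value for illustration only.

TS_PACKET_SIZE = 188          # standard MPEG transport stream packet
PACKETS_PER_FRAGMENT = 16     # assumed grouping factor

def fragment_ts(stream: bytes):
    """Yield bulk-transfer fragments, each holding whole TS packets."""
    assert len(stream) % TS_PACKET_SIZE == 0, "partial TS packet"
    step = TS_PACKET_SIZE * PACKETS_PER_FRAGMENT
    for offset in range(0, len(stream), step):
        yield stream[offset:offset + step]

# 32 packets -> two fragments of 16 packets each
stream = bytes(TS_PACKET_SIZE * 32)
fragments = list(fragment_ts(stream))
print(len(fragments), len(fragments[0]))  # 2 3008
```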

Watch now!

Nicholas Frame
Standardisation Manager,
TP Vision
Moderator: Emily Dubs
Head of Technology,
DVB Project

Video: ATSC 3.0 Seminar Part III

ATSC 3.0 is the US-developed set of transmission standards which fully embraces IP technology, both over the air and for internet-delivered content. This talk follows on from the previous two, which looked at the physical and transmission layers. Here we see how using IP throughout brings benefits in broadening choice and in moving seamlessly between on-demand and live channels.

Richard Chernock is back as our Explainer in Chief for this session. He starts by explaining the driver for the all-IP adoption, which centres on the internet being the source of much media and data. The traditional ATSC 1.0 MPEG Transport Stream island worked well for digital broadcasting but has proven tricky to integrate with the internet, though not without some success if you consider HbbTV. Realistically, though, ATSC sees that as a stepping stone to the inevitable use of IP everywhere, and if we look at DVB-I from the DVB Project, we see that the other side of the Atlantic also sees the advantages.

But seamlessly mixing a broadcaster’s on-demand services with their linear channels is only one benefit. Richard highlights multilingual markets where the two main languages can be transmitted (for the US, usually English and Spanish) but other languages can be made available via the internet. This is a win in both directions: since these languages are less in demand, internet delivery costs are not overburdening, and for the same reason they wouldn’t warrant a place in the main transmission.

Richard introduces ISO BMFF and MPEG DASH, which are the foundational technologies for delivering video and audio over ATSC 3.0 and, to Richard’s point, any internet streaming service.

We get an overview of the protocol stack to see where they fit together. Richard explains both MPEG DASH and the ROUTE protocol, based on FLUTE, which allows delivery of data using IP over unidirectional links.
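The essence of ROUTE/FLUTE-style unidirectional delivery can be sketched as follows. TSI and TOI are genuine FLUTE/ROUTE concepts (session and object identifiers), but the datagram layout and chunk size below are simplified assumptions, not the actual packet format.

```python
# A much-simplified sketch of ROUTE/FLUTE-style delivery: each datagram
# carries a Transport Session Identifier (TSI), a Transport Object
# Identifier (TOI), an offset and a chunk of the object, so a receiver
# can reassemble the object with no return channel. Illustrative only.

CHUNK = 1024  # assumed payload size per datagram

def packetise(tsi: int, toi: int, obj: bytes):
    """Split one object into (tsi, toi, offset, payload) datagrams."""
    return [(tsi, toi, off, obj[off:off + CHUNK])
            for off in range(0, len(obj), CHUNK)]

def reassemble(datagrams):
    """Rebuild the object from datagrams received in any order."""
    parts = sorted(datagrams, key=lambda d: d[2])   # order by offset
    return b"".join(p[3] for p in parts)

segment = bytes(range(256)) * 10                    # 2560-byte dummy segment
dgrams = packetise(tsi=1, toi=42, obj=segment)
assert reassemble(reversed(dgrams)) == segment      # order doesn't matter
print(len(dgrams))  # 3
```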

The use of MPEG DASH allows advertising to become more targeted for the broadcaster. Cable companies, Richard points out, have long been able to swap out an advert in a local area for another and increase their revenue. In recent years, companies like Sky in the UK (now part of Comcast) have developed technologies like AdSmart which, even on MPEG TS satellite transmissions, can receive internet-delivered targeted ads and play them over the top of the transmitted ads, even when the programme is replayed from disk. Any adopter of ATSC 3.0 can achieve the same, which could be part of a business case to make the move.

Another part of the business case is that ATSC 3.0 not only supports 4K, unlike ATSC 1.0, but also ‘better pixels’. ‘Better pixels’ has long been the way to remind people that TV isn’t just about resolution. ‘Better pixels’ includes ‘next generation audio’ (NGA), HDR, Wide Colour Gamut (WCG) and even higher frame rates. The choice of HEVC Main 10 Profile should allow all of these technologies to be used. Richard makes the point that if you balance the additional bitrate requirement against the likely impact on viewers, UHD doesn’t make sense compared to, say, enabling HDR.

Richard moves his focus to audio next, unpacking the term NGA and talking about surround sound and object-oriented audio. He notes that renderers are now very advanced and can analyse a room to deliver a surround-sound experience without speakers having to be placed in the exact spots normally required. Offering options, rather than a single 5.1 surround track, is very important for personalisation, which isn’t just about choosing a language but also covers commentary, audio description and more. Richard says that audio could be delivered in a separate pipe (a PLP, discussed previously) so that even after the video has cut out due to bad reception, the audio continues.

The talk finishes looking at accessibility such as picture-in-picture signing, SMPTE Timed Text captions (IMSC1), security and the ATSC 3.0 standards stack.

Watch now!

Richard Chernock
Former CSO,
Triveni Digital

Video: ATSC 3.0 Part II – Cutting Edge OFDM with IP

RF, modulation, Single Frequency Networks (SFNs) – there’s a lot to love about this talk, the second in a series of ATSC seminars, though much of it is transferable to DVB. Today we’re focused on transmission, showing how ATSC 3.0 improves on DVB-T, how it simultaneously delivers feeds with different levels of robustness, the benefits of SFNs and much more.

In the second in this series of ATSC 3.0 talks, GatesAir’s Joe Seccia leads the proceedings, starting by explaining why ATSC 3.0 didn’t simply adopt DVB-T2’s modulation scheme. The answer, explained in detail by Joe, is that by putting in further work, they got closer to the Shannon limit than DVB-T2 does. He goes on to highlight the documents within the ATSC 3.0 suite of standards that define the RF physical layer.
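The Shannon limit Joe refers to is the theoretical ceiling on error-free data rate for a channel of a given bandwidth and signal-to-noise ratio. The quick computation below uses a 6 MHz bandwidth (a US broadcast channel) and an arbitrary 15 dB SNR as an illustrative operating point, not an ATSC figure.

```python
# Shannon capacity: C = B * log2(1 + S/N). The closer a modulation and
# coding scheme gets its real throughput to C, the better it is doing.
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Maximum error-free bit rate (bit/s), SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

c = shannon_capacity(6e6, 15.0)
print(f"{c / 1e6:.1f} Mbit/s")  # ≈ 30.2 Mbit/s
```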

After showing how the different processes such as convolutional encoding and multiplexing fit together in the transmission chain, Joe focuses in on Layered Division Multiplexing (LDM) where a high robustness signal can be carefully combined with a lower robustness signal such that where one interferes with the other, there is enough separation to allow it to be decoded.

Next we are introduced to PLPs: Physical Layer Pipes. These can also be found in DVB-T2 and DVB-S2 and are logical channels carrying one or more services, with a modulation scheme and robustness particular to each individual pipe. Within those lie Frames and Subframes, and Joe gives a good breakdown of the differences between the three, the Frame being at the top of the pile, containing the other two. We then look at how the bootstrap signal, sent at a known modulation scheme and symbol rate, describes what’s coming next, allowing very dynamic operation with streams sent using different modulation settings. The bootstrap is also important as it carries Emergency Alert System (EAS) signalling.

Layered Division Multiplexing returns as the next hot topic, and it elicits questions from the audience. LDM is important because it allows two streams to be sent at the same time, carrying independent or related broadcasts. For instance, it could deliver UHD content with HD underneath, the HD layer modulated for much better robustness.
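The LDM idea above can be shown numerically: a robust core layer and an enhanced layer share the same channel, with the enhanced layer injected several dB below the core so the core still decodes through the interference. The -5 dB injection level and the toy BPSK-style symbols are illustrative assumptions, not deployment figures.

```python
# Sketch of Layered Division Multiplexing: sum two baseband sample
# streams, scaling the enhanced (e.g. UHD) layer down by the injection
# level so the core (e.g. HD) layer remains decodable.

def ldm_combine(core, enhanced, injection_db=-5.0):
    """Combine the layers, enhanced layer scaled by the injection level."""
    scale = 10 ** (injection_db / 20)   # dB -> amplitude ratio (~0.56)
    return [c + scale * e for c, e in zip(core, enhanced)]

core = [1.0, -1.0, 1.0, 1.0]        # robust HD layer symbols
enhanced = [1.0, 1.0, -1.0, 1.0]    # less robust UHD layer symbols
combined = ldm_combine(core, enhanced)
# A core-layer receiver just slices on sign: the ±0.56 perturbation
# from the enhanced layer is not enough to flip any core symbol.
print([1.0 if s > 0 else -1.0 for s in combined])  # [1.0, -1.0, 1.0, 1.0]
```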

Another way of maintaining robustness is to establish an SFN, which is now possible with ATSC 3.0. Joe explains how this works and how the RF from different antennae can help with reception. Importantly, he also outlines how to work out the maximum separation between antennae and talks through different deployment techniques. He then works through some specific cases to understand the transmission power needed.
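A back-of-the-envelope version of that separation calculation: signals from two SFN transmitters must arrive within the guard interval, so their maximum useful separation is roughly the distance radio waves travel in that time. The 300 µs guard interval below is an illustrative value, not a recommendation.

```python
# Rough maximum SFN transmitter separation from the guard interval:
# both signals must arrive within the guard interval to avoid
# inter-symbol interference, so separation <= c * guard_interval.

SPEED_OF_LIGHT = 299_792_458  # m/s

def max_sfn_separation_km(guard_interval_us: float) -> float:
    """Approximate maximum transmitter separation in km."""
    return SPEED_OF_LIGHT * guard_interval_us * 1e-6 / 1000

print(f"{max_sfn_separation_km(300):.0f} km")  # ≈ 90 km
```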

As the end of the video nears, Joe talks about MIMO transmission, explaining how this, among other benefits, can allow channel bonding, where two 6 MHz channels can be treated as a single 12 MHz channel. He talks about how PTP can complement GPS in maintaining timing if diverse systems are linked by Ethernet, and he finishes with a walkthrough of configuring a system.

Watch now!

Joe Seccia
Manager, TV Transmission Market and Product Development Strategy

Video: ATSC 3.0 Basics, Performance and the Physical Layer

ATSC 3.0 is a revolutionary technology bringing IP into the realm of RF transmission, which is gaining traction in North America and is deployed in South Korea. Similar to DVB-I, ATSC 3.0 provides a way to unite the world of online streaming with that of ‘linear’ broadcast, giving audiences and broadcasters the best of both worlds. Looking beyond ‘IP’, the modulation schemes provided are much improved over ATSC 1.0, giving much better reception for the viewer and flexibility for the broadcaster.

Richard Chernock, now retired, was the CSO of Triveni Digital when he gave this talk introducing the standard as part of a series of talks on the topic. ATSC, formed in 1982, brought the first wave of digital television to The States and elsewhere, explains Richard as he looks at what ATSC 1.0 delivered and what, we now see, it lacked. For instance, its fixed 19.2 Mbps bitrate hardly provides a flexible foundation for a modern distribution platform. We then look at the previously mentioned concept that ATSC 3.0 should glue together live TV, usually via broadcast, with online VoD/streaming.

The next segment of the talk looks at how the standard breaks down into separate standards. Most modern standards, like SMPTE’s 2022 and 2110, are actually a suite of individual standards documents united under one name. Whilst SMPTE 2110-10, -20, -30 and -40 come together to explain how timing, video, audio and metadata work to produce the final result of professional media over IP, ATSC 3.0 similarly has sections explaining how security, applications, the RF/physical layer and management work. Richard follows this up with a look at the protocol stack, which serves to explain which parts are served over TCP, which over UDP and how the work is split between broadcast and broadband.

The last section of the talk looks at the physical layer, that is to say how the signal is broadcast over RF and the resultant performance. Richard explains the newer techniques which improve the ability to receive the signal, but highlights that – as ever – it’s a balancing act between reception and bandwidth. ATSC 3.0’s benefit is that the broadcaster gets to choose where on the scale they want to broadcast, tuning for indoor reception, for high bit-rate reception or anywhere in between. With better than -6 dB SNR performance plus EAS wakeup, we’re left with the feeling that there is a large improvement over ATSC 1.0.

The talk finishes with two headlining features of ATSC 3.0. The first is PLPs (Physical Layer Pipes), where separate logical channels can be created on the same RF channel, each with its own robustness vs bit rate tradeoff, allowing a range of service types to be provided by one broadcaster. The other is Layered Division Multiplexing, which allows PLPs to be transmitted on top of each other, giving 100% utilisation of the available spectrum.

Watch now!

Dr. Richard Chernock
Former CSO,
Triveni Digital