Video: Intro into IPv6

It’s certainly taken its time to get here, but IPv6 is increasingly used on the internet. Google now report that just under 30% of the traffic to Google is IPv6, and both Akamai and APNIC show UK IPv6 readiness at around 30%, with the US at around 50%. Deployment within most enterprise environments, however, is often non-existent, with many products in the broadcast sector not supporting it at all.

Alex Latzko is an IPv6 evangelist and stands before us to introduce those who are IPv4 savvy to IPv6. For those of us who learnt it once, this is an accessible refresher. Those new to the topic will be able to follow, too, if they have a decent grasp of IPv4. Depending on where you are in the broadcast chain, the impetus to understand IPv6 may be strong, so grab your copy of the slides and let’s watch.

There are no broadcast addresses in IPv6

Alex Latzko
Alex, from ServerCentral Turing Group, starts by explaining IPv6 addresses. Familiar to some as a far-too-long mix of hexadecimal numbers and colons, the sheer size of the address space is, Alex says, a benefit: it allows much more flexibility in how we use the IPv6 address space compared with IPv4. He takes us through the structure of the addresses, starting with well-known tricks like abbreviating runs of zeros with a double colon, but also less well-known ones, such as how to embed IPv4 addresses within an IPv6 address and the prefixes reserved for multicast traffic. Alex goes on to show the importance of MAC addresses in IPv6. EUI-64 is a building block for several IPv6 functions which creates a 64-bit identifier from the 48-bit MAC address. This then allows us to create the important ‘link-local’ address.
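The EUI-64 construction Alex describes can be sketched in a few lines of Python (an illustrative sketch, not code from the talk; the example MAC address is made up): flip the universal/local bit of the first octet, insert ff:fe in the middle, and prepend the fe80::/64 prefix to get the link-local address.

```python
import ipaddress

def mac_to_eui64(mac: str) -> bytes:
    """Build the 64-bit EUI-64 interface identifier from a 48-bit MAC."""
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    octets[0] ^= 0x02                      # flip the universal/local bit
    return bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])  # insert ff:fe

def link_local(mac: str) -> str:
    """Combine the fe80::/64 prefix with the EUI-64 identifier."""
    iid = int.from_bytes(mac_to_eui64(mac), "big")
    return str(ipaddress.IPv6Address((0xFE80 << 112) | iid))

print(link_local("00:1a:2b:3c:4d:5e"))     # fe80::21a:2bff:fe3c:4d5e
```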

The last half of the presentation starts with a look at the CIDR prefix lengths that are in use and, in some cases, agreed as standards on the internet at large and in customer networks. For instance, internet routing works on blocks of /48 or larger. Within customer networks, subnets are typically /64.
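The prefix arithmetic is easy to explore with Python’s standard ipaddress module (a sketch using the 2001:db8::/32 documentation prefix, not an allocation from the talk): a single /48 holds 2^16 = 65,536 /64 subnets.

```python
import ipaddress

# A /48 site allocation carved into /64 subnets, as commonly done in
# customer networks. 2001:db8::/32 is reserved for documentation examples.
site = ipaddress.ip_network("2001:db8:abcd::/48")
print(2 ** (64 - site.prefixlen))          # 65536 /64s fit in one /48
print(next(site.subnets(new_prefix=64)))   # 2001:db8:abcd::/64
```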

In IPv6, ARP is no more. ARP can’t work because it relies on broadcast addresses, which don’t exist in the IPv6 world. This gives rise to the Neighbour Discovery Protocol, which does something very similar: it lets you discover neighbours and routers, detect duplicate addresses, and more.
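Instead of broadcasting, a Neighbour Solicitation is sent to a ‘solicited-node’ multicast address derived from the target address, so only a handful of hosts need to process it. A minimal sketch of the derivation (illustrative, not from the talk): append the low 24 bits of the unicast address to ff02::1:ff00:0/104.

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """Solicited-node multicast address: the ff02::1:ff00:0/104 prefix
    followed by the low 24 bits of the target unicast address."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))

print(solicited_node("2001:db8::21a:2bff:fe3c:4d5e"))  # ff02::1:ff3c:4d5e
```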

Alex covers whether ‘NAT’ is possible in IPv6 and then looks at how routing works. Starting by imploring us to use ‘proper hierarchy’, he explains that there is no need to conserve IPv6 space. In terms of routing protocols, the landscape is varied. RIP is out of the window, as v1 and v2 have no knowledge of IPv6. OSPFv3 is a beacon of hope, though it is often deployed in parallel with the IPv6-ignorant OSPFv2. The good news is that IS-IS, as well as BGP, is perfectly happy with either.

Watch now!
Download the presentation

Speaker

Alex Latzko
Lead Network Architect
ServerCentral Turing Group

Video: Next Generation TV Audio

Often not discussed, audio is essential to television and film so as the pixels get better, so should the sound. All aspects of audio are moving forward with more processing power at the receiver, better compression at the sender and a seismic shift in how audio is handled, even in the consumer domain. It’s fair to say that Dolby have been busy.

Larry Schindel from Linear Acoustic is here, thanks to the SBE, to bring us up to date on what’s normally called ‘Next Generation Audio’ (NGA). He starts from the basics, looking at how audio has traditionally been delivered as channels. Stereo sound is delivered as two channels, one for each speaker, with the sound engineer choosing how the audio is split between them. With the move to 5.1 and beyond, this continued with the delivery of 6, 8 or even more channels of audio. The trouble is that the mix is fixed at the time it leaves the sound suite: mixing sound into channels makes assumptions about the layout of your speakers. When it’s not possible to put your speakers in the ideal positions, the sound suffers.
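The fixed, channel-based approach can be illustrated with a toy downmix (purely a sketch, not from the talk): once 5.1 is folded down to stereo with fixed coefficients, the layout decisions are baked in and can’t be undone at the receiver.

```python
import math

def downmix_51_to_stereo(l, r, c, lfe, ls, rs):
    """Fold one 5.1 sample down to stereo with a common fixed coefficient
    set: centre and surrounds at -3 dB (1/sqrt(2)), LFE dropped."""
    g = 1 / math.sqrt(2)
    return (l + g * c + g * ls,
            r + g * c + g * rs)

# A centre-only sound ends up split equally between the two speakers.
print(downmix_51_to_stereo(0, 0, 1.0, 0, 0, 0))
```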

Dolby Atmos has heralded a mainstream move to object-based audio where sounds are delivered with information about their position in the sound field as opposed to the traditional channel approach. Object-based audio leaves the downmixing to the receiver which can be set to take into account its unique room and speaker layout. It represents a change in thinking about audio, a move from thinking about the outputs to the inputs. Larry introduces Dolby Atmos and details the ways it can be delivered and highlights that it can work in a channel or object mode.
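By contrast, an object-based renderer receives the sound plus position metadata and computes the speaker gains itself, for whatever layout it actually has. A toy constant-power pan of one object between two speakers (purely illustrative; real Atmos rendering is far more sophisticated):

```python
import math

def render_object(sample: float, pan: float):
    """Render one audio object to a two-speaker layout.
    pan runs from -1.0 (hard left) to +1.0 (hard right);
    cos/sin gains keep total power constant across the pan."""
    theta = (pan + 1.0) * math.pi / 4      # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(theta), sample * math.sin(theta)

# The same object metadata renders differently on each listener's setup;
# here, a dead-centre object sends equal level to both speakers.
print(render_object(1.0, 0.0))
```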

Larry then looks at where you can get media with Dolby Atmos. Cinemas are an obvious starting point, but there is a long list of streaming and pay-TV services which use it, too. Larry talks about the upcoming high-profile events which will be covered in Dolby Atmos showing that delivering this enhanced experience is something being taken seriously by broadcasters across the board.

Consumers, though, still have the problem of getting the audio to the right place in their awkward, often small, rooms. Larry looks at some of the options for getting great audio in the home, including soundbars and speakers which bounce sound off the ceiling.

One of the key technologies for delivering Dolby Atmos is Dolby AC-4, the improved audio codec which takes compression a step further than AC-3. We see that data rates have tumbled: for example, 5.1 surround that needed 448kbps in AC-3 can now be delivered in 144kbps with AC-4. Naturally, it supports both channel and object modes, and Larry explains how it can deliver a base mix with other audio elements over the top for the decoder to place, allowing better customisation. This can include other languages or audio-description/video-description services. Importantly, AC-4, like Dolby E, can be sent so that it doesn’t overlap video frames, allowing it to accompany routed audio. Without this awareness of video, any time a video switch was made the audio would become corrupted and there would be a click.

Dolby Atmos and AC-4 stand on their own and are widely applicable to much of the broadcast chain. Larry finishes this presentation by mentioning that Dolby AC-4 will be the audio of choice for ATSC 3.0. We’ve covered ATSC 3.0 extensively here at The Broadcast Knowledge so if you want more detail than there is in this section of the presentation, do dig in further.

Watch now!

Speaker

Larry Schindel
Senior Product Manager,
Linear Acoustic

Video: Introduction to IPMX

The Broadcast Knowledge has documented over 100 videos and webinars on SMPTE ST 2110. It’s a great suite of standards, but it’s not always simple to implement. For smaller systems, many of the complications and nuances don’t occur, so a lot of the deeper dives into ST 2110 and its associated specifications, such as NMOS from AMWA, focus on the work done in large systems in tier-1 broadcasters such as the BBC, tpc and FIS Skiing for SVT.

ProAV, the professional end of the AV market, is a different market. Very few companies have a large AV department, if one at all. So the ProAV market needs technologies which are much more ‘plug and play’, particularly on the events side of the market. To date, the ProAV market has been successful in adopting IP technology with quick deployments by using heavily proprietary solutions like ZeeVee, SDVoE and NDI, to name a few. These achieve interoperability by having the same software or hardware in each and every implementation.

IPMX aims to change this by bringing together a mix of standards and open specifications: SMPTE ST 2110, NMOS specs and AES. Any individual or company can gain access and develop a service or product to meet them.

Andreas gives a brief history of IP deployment to date, outlining how AES67, ST 2110, ST 2059 and the IS specifications came about, his point being that the work is not yet done. ProAV has needs beyond, though complementary to, those of broadcast.

AES67 is already the answer to a previous interoperability challenge, explains Andreas, as the world of audio over IP was once a purely federated world of proprietary standards which had no, or limited, interoperability. AES67 defined a way to allow these standards to interoperate and has now become the main way audio is moved in SMPTE 2110 under ST 2110-30 (2110-31 allows for AES3). Andreas explains the basics of 2110, AES, as well as the NMOS specifications. He then shows how they fit together in a layered design.

Andreas brings the talk to a close by looking at some of the extensions that are needed. He highlights the ability to be more flexible with the quality-bandwidth-latency trade-off: some ProAV applications require pixel perfection, while others are dictated by lower bandwidth. The current ecosystem, even including ST 2110-22’s ability to carry JPEG XS instead of uncompressed video, allows only very coarse control of this. HDMI, naturally, is of great importance for ProAV, with so many HDMI interfaces in play as well as the wide variety of resolutions and framerates found outside of broadcast. Work is ongoing to enable HDCP-protected content to be carried, suitably encrypted, in these systems. Finally, there is a plan to specify a way to relax the highly strict PTP requirements.

Watch now!
Speaker

Andreas Hildebrand
Evangelist,
ALC NetworX

Video: The Five Ws of 5G

Following on from last week’s deep dive below the hype of 5G, this shorter talk looks at both the promise and the implementation challenges of a technology which offers so much to so many different walks of life.

Michael Heiss takes the stage and starts with a short history lesson, from 1G (an analogue technology) up through 2G, a.k.a. GSM, and on to 4G, LTE and now 5G. Michael’s hypothesis is that this is the fourth industrial revolution. The first, he proposes, is what we know as the Industrial Revolution, which started with harnessing steam power; until the invention of electricity, though, you had to be close to your power source. Electricity was the game-changer, enabling machines, albeit with the relevant and often long wires, to be abstracted from the power generation. Similarly, while data and computing have transformed our world over the past five decades or more, Michael says 5G will provide that same kind of abstraction: just as electricity removed people from power production, 5G promises to free people from having to be next to the computer where the data lives. Michael outlines how higher speeds and lower latency enable new use cases, covering consumer applications, medical use cases and business uses.

As with any new technology, there is always a battle for dominance, so Michael outlines some of the different words and phrases and explains what they mean. If you see “NR”, it stands for New Radio and comes from 3GPP. There are a number of frequency bands which 5G can occupy, which Michael introduces. The current bands for 2G and 3G between 700 and 1400 MHz can be reused, and there are also a number of new frequencies, up to and including some C-band frequencies, in use. These are known collectively by some as the ‘sub-6’ frequencies, to differentiate them from the millimetre-wave (mm-wave) frequencies which have been opened up, starting at 24 GHz and reaching up to 47 GHz.

It’s an inconvenient truth of physics that higher-frequency RF is, in general, more strongly attenuated. This means the mm-wave frequencies, being so high, are only really effective with near line of sight to the device; they can’t penetrate walls or windows. 5G will need many more cell sites outdoors thanks to the higher sub-6 frequencies, but to use mm-wave, telcos will be restricted to line-of-sight transmitter-to-transmitter links or to deploying highly local micro- or femtocells on lamp posts (light poles) or ceiling-mounted internal relays. Michael finishes his talk by discussing these implementation difficulties.
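The attenuation argument is easy to quantify with the free-space path loss formula (a back-of-envelope sketch, not figures from the talk): loss grows with 20·log10(f), so a 28 GHz mm-wave link loses roughly 18 dB more than a 3.5 GHz sub-6 carrier over the same distance, before any wall or window losses are even considered.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Extra loss of mm-wave (28 GHz) vs a sub-6 carrier (3.5 GHz) at 100 m.
print(round(fspl_db(100, 28e9) - fspl_db(100, 3.5e9), 1))  # 18.1 dB
```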

Watch now!
Speakers

Michael Heiss
Principal Consultant
M. Heiss Consulting