Networking in the cloud, by rights, should be the same as in your office, but it’s a lot easier when you’re led through it. From subnets to VPNs, this talk from AWS makes sure you can get your VPC (Virtual Private Cloud) talking to other parts of your cloud infrastructure and to your office.
Starting with the basics and building up, Perry and Tom take us through IP address allocation, address choices, firewall configuration and security configuration, then move on to Direct Connect, VPNs, sharing VPC resources and much more.
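To give a feel for the kind of address-allocation decisions the talk covers, here is a minimal sketch using Python’s standard `ipaddress` module. The `10.0.0.0/16` CIDR block is a hypothetical example of an RFC 1918 private range you might assign to a VPC:

```python
import ipaddress

# Hypothetical VPC CIDR block -- one of the RFC 1918 private
# ranges typically chosen when creating a VPC.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the VPC into /24 subnets, e.g. one per availability zone.
subnets = list(vpc.subnets(new_prefix=24))

print(subnets[0])         # 10.0.0.0/24
print(subnets[1])         # 10.0.1.0/24
print(vpc.num_addresses)  # 65536 addresses in the whole VPC
```

Choosing the block size up front matters: a /16 leaves room for 256 /24 subnets, whereas a smaller block can leave you short when the infrastructure grows.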
From the AWS Summit 2019, this is a great talk for those who know networking well and are new to AWS, as well as those who are comfortable with AWS names, but are a little rusty on the finer points of networking.
Manipulating the manifest of streamed video allows localisation of adverts with the option of per-client customisation. This results in better monetisation but also a better way to deal with blackouts and other regulatory or legal restrictions.
Most streamed video is delivered using a playlist – simply a text file which lists the locations of the many files that contain the video – so different playlists can be delivered to clients in different locations, detected by geolocating the IP address. Similarly, different ads can be delivered depending on the type of client requesting: phone, tablet, computer and so on.
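A minimal sketch of the idea, assuming a hypothetical HLS media playlist in which the ad slot is marked by an invented `#EXT-X-AD-SLOT` comment line; real systems key off SCTE-35 markers or `EXT-X-DATERANGE` tags, and all the segment names and the region mapping here are made up for illustration:

```python
# Hypothetical HLS media playlist; the #EXT-X-AD-SLOT marker is an
# invented server-side annotation, not a real HLS tag.
PLAYLIST = """#EXTM3U
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
content_001.ts
#EXT-X-AD-SLOT
#EXTINF:6.0,
ad_generic.ts
#EXTINF:6.0,
content_002.ts
"""

# Hypothetical region -> creative mapping, e.g. keyed off a
# geolocation of the client's IP address.
REGIONAL_ADS = {"UK": "ad_uk.ts", "US": "ad_us.ts"}

def localise(playlist: str, region: str) -> str:
    """Return a per-client playlist with the ad segment swapped."""
    out, swap_next = [], False
    for line in playlist.splitlines():
        if line == "#EXT-X-AD-SLOT":
            swap_next = True   # the next segment URI is the ad
            continue           # the marker itself stays server-side
        if swap_next and not line.startswith("#"):
            line = REGIONAL_ADS.get(region, line)
            swap_next = False
        out.append(line)
    return "\n".join(out)

print(localise(PLAYLIST, "UK"))  # UK client receives ad_uk.ts
```

Because each client fetches its own copy of this small text file, the customisation costs almost nothing compared with re-encoding video.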
Here, Imagine’s Yuval Fisher starts by reminding us how online streaming typically works, using HLS as an example. He then leads us through the possibilities of manifest manipulation. One interesting idea is using this to remove hardware, delivering cost savings by using the same infrastructure to deliver to both the internet and broadcast. Yuval finishes up with a list of “Dos and Don’ts” to explain the best way to achieve the playlist manipulation.
Sarah Foss rounds off the presentation explaining how manifest manipulation sits at the centre of the rest of the ad-delivery system.
Server-Side Ad Insertion (SSAI) is the best defence against ad-blockers, but switching in an ad at source can be tricky, particularly in low-latency streams. This talk at the OTT Leadership Summit at Streaming Media East brings together leaders in the field to explain where they’re up to in delivering this technology and the benefits they see.
Magnus Svensson tells us about the instrumental role Eyevinn Technology, the consultancy which runs the technical conference Streaming Tech Sweden, has played in Sweden in creating an open standard for all the broadcasters to work to, agreeing how to track SSAI so the correct payments can be made. Magnus also talks about aligning SCTE insertion with the MPEG structure and the importance of correct preparation of the source video.
Tony Brown from Newsy talks about the centralised nature of SSAI making management easier and gives an overview of decisioning, bringing together buyers and sellers of ads. Tony also discusses other analytics such as adjacency and targeting.
Jason Justman of Sinclair Broadcasting Group, explains SCTE insertion and talks about the technical difficulties in reacting to live changes in programming.
Geir Magnusson, Jr. from fuboTV covers the difficulties of preparing the ads quickly enough for thousands or millions of streams to get customised, SSAI ads at the same time and discusses his strategy to start pre-fetching ads from the ad server to prepare them ahead of time. Geir also highlights the misunderstanding that can exist where streaming provides the same video and programme experience as traditional broadcast but ad buyers don’t all understand how much more targeting is possible – even with SSAI.
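Geir’s pre-fetching strategy can be sketched with a toy cache-warming loop. Everything here – the `fetch_ad` stand-in, slot names and pool size – is hypothetical; a real implementation would call an ad decision server and download the creatives:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_ad(slot_id: str) -> bytes:
    # Placeholder: a real system would call the ad decision server
    # and download the chosen creative for this slot.
    return f"creative-for-{slot_id}".encode()

ad_cache = {}

def prefetch(slot_ids):
    """Warm the cache before the ad break's splice point arrives,
    so thousands of concurrent viewers don't hit a cold ad server
    at the same instant."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        for slot_id, creative in zip(slot_ids, pool.map(fetch_ad, slot_ids)):
            ad_cache[slot_id] = creative

prefetch(["break1-slot1", "break1-slot2"])
print(sorted(ad_cache))  # ['break1-slot1', 'break1-slot2']
```

The point of the sketch is the timing: the fetches happen ahead of the break, spreading the ad-server load out rather than concentrating it at the splice point.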
Delivering an all-IP truck is no mean feat. tpc explains what they learnt, what went well and how they succeeded in delivering a truck which takes no longer to fire up than a traditional SDI truck.
Common questions among people considering a move to IP are ‘do I need to?’ and ‘how can I get ready?’. Here at The Broadcast Knowledge we always say ‘find a small project, get it working, learn what goes wrong and then plan the one you really wanted to do.’ The Swiss broadcasting service provider ‘Technology and Production Centre’, known as ‘tpc’, has done just that.
tpc is currently working on the Metechno project – a large all-IP news, sports and technology center for Swiss radio and television. In order to acquire necessary experience with the SMPTE ST 2110 standard, tpc designed the UHD1 OB van ahead of time which has been used in TV production for 6 months now. In this video, Andreas Lattmann shares the vision of the Metechno Project and, critically, his experiences related to the design and use of the truck.
The UHD1 is a 24-camera OB van with an all-IP core based on Arista switches in a non-blocking architecture. It is the equivalent of a 184-square UHD SDI system; however, it can be expanded by adding additional line cards to the network switches. The truck is format agnostic, supporting both HD and UHD formats in HDR and SDR. IP gateways are incorporated for SDI equipment.
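A back-of-envelope calculation shows why a non-blocking fabric matters at this scale. Uncompressed ST 2110-20 video at 4:2:2 10-bit averages 20 bits per pixel (10 for luma, 10 shared between the two chroma samples); the figures below cover the active picture only and ignore RTP/IP overhead:

```python
# Rough active-picture bandwidth of an uncompressed SMPTE ST 2110-20
# video essence (ignores RTP/IP header overhead and ancillary data).
def st2110_video_gbps(width, height, fps, bits_per_pixel=20):
    return width * height * fps * bits_per_pixel / 1e9

uhd = st2110_video_gbps(3840, 2160, 50)  # UHD 2160p50
hd  = st2110_video_gbps(1920, 1080, 50)  # HD 1080p50

print(f"UHD p50: {uhd:.2f} Gb/s")  # ~8.29 Gb/s per feed
print(f"HD  p50: {hd:.2f} Gb/s")   # ~2.07 Gb/s per feed
```

With 24 cameras at roughly 8 Gb/s each in UHD, the aggregate quickly runs into hundreds of gigabits, which is why capacity is grown by adding line cards rather than recabling.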
The SMPTE ST 2110 specification separates video and audio into discrete essence streams which boosts efficiency and flexibility, but we hear in this talk that more attention to latency (lip sync) is required compared to SDI systems. Andreas talks about the flexibility this truck provides with up-/down-conversion, color-correction for any video plus how IP has enabled full flexibility in what can be routed to the multiviewer screens.
Andreas spends some time discussing redundancy and how IP enables full redundancy – an improvement over many SDI infrastructures – and how SMPTE’s ST 2022-7 standard makes this possible.
The main GUI is based on a Lawo VSM control system which aims to deliver a familiar experience for operators used to working in the SDI domain. Network training has been provided for all operators because troubleshooting has changed significantly with the introduction of essences over IP. This is not least because the NMOS IS-04 and IS-05 standards were not mature enough during the design of the truck, so all IP connections had to be managed manually. With more than 50 thousand IP addresses in this system, AMWA’s NMOS IS-04, which manages discovery and registration, and IS-05, which manages the setup and take-down of connections, would have helped significantly in the lean management of the truck.
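To illustrate what IS-05 automates, here is a sketch of the request a controller would make to stage a connection on a receiver. The node URL, UUIDs and multicast address are all hypothetical; only the shape of the payload and the `/staged` endpoint follow the IS-05 specification:

```python
import json

# Hypothetical receiver on a hypothetical node.
receiver_id = "8a7bb1c1-0000-4000-8000-000000000001"
url = (f"http://node.example/x-nmos/connection/v1.0/"
       f"single/receivers/{receiver_id}/staged")

# IS-05 staged-connection payload: which sender to take, the
# transport parameters, and when to activate.
payload = {
    "sender_id": "6d6f7cbe-0000-4000-8000-000000000002",  # hypothetical
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},
    "transport_params": [
        {"multicast_ip": "239.1.1.1", "destination_port": 5004}
    ],
}

# A real controller would HTTP PATCH `payload` to `url`;
# here we just print what would be sent.
print(url)
print(json.dumps(payload, indent=2))
```

Multiply this by tens of thousands of flows and the appeal of automated registration (IS-04) and connection management (IS-05) over manual configuration is clear.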
Lattmann emphasises the importance of using open standards like SMPTE ST 2110 instead of proprietary solutions. That allows you to choose the best components and not rely on a single manufacturer.
The learnings Andreas presents involve difficulties with PTP, IP training and the benefits of flexibility. From a video point of view, Andreas presents his experiences with HDR-to-SDR workflows, focusing on HDR and UHD.
Webinar Date: Thursday May 30th 2019
Time: Duration 4 hours. 7am PT / 10am ET / 15:00 BST
AWS is synonymous with cloud computing so an insight into managing media on AWS is an insight into cloud computing in general. AWS is offering a 4-hour showcase of implementing content creation, distribution and your supply chain in the cloud.
The online event starts with a keynote on the motivations for moving your workflows into the cloud and how AWS meets them. After that, there are three tracks covering the three topics.
The complete list is available here. AWS Elemental dominates the distribution track explaining the use cases that can be met and going through the many in-cloud transcoding options.
The creation and supply chain tracks finish with a customer spotlight from FuseFX and Deluxe respectively. For anyone considering a move to the cloud for any part of their operation, these sessions should shed light on what is actually achievable and what is still wishful thinking.
Webinar date: Thursday May 30th 2019
Time: 16:00 BST / 11 am EST / 8 am PDT
Experienced advice is on hand in this webinar for those producing in HDR and UHD. Productions are always trying to raise the quality of acquisition in order to deliver better quality to the viewers, to enhance creative possibilities and to maximise financial gain by future proofing their archives. But this push always brings challenges in production and the move to UHD and HDR is no different.
HDR and UHD are not synonymous, but often do go hand-in-hand. This is partly because the move to UHD is a move to improve quality, but time and again we hear the reasons that increasing resolution in and of itself is not always an improvement. Rather the ‘better pixels’ mantra seeks to improve quality through improving the video using a combination of resolution, frame-rate, HDR and Wide Colour Gamut (WCG). So when it’s possible, HDR and WCG are often combined with UHD.
In this webinar, we hear the challenges on the way to success met by director and producer Pamela Ann Berry and The Farm Group. Register to hear them share their tips and tricks for better UHD and HDR production.
ISO BMFF is a standardised MPEG media container developed from Apple’s QuickTime and is the basis for cutting-edge low-latency streaming as much as it is for tried-and-trusted mp4 video files. Here we look into why we have it, what it’s used for and how it works.
ISO BMFF provides a structure to place around timed media streams whilst accommodating the metadata we need for professional workflows. Key to its continued utility is its extensible nature allowing additional abilities to be added as they are developed such as adding new codecs and metadata types.
ATSC 3.0’s streaming mechanism MMT is based on ISO BMFF, as is the low-latency streaming format CMAF, which shows that despite being over 18 years old, the ISO BMFF container is still highly relevant.
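The core of the format is simple: a file is a sequence of ‘boxes’, each starting with a 32-bit big-endian size and a 4-character type, and boxes nest to build the whole file (`ftyp`, `moov`, `mdat`, …). A minimal sketch of reading the top-level boxes, using a tiny hand-made buffer rather than a real mp4 file, and ignoring the 64-bit ‘largesize’ variant:

```python
import struct

def parse_top_level_boxes(data: bytes):
    """Yield (type, size) for each top-level box in a BMFF buffer."""
    offset = 0
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii")
        yield box_type, size
        if size < 8:  # size==1 (64-bit largesize) etc. not handled here
            break
        offset += size

# A tiny hand-made buffer: an 'ftyp' box (major brand, minor version,
# one compatible brand) followed by an empty 'free' box.
ftyp = struct.pack(">I4s4sI4s", 20, b"ftyp", b"isom", 512, b"iso2")
free = struct.pack(">I4s", 8, b"free")

print(list(parse_top_level_boxes(ftyp + free)))
# [('ftyp', 20), ('free', 8)]
```

This size-plus-type framing is what makes the container extensible: a parser can skip any box type it does not recognise, which is how new codecs and metadata types are added without breaking old readers.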
Thomas Stockhammer is the Director of Technical Standards at Qualcomm. He explains the container format in structure and origin before explaining why it’s ideal for CMAF’s low-latency streaming use case, finishing off with a look at immersive media in ISO BMFF.
RIST solves a problem by transforming unmanaged networks into reliable paths for video contribution. This comes amidst increasing interest in using the public internet to contribute video and audio. This is partly because it is cheaper than dedicated data circuits, partly that the internet is increasingly accessible from many locations making it convenient, but also when feeding cloud-based streaming platforms, the internet is, by definition, part of the signal path.
Packet loss and packet delay are common on the internet and there are only two ways to compensate for them: One is to use Forward Error Correction (FEC) which will permanently increase your bandwidth by up to 25% so that your receiver can calculate which packets were missing and re-insert them. Or your receiver can ask for the packets to be sent again.
RIST joins a number of other protocols in using the re-request method of adding resilience to streams, which has the benefit of only increasing the bandwidth used when re-requests are actually needed.
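The re-request (ARQ) idea can be sketched in a few lines: the receiver tracks the RTP sequence numbers it has seen and asks only for the gaps. This toy version ignores sequence-number wraparound and timing, which a real implementation must handle:

```python
def find_missing(received_seqs):
    """Return the sequence numbers to re-request (NACK), given the
    packets seen so far. Wraparound at 65535 is not handled here."""
    seen = set(received_seqs)
    lo, hi = min(seen), max(seen)
    return [s for s in range(lo, hi + 1) if s not in seen]

# Packets 3 and 6 were dropped somewhere on the public internet.
arrived = [1, 2, 4, 5, 7, 8]
print(find_missing(arrived))  # [3, 6] -- only these are re-sent
```

Contrast this with FEC, where the repair data is sent whether or not anything was lost; with re-requests the extra bandwidth is proportional to the actual loss.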
In this talk, Ciro Noronha from Cobalt Digital, explains that RIST is an attempt to create an interoperable protocol for reliable live streaming – which works with any RTP stream. Protocols like SRT and Zixi are, to one extent or another, proprietary – although it should be noted that SRT is an open source protocol and hence should have a base-level of interoperability. RIST takes interoperability one stage further and is seeking to create a specification, the first of which is TR-06-1 also known as ‘Simple Profile’.
We then see the basics of how the protocol works and how it uses RTCP for signalling. Furthermore, RIST’s support for bonding is explored, along with the impact of packet reordering on stream performance.
The talk finishes with a look to what’s to come, in particular encryption, which is an important area that SRT currently offers over and above reliable transport. Watch now!
AV1 and VVC are both new codecs on the scene. Codecs touch our lives every day, both at work and at home; they are the only way that anyone receives audio and video online or on television. So, all together, they’re pretty important, and finding better ones generates a lot of opinion.
So what are AV1 and VVC? VVC is one of the newest codecs on the block and is undergoing standardisation in MPEG. VVC builds on the technologies standardised by HEVC but adds many new coding tools. The standard is likely to enter draft phase before the end of 2019 resulting in it being officially standardised around a year later. For more info on VVC, check out Bitmovin’s VVC intro from Demuxed
AV1 is a new but increasingly known codec, famous for being royalty free and backed by Netflix, Apple and many other big hyperscale players. There have been reports that, though there is no royalty levied on it, patent holders have still approached big manufacturers to discuss financial reimbursement, so its ‘free’ status is a matter of debate. Whilst there is a patent defence programme, it is not known if it’s sufficient to insulate larger players. Much further on than VVC, AV1 has already had a code freeze, and companies such as Bitmovin have been working hard to reduce the encode times – widely known to be very long – and create live services.
Here, Christian Feldmann from Bitmovin gives us the latest status on AV1 and VVC. Christian discusses AV1’s tools before discussing VVC’s tools, pointing out the similarities that exist. Whilst AV1 is already supported in well-known browsers, VVC is just at the beginning.
There’s a look at the licensing status of each codec before a look at EVC – which stands for Essential Video Coding. This has a royalty free baseline profile so is of interest to many. Christian shares results from a Technicolor experiment.