Join IBC365 on Thursday 25 April at 4pm BST to explore why creators are turning to cloud to transform the way they make great content and how virtualised workflows are unlocking the ability to work in new ways: faster, more collaborative, more efficient and more creative.
This webinar goes inside some of the world’s leading content creators, production and post-production operations to hear how they are embracing cloud technology to transform the creative processes used to make, produce and deliver video.
The webinar will cover ways in which cloud is enabling more collaboration, access to more talent, round-the-clock working, more content security, and slicker workflows. There’s also a dose of reality, as the human and technology challenges and the potential pitfalls of virtualising creative workflows are explored.
Case studies focus on using cloud for:
• Streamlining content creation in the field
• Transforming production and post-production processes
• Efficient content delivery and backhaul
There are many ways to speed up live streaming: much work has gone into reducing chunk lengths for HLS-style streaming, WebRTC has arrived on the scene, and techniques to speed up chunk delivery are in production in CDNs around the world.
But we shouldn’t forget what sits lower down the stack: the way web pages are actually served to customers – the venerable HTTP. Running on TCP/IP, HTTP packets are delivered using TCP’s very thorough acknowledgement mechanisms. Furthermore, TCP is resistant to spoofing attacks thanks to the three-way handshake used to set up each connection.
However, all this communication adds latency; even on low-latency connections, these round trips can add up significantly and reduce the throughput of the connection.
This talk introduces QUIC, a transport protocol developed by Google as a replacement for TCP (and the basis of HTTP/3), which uses UDP as its underlying delivery mechanism, thus avoiding much of this built-in two-way communication.
At the Mile High Video event, Miroslav Ponec from Akamai introduces this protocol which is undergoing standardisation at the IETF explaining how it works and why it’s such a good idea.
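To make the latency cost concrete, here’s a back-of-the-envelope sketch. The round-trip counts are the usual textbook figures – one RTT for the TCP three-way handshake plus two for a full TLS 1.2 handshake, versus a single combined transport-and-crypto RTT for QUIC, and zero for a resumed QUIC connection – and the 50 ms RTT is purely an assumed figure for illustration:

```python
RTT_MS = 50  # assumed network round-trip time for illustration

def setup_latency(handshake_rtts: int, rtt_ms: int = RTT_MS) -> int:
    """Delay before the first byte of application data can be sent."""
    return handshake_rtts * rtt_ms

tcp_tls12 = setup_latency(1 + 2)  # TCP handshake + full TLS 1.2 = 150 ms
quic_new = setup_latency(1)       # QUIC combines transport and crypto = 50 ms
quic_resumed = setup_latency(0)   # QUIC 0-RTT resumption = 0 ms
```

The arithmetic is trivial, but it shows why collapsing handshakes matters: the saving is paid on every new connection, and grows linearly with the RTT.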
As we wait for the dust to settle on this NAB’s AV1 announcements – hearing who’s added support for AV1 and what innovations have come because of it – we know that the feature set is frozen and that some companies will be using it. So here’s a chance to go into some of the detail.
Now, we join Nathan Egge who talks us through many of the different tools within AV1, including one which often captures the imagination: AV1’s ability to remove film grain ahead of encoding and then add synthesised grain back in on playback. Nathan also looks ahead in the Q&A, talking about integration into RTP and WebRTC, and why broadcasters would want to use AV1.
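The principle behind the film-grain tool can be sketched in a few lines of NumPy: denoise before encoding, then regenerate statistically similar grain at playback. To be clear, this is a loose illustration and not AV1’s actual film grain synthesis (which signals autoregressive grain parameters in the bitstream); the box blur and Gaussian noise here are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def remove_grain(frame: np.ndarray, kernel: int = 3) -> np.ndarray:
    """Crude denoise via a box blur — a stand-in for the encoder's real filtering."""
    padded = np.pad(frame, kernel // 2, mode="edge")
    out = np.zeros(frame.shape, dtype=np.float64)
    for dy in range(kernel):
        for dx in range(kernel):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (kernel * kernel)

def add_synthetic_grain(frame: np.ndarray, strength: float) -> np.ndarray:
    """Decoder side: overlay freshly generated noise from signalled parameters."""
    grain = rng.normal(0.0, strength, size=frame.shape)
    return np.clip(frame + grain, 0, 255)
```

The pay-off is that the encoder never spends bits on the random (and therefore incompressible) grain itself – only on the clean image plus a handful of grain parameters.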
HEVC, also known as H.265, is still widely discussed many years after its initial release from MPEG, with some saying that people aren’t using it and others saying it’s gaining traction. In reality, both sides have a point. Increasingly HEVC is being adopted, partly because of wider implementation in products and partly because of a continued push toward higher resolution video, which often gives the opportunity to make a clean break from AVC/H.264/MPEG 4.
This expert-led talk looks in detail at HEVC and how it’s constructed. For some, the initial part of the video will be enough. Others will want to bookmark the video to use as a reference in their work, whilst still others will want to watch the whole thing and will immediately find it puts parts of their work in better context.
Wherever you fit, I think you’ll agree this is a great resource for understanding HEVC streams, enabling you to better troubleshoot problems.
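As a small taste of the kind of structure the talk unpacks, an HEVC stream is carried as a sequence of NAL units, and the unit type lives in the first header byte. A hedged sketch for Annex B streams (it ignores emulation-prevention bytes and the corner case of a NAL whose first byte is 0x00, which real parsers must handle):

```python
def iter_nal_units(stream: bytes):
    """Yield NAL units from an Annex B stream, splitting on 0x000001 start codes."""
    for chunk in stream.split(b"\x00\x00\x01"):
        chunk = chunk.lstrip(b"\x00")  # drop the extra zero of a 4-byte start code
        if chunk:
            yield chunk

def nal_unit_type(nal: bytes) -> int:
    """HEVC NAL header: the 6-bit type sits in bits 1..6 of the first byte."""
    return (nal[0] >> 1) & 0x3F  # e.g. 32 = VPS, 33 = SPS, 34 = PPS
```

Even a crude splitter like this is enough to tell, say, a parameter set from a slice when staring at a problem capture.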
‘Better Pixels’ is the continuing refrain from the large number of people who are dissatisfied with simply increasing resolution to 4K or even 8K. Why can’t we have a higher frame-rate instead? Why not give us a wider colour gamut (WCG)? And why not give us a higher dynamic range (HDR)? Often, they would prefer any of these three options over higher resolution.
Dynamic range is the term used to describe the difference between the smallest possible signal and the strongest possible signal. In audio, it’s the quietest thing that can be heard versus the loudest thing that can be heard (without distortion). In video, it’s the difference between black and white – after all, can your TV fully simulate the brightness and power of our sun? No. What about your car’s headlights? Probably not. Can your TV go as bright as your phone’s flashlight? Well, now that’s more realistic.
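That difference is commonly expressed in photographic ‘stops’, i.e. doublings of luminance. A quick sketch of the arithmetic, where the display figures are purely illustrative rather than measurements of any particular panel:

```python
import math

def dynamic_range_stops(min_nits: float, max_nits: float) -> float:
    """Dynamic range in stops: each stop is a doubling of luminance."""
    return math.log2(max_nits / min_nits)

# Illustrative figures only — real displays vary widely.
sdr_display = dynamic_range_stops(0.1, 100)     # ≈ 10 stops
hdr_display = dynamic_range_stops(0.005, 1000)  # ≈ 17.6 stops
```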
So let’s say your TV can go from a very dark black to being as bright as a medium-power flashlight. What about the video that you send your TV? When there’s a white frame, do you want your TV blasting as bright as it can? HDR allows producers to control the brightness of your display device so that something that is genuinely very bright – like a star, a bright light or an explosion – can be represented very brightly, whereas something which is simply white can have the right colour but a medium brightness. With video which is Standard Dynamic Range (SDR), there isn’t this level of control.
For films, HDR is extremely useful, but for sports too – who hasn’t seen a football game where the sun leaves half the pitch in shadow and half in bright sunlight? With SDR, there’s no choice but to have one half either very dark or very bright (mostly white), so you can’t actually see the game there. HDR enables production crews to let HDR TVs show detail in both areas of the pitch.
HLG, which stands for Hybrid Log-Gamma, is the name of one way of delivering HDR video. It was pioneered, famously, by Japan’s NHK with the UK’s BBC, and has been standardised as ARIB STD-B67. In this talk, NHK’s Yuji Nagata helps us navigate working with multiple formats: HDR HLG -> SDR, plus converting from HLG to Dolby’s HDR format called PQ.
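The ‘hybrid’ in the name refers to the transfer curve itself: a conventional square-root gamma for dark tones spliced onto a logarithmic segment for highlights. A small sketch of the HLG OETF using the constants published in ARIB STD-B67 / ITU-R BT.2100:

```python
import math

# Constants from ARIB STD-B67 / ITU-R BT.2100
A = 0.17883277
B = 0.28466892
C = 0.55991073

def hlg_oetf(e: float) -> float:
    """Map normalised scene-linear light E (0..1) to an HLG signal value E' (0..1)."""
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)              # square-root 'gamma' part for shadows
    return A * math.log(12.0 * e - B) + C      # logarithmic part for highlights
```

The two halves meet smoothly at E = 1/12 (where E' = 0.5), which is what lets an HLG signal look reasonable on an SDR display while carrying extra highlight information for HDR ones.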
The reality of broadcasting is that anyone who is producing a programme in HDR will have to create an SDR version at some point. The question is how to do that and when. For live, some broadcasters may need to fully automate this. In this talk, we look at a semi-automated way of doing this.
HDR is usually delivered in a Wide Colour Gamut signal such as the ITU’s BT.2020. Converting between this colour space and the more common BT.709 colour space, which is part of the HD video standards, is also needed on top of the dynamic range conversion. So listen to Yuji Nagata’s talk to find out NHK’s approach to this.
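The colour-space leg of that conversion is, at its core, a 3×3 matrix applied to linear-light RGB. A sketch using the widely quoted BT.2020→BT.709 matrix (derived along the lines of ITU-R BT.2087); note that simply clipping out-of-gamut values, as done here, is a naive stand-in for proper gamut mapping – exactly the kind of detail NHK’s approach addresses:

```python
# Linear-light RGB conversion matrix, BT.2020 -> BT.709
M = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def bt2020_to_bt709(rgb):
    """Multiply a linear BT.2020 RGB triple by M; naively clip out-of-gamut values."""
    out = [sum(m * c for m, c in zip(row, rgb)) for row in M]
    return [min(max(v, 0.0), 1.0) for v in out]
```

A sanity check on any such matrix is that white maps to white: `bt2020_to_bt709([1.0, 1.0, 1.0])` comes out at approximately [1, 1, 1], while saturated BT.2020 primaries land outside BT.709 and get clipped.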
NHK has pushed very hard for many years to make 8K broadcasts feasible and has in recent times focussed on tooling up in time for the 2020 Olympics. This talk was given at the SMPTE 2017 technical conference, but is all the more relevant now as NHK ups the number of 8K broadcasts in the run-up to the opening ceremony. This work on HDR and WCG is part of making sure that the 8K format really delivers an impressive and immersive experience for those lucky enough to experience it. The work on the video goes hand in hand with NHK’s tireless work on audio, which can deliver 22.2 multichannel surround.
Blockchain, often cast aside as a mere hype word, isn’t going away any more than ‘cloud’ or ‘AI’ did. However, it can’t be denied that it’s sometimes applied without thought for its true value. Here, IBM Aspera look at where blockchain can actually fit within broadcast.
Blockchain, simplistically, allows you to verify the authenticity of something in a very secure and reliable way, so its applicability to retail and logistics is clear. However, in broadcast we deliver millions of programmes, adverts, trailers and rushes each and every day, both to screens and behind the scenes. So if it can disrupt and improve other industries, why not ours?
Blockchain is interesting not just for its features but for how it achieves them. In this webinar we’ll see some of that, understanding how it works and what a ‘trusted business network’ looks like.
James Wilson and Jonathan Solomon will explain the encryption and key exchange that underpins the technology and how this offers improved asset tracking and even execution of commercial and legal terms.
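The asset-tracking idea rests on a simple property: each block commits to the hash of the previous one, so tampering with any earlier record breaks every later link. A toy sketch of that core mechanism (real blockchains add consensus, digital signatures and distribution on top, which is where the webinar’s ‘trusted business network’ comes in):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: dict) -> None:
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    """Tampering with any earlier block breaks every later prev_hash link."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )
```

Applied to broadcast, each `data` payload might record an asset delivery or a rights transaction (the field names here are purely illustrative), giving every party an independently verifiable audit trail.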
Whether or not edge computing is the next generation of cloud technology, the edge plays a vital role in the streaming video experience. The closer a video is stored to the requesting user, the faster the delivery and better the experience. But, streaming also provides a lot more opportunity for interactivity, engagement, and data collection than traditional broadcast television. That means as the edge grows in compute capacity and functionality, it could enable new and exciting use cases, such as AI, that could improve the viewer experience. In this webinar, we’ll explore the state of edge computing and how it might be leveraged in streaming video.
Streaming Video Alliance
Date: Friday, March 29th 2019
Time: 11am PT / 2pm ET / 18:00 GMT
NAB is coming around again and the betting has started on what the show will bring. Whilst we can look to last year for hints, here editors from Streaming Media come together to discuss the current trends in the industry and how they will be represented at NAB.
Some highlights of the conversation will be:
What HEVC solutions people are showing – the ongoing codec wars are captivating to most people, as AV1 gradually sheds its ‘too slow’ label whilst HEVC continues to gain acceptance with its ‘ready to deploy’ label, despite the fees.
UHD production and delivery – We know that production houses prefer to capture higher resolution as it increases the value of their content and gives them more options in editing. But how far is UHD developing further down the chain? Is it just for live sports?
Live Streaming – SRT is bound to keep making waves at NAB as Haivision plans its biggest event yet, discussing the many ways it’s being used. SRT delivers encrypted, reliable streams – while there are competitors, SRT continues to grow apace.
NDI – This compressed but ultra-low-latency codec continues to impress for live production workflows – particularly live events – though it’s not clear how much, if at all, it will make its way into top-tier broadcasters.
Much more will be on the cards, so register now for this session on Friday March 29th.
AWS is synonymous with cloud computing and, whether you use it or not, knowing how to do things in AWS reaps benefits when trying to understand or implement systems in a cloud infrastructure. Knowing what’s possible and what others are doing is really useful, so whilst I don’t usually cover heavily product-specific resources here on The Broadcast Knowledge, I still believe that knowing AWS is knowing part of the industry.
Here, there are three consecutive webinars which cover building a live streaming channel, from the fundamentals through to making it operational, with ongoing monitoring and maintenance.
Session one at 3pm GMT looks at end-to-end workflows and strategies for redundancy. It covers both contribution of video into the cloud and what happens when it arrives, through to delivery.
Session two at 4pm GMT examines more complex workflows, such as spreading processing and failover across multiple regions, and other similar situations.
Session three, the last of the day at 5pm GMT, looks at setting up end-to-end monitoring to take the guesswork out of delivering the service on an ongoing basis.