Webinar: Transforming creative workflows: Making great content in the cloud

Join IBC365 on Thursday 25 April at 4pm BST to explore why creators are turning to cloud to transform the way they make great content and how virtualised workflows are unlocking the ability to work in new ways: faster, more collaborative, more efficient and more creative.

This webinar goes inside the production and post-production operations of some of the world’s leading content creators to hear how they are embracing cloud technology to transform the creative processes used to make, produce and deliver video.

The webinar will cover the ways in which cloud is enabling more collaboration, access to a wider pool of talent, round-the-clock working, stronger content security and slicker workflows. There’s also a dose of reality, as the human and technology challenges and the potential pitfalls of virtualising creative workflows are explored.

Case studies focus on using cloud for:
• Streamlining content creation in the field
• Transforming production and post-production processes
• Efficient content delivery and backhaul

Register now!

Speakers

Jeremy Smith
Chief Technology Officer,
Jellyfish Pictures
Laura Cotterill
Founder & Managing Director
LCTV
Spencer Stephens
TechXMedia

Photo by Aleksandar Pasaric from Pexels

Video: Power Talks – ATSC 3.0

ATSC 3.0 is the next major step in broadcasting for the US, South Korea and other countries, and it updates the ATSC standard in so many ways that getting across it all is not trivial. All terrestrial broadcasting in the US is done with ATSC, as opposed to many other places, including Europe, which use DVB.

ATSC 3.0 brings in OFDM modulation, a tried and tested technology also used in DVB. But the biggest change in the standard is that all of the transport within ATSC 3.0 is IP. Using broadband as a return path, broadcasters now have two-way communication with their viewers, allowing transfer of data as well as media.

In this talk from Imagine Communications, we take a look into the standard which, as is common nowadays, is a suite of standards. These standards cover emergency alerting, immersive audio, DRM, return paths and more. We then have a look at the system architecture of the ATSC 3.0 broadcast deployed in Phoenix.

South Korea has been pushing forward with ATSC 3.0, and Chet Dagit looks at what broadcasters there have been doing and how they have delivered high-quality UHD channels to consumers. He then looks at what the US can learn from this work, as well as from DVB deployments in Europe.

Finally, Yuval Fisher looks at how the data and granularity available in ATSC 3.0 allow for more targeted ads, and at how you would manage that data internally and harness it for ad campaigns.

Watch now!

Speakers

Steve Reynolds
President, P & N Solutions,
Imagine Communications
Mark Corl
SVP of Emergent Technology,
Triveni Digital
Chet Dagit
Founder & Managing Member
RTP Holdings-Lokita Solutions
Yuval Fisher
CTO, Distribution
Imagine Communications

Video: 2019 Display Trends and Hot Display Apps

Display technology has always been deeply intertwined with broadcasting. After all, when John Logie Baird first demonstrated his working television, he had to invent both the camera and the display device, then known as the televisor. He worked tirelessly on improving television, and less than 20 years after his black and white debut he was working on a colour television which used two CRTs (Cathode Ray Tubes) to produce its picture, culminating in the world’s first demonstration of a colour TV in 1944 – incidentally discovering, demonstrating and patenting 3D TV along the way!

So it is today that displays define what we can show to viewers. Is there any point in mastering a video at 10,000 nits if no display can show something so bright? Pushing all of Europe’s and the US’s television programmes to 8K resolution is of limited benefit when 8K TVs are in limited supply and in few homes.

This talk looks at the state of the art in display technology, seeing where it’s being used and how. Digital signage is covered, and of course this is where high-brightness technology for outdoor signs is developed, some of which could influence the more conventional TVs on which we want to watch HDR (High Dynamic Range) video.

When OLED technology first came along it was quickly slated as a great option for TVs, and yet all these years later its adoption in large panels remains low. This shows how difficult it can sometimes be to overcome the technical challenges of great technologies. We now see OLEDs in wearable devices and smaller screens, and the number of these screens is quickly increasing as IoT devices, watches and other electronics start to adopt full displays instead of just flashing LEDs. This increase in manufacturing should lead to renewed investment in the field, potentially allowing OLEDs to be incorporated into full-sized, large TVs.

The talk finishes with a look at the TV market, covering quantum dots and what people really mean when they mention ‘LED TVs’.

This webinar is from the Society for Information Display and is produced in partnership with SMPTE.

Watch Now!

Speaker

Sri Peruvemba
CEO,
Marketer International

Video: Making Video Streams QUICer

There are many ways to speed up live streaming: much work has gone into reducing chunk lengths for HLS-style streaming, WebRTC has arrived on the scene, and techniques to speed up chunk delivery are in production in CDNs around the world.

But we shouldn’t forget what sits lower down in the stack: how web content is actually served to customers – the venerable HTTP. Running over TCP/IP, HTTP data is delivered using TCP’s very thorough acknowledgement mechanisms. Furthermore, the connection is resistant to spoofing attacks thanks to the three-way handshake used to set it up.

However, all this communication adds latency; even on low-latency connections, these exchanges can add up significantly and reduce the throughput of the connection.

This talk introduces QUIC, a transport protocol developed by Google for carrying HTTP traffic, which uses UDP as its underlying delivery mechanism and thus avoids much of TCP’s built-in two-way communication.
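
To put rough numbers on that saving, here is a back-of-the-envelope sketch in Python (an illustration, not material from the talk) comparing how many round trips the commonly quoted handshake sequences need before request data can flow; the 50 ms round-trip time is an assumption.

# Back-of-the-envelope comparison of connection setup round trips.
# Round-trip counts are the commonly quoted figures; real connections vary.
RTT_MS = 50  # assumed client-server round-trip time

handshakes = {
    "TCP + TLS 1.2": 1 + 2,           # TCP handshake, then two TLS round trips
    "TCP + TLS 1.3": 1 + 1,           # TCP handshake, then one TLS round trip
    "QUIC (first connection)": 1,     # transport and crypto combined
    "QUIC (0-RTT resumption)": 0,     # resumed connections send data immediately
}

for stack, rtts in handshakes.items():
    print(f"{stack:26s} {rtts} RTT(s) = {rtts * RTT_MS} ms before data flows")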

At the Mile High Video event, Miroslav Ponec from Akamai introduces this protocol, which is undergoing standardisation at the IETF, explaining how it works and why it’s such a good idea.

Watch now!

Speaker

Miroslav Ponec
Engineering Director,
Akamai Technologies

Video: Building Large SMPTE ST 2110 Systems Using JT-NM TR-1001-1


With the SMPTE ST 2110 suite of standards largely published and the related AMWA IS-04 and IS-05 specifications stable, people’s minds are turning to how to implement all of these standards, bringing them together into a complete working system.

JT-NM TR-1001-1 is a technical recommendation which describes how such a system, and the devices within it, should behave – for instance, how do new devices on the network start up? How do they know which PTP domain is in use on the network?

John Mailhot starts by giving an overview of the standards and documents available, showing which ones are published and which are still in progress. He then looks at each of them in turn to summarise its use on the network and how it fits into the system as a whole.

Once the groundwork is laid, we see how the JT-NM working group has looked at five major behaviours and what it recommends for making them work in a scalable way. These cover things like DNS-based discovery, automated multicast address allocation and other considerations.
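
To make the DNS-based discovery idea a little more concrete, here is a minimal sketch (not from the talk) of how a device might look up a registration service via DNS-SD using the third-party dnspython package; the service type and domain shown are illustrative assumptions, and the real names come from the AMWA IS-04 specification and your network’s configuration.

# Minimal DNS-SD lookup sketch using the third-party dnspython package.
# The service type and search domain are illustrative assumptions; consult
# AMWA IS-04 / JT-NM TR-1001-1 for the names your system should use.
import dns.resolver

SERVICE = "_nmos-register._tcp.example-facility.net."  # hypothetical name

# PTR records enumerate the advertised service instances.
for ptr in dns.resolver.resolve(SERVICE, "PTR"):
    instance = ptr.target
    # Each instance has an SRV record giving the host and port to contact.
    for srv in dns.resolver.resolve(instance, "SRV"):
        print(f"Registration service {instance} at {srv.target}:{srv.port}")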

Watch now

Speaker

John Mailhot
CTO Networking & Infrastructure
Imagine Communications

Video: Per-title Encoding at Scale

MUX is a very proactive company pushing forward streaming technology. At NAB 2019 they announced Audience Adaptive Encoding, which offers encodes tailored not only to your content but also to the typical bitrate of your viewing demographic. Underpinning this technology are machine learning and their per-title encoding technology, which was released last year.

This talk with Nick Chadwick looks at what per-title encoding is, how you can work out which resolutions and bitrates to encode at and how to deliver this as a useful product.

Nick takes some time to explain MUX’s ‘convex hulls’, which give a shape to the content’s performance at different bitrates and help visualise the optimum encoding parameters for the content. Moreover, using this technique we see some surprising circumstances in which it makes sense to start at high resolutions, even for low bitrates.
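
As a rough illustration of the idea (a toy sketch with made-up numbers, not MUX’s implementation), the Python below takes quality scores from trial encodes at several resolutions and, for each target bitrate, picks the resolution that scores best; the upper edge traced out by those winning points is the convex hull being described.

# Toy per-title ladder sketch: made-up (bitrate kbps -> VMAF) trial encodes.
trial_encodes = {
    "1920x1080": {600: 62, 1200: 78, 2500: 90, 4500: 95},
    "1280x720":  {600: 68, 1200: 82, 2500: 88, 4500: 90},
    "640x360":   {600: 72, 1200: 76, 2500: 78, 4500: 79},
}

def best_rendition(target_kbps):
    """Pick the resolution whose trial encode at this bitrate scores highest."""
    candidates = [(scores[target_kbps], res)
                  for res, scores in trial_encodes.items()
                  if target_kbps in scores]
    quality, resolution = max(candidates)
    return resolution, quality

for kbps in (600, 1200, 2500, 4500):
    res, quality = best_rendition(kbps)
    print(f"{kbps:>5} kbps -> encode at {res} (VMAF ~{quality})")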

Looking then at how to actually work this out on a title-by-title basis, Nick explains the pros and cons of the different approaches, going on to explain how MUX used machine learning to build the model that makes this work.

Finishing off with an extensive Q&A, this talk is a great overview of how to pick great encoding parameters, manually or otherwise.

Watch now!

Speaker

Nick Chadwick
Software Engineer,
Mux Inc.

Video: Into the Depths: The Technical Details behind AV1

As we wait for the dust to settle on this NAB’s AV1 announcements, hearing who has added support for AV1 and what innovations have come because of it, we know that the feature set is frozen and that some companies will be using it. So here’s a chance to go into some of the detail.

AV1 is being created by the AOM, the Alliance for Open Media, of which Mozilla is a founding member. The IETF is considering it for standardisation under their NetVC working group and implementations have started. On The Broadcast Knowledge, we have seen explanations from Xiph.org, one of the original contributors to AV1. We’ve seen how it fares against HEVC with Ian Trow and how HDR can be incorporated in it from Google and Warwick University. For a complete list of all AV1 content, have a look here.

Now we join Nathan Egge, who talks us through many of the different tools within AV1, including one which often captures the imagination: AV1’s ability to remove film grain ahead of encoding and then add synthesised grain back in on playback. In the Q&A, Nathan also looks ahead, talking about integration with RTP and WebRTC, and why broadcasters would want to use AV1.

Watch now!

Speaker

Nathan Egge
Video Codec Engineer,
Mozilla

Video: HEVC/H.265 Video Coding Standard

HEVC, also known as H.265, is still much discussed many years after its initial release from MPEG, with some saying that people aren’t using it and others saying it’s gaining traction. In reality, both sides have a point. HEVC is increasingly being adopted, partly because of wider implementation in products and partly because of a continued push towards higher-resolution video, which often gives the opportunity to make a clean break from AVC/H.264/MPEG-4.

This expert-led talk looks in detail at HEVC and how it’s constructed. For some, the initial part of the video will be enough. Others will want to bookmark the video to use as a reference in their work, whilst still others will want to watch the whole thing and will immediately find it puts parts of their work in better context.

Wherever you fit, I think you’ll agree this is a great resource for understanding HEVC streams enabling you to better troubleshoot problems.

Watch now!

Speakers

David Marpe
Head of Department Video Coding & Analytics,
Fraunhofer Heinrich Hertz Institute
Karsten Suehring
Project Manager,
Fraunhofer Heinrich Hertz Institute
Benjamin Bross
Project Manager,
Fraunhofer Heinrich Hertz Institute
Dan Grois
Former Senior Researcher,
Fraunhofer Heinrich Hertz Institute

Video: Routing AES67

Well ahead of video, audio moved to uncompressed over IP and has been reaping the benefits ever since. With more mature workflows and, as has always been the case, far more feeds to handle than video, the solutions have reached a higher level of maturity.

Anthony from Ward-Beck Systems talks about the advantages of audio over IP and the things which weren’t possible before. In a very accessible talk, you’ll hear as much about soup cans as you will about the more technical aspects, like SDP.
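
Since SDP gets a mention, here is a small sketch (an illustration, not material from the talk) of the kind of session description an AES67-style sender might publish, with a trivial parse of the fields a receiver typically needs; the addresses, names and clock identity are made up.

# Illustrative AES67-style SDP (all values are made up) and a trivial parse
# of the fields a receiver typically needs: address, port and audio format.
EXAMPLE_SDP = """\
v=0
o=- 1311738121 1311738121 IN IP4 192.168.1.10
s=Stage box channels 1-2
c=IN IP4 239.69.1.10/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/2
a=ptime:1
a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-12-34-56:0
a=mediaclk:direct=0
"""

for line in EXAMPLE_SDP.splitlines():
    if line.startswith("c="):
        print("Multicast address:", line.split()[-1].split("/")[0])
    elif line.startswith("m=audio"):
        print("UDP port:", line.split()[1])
    elif line.startswith("a=rtpmap:"):
        print("Audio format:", line.split(maxsplit=1)[1])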

Whilst uncompressed audio over IP started a while ago, that doesn’t mean it’s not still being developed. In fact, much of the focus is now on the interface with the video world, with SMPTE ST 2110-30 and -31 determining how audio can flow alongside video and other essences. As has been seen in other talks here on The Broadcast Knowledge, there’s a fair bit to know (here’s a full list).

To simplify this, Anthony, who is also the Vice Chair of AES Toronto, describes the work the AES is doing to certify equipment as AES67 ‘compatible’ – and what that would actually mean.

This talk finishes with a walk-through of a real-world OB deployment of AES67, which included simple touches such as using Google Docs for sharing links as well as more technical techniques such as a virtual sound card.

Packed full of easy-to-understand insights which are useful even to those who live for video, this IP Showcase talk is worth a look.

Watch now!

Speaker

Anthony P. Kuzub
IP Audio Product Manager,
Ward-Beck Systems

Video: Multicast ABR

Multicast ABR is a mix of two very beneficial technologies which are seldom seen together. ABR – Adaptive Bitrate – allows a player to change the bitrate of the video and audio it’s playing to adapt to changing network conditions. Multicast is a network technology which efficiently delivers a single copy of a video stream to many receivers without duplicating bandwidth.

ABR has traditionally been deployed for chunk-based delivery like HLS, where each client downloads its own copy of the video in blocks several seconds in length. This means that the bandwidth you use to distribute your video increases a thousandfold if 1,000 people play it.

Multicast works with live streams, not chunks, but allows the bandwidth used for 1,000 players to increase – in the best case – by 0%.
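
A quick back-of-the-envelope comparison in Python (with assumed figures, not numbers from the panel) shows the scale of the difference:

# Back-of-the-envelope bandwidth comparison with assumed figures.
viewers = 1000
renditions_kbps = [1500, 3000, 6000]  # an assumed three-rung ABR ladder

# Unicast ABR: every viewer pulls their own copy; assume the worst case where
# everyone watches the top rendition.
unicast_mbps = viewers * max(renditions_kbps) / 1000

# Multicast ABR: each rendition is carried once per network segment,
# no matter how many viewers join it.
multicast_mbps = sum(renditions_kbps) / 1000

print(f"Unicast worst case:   {unicast_mbps:,.0f} Mbps")
print(f"Multicast worst case: {multicast_mbps:,.1f} Mbps")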

Here, the panelists look at the benefits of combining multicast distribution of live video with techniques to allow it to change bitrate between different quality streams.

This type of live streaming is actually backwards compatible with old-style STBs: since the video sent is a live transport stream, it’s possible to deliver it to a legacy STB using a converter in the home, while delivering a better, more modern experience to other TVs and devices.

It thus also allows pure-streaming providers to compete with conventional broadcast and cable providers, and can result in cost savings both in the equipment provided and in the bandwidth used.

There’s lots to unpack here, which is why the Streaming Video Alliance have put together this panel of experts.

Watch now and find out more!

Speakers

Phillipe Carol
Senior Product Manager,
Anevia
Neil Geary
Technical Strategy Consultant,
Liberty Global
Brian Stevenson
VP of Ecosystem Strategy & Partnerships,
Ericsson
Mark Fisher
VP of Marketing & Business Development,
Qwilt
Jason Thibeault
Executive Director,
Streaming Video Alliance