Video: Blockchain & the Hollywood Supply Chain

At The Broadcast Knowledge, we’re continuing to cut through the hype and get to the bottom of blockchain. Now part of the NAB drinking game along with words like AI and 5G, it’s similarly not going away. The principle of blockchain is useful – just not useful everywhere.

So what can broadcasters do with blockchain and, given this is a SMPTE talk, what can film studios do with it? There's no doubt that blockchain makes secure, trusted systems possible, so the mind immediately jumps to using it to ensure all the files needed to create films are distributed securely and with an audit trail.

Here, Steve Wong looks at this and explores the new possibilities it creates. He starts with the basics of what blockchain is and how it works, but soon moves on to how it could work for Hollywood, explaining what could exist and what already does.

Watch now!

Speaker

Steve Wong
Cloud & Platform Services General Manager, Telecom, Media & Technology
DXC Technology

Video: Holographic update: Light Fields and the Future of Video

Recording light fields sounds like sci-fi: it allows you to record a video and then move around within that video as you please, changing your position and the angle you look from. This is why it's also referred to as holography.

It works by recording the video from many different viewpoints rather than just from one angle. Processing all of these different videos of the same thing allows a computer to build a 3D video model of the scene which you can then watch using VR goggles or a holographic TV.
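
For those who like a more formal picture, researchers often describe this with the plenoptic function – standard background for the field rather than something from the talk itself:

```latex
% Radiance seen from position (x, y, z) looking in direction (theta, phi):
L = L(x, y, z, \theta, \phi)
% In free space radiance is constant along a ray, so a light field
% reduces to four dimensions, commonly parameterised by where each
% ray crosses two parallel planes:
L = L(u, v, s, t)
```

Recording from many viewpoints is, in effect, sampling this function densely enough that new rays – new viewing positions – can be synthesised later.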

In this talk from San Francisco Video Tech, Ryan Damm from Visby.io talks us through some of the basics of light fields and brings us up to date with the current status. Google, Microsoft and Intel are among the big players investing in R&D, alongside many smaller startups.

Ryan talks about the need for standardisation for light fields. The things we take for granted in 2D video are compared with what you have with light field video by way of explaining the challenges and approaches being seen today in this active field.

Watch now and learn!

Speaker

Ryan Damm
Co-founder,
Visby

Video: A Basic Guide For Real-Time IP Video

There are a lot of videos looking into the details of uncompressed video over IP, but not many for those still starting out – and let's face it, there are a lot of people who are only just embarking on this journey. Here, Andy Jones takes us through the real basics, which prove very useful as building blocks for understanding today's IP technologies.

Andy Jones was well known by many broadcast engineers in the UK, having spent many years working in the BBC's Training and Development department and subsequently running training for the IABM. The news that he passed away on Saturday is very saddening and I'm posting this video in recognition of the immense amount he contributed to the industry through his years of tireless work. In this video from NAB 2018, you can see his passion, energy and ability to make complicated things simple.

In this talk, Andy looks at the different layers that networks operate on, including the physical layer, i.e. the cables. This is because the different ways in which traffic gets from A to B in networking are interdependent and need to be considered as such. He then walks through an example network which shows all the different standards in use in an IP network and talks about their relevance.

Andy briefly looks at IP addresses and the protocol that makes them work, which underpins much of what happens on most networks, before looking at the Real-time Transport Protocol (RTP), which is heavily used for sending audio and video streams.
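
To make RTP a little more concrete, here is a minimal Python sketch (ours, not from the talk) that unpacks the fixed 12-byte RTP header defined in RFC 3550. The sequence number and timestamp are what let a receiver spot loss and recover the media clock:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550)."""
    if len(packet) < 12:
        raise ValueError("too short to be an RTP packet")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,         # always 2 for RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,  # 96-127 are dynamically assigned
        "sequence_number": seq,     # increments per packet: reveals loss
        "timestamp": timestamp,     # media clock, e.g. 90 kHz for video
        "ssrc": ssrc,               # identifies the sending source
    }
```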

After looking at how timing is done in IP (as opposed to black and burst), he has laid enough foundations to look at SMPTE ST 2110 – the suite of standards which shows how different media (essences) are sent in networks delivering uncompressed streams. AES67 for the audio is also covered, before he finishes with how to control the whole kit and caboodle.

A great primer for those starting out, watch now!

Speaker

Andy Jones

Video: Content Production Technology on Hybrid Log-Gamma

‘Better Pixels’ is the continuing refrain from the large number of people who are dissatisfied with simply increasing resolution to 4K or even 8K. Why can’t we have a higher frame-rate instead? Why not give us a wider colour gamut (WCG)? And why not give us a higher dynamic range (HDR)? Often, they would prefer any of these 3 options over higher resolution.

Watch the video now to find out more.

Dynamic range is the term used to describe the difference between the smallest possible signal and the strongest possible signal. In audio, it's the difference between the quietest thing that can be heard and the loudest (without distortion). In video, it's the difference between black and white – after all, can your TV fully simulate the brightness and power of our sun? No. What about your car's headlights? Probably not. Can your TV go as bright as your phone's flashlight? Well, now that's realistic.
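
To put a number on it, dynamic range is often counted in 'stops', i.e. doublings of brightness. With illustrative figures of a 1,000 nit peak and a 0.01 nit black level:

```latex
\text{stops} = \log_2\!\left(\frac{L_{\max}}{L_{\min}}\right)
             = \log_2\!\left(\frac{1000}{0.01}\right)
             = \log_2(100\,000) \approx 16.6
```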

So let's say your TV can go from a very dark black to being as bright as a medium-power flashlight – what about the video that you send it? When there's a white frame, do you want your TV blasting as bright as it can? HDR allows producers to control the brightness of your display device so that something that is genuinely very bright – a star, a bright light, an explosion – can be represented very brightly, whereas something which is simply white can have the right colour but a medium brightness. With Standard Dynamic Range (SDR) video, there isn't this level of control.

HDR is extremely useful for films, but for sports too – who hasn't seen a football game where the sun leaves half the pitch in shadow and half in bright sunlight? With SDR, there's no choice but to have one half either very dark or very bright (mostly white), so you can't actually see the game there. HDR enables the production crew to let HDR TVs show detail in both areas of the pitch.

HLG, which stands for Hybrid Log-Gamma, is a way of delivering HDR video. It has famously been pioneered by Japan's NHK with the UK's BBC, and has been standardised as ARIB STD-B67. In this talk, NHK's Yuji Nagata helps us navigate working with multiple formats: converting HLG to SDR, plus converting HLG to PQ, the HDR format developed by Dolby.
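
The 'hybrid' in the name refers to the transfer curve itself: a conventional square-root (gamma-like) segment for darker scene light, and a logarithmic segment for the highlights. A minimal sketch, with the constants as published in ARIB STD-B67 / ITU-R BT.2100 (the code itself is ours):

```python
import math

# HLG OETF constants from ARIB STD-B67 / ITU-R BT.2100
A = 0.17883277
B = 1 - 4 * A                  # 0.28466892
C = 0.5 - A * math.log(4 * A)  # 0.55991073

def hlg_oetf(e: float) -> float:
    """Map linear scene light E in [0, 1] to an HLG signal value in [0, 1]."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)          # gamma-like segment for darks
    return A * math.log(12 * e - B) + C  # log segment for highlights
```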

The reality of broadcasting is that anyone producing a programme in HDR will have to create an SDR version at some point. The question is how and when to do that. For live programming, some broadcasters may need to fully automate this. In this talk, we look at a semi-automated way of doing it.

HDR is usually delivered in a wide colour gamut signal such as the ITU's BT.2020. Converting between this colour space and the more common BT.709 colour space, which is part of the HD video standards, is also needed on top of the dynamic range conversion. So listen to Yuji Nagata's talk to find out NHK's approach to this.
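
To give a feel for the colour space leg of that conversion, here is a generic linear-light BT.2020-to-BT.709 sketch. This is the textbook matrix approach (values as commonly published, e.g. in the ITU-R BT.2407 report), not NHK's specific method, and it sidesteps the harder problem of mapping the out-of-gamut colours it produces:

```python
# Linear-light RGB conversion from BT.2020 primaries to BT.709 primaries.
M_2020_TO_709 = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def bt2020_to_bt709(r: float, g: float, b: float) -> tuple:
    """Convert one linear-light BT.2020 RGB triple to BT.709 RGB.

    Results can fall outside [0, 1] for colours BT.709 cannot
    represent; clipping them is the crudest form of gamut mapping.
    """
    rgb = (r, g, b)
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in M_2020_TO_709)
```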

NHK has pushed very hard for many years to make 8K broadcasts feasible and has recently focussed on tooling up in time for the 2020 Olympics. This talk was given at the SMPTE 2017 technical conference, but is all the more relevant now as NHK increases the number of 8K broadcasts in the run-up to the opening ceremony. This work on HDR and WCG is part of making sure that the 8K format really delivers an impressive and immersive experience for those lucky enough to experience it, and it goes hand in hand with NHK's tireless work on audio, which can deliver 22.2 multichannel surround.

Watch now!

Speaker

Yuji Nagata
Broadcast Engineer,
NHK

Video: Scalable IP Architectures for Live Production and Playout

For many, building a good network for ST 2110 or other media-over-IP standards is new and a bit scary. But if there's one person who knows how to do it, it's Arista's Gerard Phillips, who's here to go through the basics and build up to a large, scalable network.

Scalability is at the heart of this because life changes – your company grows and technology pushes you from SD to HD to UHD. So you need to build scalability in from the beginning, and getting this right comes down to choosing the right hardware and having the right architecture.

Gerard looks at switch architecture and bandwidth, both in the switch and in the network cables. He then weighs 'hub and spoke' against monolithic switch designs: what are the pros and cons of each, and which is right for you?

SDN – Software Defined Networking – is also a key ingredient in such a network. Here, the routing decisions are taken out of the switches, whose algorithms are automatic but blinkered, and handed to a server which has a complete overview of the whole system. For a broadcaster who deals with critical signal chains, this is usually the best approach to give determinism and safety to the network.
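
As a toy illustration of that idea – entirely ours, with made-up device names and capacities – a controller that holds the whole topology can compute deterministic paths that respect bandwidth headroom, something no single switch can do on its own:

```python
import heapq

# The controller's complete view: each link has a hop cost and spare capacity.
TOPOLOGY = {
    ("cam1", "leaf-a"): (1, 25),     ("leaf-a", "spine-1"): (1, 100),
    ("leaf-a", "spine-2"): (1, 100), ("spine-1", "leaf-b"): (1, 40),
    ("spine-2", "leaf-b"): (1, 100), ("leaf-b", "mixer"): (1, 25),
}

def route(src: str, dst: str, needed_gbps: int):
    """Dijkstra over only those links with enough headroom for the flow."""
    adj = {}
    for (a, b), (cost, spare) in TOPOLOGY.items():
        if spare >= needed_gbps:  # prune links that cannot carry the flow
            adj.setdefault(a, []).append((cost, b))
            adj.setdefault(b, []).append((cost, a))
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for hop_cost, nxt in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + hop_cost, nxt, path + [nxt]))
    return None  # no path with enough headroom

print(route("cam1", "mixer", needed_gbps=12))
# -> ['cam1', 'leaf-a', 'spine-1', 'leaf-b', 'mixer'], chosen by the
#    controller deterministically rather than by per-switch hashing
```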

PTP – Precision Time Protocol – provides the timing foundation of the ST 2110 suite and is therefore very important to studio installations, where it is used to replace black and burst. What are the best ways to distribute it, and how can you deal with redundancy?
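
The arithmetic at the heart of PTP is refreshingly small. Here is a sketch of the standard IEEE 1588 offset and delay calculation from the four Sync/Delay_Req timestamps (the example numbers are ours):

```python
def ptp_offset_and_delay(t1: int, t2: int, t3: int, t4: int):
    """Basic PTP (IEEE 1588) offset and delay calculation.

    t1: master sends Sync        t2: slave receives Sync
    t3: slave sends Delay_Req    t4: master receives Delay_Req
    Assumes a symmetric path; any asymmetry appears directly as offset error.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # mean one-way path delay
    return offset, delay

# Nanosecond timestamps: a slave running 500 ns ahead over a 10 us path.
print(ptp_offset_and_delay(t1=0, t2=10_500, t3=20_000, t4=29_500))
# -> (500.0, 10000.0)
```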

These topics and more are all covered at this IP Showcase presentation from IBC 2018.

Watch now!

Speaker

Gerard Phillips
Systems Engineer,
Arista Networks

Webinar: Blockchain – A framework for the secure exchange of digital assets

Blockchain, often cast aside as a mere hype word, won't be gone from our lives any time soon – just like 'cloud' and 'AI'. However, it can't be denied that it's sometimes applied without thought to its true value. Here, IBM Aspera look at where blockchain can actually fit within broadcast.

Blockchain, simplistically, allows you to verify the authenticity of something in a very secure and reliable way, so its applicability to retail and logistics is clear. However, in broadcast we deliver millions of programmes, adverts, trailers and rushes each and every day, both to screens and behind the scenes. So if blockchain can disrupt and improve other industries, why not ours?
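
The core mechanism is easy to show in miniature. This sketch – ours, and certainly not Aspera's implementation – chains asset events together so that altering any historical record invalidates every later hash:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash everything in the block except the stored hash itself."""
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(payload: dict, prev_hash: str) -> dict:
    """Each block commits to its own payload AND to the previous block."""
    block = {"payload": payload, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain: list) -> bool:
    """Tampering with any block breaks it and every link after it."""
    return all(block_hash(b) == b["hash"] for b in chain) and all(
        b["prev_hash"] == prev["hash"] for prev, b in zip(chain, chain[1:]))

g = make_block({"asset": "trailer_v3.mxf", "event": "ingested"}, "0" * 64)
chain = [g, make_block({"asset": "trailer_v3.mxf", "event": "delivered"},
                       g["hash"])]
print(verify_chain(chain))           # True
chain[0]["payload"]["event"] = "??"  # rewrite history...
print(verify_chain(chain))           # ...and verification fails: False
```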

Blockchain is interesting not just for what it does but for how it does it. In this webinar we'll see some of that, understanding how it works and what a 'trusted business network' looks like.

James Wilson and Jonathan Solomon will explain the encryption and key exchange that underpins the technology and how this offers improved asset tracking and even execution of commercial and legal terms.

Register now!

Speakers

James Wilson
Director of Engineering,
IBM Aspera
Jonathan Solomon
Strategic Initiatives Engineer, Streaming
IBM Aspera

Video: Beyond SMPTE Time Code – the TLX Project

SMPTE Time Code started off in the 1970s and has evolved, yet in some ways remains unchanged. It is key to electronic video editing and has found application in many other fields and industries, including live events. This is Armin van Buuren explaining how he uses SMPTE timecode in his live DJ sets.

The more we push technology, the more we demand from timecode, so now there is the TLX project (Time Label, eXtensible) which seeks to define a new labelling system.
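
For context on what the existing labelling looks like, here is a small sketch (ours) that renders a frame count as the familiar non-drop-frame HH:MM:SS:FF address of ST 12 timecode. The 29.97 fps drop-frame scheme, which this deliberately ignores, is a good example of the legacy complexity a new labelling system gets a chance to rethink:

```python
def frames_to_timecode(frame_count: int, fps: int = 25) -> str:
    """Render a frame count as a non-drop-frame HH:MM:SS:FF label."""
    frames = frame_count % fps
    seconds = (frame_count // fps) % 60
    minutes = (frame_count // (fps * 60)) % 60
    hours = (frame_count // (fps * 3600)) % 24
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(frames_to_timecode(90_125))  # -> '01:00:05:00' at 25 fps
```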

The webinar will provide an overview of the emerging design, and is intended to provide a preview for potential users, and to encourage participation by those with expertise to offer.

Peter Symes, the host, covers:

  • The history of timing
  • What SMPTE ST 12 is and its evolution
  • The concept of TLX
  • Use of PTP & provision for extensibility
  • What types of data can TLX convey
  • Q&A!

Watch now to hone your knowledge of the SMPTE timecode that already exists and to get ready to understand TLX.

Speakers

Peter Symes
Consultant,
SMPTE Fellow
Joel E. Welch
Director of Education,
SMPTE

Video: Running live video with FFmpeg

San Francisco Video Tech welcomes Haluk Ucar talking about live video streaming. How do you encode multiple resolutions and bitrates efficiently on CPUs and maximise the number of channels? Is there value in managing multiple encodes centrally? How can we manage the balance between CPU use and video quality (VQ)?

Haluk discusses a toolset for Adaptive Decisions and looks at Adaptive Segment Decisions, including the relationship between IDR frames and frequent scene changes.
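
One common pattern for the multi-rendition problem – a hypothetical sketch, not Haluk's toolset, with illustrative filenames and bitrates – is to decode the input once, split it, and scale each rendition separately, while pinning IDR frames to fixed boundaries so segments align across the ladder:

```python
import subprocess

# Three-rung ABR ladder: (scale, video bitrate); values are illustrative.
LADDER = [("1920:1080", "5000k"), ("1280:720", "3000k"), ("640:360", "800k")]

# Decode once, split into three branches, scale each branch.
filters = ("[0:v]split=3" + "".join(f"[v{i}]" for i in range(len(LADDER))) +
           ";" + ";".join(f"[v{i}]scale={res}[o{i}]"
                          for i, (res, _) in enumerate(LADDER)))

cmd = ["ffmpeg", "-i", "input.mp4", "-filter_complex", filters]
for i, (_, bitrate) in enumerate(LADDER):
    cmd += ["-map", f"[o{i}]", "-c:v", "libx264", "-b:v", bitrate,
            # Force an IDR every 2 seconds so segment boundaries line up
            # across renditions, whatever scene-change detection does.
            "-force_key_frames", "expr:gte(t,n_forced*2)",
            f"out_{i}.mp4"]
subprocess.run(cmd, check=True)
```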

Haluk covers a lot and finishes with a Q&A. So if you have an interest in live streaming, watch now!

Speaker

Haluk Ucar
Director of Engineering,
IDT

Webinar: Top 15 Things You Can’t Miss at NAB 2019

Date: Wednesday 27th March, 2019
Time: 11am EDT / 15:00 GMT

All eyes are on NAB for announcements of new products and new mergers! Every year come April there are big announcements, and we get another glimpse into how the market and technologies are changing and adapting. Much of this can be seen from what happened at IBC and NAB the year before… but the devil is in the details, and this is where the experienced eyes of editors such as Jenny Priestley (TVBEurope) and Tom Butts (TV Technology) come in.

Jenny and Tom will take us through the things to keep a look out for at NAB – or, for those not attending, what to look out for in the press releases!

Register now!

Speakers

Tom Butts
Content Director,
TV Technology
Jenny Priestley
Editor,
TVBEurope

Webinar: Edge Computing and Video Delivery

Date: Thursday 28th March, 2019
Time: 10am PDT / 1pm EDT / 17:00 GMT

Whether or not edge computing is the next generation of cloud technology, the edge plays a vital role in the streaming video experience. The closer a video is stored to the requesting user, the faster the delivery and the better the experience. But streaming also provides a lot more opportunity for interactivity, engagement and data collection than traditional broadcast television. That means as the edge grows in compute capacity and functionality, it could enable new and exciting use cases, such as AI, that could improve the viewer experience. In this webinar, we'll explore the state of edge computing and how it might be leveraged in streaming video.
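
A back-of-envelope figure shows why that proximity matters (the numbers are ours, not the webinar's): light in fibre covers roughly 200 km per millisecond, so round-trip time scales directly with distance to the content:

```latex
RTT \approx \frac{2d}{200\ \mathrm{km/ms}}
\quad\Rightarrow\quad
d = 4000\ \mathrm{km}: RTT \approx 40\ \mathrm{ms},
\qquad
d = 50\ \mathrm{km}: RTT \approx 0.5\ \mathrm{ms}
```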

Speakers

Jason Thibeault
Executive Director,
Streaming Video Alliance