Video: ABA IP Fundamentals For Broadcast

IP is explained from the fundamentals in this talk from Wayne Pecena, building up a picture of networking from the basics. This talk discusses not just the essentials for uncompressed video over IP, SMPTE ST 2110 for instance, but for any use of IP within broadcast, even if just for management traffic. Networking is a fundamental skill, so even if you know what an IP address is, it’s worth diving down and shoring up the foundations by listening to this talk from the President of the SBE and long-standing Director of Engineering at Texas A&M University.

This talk covers what a network is, the elements that make up a network, and gives an insight into how the internet developed out of a small number of these elements. Wayne then looks at the different standards organisations that specify protocols for use in networking and IP. He explains what they do and highlights the IETF’s famous RFCs as well as the IEEE’s 802 series of Ethernet standards, including 802.11 for Wi-Fi.

The OSI model is next, an important piece of the puzzle for understanding networking. Once you understand, as the OSI model lays out, that the different aspects of networking are built on top of one another yet operate separately, fault-finding, designing networks and understanding the individual technologies all become much easier. The OSI model explains how the standards that define the physical cables work underneath those for Ethernet, as separate layers. There are layers all the way up to how your software works, but much of the broadcasting that takes place in studios and MCRs can be handled within the first four of the seven layers.
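As a quick reference, here is a minimal sketch in Python of the seven layers; the example technologies listed against each layer are illustrative rather than a definitive mapping.

```python
# The seven OSI layers with illustrative examples of what sits at each.
OSI_LAYERS = {
    1: ("Physical",     "cables, fibre, SFPs"),
    2: ("Data Link",    "Ethernet (IEEE 802.3), Wi-Fi (802.11)"),
    3: ("Network",      "IP"),
    4: ("Transport",    "TCP, UDP"),
    5: ("Session",      "dialogue control"),
    6: ("Presentation", "data representation"),
    7: ("Application",  "HTTP, your software"),
}

for number, (name, examples) in sorted(OSI_LAYERS.items()):
    note = "  <- much broadcast traffic lives in layers 1-4" if number == 4 else ""
    print(f"Layer {number}: {name:13} e.g. {examples}{note}")
```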

The last section of the talk deals with how packets are formed by adding information from each layer to the data payload. Wayne then finishes off with a look at fibre interfaces, different types of SFP and the fibres themselves.
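To make the layered encapsulation just described concrete, the third-party Scapy library (not something from the talk, just a convenient illustration) lets you build a packet by stacking headers, each lower layer wrapping everything above it. The addresses and ports below are hypothetical examples.

```python
from scapy.all import Ether, IP, UDP, Raw

# Stack the layers: each '/' wraps the data to its right in a new header.
payload = Raw(load=b"media payload")
packet = (
    Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02")  # Layer 2
    / IP(src="192.0.2.10", dst="192.0.2.20")                 # Layer 3
    / UDP(sport=5004, dport=5004)                            # Layer 4
    / payload
)

packet.show()       # print the fields each layer contributed
print(len(packet))  # total size: the payload plus every added header
```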

Watch now!
Speaker

Wayne Pecena
Director of Engineering, KAMU TV/FM at Texas A&M University
President, Society of Broadcast Engineers (SBE)

Video: Video Caching Best Practices

Caching is a critical element of the streaming video delivery infrastructure. By storing objects as close to the viewer as possible, you can reduce round-trip times, cut bandwidth costs, and create a more efficient delivery chain.
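As a rough illustration of the principle, and assuming nothing beyond the Python standard library, a cache sits between viewers and the origin and only pays the round trip on a miss:

```python
import time

origin = {"/video/seg1.ts": b"...segment data..."}  # stand-in for the origin server
cache: dict[str, bytes] = {}

def fetch(path: str) -> bytes:
    """Serve from the cache if possible; otherwise pay the origin round trip."""
    if path in cache:
        return cache[path]      # cache hit: fast and local
    time.sleep(0.05)            # simulate the round trip to the origin
    body = origin[path]
    cache[path] = body          # store the object close to the viewer
    return body

fetch("/video/seg1.ts")  # miss: goes all the way to the origin
fetch("/video/seg1.ts")  # hit: served from the cache
```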

This video brings together Disney, Qwilt and Verizon to share their best practices and to look at the new Open Caching Network (OCN) working group from the Streaming Video Alliance. This recorded webinar is a discussion of the different aspects of caching and the way the OCN addresses them.

The talk starts simply by answering “What is a caching server and how does it work?”, which gets everyone on the same page, before moving on to “What are some of the data points to collect from the cache?” The answers include the cache hit ratio, latency, cache misses, and how much data comes from the CDN versus the origin server.
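Of those data points, the cache hit ratio is the simplest to compute: hits divided by total requests. A minimal sketch:

```python
def cache_hit_ratio(hits: int, misses: int) -> float:
    """Fraction of requests served from the cache rather than the origin."""
    total = hits + misses
    return hits / total if total else 0.0

# e.g. 9,200 hits and 800 misses -> 0.92, i.e. a 92% hit ratio
print(cache_hit_ratio(9200, 800))
```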

This video continues by exploring how caching nodes are built, how different caching solutions are optimised, how a cache connects to the Open Caching Network, and how better cache performance and interoperability can improve the overall viewer experience.

The Live Streaming Working Group also gets a mention, as it is working out parameters such as the memory needed for live streaming servers, before the discussion moves quickly on to some tricks of the trade which often lead to a better cache.

There are lots of best practices which can be shared, and an open caching network is one great way to do this. The aim is to create interoperability between companies, allowing small-scale start-up CDNs to talk to larger CDNs, and giving a streaming company confidence that it can interact with ‘any’ CDN. As ever, the idea comes down to interoperability. Have a listen and judge for yourself!

Watch now!
Speakers

Eric Klein
Director, Content Distribution – Disney+/ESPN+, Disney Streaming Services
Co-Chair, Open Cache Working Group, Streaming Video Alliance
Yoav Gressel
Vice President of R&D,
Qwilt
Sanjay Mishra
Director, Technology
Verizon
Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: A Forensic Approach to Video

Unplayable media is everyone’s nightmare, made all the worse if it could be key evidence in a criminal case. This is the daily fight of Gareth Harbord from the Metropolitan Police as he tries to render old CCTV footage and files from crashed dash cams playable, make files from damaged SD cards and hard drives readable, and recover video from tape formats which have been obsolete for years.

In terms of data recovery, there are two main elements: getting the data off the device, and then fixing the data to make it playable. Getting the data off a device tends to be difficult because either the device is damaged and/or connecting to it requires some proprietary hardware or software which simply isn’t available any more. Pioneers in a field often have to come up with their own way of interfacing which, when the market grows, is usually superseded by a standard way of doing things. Take mobile phone cables as an example: they used to come in all sorts of shapes and sizes but are now much more uniform, with three main types. The same was initially true of hard drives, but the first hard drives date back so far that obsolescence is much more of an issue.

Once you have the data on your own system, it’s time to start analysing it to see why it won’t play. It may not play because the data itself is in an old or proprietary format, which Gareth says is very common with CCTV manufacturers. While there are some popular formats, there are many variations from different companies, including putting, say, all four cameras onto one image, or into one file with the data for the four cameras interleaved in parallel. After a while you start to get a feel for the formats, but not without many hours of trial and error.
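As a sketch of what dealing with such an interleaved file can look like, here is a hypothetical demultiplexer; the record layout, field sizes and camera count are invented purely for illustration and do not correspond to any real DVR format.

```python
import struct

def demux_cctv(path: str, cameras: int = 4) -> dict[int, list[bytes]]:
    """Split a hypothetical interleaved multi-camera file into per-camera streams.

    Assumed (invented) record layout: a 6-byte header of camera id
    (uint16, little-endian) and frame length (uint32), then the frame data.
    """
    streams: dict[int, list[bytes]] = {i: [] for i in range(cameras)}
    with open(path, "rb") as f:
        while header := f.read(6):
            if len(header) < 6:
                break  # truncated record at the end of the file
            cam_id, length = struct.unpack("<HI", header)
            streams[cam_id % cameras].append(f.read(length))
    return streams
```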

Gareth starts his talk by explaining that he works in the download and data recovery function, which is separate from the team who make the evidence ready for presentation at trial. That team’s job is to find the best way to show the relevant parts, both in terms of presentation and technically: making sure it is easy to play for the technically uninitiated in court, and that it is robust and reliable. Presentation covers the effort of combining multiple sources of video evidence into one timeline and ensuring the correct chronology. Other teams deal with enhancing the video, and Gareth shows examples of deblurring an image and of using frame averaging to improve the intelligibility of the picture.
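Frame averaging works because the real image content is consistent from frame to frame while sensor noise is random, so averaging N frames suppresses the noise. A minimal sketch with OpenCV and NumPy, assuming a static camera so no alignment step is needed (the filename is just an example):

```python
import cv2
import numpy as np

def average_frames(video_path: str, count: int = 25) -> np.ndarray:
    """Average the first `count` frames to suppress random sensor noise."""
    capture = cv2.VideoCapture(video_path)
    total, grabbed = None, 0
    while grabbed < count:
        ok, frame = capture.read()
        if not ok:
            break  # fewer frames available than requested
        acc = frame.astype(np.float64)
        total = acc if total is None else total + acc
        grabbed += 1
    capture.release()
    if total is None:
        raise ValueError(f"no frames could be read from {video_path}")
    return (total / grabbed).astype(np.uint8)  # the averaged, cleaner frame

cv2.imwrite("averaged.png", average_frames("cctv_clip.avi"))
```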

Gareth spends some time discussing CCTV, where he calls the result of the lack of standardisation “a myriad of madness.” He says it’s not uncommon to receive 15-year-old systems whose hard drives, having been spinning for a decade and a half, don’t start again when they are repowered. On the other hand, the newer IP cameras are more complicated: each camera generates its own time-stamped video which feeds into a networked video recorder that also applies its own timestamp. What happens when all of the timestamps disagree?

Mobile devices cause problems due to the variable frame rates used to deal with dim scenes, non-conformance with standards and, who can forget, the fun of CMOS rolling shutter, where the sensor readout makes the image wobble when the phone is panned left or right. Gareth highlights a few of the tools he and his colleagues use, such as the ever-informative MediaInfo and FFprobe, before discussing the formats that they transcode to in order to share videos internally.
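FFprobe’s JSON output is a handy first look at an unknown file. A small wrapper sketch, using standard ffprobe options (the filename is just an example):

```python
import json
import subprocess

def probe(path: str) -> dict:
    """Ask ffprobe to describe a media file's container and streams as JSON."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

info = probe("dashcam_clip.mp4")
for stream in info["streams"]:
    print(stream["codec_type"], stream.get("codec_name"))
```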

Gareth walks us through an example file, looking at how data can be lined up to start understanding the structure and begin to decode it. This can lead to the need to write some simple code, in C# or similar, to rework the data. When it’s not possible to get the data in a particular format to play in VLC or similar, a proprietary player may be the only way forward, and in that case a capture of the computer screen is often the only way to excerpt the clip. Gareth looks at the pros and cons of this method.
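A first pass at that kind of structure-hunting can be as simple as scanning the file for a repeating byte pattern that might mark frame boundaries; regularly spaced hits hint at fixed-size records. A rough Python sketch (the marker and filename are hypothetical examples, not from the talk):

```python
def find_marker_offsets(path: str, marker: bytes) -> list[int]:
    """Find every offset of a candidate frame-start marker in a file."""
    data = open(path, "rb").read()
    offsets, pos = [], data.find(marker)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(marker, pos + 1)
    return offsets

# e.g. JPEG start-of-image markers often betray frame-per-JPEG CCTV files
offsets = find_marker_offsets("unknown.dat", b"\xff\xd8\xff")
print([b - a for a, b in zip(offsets, offsets[1:])])  # gaps between hits
```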

Watch now!
Speakers

Gareth Harbord Gareth Harbord
Senior Digital Forensic Specialist (Video)
Metropolitan Police Service

Video: 5 PTP Implementation Challenges & Best Practices

PTP is an underlying technology enabling the whole SMPTE ST 2110 uncompressed ecosystem to work. Using PTP, the Precision Time Protocol, the time at which a frame of video or audio was captured is recorded so that, when decoded, it can be synchronised with other media captured at the same time. Though parts of ST 2110 can function without it, when it comes to bringing together media which need synchronisation, for vision mixing for instance, PTP is the way to go.

PTP is actually a cross-industry standard for time distribution, developed by the IEEE and tackling the same problem as its forerunner NTP. Now at version IEEE 1588-2019, it defines not only how to send time onto a network, but also how a receiver can work out what the time actually is. After all, if you received a letter in the post telling you the time, you’d know that the time – and date for that matter – was old. PTP defines a way of working out how long the letter took to arrive, so that you can know the date and time from the letter plus your new-found knowledge of the delivery time.
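In PTP terms, that ‘delivery time’ is worked out from four timestamps: the leader sends at t1, the follower receives at t2, the follower sends a delay request at t3, and the leader receives it at t4. Assuming a symmetric network path, a minimal sketch of the arithmetic:

```python
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float) -> tuple[float, float]:
    """Standard PTP delay-request/response arithmetic (symmetric-path assumption).

    t1: Sync sent by leader        t2: Sync received by follower
    t3: Delay_Req sent by follower t4: Delay_Req received by leader
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2  # how far the follower's clock is off
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay: the 'post time'
    return offset, delay

# Follower clock running 1.5 ms fast over a 2 ms path:
print(ptp_offset_and_delay(t1=0.0, t2=0.0035, t3=0.0100, t4=0.0105))
# -> (0.0015, 0.002)
```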

Knowing the time of day is all very well, but to truly synchronise media, SMPTE ST 2059 is used to interpret PTP for professional media applications. Video and audio are made from repeating data structures, and ST 2059 relates these structures back to a common epoch in the past so that, at any time in the future, you can calculate the phase of the signal.
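The arithmetic behind that is straightforward: given the time elapsed since the common epoch, the phase of a repeating signal is the remainder after dividing by its period. A sketch, treating PTP time simply as seconds since the epoch and ignoring TAI/leap-second details for clarity; the 30000/1001 frame rate is just an example:

```python
from fractions import Fraction

def frames_since_epoch(ptp_seconds: Fraction, frame_rate: Fraction) -> tuple[int, Fraction]:
    """Whole frames elapsed since the epoch, plus the phase (seconds into
    the current frame). Any two devices doing this sum with the same PTP
    time land on the same frame boundary."""
    total_frames = ptp_seconds * frame_rate
    whole = int(total_frames)
    phase = (total_frames - whole) / frame_rate
    return whole, phase

# e.g. 29.97 fps (30000/1001) video at an arbitrary PTP time
print(frames_since_epoch(Fraction(1_600_000_000), Fraction(30000, 1001)))
```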

Karl Kuhn from Tektronix starts by laying out the problems to be solved, such as managing jitter and the precision needed. This leads into a look at how timestamps are used to record when video and audio were each captured. He then considers the network needed to implement PTP, particularly for redundancy, and how GPS allows buildings to be co-timed without being directly connected.

Troubleshooting PTP will be tricky for many, and learning the IT side of this is only part of the solution. Karl looks at some best practices and tips for fault-finding PTP errors, which leads on to a discussion of PTP domains and profiles. An important aspect of PTP is that it is bi-directional; it’s much more than the one-way distribution of a signal like the black and burst infrastructure before it. It is a system which needs to be managed and deserves to be monitored. Karl shows how graphs can help show the stability of the network and how RTP/CC errors can show network packet losses and corruption.
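Those RTP errors are straightforward to detect in software: RTP sequence numbers are 16-bit and should increment by one per packet, so any other step indicates loss or reordering. A minimal sketch:

```python
def count_sequence_errors(sequence_numbers: list[int]) -> int:
    """Count RTP sequence discontinuities (16-bit counter, wraps at 65536)."""
    errors = 0
    for previous, current in zip(sequence_numbers, sequence_numbers[1:]):
        if (previous + 1) % 65536 != current:
            errors += 1  # a gap or reorder: packets lost or corrupted
    return errors

# One packet missing after the counter wraps (the wrap itself is fine):
print(count_sequence_errors([65533, 65534, 65535, 0, 2]))  # -> 1
```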

Watch now!
Speakers

Karl J. Kuhn
Principal Solutions Architect
Telestream/Tektronix