Video: A Forensic Approach to Video

Unplayable media is everyone’s nightmare, made all the worse if it could be key evidence in a criminal case. This is the daily fight that Gareth Harbord of the Metropolitan Police faces as he tries to render old CCTV footage and files from crashed dash cams playable, make files from damaged SD cards and hard drives readable, and recover video from tape formats which have been obsolete for years.

In terms of data recovery, there are two main elements: getting the data off the device and then fixing the data to make it playable. Getting the data off a device tends to be difficult because either the device is damaged or connecting to it requires proprietary hardware or software which simply isn’t available any more. Pioneers in a field often have to come up with their own way of interfacing which, as the market grows, is usually replaced by a standard way of doing things. Take, as an example, mobile phone cables. They used to come in all sorts of shapes and sizes but are now much more uniform, with three main types. The same was initially true of hard drives, but the first hard drives are so old that obsolescence is much more of an issue.

Once you have the data on your own system, it’s then time to start analysing it to see why it won’t play. It may not play because the data itself is in an old or proprietary format, which Gareth says is very common with CCTV manufacturers. While there are some popular formats, there are many variations from different companies, including putting all of, say, four cameras onto one image or into one file, with the data for the four cameras running in parallel. After a while you start to get a feel for the formats, but not without many hours of previous trial and error.

Gareth starts his talk by explaining that he works in the download and data recovery function, which is separate from the people who make the evidence ready for presentation at trial. Their job is to find the best way to show the relevant parts, both in terms of presentation and technically: making sure it is easy to play for the technically uninitiated in court and that it is robust and reliable. Presentation covers the effort behind combining multiple sources of video evidence into one timeline and ensuring the correct chronology. Other teams also deal with enhancing the video, and Gareth shows examples of deblurring an image and of using frame averaging to improve the intelligibility of the picture.

Gareth spends some time discussing CCTV, where he calls the result of the lack of standardisation “a myriad of madness.” He says it’s not uncommon for 15-year-old systems to be brought in which, since the hard drives have been spinning for a decade and a half, don’t start again when they are repowered. On the other hand, the newer IP cameras are more complicated: each camera generates its own time-stamped video which goes into a networked video recorder that also has its own timestamp. What happens when all of the timestamps disagree?

Mobile devices cause problems due to the variable frame rates used to deal with dim scenes, non-conformance with standards and, who can forget, the fun of CMOS sensors, which make the image wobble when the phone is panned left or right. Gareth highlights a few of the tools he and his colleagues use, such as the ever-informative MediaInfo and ffprobe, before discussing the formats they transcode to in order to share the videos internally.
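As a rough illustration of that first-pass inspection, the sketch below shells out to ffprobe and asks for a JSON description of the container and its streams. The file name is made up for the example, and in a real workflow the JSON would be parsed rather than just printed.

```csharp
using System;
using System.Diagnostics;

class ProbeExample
{
    static void Main(string[] args)
    {
        // Path to the file under investigation - a hypothetical name.
        var input = args.Length > 0 ? args[0] : "exhibit_ch1.dav";

        // Ask ffprobe for a JSON description of the container and streams.
        var psi = new ProcessStartInfo
        {
            FileName = "ffprobe",
            Arguments = $"-v error -print_format json -show_format -show_streams \"{input}\"",
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using var proc = Process.Start(psi);
        string json = proc.StandardOutput.ReadToEnd();
        proc.WaitForExit();

        // Print the report; a real tool would parse it to check codecs,
        // frame rates and timestamps.
        Console.WriteLine(json);
    }
}
```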

Gareth walks us through an example file, looking at how the data can be lined up to start understanding the structure and begin to decode it. This can lead to the need to write some simple code in C#, or similar, to rework the data. When it’s not possible to get the data into a format that will play in VLC, or similar, a proprietary player may be the only way forward. When this is the case, a capture of the computer screen is often the only way to excerpt the clip, and Gareth looks at the pros and cons of this method.
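To give a flavour of the kind of simple rework code mentioned above, here is a minimal sketch that assumes the recorder has wrapped ordinary H.264 in a proprietary container: it scans a raw dump for an Annex-B start code (00 00 00 01) and copies everything from that point into a bare .h264 file a standard player might accept. The container layout and file names are assumptions for illustration, not taken from Gareth’s talk.

```csharp
using System;
using System.IO;

class StartCodeScanner
{
    static void Main(string[] args)
    {
        var input = args.Length > 0 ? args[0] : "exhibit_raw.bin";   // hypothetical names
        var output = args.Length > 1 ? args[1] : "recovered.h264";

        byte[] data = File.ReadAllBytes(input);

        // Look for the first H.264 Annex-B start code: 00 00 00 01.
        long firstStartCode = -1;
        for (long i = 0; i + 3 < data.Length; i++)
        {
            if (data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 0 && data[i + 3] == 1)
            {
                firstStartCode = i;
                break;
            }
        }

        if (firstStartCode < 0)
        {
            Console.WriteLine("No Annex-B start code found - probably not plain H.264.");
            return;
        }

        // Copy from the first start code onwards into a raw elementary stream.
        using var outStream = File.Create(output);
        outStream.Write(data, (int)firstStartCode, (int)(data.LongLength - firstStartCode));
        Console.WriteLine($"Wrote {data.LongLength - firstStartCode} bytes starting at offset {firstStartCode}.");
    }
}
```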

Watch now!
Speakers

Gareth Harbord
Senior Digital Forensic Specialist (Video)
Metropolitan Police Service

Video: Real-Time Remote Production For The FIFA Women’s World Cup

We hear about so many new and improved cloud products and solutions for production that, once in a while, you really just need to step back and hear how people have put them together. This session is just that: a look at the whole post-production workflow for FOX Sports’ production of the Women’s World Cup.

This panel from the Live Streaming Summit at Streaming Media West is led by FOX Sports’ Director of Post Production, Brandon Potter, as he talks through the event with three of his key vendors: IBM Aspera, Telestream and Levels Beyond.

Brandon starts by explaining that this production stood on the back of the work they did on the Men’s World Cup in Russia, both having SDI delivery of media in PAL at the IBC (International Broadcast Centre). For this event, all the edit crew was in LA, which created problems with some fixed frame-rate products still in use in the US facility.

Data transfer, naturally, is the underpinning of any event like this, with a total of a petabyte of data being created. Network connectivity for international events is always tricky: with so many miles of cable, whether on land or under the sea, there is a very high chance of the fibre being cut. At the very least, the data can be switched to take a different path and, in that moment, there will be data loss. All of this means that you can’t make assumptions about the extent of the data loss; it could be seconds, minutes or hours. On top of creating, and affording, redundant data circuits, the time needed to transfer all the data has to be considered and managed.

Ensuring complete transfer of files in a timely fashion drove the production to auto-archive all content in real time into Amazon S3 in order to avoid post-match ingest times of multiple hours. “Every bit of high-res content was uploaded,” stated Michael Flathers, CTO of IBM Aspera.
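For context, the destination end of that archive step can be sketched with the AWS SDK for .NET, as below. The bucket, key and file names here are invented, and the production’s real pipeline ran Aspera’s transfer technology in front of S3 rather than a plain SDK upload.

```csharp
using System;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Transfer;

class ArchiveUploader
{
    static async Task Main(string[] args)
    {
        // Hypothetical local recording and archive bucket names.
        var localPath = args.Length > 0 ? args[0] : "match42_cam01.mxf";
        var bucket = "wwc-archive-high-res";
        var key = $"2019/{DateTime.UtcNow:yyyyMMdd}/{System.IO.Path.GetFileName(localPath)}";

        // Credentials and region come from the environment or AWS profile.
        using var s3 = new AmazonS3Client();
        var transfer = new TransferUtility(s3);

        // Push the finished file into the archive bucket.
        await transfer.UploadAsync(localPath, bucket, key);
        Console.WriteLine($"Archived {localPath} as s3://{bucket}/{key}");
    }
}
```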

Dave Norman, from Telestream, explains how the live workflows stayed on-prem with the high-performance media and encoders and then, “as the match ended, we would then transition…into AWS”. In the cloud, the HLS proxies would then be rendered into a single MP4 proxy file for editing.
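Telestream’s own tooling handled that step in production; as a rough stand-in to show the idea, the sketch below drives ffmpeg to remux an HLS playlist into one MP4, assuming H.264/AAC segments that can be stream-copied without re-encoding. The playlist URL and output name are made up.

```csharp
using System.Diagnostics;

class ProxyRemux
{
    static void Main()
    {
        // Hypothetical proxy playlist and output file.
        var playlist = "https://example-bucket.s3.amazonaws.com/match42/proxy/master.m3u8";
        var outputMp4 = "match42_proxy.mp4";

        // Stream-copy the HLS segments into a single MP4 without re-encoding.
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = $"-i \"{playlist}\" -c copy \"{outputMp4}\"",
            UseShellExecute = false
        };

        Process.Start(psi)?.WaitForExit();
    }
}
```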

Daniel Gonzales explains the benefits of the full API integrations they chose to build their multi-vendor solution around, rather than simple watch-folders. Having every platform know where the errors were was very valuable, and it was particularly useful for remote users to know in detail where their files were. This reduced the number of times they needed to ask someone for help and meant that, when they did ask, they had enough detail to specify exactly what the problem was.

The talk comes to a close with a broad analysis of the different ways that files were moved and cached in order to optimise the workflow. There was a mix of TCP-style workflows and Aspera’s UDP-based transfer technology. It’s also worth noting that HLS manifests needed to be carefully created to reference only the chunks that had been transferred, rather than simply any that had been created. Live creation of clips from growing files was also an important tool: the in- and out-points were chosen by viewing a low-latency proxy stream, then the final file was clipped from the growing file in France and delivered to LA within minutes.
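The manifest point is worth a small illustration. The sketch below writes an HLS media playlist that advertises only segments whose transfer has completed; here “completed” means a matching “.done” marker file exists, which is an assumption for the example rather than the panel’s actual mechanism, as are the directory name and segment duration.

```csharp
using System;
using System.IO;
using System.Linq;
using System.Text;

class SafeManifestWriter
{
    static void Main()
    {
        var segmentDir = "transferred_segments";   // hypothetical path
        const double targetDuration = 6.0;         // seconds per segment (assumed)

        // Only list segments whose transfer has been confirmed.
        var readySegments = Directory.GetFiles(segmentDir, "*.ts")
            .Where(ts => File.Exists(ts + ".done"))
            .OrderBy(ts => ts)
            .ToList();

        var playlist = new StringBuilder();
        playlist.AppendLine("#EXTM3U");
        playlist.AppendLine("#EXT-X-VERSION:3");
        playlist.AppendLine($"#EXT-X-TARGETDURATION:{(int)Math.Ceiling(targetDuration)}");
        playlist.AppendLine("#EXT-X-MEDIA-SEQUENCE:0");

        foreach (var ts in readySegments)
        {
            playlist.AppendLine($"#EXTINF:{targetDuration:0.000},");
            playlist.AppendLine(Path.GetFileName(ts));
        }
        // No #EXT-X-ENDLIST: the playlist keeps growing as segments arrive.

        File.WriteAllText(Path.Combine(segmentDir, "index.m3u8"), playlist.ToString());
        Console.WriteLine($"Wrote playlist with {readySegments.Count} segments.");
    }
}
```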

Overall, this case study gives a good feel for the problems and good practices which go hand in hand with multi-day events relying on international connectivity. It shows that large-scale productions can successfully, and quickly, give their production teams full access to all media, maximising the material available for creative use.

Watch now!
Speakers

Mike Flathers
CTO,
IBM Aspera
Brandon Potter
Director of Post Production,
FOX Sports
Dave Norman
Principal Sales Engineer,
Telestream
Daniel Gonzales
Senior Solutions Architect,
Levels Beyond

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this, which give an introduction to a large number of topics, are far too rare. For those starting out, or those who need to revise a topic, this really hits the mark, particularly as it covers many new topics.

John Mailhot takes the lead on SMPTE 2110 explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained and also gives an overview of the many parts of the SMPTE ST 2110 suite. He talks in more detail about the audio and metadata parts of the standard suite.

Eric Gsell discusses digital archiving and the considerations which come with deciding which formats to use. He explains colour space, the CIE model and the colour spaces we use, such as BT.709, BT.2100 and P3, before turning to file formats. With the advent of HDR video and displays which can show very bright images, Eric takes some time to explain why this could represent a problem for visual health, as we don’t fully understand how the displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity to improve workflows and add more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own way, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers, and the interesting problem of forward speakers in cinemas. They have long been placed behind the screen, which has meant the screens have to be perforated to let the sound through, which in turn interferes with the sound itself. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers are having to move. With them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!
Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group