Video: Pervasive video deep-links

Google have launched a new initiative allowing publishers to highlight key moments in a video so that search results can jump straight to that moment. Whether you have a video that covers three topics, one that poses questions and provides answers, or one with a big reveal and reaction shots, this could help increase engagement.

The plan is that content creators tell Google about these moments, so Paul Smith from theMoment.tv takes to the stage at San Francisco Video Tech to explain how. After looking at a live demo, Paul dives into the webpage code that makes it happen. Hidden in the page's markup, he shows a script tag with its type set to application/ld+json. This holds the metadata for the video as a whole, such as the thumbnail URL and the content URL, but it also defines the highlighted ‘parts’ of the video, with URLs for each.
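As a sketch of the kind of markup Paul describes, the snippet below builds a schema.org VideoObject whose highlighted moments are expressed as Clip entries under hasPart, ready to embed in a script tag of type application/ld+json. The URLs, names and offsets are illustrative placeholders, not values from the talk.

```python
import json

# Minimal sketch of video metadata with highlighted "parts" as
# schema.org Clips. All field values are illustrative placeholders.
video_metadata = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "Example video",
    "thumbnailUrl": "https://example.com/thumb.jpg",
    "contentUrl": "https://example.com/video.mp4",
    "hasPart": [
        {
            "@type": "Clip",
            "name": "The big reveal",
            "startOffset": 120,  # seconds from the start of the video
            "endOffset": 150,
            "url": "https://example.com/video?t=120",
        },
    ],
}

# This string would sit inside <script type="application/ld+json">…</script>
ld_json = json.dumps(video_metadata, indent=2)
print(ld_json)
```

Each Clip's url gives search engines a deep link that starts playback at that moment.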

While the programme is currently limited to a small set of content publishers, everyone can benefit from these insights into Google video search. Google will also look at YouTube descriptions in which some people give links to specific times, such as different tracks in a music mix, and bring those into the search results.

Paul looks at what this means for website and player writers. One suggestion is the need to scroll the page to the correct video and to clearly signpost the different videos on a page. Paul also looks towards the future at what could be done to better integrate with this feature, for example updating the player UI to show and create moments, or improving the ability to seek with sub-second accuracy. Intriguingly, he suggests that for popular videos it may be advantageous to synchronise segment timings with the beginning of moments. Certainly food for thought.

Watch now!
Speaker

Paul Smith
Founder,
theMoment.tv

Video: CMAF and DASH-IF Live ingest protocol

Of course, without live ingest of content into the cloud there is no live streaming, so why leave such an important piece of the puzzle to an unsupported protocol like RTMP, which has no official support for newer codecs? Whilst there are plenty of legacy workflows that still successfully use RTMP, there are clear benefits to be had from a modern ingest format.

Rufael Mekuria from Unified Streaming introduces us to DASH-IF’s CMAF-based live ingest protocol, which promises to solve many of these issues. It is based on the ISO BMFF container format which underpins MPEG DASH. Whilst CMAF isn’t intrinsically low-latency, it’s able to get to much lower latencies than standard HLS and LHLS.

This work to create a standard live-ingest protocol was born, Rufael explains, out of an analysis of which parts of the content delivery chain were most ripe for standardisation. Live ingest was felt to be an obvious choice, partly because the decaying RTMP protocol was being sloppily replaced by individual companies doing their own thing, but also because everyone contributing in the same way is of general benefit to the industry. The protocol level is not typically an area where individual vendors differentiate to the detriment of interoperability, and we had already seen RTMP being used interoperably between vendors’ equipment.

MPEG DASH and HLS can be delivered in a pull method as well as pushed, but the latter is not specified. There are other aspects of how people have ‘rolled their own’ which would benefit from standardisation too, such as timed metadata like ad triggers. Rufael, explaining that the proposed ingest protocol is a version of CMAF plus HTTP POST where no manifest is defined, shows us how push and pull streaming would work. As this is a standardisation project, Rufael takes us through the timeline of development and publication of the standard, which is now available.
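The push side of "CMAF plus HTTP POST" can be pictured as a simple ordered sequence of POSTs: first the CMAF header (init segment) for a track, then each media fragment as it is produced. The sketch below builds that sequence without any network I/O; the publishing-point URL, track name, segment naming and placeholder bytes are assumptions for illustration, not the exact conventions of the DASH-IF specification.

```python
# Sketch of the ordered HTTP POSTs a CMAF ingest source would make for
# one track. Real segments are binary ISO BMFF boxes; byte strings here
# merely label them.
def plan_ingest_posts(publish_point, track, fragments):
    """Return the ordered (url, body) POSTs for one track."""
    posts = []
    # 1. The CMAF header (ftyp + moov) establishes the track on the server.
    posts.append((f"{publish_point}/{track}/init.cmfv", b"ftyp+moov"))
    # 2. Each CMAF fragment (moof + mdat) is pushed as soon as it exists,
    #    which is what enables low-latency delivery downstream.
    for i, frag in enumerate(fragments):
        posts.append((f"{publish_point}/{track}/{i}.cmfv", frag))
    return posts

posts = plan_ingest_posts("https://ingest.example.com/channel1", "video",
                          [b"moof+mdat-0", b"moof+mdat-1"])
for url, body in posts:
    print(url, len(body))
```

Note that no manifest is POSTed: the receiver reconstructs the stream from the fragments themselves, as Rufael describes.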

As we live in the modern world, ingest security has been considered and the protocol comes with TLS and authentication, with more details covered in the talk. Ad insertion such as SCTE 35 is defined using binary mode, and Rufael shows slides to demonstrate. Similarly, in terms of ABR, we look at how switching sets work: switching sets are sets of tracks that contain different representations of the same content that a player can seamlessly switch between.

Watch now!
Speaker

Rufael Mekuria
Head of Research & Standardisation,
Unified Streaming

Video: CPAC Case Study – Replacement of a CWDM System with an IP System

For a long time now, broadcasters have been using dark fibre and CWDM (Coarse Wavelength Division Multiplexing) for transmission of multiple SDI feeds to and from remote sites. As an analogue process, WDM is based on a concept called Frequency Division Multiplexing (FDM). The bandwidth of a fibre is divided into multiple channels and each channel occupies a part of the large frequency spectrum. Each channel operates at a different frequency and at a different optical wavelength. All these wavelengths (i.e., colours) of laser light are combined and de-combined using a passive prism and optical filters.

In this presentation Roy Folkman from Embrionix shows what advantages can be achieved by moving from CWDM technology to a real-time media-over-IP system, using a recent project for CPAC (Cable Public Affairs Channel) in Canada as an example. The scope of this project was to replace an aging CWDM system connecting government buildings and CPAC Studios, which could carry 8 SDI signals in each direction over a single dark fibre pair. The first idea was to use a newer CWDM system which would allow up to 18 SDI signals, but it quickly became apparent that an IP system could be implemented at a similar cost.

As this was an SDI replacement, SMPTE ST 2022-6 was used in this project, with an upgrade path to ST 2110 possible. Roy explains that, from CPAC’s point of view, using ST 2022-6 was a comfortable first step into real-time media-over-IP which allowed for cost reduction and simplification (no PTP generation and distribution required, and re-use of existing SDI frame syncs and routing with audio breakaway capability). The benefits of using IP were increased capacity, integrated routing (in-band control) and ease of future expansion.

A single 1RU 48-port switch on each side and a single dark fibre pair gave the system a capacity of 48 HD SDI signals in each direction. SFP gateways in small Embrionix enclosures were used to convert the SDI outputs of cameras to IP over fibre, which also allowed the distance between the cameras and the switch to be extended beyond the 100-metre SDI cabling limit. SFP gateway modules converting IP to SDI were installed directly in the switches at both sites.
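Some back-of-the-envelope arithmetic shows what that capacity means in bandwidth terms, assuming one gateway SFP (and therefore one HD signal) per switch port and a nominal HD-SDI rate of 1.485 Gb/s; the real ST 2022-6 streams carry some encapsulation overhead on top of this, so these figures are nominal, not from the talk.

```python
# Nominal capacity arithmetic for the CPAC system: one 48-port switch
# per site, one HD-SDI signal per port. Figures are assumptions.
PORTS_PER_SWITCH = 48      # 1RU 48-port switch on each side
HD_SDI_GBPS = 1.485        # nominal HD-SDI payload rate in Gb/s

signals_per_direction = PORTS_PER_SWITCH            # one signal per port
aggregate_gbps = PORTS_PER_SWITCH * HD_SDI_GBPS     # raw SDI payload total

print(signals_per_direction)        # 48 HD signals each way
print(round(aggregate_gbps, 2))     # ~71.28 Gb/s of SDI payload
```

That aggregate is a large step up from the 8 (or 18) signals the CWDM options offered over the same fibre pair.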

Roy finishes his presentation with possible future expansions of the system, such as migration to ST 2110 (a firmware upgrade for the SFP modules), increased capacity (by adding additional dark fibres and switches), SDI and IP routing integration with a unified control system (NMOS), remote camera control, and the addition of processing functions to SFP modules (multiviewers, up/down/cross-conversion, compression).

Watch now!

Download the slides.

Speaker

Roy Folkman 
VP of Sales,
Embrionix

Video: ABA IP Fundamentals For Broadcast

IP is explained from the fundamentals in this talk from Wayne Pecena, who builds up a picture of networking from the basics. The talk discusses not just the essentials for uncompressed video over IP, SMPTE ST 2110 for instance, but any use of IP within broadcast, even if just for management traffic. Networking is a fundamental skill, so even if you know what an IP address is, it’s worth diving down and shoring up the foundations by listening to this talk from the President of the SBE and long-standing Director of Engineering at Texas A&M University.

This talk covers what a Network is, what elements make up a network and an insight into how the internet developed out of a small number of these elements. Wayne then looks at the different standards organisations that specify protocols for use in networking and IP. He explains what they do and highlights the IETF’s famous RFCs as well as the IEEE’s 802-series of ethernet standards including 802.11 for Wi-Fi.

The OSI model is next, an important piece of the puzzle for understanding networking. Once you understand, as the OSI model lays out, that different aspects of networking are built on top of, yet operate separately from, other parts, then fault-finding, designing networks and understanding the individual technologies all become much easier. The OSI model explains how the standards that define the physical cables work underneath those for Ethernet, as separate layers. There are layers all the way up to how your software works, but much of the broadcasting that takes place in studios and MCRs can be handled within the first 4 of the 7 layers.

The last section of the talk deals with how packets are formed by adding information from each layer to the data payload. Wayne then finishes off with a look at fibre interfaces, different types of SFP and the fibres themselves.
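The layer-by-layer packet formation Wayne describes can be pictured with a toy sketch: each layer wraps the payload from the layer above in its own header (and, for Ethernet, a trailer). Real headers are binary structures; strings are used here purely for clarity, and the header names are illustrative.

```python
# Toy illustration of OSI-style encapsulation: each layer adds its own
# header around the data handed down from the layer above.
def encapsulate(payload):
    segment = "TCP-header|" + payload             # Layer 4: transport
    packet = "IP-header|" + segment               # Layer 3: network
    frame = "Eth-header|" + packet + "|Eth-FCS"   # Layer 2: data link
    return frame  # Layer 1 then carries the frame as bits on the wire

print(encapsulate("application data"))
```

Reading the output left to right mirrors what a receiver strips off as the frame climbs back up the stack.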

Watch now!
Speaker

Wayne Pecena Wayne Pecena
Director of Engineering, KAMU TV/FM at Texas A&M University
President, Society of Broadcast Engineers (SBE)