Video: 5G – Game-Changer Or Meh?

The 5G rollout has started in earnest in the UK, North America, Asia and many other regions. As with any new tech rollout, it takes time and currently centres on densely populated areas, but tests and trials are already underway in TV productions to find out whether 5G can actually help improve workflows. Burnt by the bandwidth collapse of 4G in densely populated locations, broadcasters hope that the higher throughput and bandwidth slicing will, this time, deliver the high-bandwidth, reliable connectivity that the industry needs.

Jason Thibeault from the Streaming Video Alliance joins Zixi's Eric Bolten in this discussion, moderated by Eric Schumacher-Rasmussen, on how well 5G is standing up to the hype. For a deeper look at 5G, including understanding the mix of low frequencies (as used in 2G, 3G and 4G) and the high, Ultra Wide Band (UWB) frequencies referred to in this talk, check out our article which does a deep dive on 5G, covering the infrastructure rollout and many of the technologies that make it work.

Eric starts by discussing trials he's been working on, including one which delivered 8K at 100Mbps over 5G. He sees 5G as being very useful to productions whether on location or on set. He's been working to test routers and determine the maximum throughput possible, which we already know is in excess of 100Mbps and likely in the gigabits. Whilst rollouts have started and there's plenty of advertising surrounding 5G, the saturation of 5G-capable phones in the market is simply not there, but that's no reason for broadcasters or film crews not to use it. Thirty markets in the US are planning to be 5G enabled and all the major telcos in the UK are rolling the technology out, which is already in around 200 cities and towns. It's clear that 5G is seen as a strategic technology for governments and telcos alike.

Jason talks about 5G's application in stadia because it solves problems both for on-location viewers and for the production team themselves. One of the biggest benefits of 5G is its ultra-low latency. With 5G cameras using low-latency codecs like JPEG XS, wireless video stays in the milliseconds, and delivery to fans within the stadium can also be within milliseconds, meaning the longest delay in the whole system is the media workflow required for mixing the video and adding audio and graphics. The panel discusses how this can become a strong selling point for the venue itself. Even supporters who don't go into the stadium itself can come to an adjacent location for good food, drinks, a whole load of like-minded people, massive screens and a second-screen experience like nothing available at home. On top of all of that, on-site betting will be possible, enabled by the low latency.

Moving away from the stadium, North America has already seen some interest in linking the IP-native ATSC 3.0 broadcast network to the 5G network, providing backhaul capabilities for telcos and benefits for broadcasters. If this proves practical, it shows just how pervasive IP will become in the medium-term future.

Jason summarises the near-term financial benefits in two ways: the opportunity for revenue generation by delivering better video quality and faster advertising but, most significantly, he sees removing the need for satellite backhaul as the biggest immediate cost saver for many broadcast companies. This won't all be possible on day one, remembering that to get the major bandwidths, UWB 5G is needed, which is subject to a slower roll-out. UWB uses high-frequency RF, 24GHz and above, which has very little penetration and relies on line-of-sight links. This means that even a single wall can block the signal, but those who can pick it up will get gigabits of throughput.

The panel concludes by answering a number of questions from the audience on 5G’s benefit over fibre to the home, the benefits of abstracting the network out of workflows and much more.

Watch now!
Speakers

Jason Thibeault
Executive Director,
Streaming Video Alliance
Eric Bolten
VP of Business Development,
Zixi
Moderator: Eric Schumacher-Rasmussen
Editor-in-Chief,
Streaming Media

Video: Live-Streaming Best Practices

Live streaming of events can be just as critical as broadcast events in that failure is seldom an option. Whether a sports game, public meeting or cattle auction, the kit needed to put on a good stream shares many of the hallmarks of anything with high production values: multiple cameras, redundant design, ISO recording, separate audio and video mixing and much more. Yet live streaming is often done by one person or just a handful of people. How do you make all this work? How do you guide the client to the best event for their budget? What pitfalls can be avoided if only you’d known ahead of time?

Robert Reinhardt from videoRx took to the stage with Streaming Media to go through his approach to live streaming events of all different types. He covers the soft skills as well as the tech, leaving you with a rounded view of what's necessary. He starts by covering the kit he typically uses, discussing the cameras, encoders, recorders, audio mixer and video mixer. He talks about the importance of getting direct mic feeds so you retain control of the gain structure. Each of these items is brought on-site with a spare which is either held as a backup or, like the recorders, used as an active backup to get two copies of the event and media.

For Robert, Wowza in AWS is at the centre of most of the work he does. His encoders, such as Videon, deliver into the cloud using RTMP, where Wowza can convert to HLS in multiple bitrates. Robert calls out the Videon encoders as well-priced with a friendly and helpful company behind them. We see a short demo of how Wowza can be customised with custom-made add-ins.

Robert says that every live stream needs a source, an encoder, a publishing endpoint, a player, archive recording and reliable internet. A publishing endpoint could be YouTube or Facebook, a CDN or your own streaming server, as in Robert's case. The reliable internet connection issue is dealt with as a follow-up to the initial discovery process. This discovery process helps you work out who matters, such as the stakeholders and product owners, and which other vendors are involved along with their responsibilities. You should also confirm who will be delivering content such as slides and graphics to you, and find out how fixed their budget is.

Budget is a tricky topic as Robert has found that the customer isn't always willing to tell you their budget, but you have to quickly link their expectations for resilience and production values to what they're prepared to spend. Robert shares his advice on detailing the labour and equipment costs for the customer.

A pre-event recce is of vital importance for assessing how suitable the internet connectivity is and making sure that physical access and placement are suitable for your crew and equipment. This would be a good time to test the agreed encoder parameters. Ahead of the visit, Robert suggests sharing samples of bitrates and resolutions with the customer and agreeing on the maximum quality to expect for the bandwidth available.

Robert rounds off the talk by walking us through all of the pre-event checks both several days ahead and on the day of the event.

Watch now!
Speakers

Robert Reinhardt
CTO,
videoRx

Video: The OTT Quality Challenge

Quality of Experience (QoE) has a wider meaning than Quality of Service (QoS), even though viewers have a worse time if either is impaired. What's the difference, and how are companies trying to maximise enjoyment of their services? This panel from Streaming Media brings together Akamai's Will Law, Robert Colantuoni from Disney Streaming Services, CJ Harvey from HBO Max and Ian Greenblatt from JD Power to detail the nuances of Quality of Experience.

The panel starts by outlining some of the differences between QoS and QoE. Ian explains that QoE is about the whole experience of the UI, recommendations, search, rebuffering and much more. QoS can impact QoE but is restricted to the success of the delivery of the stream itself. QoS measures impairments such as rebuffering, macroblocking, video quality, time to play, etc. Whilst poor QoS will usually reduce QoE, there's a lot that a well-written player can do to mitigate the effects of QoS. Good QoE means the viewer can put trust in each of their 'clicks': they know what will happen and won't have to wait.
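
To make the QoS side concrete, here's a minimal TypeScript sketch of collecting a few of those impairment metrics from a bare HTML5 video element. The metric names are illustrative; real players and analytics SDKs expose far richer telemetry.

```typescript
// Minimal sketch: player-side QoS collection via standard
// HTMLMediaElement events. Metric names are illustrative.
interface QosMetrics {
  timeToPlayMs?: number;   // time until the first frame plays
  rebufferCount: number;   // number of stall events
  totalRebufferMs: number; // total time spent stalled
}

function instrument(video: HTMLVideoElement): QosMetrics {
  const metrics: QosMetrics = { rebufferCount: 0, totalRebufferMs: 0 };
  const loadStart = performance.now();
  let stallStart: number | null = null;

  // 'waiting' fires when playback halts because the buffer ran dry
  video.addEventListener('waiting', () => {
    metrics.rebufferCount += 1;
    stallStart = performance.now();
  });

  // 'playing' fires on initial start and on recovery from a stall
  video.addEventListener('playing', () => {
    if (metrics.timeToPlayMs === undefined) {
      metrics.timeToPlayMs = performance.now() - loadStart;
    }
    if (stallStart !== null) {
      metrics.totalRebufferMs += performance.now() - stallStart;
      stallStart = null;
    }
  });

  return metrics;
}
```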

Measuring QoE is not without its challenges; after all, what should you measure? Rebuffering measured second-to-second gives you different results than measuring over 10-second windows, as the sketch below shows. Will Law highlighted CTA 2066, which is a free specification. There is also a QoE best practices white paper from Akamai.
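
Here's a toy TypeScript illustration, not taken from CTA 2066 itself, which scores the same stall data over different window sizes. A window counts as impaired if any stall overlaps it, so a single short stall looks ten times worse when measured over 10-second windows than over 1-second windows.

```typescript
// Toy illustration: the same stalls scored over two window sizes.
type Stall = [start: number, end: number]; // seconds into the session

function impairedWindowRatio(stalls: Stall[], sessionSec: number, windowSec: number): number {
  const windows = Math.ceil(sessionSec / windowSec);
  let impaired = 0;
  for (let w = 0; w < windows; w++) {
    const wStart = w * windowSec;
    const wEnd = wStart + windowSec;
    // A window is impaired if any stall overlaps it at all
    if (stalls.some(([s, e]) => s < wEnd && e > wStart)) impaired++;
  }
  return impaired / windows;
}

// One 500ms stall in a 10-minute session:
const stalls: Stall[] = [[120.0, 120.5]];
console.log(impairedWindowRatio(stalls, 600, 1));  // ≈ 0.0017 (1 of 600 windows)
console.log(impairedWindowRatio(stalls, 600, 10)); // ≈ 0.017  (1 of 60 windows)
```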

“Multi-CDN is the new norm” declares Will Law, as the conversation turns to how players should deal with CDN selection. The challenge is to pick the CDN which works best for the user. Robert points out that a great CDN in one geography may not perform so well in another. A player making a ping-based choice at the beginning of playback is going to make a much worse choice overall than a player which samples each CDN in turn and continues to pick the best. This needs to be done carefully though, giving each CDN time to warm up and usefully exercise its pre-fetch capabilities.
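
As an illustration of the sampling approach, here's a hedged TypeScript sketch; the CDN hostnames, probe path and trial length are invented, and a production player would keep measuring throughout playback rather than deciding once.

```typescript
// Sketch: sample each CDN in turn rather than trusting a one-off ping.
// Hostnames and the probe path are placeholders for illustration.
const cdns = ['https://cdn-a.example.com', 'https://cdn-b.example.com'];

async function measureThroughput(base: string, trialMs = 5000): Promise<number> {
  const start = performance.now();
  let bytes = 0;
  // Keep fetching test segments for the whole trial so the CDN's
  // caches and pre-fetch have time to warm up before being judged.
  while (performance.now() - start < trialMs) {
    const res = await fetch(`${base}/probe/segment.m4s`, { cache: 'no-store' });
    bytes += (await res.arrayBuffer()).byteLength;
  }
  return (bytes * 8) / ((performance.now() - start) / 1000); // bits per second
}

async function pickCdn(): Promise<string> {
  let best = cdns[0];
  let bestRate = 0;
  for (const cdn of cdns) { // sequential, so each CDN gets the full link
    const rate = await measureThroughput(cdn);
    if (rate > bestRate) { best = cdn; bestRate = rate; }
  }
  return best;
}
```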

Where QoE raises itself over QoS is in questions of perception. A good player will not simply target high bitrate, but take into account colour volume depth, resolution and device, to name but three.

There are plenty of questions from the audience covering load balancers, jarring changes between sharp, high budget productions and old episodes of 4:3 TV dramas plus a look-ahead to the next two years of streaming.

Watch now!
Speakers

Will Law
Chief Architect, Edge Technology Group,
Akamai
CJ Harvey
VP Product Management,
HBO Max
Robert Colantuoni
Content Distribution Performance Architect,
Disney Streaming Services
Ian Greenblatt
Managing Director,
J.D. Power
Moderator: Tim Siglin
Contributing Editor,
Streaming Media

Video: Build The New Generation Of Real Time Streaming Solutions With WebRTC

WebRTC continues to live two lives: one of massive daily use in video conferencing in apps from Google, Facebook and many others, and one as a side-lined streaming protocol in the broadcast and streaming industry. WebRTC is now an IETF/W3C standard, is a decade old, and is seeing continued work and innovation from Google, other large companies and smaller specialists pushing it forward.

In this extended Streaming Media Connect video with Millicast's Ryan Jespersen, we explore where WebRTC is up to now, how it can replace RTMP, and how real-time AV1 not only shows the innovation within the technology but also enables several use cases and upcoming technologies such as end-to-end encryption for streaming workflows. The video is split into sections: product demos, technology discussion and overviews of use cases.

A clear first question is why bother with WebRTC at all. Ryan's quick to point out that WebRTC is in daily use not only in many of the big video call apps but also in Clubhouse, the high-scale WebRTC-based interactive audio platform. He also establishes that it's commonly in use on CDNs such as Limelight and Millicast to deliver ultra-low-latency streams to end-users for auctions, gambling and interactive streams, but also as part of broadcast workflows. The NFL, for instance, used WebRTC for low-latency monitoring of 122 cameras for the Super Bowl. As far as end-users are concerned, Ryan sees the as-yet-untapped 'interactivity' market as a way to release value in many verticals, and expects it to be the fastest-growing sector of the streaming industry over the next few years.

Looking back at Flash, Ryan explains that we came from a point where we had a low-latency protocol in the form of RTMP. Its latency was in the realms of 1 to 3 seconds, and it had end-to-end security, encoder control and interactivity. RTMP was displaced due to three main factors: security concerns, rejection of the proprietary nature of the protocol, and the move to HLS, which provided improved scalability and was enthusiastically adopted by CDNs.

WebRTC, Ryan contends, learns from the mistakes of RTMP. WebRTC has ways to recover lost packets, is content agnostic, has a solution for NAT traversal, is non-proprietary and has no plugins. These latter two points address many of the security concerns of RTMP. Now that it's a standard, the W3C has documented many upcoming use cases for this free, open-source technology.

Why, then, is WebRTC not much more prevalent in video streaming services such as Netflix or Peacock? This is a question that Russell Trafford-Jones discussed in this IBC panel with nanocosmos, M2A and VisualOn. One view from that panel is that sub-second latency is lower than some services need. For instance, a public broadcaster may not wish to deliver online faster than it does over the air. Also, there's a quality issue to contend with. One strength of WebRTC is that it always prioritises latency over quality. This is great for face-to-face communication, but tier-1 broadcasters want people to see video in the same quality that left their encoders, and if that means waiting for packets to be recovered instead of showing an impaired signal, that's what they will do. As ever, therefore, this is a business decision that has to pay careful attention to the needs of the viewers, the quality aspirations of both viewers and broadcaster/provider, and the technical pros and cons of each approach.

Ryan talks about real-time AV1 in WebRTC, also covered in this talk

Moving on to AV1, Ryan explains that this royalty-free codec has been sped up significantly since the early days when it required thousands of CPUs for real-time encoding. Using AV1 is a boon for WebRTC for two reasons: screen content and scalable video coding. Screen Content Coding is a set of techniques to adapt encoding specifically for screen content, meaning computer graphics, whether in games or just a computer desktop being shared. With straighter lines and the possibility for many parts of the screen to be identical, or close to identical, to other parts, it's possible to encode screen content much more efficiently if you can detect it and optimise for it.
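
In the browser, the nearest standard hook for this is the MediaStreamTrack contentHint property, which flags a track as screen or text content; whether a given AV1 implementation then engages its Screen Content Coding tools in response is up to the browser. A small sketch:

```typescript
// Sketch: flag a screen capture so the encoder can favour
// screen-content techniques (sharp edges, repeated regions).
async function captureScreen(): Promise<MediaStreamTrack> {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const [track] = stream.getVideoTracks();
  // 'text' (or 'detail') biases the encoder towards spatial fidelity;
  // it is a hint, not a guarantee that SCC tools are used.
  track.contentHint = 'text';
  return track;
}
```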

Ryan moves on to AV1's use in shoring up security. Although a codec and not a security measure in and of itself, AV1's ability to send multiple resolutions in one stream is a big deal for securing communications. Scalable video coding, SVC, is not a new technology, but AV1 is one of the first mainstream, modern codecs which has it by default. This enables an encoder to encode to, say, sub-SD, SD and HD resolutions and send them all at once in one stream. These are not simply three encodes squeezed down the same pipe, but encodes that build on top of each other. The sub-SD layer provides a foundation on which the SD layer adds enhancement information; you need both the sub-SD and SD layers to get SD. Adding the HD layer to those two gives you full-resolution HD. By only delivering the extra information needed for HD, rather than all the underlying data again, a lot of bitrate can be saved. Importantly, by generating all the encoding at the source, you can encrypt at the source for an end-to-end encrypted workflow and still deliver multiple bitrates. Ryan explains that the move to ABR streaming, whether HLS, DASH or otherwise, breaks the end-to-end security model as the need to transcode the media necessitates being able to view it. Using AV1's SVC is one way around the need for mid-workflow transcoding.
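
In browser WebRTC, a layered encode like this can be requested via the scalabilityMode parameter from the WebRTC-SVC spec. The sketch below assumes a browser with AV1 SVC support; 'L3T3' asks for three spatial layers (the sub-SD/SD/HD arrangement described above) plus three temporal layers.

```typescript
// Sketch: request a three-spatial-layer AV1 SVC encode over WebRTC.
function sendAv1Svc(pc: RTCPeerConnection, track: MediaStreamTrack) {
  const tx = pc.addTransceiver(track, {
    direction: 'sendonly',
    // 'L3T3': 3 spatial layers (e.g. sub-SD, SD, HD) x 3 temporal layers
    sendEncodings: [{ scalabilityMode: 'L3T3' }],
  });
  // Put AV1 first in the codec preference list so negotiation favours it
  const caps = RTCRtpSender.getCapabilities('video');
  if (caps) {
    const sorted = [...caps.codecs].sort(
      (a, b) => Number(b.mimeType === 'video/AV1') - Number(a.mimeType === 'video/AV1'));
    tx.setCodecPreferences(sorted);
  }
}
```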

One aspect is missing, though, for modern streaming workflows. If you don't want to do peer-to-peer networking, some form of traffic manipulation will be needed in your CDN and/or delivery infrastructure. This is why, Ryan says, Millicast has proposed that 'secure frames' be added to the WebRTC spec. Whilst this talk doesn't detail their functionality, they add a way of encrypting data twice such that the media can be encrypted for end-to-end workflows, but each hop can also be separately encrypted. This provides just enough access to the metadata of the stream for traffic manipulation, but without allowing access to the underlying media.

As the video comes to an end, Ryan gives us a glimpse into one other upcoming technology that may be added to WebRTC, called WHIP. The RFC explains the intention of WHIP:

The WebRTC-HTTP ingest protocol (WHIP) uses an HTTP POST request to perform a single shot SDP offer/answer so an ICE/DTLS session can be established between the encoder/media producer and the broadcasting ingestion endpoint.

Once the ICE/DTLS session is set up, the media will flow unidirectionally from the encoder/media producer to the broadcasting ingestion endpoint. In order to reduce complexity, no SDP renegotiation is supported, so no tracks or streams can be added or removed once the initial SDP O/A over HTTP is completed.
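
The exchange is small enough to sketch in a few lines of TypeScript; the endpoint URL is a placeholder, and error handling and session teardown (an HTTP DELETE to the resource in the Location header) are omitted.

```typescript
// Sketch of a WHIP publish, following the RFC text above.
async function whipPublish(endpoint: string, stream: MediaStream): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection();
  stream.getTracks().forEach(t => pc.addTransceiver(t, { direction: 'sendonly' }));

  // Single-shot SDP offer/answer over one HTTP POST
  await pc.setLocalDescription(await pc.createOffer());
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/sdp' },
    body: pc.localDescription!.sdp,
  });
  await pc.setRemoteDescription({ type: 'answer', sdp: await res.text() });
  return pc;
}

// Example (placeholder URL): whipPublish('https://example.com/whip/stream-id', localStream);
```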

Ryan closes his video with a demonstration of the Millicast platform and looks at how other use cases might be architected such as watch parties.

Watch now!
Download the slide deck

Speaker

Ryan Jespersen
Head of Sales and Marketing,
Millicast