Video: Reliable and Uncompressed Video on AWS

Uncompressed video in the cloud is an answer to a dream many people are yet to have, but the early adopters of cloud workflows, those who are really embedding the cloud into their production and playout efforts, are already asking for it. AWS have developed a way of delivering it between computers within their infrastructure and have invited a vendor to explain how they get this high-bandwidth content in and out.

On The Broadcast Knowledge we don’t normally feature such vendor-specific talks, but AWS is usually the sole exception to the rule, as what’s done in AWS is typically instructive for other cloud providers too. In this case, AWS is first to market with an in-cloud, high-bitrate video transfer technology, which is in itself highly interesting.

LTN’s Alan Young is first to speak, telling us about traditional broadcast workflows: for example, a stadium works into the broadcaster’s building, which then sends the transmission feeds by satellite or dedicated links to the transmission and streaming systems, often located elsewhere. LTN feel this robs the broadcaster of flexibility and of the cost savings available from lower-cost internet links. The hybrid that Alan sees working in the medium term is feeding the cloud directly from the broadcaster, which allows production workflows to take place in the cloud. After this has happened, the video can either come back to the broadcaster before being passed on to transmission, or go directly to one or more of the transmission systems. Alan’s view is that the interconnecting network between the broadcaster and the cloud needs to be reliable, high quality, low latency and able to handle any bandwidth of signal – even uncompressed.

Once in the cloud, AWS Cloud Digital Interface (CDI) is what allows video to travel reliably from one computer to another. Andy Kane explains the drivers behind creating the product. With the mantra that ‘gigabits are the new megabits’, AWS looked at how they could move high-bandwidth signals around their infrastructure reliably, with the aim of abstracting the difficulty of the infrastructure away from the workflow. The driver for uncompressed in the cloud is reducing re-encoding stages, since each one hits latency hard and, for professional workflows, we’re trying to keep latency as close to zero as possible. By creating a default interface, the hope is that vendors working to CDI will interoperate more easily. LTN estimate their network latency to be around 200ms, which is already a fifth of a second, so much more latency on top of that will creep up towards a second quite easily.

David Griggs explains some of the technical detail of CDI. It can carry data of any format, be that raw packetised video, audio, ancillary data or compressed data, using UDP and multicast between EC2 instances within a placement group. With a target latency of less than one frame, it’s been tested up to UHD at 60fps. It’s based on the Elastic Fabric Adapter, a free option for EC2 instances, which uses kernel-bypass techniques to speed up and better control network transfers. CPU use scales linearly, so where 1080p60 takes 20% of a CPU, UHD would take 80%; each stream is expected to have its own CPU.
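To put those figures in context, here is a rough back-of-envelope calculation of what uncompressed video actually demands, assuming 10-bit 4:2:2 sampling (20 bits per pixel) and ignoring blanking and ancillary overhead, so purely illustrative rather than CDI-specific:

```python
# Back-of-envelope figures for uncompressed video rates and the linear CPU
# scaling described in the talk. Assumes 10-bit 4:2:2 (20 bits/pixel) and
# ignores blanking/ancillary overhead -- illustrative only.

FORMATS = {
    "1080p60": (1920, 1080, 60),
    "UHD p60": (3840, 2160, 60),
}

BITS_PER_PIXEL = 20          # 10-bit 4:2:2
CPU_PER_1080P60 = 0.20       # 20% of one CPU, per the talk

base_pixel_rate = 1920 * 1080 * 60

for name, (w, h, fps) in FORMATS.items():
    pixel_rate = w * h * fps
    gbps = pixel_rate * BITS_PER_PIXEL / 1e9
    cpu = CPU_PER_1080P60 * pixel_rate / base_pixel_rate
    print(f"{name}: ~{gbps:.1f} Gbit/s uncompressed, ~{cpu:.0%} of a CPU")
```

This gives roughly 2.5 Gbit/s for 1080p60 and 10 Gbit/s for UHD60, which is why ‘gigabits are the new megabits’ and why the quoted CPU figures scale by a factor of four between the two formats.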

The video ends with Alan looking at a future where all broadcast functionality can be done in the cloud. For him, it’s an all-virtual future powered by increasingly accessible high-bandwidth internet connectivity coming in at less than the cost of bespoke, direct links. David Griggs adds that this is changing the financing model, moving from a continual effort to maximise utilisation of purchased assets to a pay-as-you-go model using just the tools you need for each production.

Watch now!
Download the slides
Please note, if you follow the direct link the video featured in this article is the seventh on the linked page.

Speakers

David Griggs
Senior Product Manager,
AWS
Andy Kane
Principal Business Development Manager,
AWS
Alan Young
CTO and Head of Strategy,
LTN Global

Video: CDNs: Building a Better Video Experience

With European CDN spend estimated to reach $7bn by 2023, an increase of $1.2bn in only three years, it’s clear there is no relenting in the march towards IP. In fact, that’s a guiding principle of the BBC’s transmission strategy, as we hear from this panel which brings together three broadcasters, beIN, Globo and the BBC, to discuss how they’re using CDNs at the moment and their priorities for the future.

Carlos Octavio introduces Globo’s massive scale of programming for Brazil and Latin America. Producing 26,000 hours of content annually, they aim to differentiate themselves as much with the technology of their offerings as with the content, and this thirst for differentiation drives their CDN strategy. Brazil is a massive country, so covering the footprint is hard. Octavio explains that they have created their own CDN to support Globo Play, based on four tiers running from their two super PoPs in Rio and São Paulo down to edge caches. He shows that they are able to achieve the same response times as the major CDN companies in the region. For overflow capacity, Globo uses a multi-CDN approach.

Bhavesh Patel talks about the sports and news output of beIN, both of which are bursty in nature. Whilst traffic for sporting events can be forecast, with news this is often not possible. This, plus the wide variability of customers’ home bandwidth, are drivers in choosing which CDNs to partner with. Over the next twelve months, Bhavesh explains, beIN’s focus will move to bringing down latency on their system as a whole, not on a service-by-service level. They are also expecting to continue modifying their ABR ladders to follow viewers as they shift from second screens to 60-inch TVs.

The BBC’s approach to distribution is explained by Paul Tweedy. Whilst the BBC is still well known as a linear, public broadcaster, it has been using online distribution for 25 years and continues to innovate in that space. Two important aspects of their strategy are being on as many devices as practical and ensuring the quality of the online experience meets, or is comparable to, that of the linear services. The BBC has been using multiple CDNs for many years now; what changes is the balance and what they use CDNs for. They cover a lot of sports, explains Paul, which leads to short-term scaling difficulties, but long-term scaling difficulties are equally on his mind due to what the BBC calls the ‘glide path to IP’. This is the acknowledgement that, at some point, it won’t be financially viable to run transmitters and IP will be the wise way to use the licence fee on which the BBC depends. Doing this, clearly, will demand IP delivery at many times today’s scale. Yesterday’s article on multicast ABR is one way in which this may be mitigated and fits into a multi-CDN strategy.


Looking at today’s streaming services, Paul and his colleagues aim to get analytics from every player on every device wherever possible. Big data techniques are used to understand these logs along with server-side, client-to-edge and edge-to-origin logs. This information, along with sports schedules, feeds capacity planning, though many news events are much harder to plan for. It’s these unplanned, high-peak events which drive the BBC’s build-up of internal monitoring tools to help them understand what is working well under load and what’s starting to feel the strain, so they can take action to ensure quality is maintained even through these times of intense interest. The BBC manage their capacity with their own CDN, called BIDI, which provides for the baseline needs and allows an easier-to-forecast budget. Multiple third-party CDNs are then the key to providing the variable and peak capacity needed.
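As an illustration of that baseline-plus-burst model, the sketch below splits a forecast peak between an in-house CDN and third-party overflow. The capacity figure, CDN names and traffic weights are invented for the example, not BBC numbers:

```python
# Illustrative only: predictable baseline traffic stays on an in-house CDN,
# peaks overflow to third-party CDNs. All numbers are made up for the example.

BASELINE_CAPACITY_GBPS = 800                   # hypothetical in-house capacity
THIRD_PARTY = {"cdn_a": 0.6, "cdn_b": 0.4}     # hypothetical traffic weights

def plan(forecast_peak_gbps: float) -> dict:
    """Return how much traffic each tier carries at the forecast peak."""
    in_house = min(forecast_peak_gbps, BASELINE_CAPACITY_GBPS)
    overflow = max(forecast_peak_gbps - in_house, 0)
    split = {name: round(overflow * weight, 1) for name, weight in THIRD_PARTY.items()}
    return {"in_house": in_house, **split}

print(plan(650))    # quiet day: everything stays on the in-house CDN
print(plan(2400))   # big sporting event: overflow spread across third parties
```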

As we head into the Q&A, Limelight’s Steve Miller-Jones outlines the company’s strengths, including their focus on adding abilities on top of a ‘typical’ CDN: for instance, running applications on the CDN, which is particularly useful as part of edge compute, and their ability to run WebRTC at scale, which not many CDNs are built to do. The Q&A sees the broadcasters outlining what they particularly look for in a CDN and how they leverage AI. Globo anticipate using AI to help them predict traffic demand, beIN see it providing automated highlights, whilst the BBC see it enabling easier access to their deep archives.

Watch now!
Free registration
Speakers

Carlos Octavio
Head of Architecture and Analytics,
Globo
Bhavesh Patel
Global Digital Director,
beIN MEDIA GROUP
Paul Tweedy
Lead Architect, Online Technology Group,
BBC Design + Engineering
Steve Miller-Jones
Vice President of Product Strategy,
Limelight Networks

Video: Delivering Quality Video Over IP with RIST

RIST continues to gain traction as a way to deliver video reliably over the internet. Reliable Internet Stream Transport finds uses both as part of the on-air signal chain and in enabling broadcast workflows, ensuring that any packet loss is mitigated before a decoder gets around to decoding the stream.

In this video, AWS Elemental’s David Griggs explains why AWS use RIST and how RIST works. He’s introduced by LearnIPvideo.com’s Wes Simpson, who is also co-chair of the RIST Activity Group at the VSF. Wes starts off by explaining how consumer and business video streaming use cases differ from broadcast workflows, two of the pertinent differences being one-directional video and the need for a fixed delay. David explains that one motivator for broadcasters looking to the internet is the need to replace C-band satellite links.

RIST’s original goals were not only to deliver video reliably over the internet but also to ensure interoperability between vendors, something which, in the purest sense of the word, has been missing to date. Along with this, RIST aimed to have a low, deterministic latency, which is vital to make most broadcast workflows practical. RIST was also designed to be agnostic to the carrier, be that internet, satellite or cellular.

Wes outlines how important it is to compensate for packet loss, showing that even in what might seem like low packet-loss situations, you’ll still observe a glitch in the audio or video every twenty minutes or so. But RIST is more than just a way of ensuring your video and audio arrive without gaps: it can also carry other signals such as PTZ camera control, intercom feeds, ad-insertion markers such as SCTE 35, subtitling and timecode. This is one strength which makes RIST a better fit for broadcast than, say, RTMP for delivering a live stream.
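To put a number on that, here is a quick calculation assuming a 10 Mbit/s transport stream carried as seven TS packets per datagram and uncorrelated packet loss; the bitrate and loss rate are chosen for illustration, not taken from the talk:

```python
# How often does a 'low' packet loss rate hit an uncorrected stream?
# Assumes a 10 Mbit/s MPEG-TS with 7 TS packets (1,316 bytes) per datagram
# and independent packet loss -- illustrative figures only.

BITRATE_BPS = 10_000_000
PAYLOAD_BYTES = 7 * 188            # 7 TS packets per datagram
LOSS_RATE = 1e-6                   # one packet in a million lost

packets_per_second = BITRATE_BPS / (PAYLOAD_BYTES * 8)
seconds_between_losses = 1 / (packets_per_second * LOSS_RATE)

print(f"{packets_per_second:.0f} packets/s")
print(f"a lost packet (and a likely glitch) every {seconds_between_losses / 60:.0f} minutes")
```

Even at a one-in-a-million loss rate, roughly 950 packets per second means a lost packet, and potentially a visible or audible glitch, about every 18 minutes unless something like RIST recovers it.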

Wes covers the Main and Simple profiles, which are also explained in more detail in this video from SMPTE and this article. One way in which RIST differs from other technologies is GRE tunnelling, which allows the carriage of any data type alongside RIST and also allows bundling of several RIST streams down a single connection. This provides a great amount of flexibility to support new workflows as they arise.

David closes the video by explaining why RIST is important to AWS. It allows a single protocol to support media transfers to, from and within the AWS network. Also important, David explains, is RIST’s standards-based approach. RIST is built out of many existing standards and RFCs with very little bespoke technology. Moreover, the RIST specification is being formally created by the VSF, and many VSF specifications have gone on to be standardised by bodies such as SMPTE, ST 2110 being a good example. AWS offer the RIST Simple Profile within MediaConnect, with plans to implement the Main Profile in the near future.

Watch now!
Speakers

David Griggs
Senior Product Manager, Media Services,
AWS Elemental
Wes Simpson
RIST AG Co-Chair,
President & Founder, LearnIPvideo.com

Video: Scaling of Live Streaming on the Ingest Side

You can quickly and easily ‘scale up’ in the cloud, but how? Life is seldom as easy as just clicking a button and, when you do find a button, chances are it will only help you scale your outputs. But what happens when you need to scale your inputs? What should you consider when creating your scaling architecture in the cloud? Why is scaling down more difficult than scaling up for a peak? This webinar highlights what you need to know.

Karel Boek from Raskenlund starts by explaining that, while CDNs allow you to scale delivery to end users, there are fewer solutions for scaling up your ingest. Even if you’re streaming using WebRTC, which isn’t cacheable by CDNs, there are companies such as NanoCosmos who will scale that for you. But for ingest, scaling gets bespoke much more quickly.

There is, Karel explains, the option to outsource the entire operation to AWS. For many this is, on the face of it, ideal, as there’s not much work to be done. However, you may need more customisation than such a general service allows and, more importantly, there’s a catch which also affects the second option: creating some of your own workflows but relying on the cloud to scale them.

The problem with cloud autoscalers is that they’re built for HTTP. Karel details how they look at metrics from your servers to determine the point at which they need to scale, metrics such as the number of HTTP connections, CPU usage, bandwidth and so on. Although Google does allow custom metrics, you may quickly find that a key metric such as GPU load isn’t supported, leaving you scaling without the most important data driving the decision making. Worse, when it comes to scaling down, autoscalers don’t understand ingest: as ingest streams stop, the scaler could be looking at a server which is taking a feed but has very low utilisation, so it gets killed and the stream is disrupted.

Building your own system is the only way to fully mitigate or remove these problems, as it puts you in full control of creating a system sensitive to the ‘unusual’ metrics of ingest streams, which are very different from the HTTP file-serving workloads many autoscalers are built around. Karel looks at the elements of such a solution, including load balancers, proxy servers and an algorithm which listens to metrics and makes up- and down-scaling decisions.
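As a minimal sketch of what such an algorithm might look like, the code below scales up on GPU load, refuses to kill servers that still have live ingests, and caps the fleet size. The metric names, thresholds and helper functions are invented for illustration and are not taken from the talk:

```python
# Minimal sketch of an ingest-aware scaling decision; thresholds, metric names
# and helpers are hypothetical placeholders, not a production design.

MAX_SERVERS = 20            # hard cap so a runaway algorithm can't scale you into debt
SCALE_UP_GPU_LOAD = 0.75    # add capacity when average GPU load passes this
SCALE_DOWN_GPU_LOAD = 0.30  # consider removing capacity below this

def launch_server() -> None:
    print("scale up: launching a new ingest server")      # stub for a cloud API call

def terminate_server(server_id: str) -> None:
    print(f"scale down: terminating {server_id}")         # stub for a cloud API call

def decide(servers: list) -> None:
    """Each server dict carries the custom metrics an HTTP autoscaler ignores."""
    avg_gpu = sum(s["gpu_load"] for s in servers) / len(servers)

    if avg_gpu > SCALE_UP_GPU_LOAD and len(servers) < MAX_SERVERS:
        launch_server()
        return

    # Only remove servers that have fully drained: a generic autoscaler would
    # happily kill a low-CPU box that still has one live feed on it.
    drained = [s for s in servers if s["active_ingests"] == 0]
    if drained and avg_gpu < SCALE_DOWN_GPU_LOAD:
        terminate_server(drained[0]["id"])

decide([
    {"id": "i-01", "gpu_load": 0.10, "active_ingests": 2},
    {"id": "i-02", "gpu_load": 0.05, "active_ingests": 0},
])
```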

Karel advises writing down the logic for when and how to scale up so that it’s clear and well thought through. Similarly, you need a strategy for load balancing (i.e. why is round-robin the right or wrong choice for you?) and a plan for scaling down. In order to scale down with minimal impact, you need to scale up well: use as many clues as you can to group similar feeds onto similar servers. This means a whole server is more likely to become free than if you mix and match long-lived and short-lived feeds on the same server.
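One way to read that advice is to place feeds with similar expected lifetimes together, so short-lived events drain a server completely rather than pinning it open alongside a 24/7 channel. The grouping key and per-server capacity below are assumptions for illustration; the talk doesn’t prescribe them:

```python
# Illustrative placement sketch: group incoming feeds by expected lifetime so
# short-lived feeds share servers that can drain and be scaled down.
# The capacity figure and grouping key are assumptions, not from the talk.

from collections import defaultdict

STREAMS_PER_SERVER = 4   # assumed capacity per ingest server

def place(feeds: list) -> dict:
    """feeds: dicts like {'name': 'match-1', 'expected_hours': 3}."""
    groups = defaultdict(list)
    for feed in feeds:
        key = "long-lived" if feed["expected_hours"] >= 12 else "short-lived"
        groups[key].append(feed["name"])

    placement = {}
    for key, names in groups.items():
        for i, name in enumerate(names):
            placement[name] = f"{key}-server-{i // STREAMS_PER_SERVER}"
    return placement

print(place([
    {"name": "news-24x7", "expected_hours": 8760},
    {"name": "match-1", "expected_hours": 3},
    {"name": "match-2", "expected_hours": 3},
]))
```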

Finally, Karel details the three main pitfalls: scaling down, the time taken to scale up (can you wait 3 minutes?), and the need for upper limits on your scaling to prevent your algorithm autoscaling you into debt by spinning up tens, hundreds or thousands of unnecessary servers.

Watch now!
Speakers

Karel Boek
CEO,
Raskenlund