Video: Low-Earth Orbit Satellites

Low-latency internet connectivity from anywhere would be transformational for many use cases. Recently we’ve heard a lot about 5G because of its promise of low-latency connectivity, but the reach of each cell, particularly the high-bandwidth cells, is severely limited. Low Earth Orbit satellites offer both low ping times and global coverage. This explains why there seems to be a gold rush in the industry, with so many companies working to launch fleets of thousands of satellites.

Starting off this SMPTE Toronto Section video, Michael Martin from Metercor sets the scene and looks at SpaceX’s Starlink service. There are currently around 6,000 satellites in orbit, only 40% of which are fully functional; many are impaired and, for instance, in elliptical orbits. As a reminder, Michael looks at geostationary satellites, the best known in broadcast circles since they stay in the same position in the sky, allowing for static dishes. Remembering that orbital speed is defined by your distance from the centre of the Earth, geostationary satellites orbit at approximately 36,000km, a distance that takes light 120ms to cover, meaning the best latency to and from such a satellite is around 250ms, which is not considered a great internet ping time. GPS satellites are closer in and orbit in only 12 hours. It’s the low-Earth orbit (LEO) satellites that are close enough to provide low-latency network connectivity.

Starlink has satellites planned for orbits 550km and 1,100km high, which are only 1.8-4ms away at the speed of light. This means data can take approximately 10ms to get from you to its destination if going via satellites in the 1,100km orbital shell. These shells are used to keep satellites away from each other to avoid collisions. On top of low-latency internet links, Starlink aims to deliver downlink speeds of 1Gbps when fully deployed and has already been seen to deliver over 300Mbps.
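
To put those figures in context, here is a back-of-envelope sketch (not from the talk) of the one-way propagation delay implied by the altitudes mentioned above, assuming a straight path to a satellite directly overhead; real routes and processing add to these times.

```python
# Rough propagation delays for the orbital altitudes quoted above.
# Straight-line path to a satellite directly overhead; real paths are longer.
C_KM_PER_MS = 299_792.458 / 1000  # speed of light in km per millisecond

ORBITS_KM = {
    "Geostationary": 36_000,
    "Starlink 1,100km shell": 1_100,
    "Starlink 550km shell": 550,
}

for name, altitude_km in ORBITS_KM.items():
    up_ms = altitude_km / C_KM_PER_MS      # ground to satellite
    hop_ms = 2 * up_ms                     # up and back down to the ground
    print(f"{name}: {up_ms:.1f} ms up, {hop_ms:.1f} ms per ground-to-ground hop")

# Geostationary works out at roughly 120 ms each way; the LEO shells at about
# 1.8 ms and 3.7 ms, which is why a ~10 ms end-to-end figure is plausible.
```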

Other upcoming solutions include Kuiper, backed by Amazon, which is aiming for 3,200 satellites delivering around 400Mbps of downlink speed, and OneWeb, which has already deployed many satellites and has secured funding to complete its fleet. China has announced its intention to put 10,000 satellites into orbit and Samsung has intimated it intends a 2030 launch of a fleet of satellites. Whilst Samsung’s intention is not fully clear, it may be linked with the plans for 6G which are already being worked on.

Michael finishes his section with a look at some use cases, like using LEO satellites to provide a downlink into a 5G system for a remote village or small town. This would free locations from requiring fibre both for delivery of the internet to the ISP and for delivery around the town, which would be 5G based. First responders are another example of people who would benefit from always-on, any-location connectivity. Michael’s last point is that although we can be very interested in how LEOs deliver the service we want, most people still only care about accessing the content they are interested in; a good internet connection should be transparent to them.

The second presentation in the video is from Telesat, whose Telesat Lightspeed constellation of LEO satellites is planned for launch next year, with trials starting in 2023 and full service available in Q4 2024. The fleet will feature inter-satellite communication via laser links and will initially focus on serving Canada before becoming global. Its aim is to become ‘virtual fibre’, tying in with Michael’s point about connectivity being invisible to users.

Ted Baker and Thomas Detlor take us through the envisaged use cases such as backhaul, government, backup/disaster recovery and connectivity to ships and oil platforms. They highlight the broadcast use cases of linking stadia and buildings and creating backup links with their layer 2 service, on which you can build your layer 3 network.

They finish by looking at a checklist comparing all of the present and upcoming services, installation methods and upcoming equipment for up/downlinks, along with some comments on how Telesat differs from Starlink. The session ends with a Q&A covering many topics including concerns about in-orbit collisions and rain fade.

Watch now!
Speakers

Michael Martin
Vice President of Technology,
Metercor, Inc.
Thomas Detlor
Product Analyst,
Telesat
Ted Baker
Account Manager, Broadcast,
Telesat
Thomas Portois
Senior Product Manager,
Telesat

Moderator: Troy English
CTO,
Ross Video Ltd.

Video: Live-Streaming Best Practices

Live streaming of events can be just as critical as broadcast events in that failure is seldom an option. Whether a sports game, public meeting or cattle auction, the kit needed to put on a good stream shares many of the hallmarks of anything with high production values: multiple cameras, redundant design, ISO recording, separate audio and video mixing and much more. Yet live streaming is often done by one person or just a handful of people. How do you make all this work? How do you guide the client to the best event for their budget? What pitfalls can be avoided if only you’d known ahead of time?

Robert Reinhardt from videoRx took to the stage with Streaming Media to go through his approach to live streaming events of all different types. He covers the soft skills as well as the tech, leaving you with a rounded view of what’s necessary. He starts by covering the kit he will typically use, discussing the cameras, encoders, recorders, audio mixer and video mixer. He talks about the importance of getting direct mic feeds so you retain control of the gain structure. Each of these items is brought on-site with a spare which is either held as a backup or, like the recorders, used as an active backup to get two copies of the event and media.

For Robert, Wowza in AWS is at the centre of most of the work he does. His encoders, such as the Videon, deliver into the cloud using RTMP, where Wowza can convert to HLS in multiple bitrates. Robert calls out the Videon encoders as well priced, with a friendly and helpful company behind them. We see a short demo of how Wowza can be customised with custom-made add-ins.
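
As a rough illustration of that contribution leg (a sketch under assumed settings, not Robert’s actual configuration), the snippet below pushes a test signal over RTMP to a hypothetical Wowza ingest URL, leaving the HLS packaging and multi-bitrate ladder to the server.

```python
# Minimal RTMP contribution sketch: the ingest URL, bitrate and GOP length are
# hypothetical; Wowza (or any RTMP-capable origin) handles the HLS packaging.
import subprocess

INGEST_URL = "rtmp://wowza.example.com/live/myEvent"  # hypothetical endpoint

cmd = [
    "ffmpeg",
    "-re", "-f", "lavfi", "-i", "testsrc2=size=1280x720:rate=30",  # stand-in for a camera feed
    "-re", "-f", "lavfi", "-i", "sine=frequency=440",              # stand-in for the mic feed
    "-c:v", "libx264", "-preset", "veryfast", "-b:v", "3500k",
    "-g", "60", "-keyint_min", "60",                               # 2-second GOP at 30 fps
    "-c:a", "aac", "-b:a", "128k",
    "-f", "flv", INGEST_URL,                                       # RTMP carries FLV framing
]
subprocess.run(cmd, check=True)
```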

Robert says that every live stream needs a source, an encoder, a publishing endpoint, a player, archive recording and reliable internet. A publishing endpoint could be YouTube or Facebook, a CDN or your own streaming server, as in Robert’s case. The reliable internet connection issue is dealt with as a follow-up to the initial discovery process. This discovery process is there to help you work out who matters, such as the stakeholders and product owners, which other vendors are involved and what their responsibilities are. You should also confirm who will be delivering content such as slides and graphics to you, and find out how fixed their budget is.

Budget is a tricky topic; Robert has found that customers aren’t always willing to tell you their budget, but you have to quickly link their expectations of resilience and production values to what they are prepared to spend. Robert shares his advice on detailing the labour and equipment costs for the customer.

A pre-event recce is of vital importance for assessing how suitable the internet connectivity is and for making sure that physical access and placement are suitable for your crew and equipment. This is also a good time to test the agreed encoder parameters. Ahead of the visit, Robert suggests sharing samples of bitrates and resolutions with the customer and agreeing on the maximum quality to expect for the bandwidth available.
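
For example, a ladder of the kind you might share ahead of the recce could look like the sketch below; the rungs and the 1.5× uplink headroom are illustrative assumptions, not figures from the talk.

```python
# Hypothetical bitrate/resolution ladder to agree with the customer before the
# site visit, plus a rough uplink requirement if only the top rung is contributed
# and the lower rungs are derived server-side.
LADDER = [
    {"name": "1080p", "resolution": "1920x1080", "video_kbps": 5000, "audio_kbps": 128},
    {"name": "720p",  "resolution": "1280x720",  "video_kbps": 3000, "audio_kbps": 128},
    {"name": "480p",  "resolution": "854x480",   "video_kbps": 1200, "audio_kbps": 96},
    {"name": "360p",  "resolution": "640x360",   "video_kbps": 700,  "audio_kbps": 64},
]

def uplink_needed_mbps(top_rung: dict, headroom: float = 1.5) -> float:
    """Uplink to request on-site, with headroom for protocol overhead and spikes."""
    return (top_rung["video_kbps"] + top_rung["audio_kbps"]) * headroom / 1000

print(f"Ask the venue for at least {uplink_needed_mbps(LADDER[0]):.1f} Mbps upstream")
```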

Robert rounds off the talk by walking us through all of the pre-event checks both several days ahead and on the day of the event.

Watch now!
Speakers

Robert Reinhardt Robert Reinhardt
CTO,
videoRx

Video: Building Media Systems in the Cloud: The Cloud Migration Challenge

Peter Wharton from TAG V.S. starts us on our journey to understanding how we can take real steps towards deploying a project in the cloud. He outlines five steps: evaluation, building a knowledge base, building for scale, optimisation and, finally, ‘realising full cloud potential’. Peter says that the first step, which he dubs ‘Will It Work?’, is about scoping out what you see cloud delivering to you: what is the future that the move to cloud will give you? You can then evaluate the activities in your organisation that are viable options to move to the cloud, with the aim of finding quick, easy wins.

Peter’s next step in embracing the cloud in a company is to begin the transformation in earnest by owning it and starting the move not through technical actions, but through the people. It’s a case of addressing the culture of your organisation, changing the lens through which people think and, for larger companies, creating a ‘centre of excellence’ around cloud deployments. A big bottleneck for some organisations is siloing, which is sometimes deliberate and sometimes unintentional. When a broadcast workflow needs to go to the cloud, this can bring together many different parts of the company, often more than if it were on-prem, so Peter identifies ‘cross-functional leadership’ as an important step in starting the transformation. He also highlights cost modelling as an important factor at this stage. A clear understanding of the costs, and savings, that will be realised in the move is an important motivational factor, but should also be used to correctly set expectations. Not getting the modelling right at this stage can significantly weaken traction as the process continues. Peter talks about the importance of creating ‘key tenets’ for your migration.

Direct link

End-to-end migration is the promise, if you can bring your organisation along with you on this journey, when you start looking at actually bringing full workflows into the cloud and deploying them in production. To do that, Peter suggests validating your solution when working at scale, finding ways of testing it well above the levels you need on day one. Another aspect is creating workflows that are cloud-first, rethinking your current workflows for the cloud rather than taking existing workflows and making the cloud follow the same procedures; to do so would be to miss out on much of the value of the cloud transition. This step will mark the start of you seeing the value of setting your key tenets, but you should feel free to ‘break rules and make new ones’ as you adapt to your changing understanding.

The last two stages revolve around optimising and achieving the ‘full potential’ of the cloud. As such, this means taking what you’ve learnt to date and using that to remake your solutions in a better, more sustainable way. Doing this allows you to hone them to your needs but also introduce a more stable approach to implementation such as using an infrastructure-as-code philosophy. This is all topped off by the last stage which is adding cloud-only functionality to the workflows you’ve created such as using machine learning or scaling functions in ways that are seldom practical for on-prem solutions.
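
As a toy example of that infrastructure-as-code philosophy (an assumption-laden sketch, not something from Peter’s talk), a piece of cloud media infrastructure can be declared in version-controlled Python using the AWS CDK; the stack and bucket names below are hypothetical.

```python
# Minimal AWS CDK v2 (Python) sketch: declaring a versioned media archive bucket
# in code means "cdk deploy" rebuilds the same infrastructure repeatably,
# rather than it being clicked together by hand in a console.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3

class MediaArchiveStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Versioning makes accidental overwrites of archived media recoverable.
        s3.Bucket(self, "ArchiveBucket", versioned=True)

app = cdk.App()
MediaArchiveStack(app, "media-archive")
app.synth()
```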

These steps are important for any organisation wanting to embrace the cloud, but Peter reminds us that it’s not just end users who are making the transition; vendors are too. Most technology suppliers have products that pre-date today’s cloud technologies and are having to make their own journey, which can start with short-term fixes to ‘make it work’ and move their existing code to the cloud. They then need to work on their pricing models and cloud security, which Peter calls the ‘Make it Viable’ stage. It’s only then that they can start to leverage cloud capabilities such as scaling properly and, if they are able to progress further, become a cloud-native solution and fully cloud-optimised. However, these latter two steps can take a long time for some suppliers.

Peter finishes the video by talking about the difference in perspective between legacy vendors and cloud-native vendors. For example, legacy vendors may still be thinking about site visits, whereas cloud-native vendors don’t need them, and they will be charging on a subscription model rather than large capex pricing. Peter summarises his talk by underlining the need to set your vision, agree on your key tenets for migration, invest in the team, keep your teams accountable and small, and seek partners that not only understand the cloud but also match your aims for the future.

Watch now!

Speakers

Peter Wharton
Director of Corporate Strategy,
TAG V.S.

Video: AES67 Beyond the LAN

It can be tempting to treat a good quality WAN connection like a LAN. But even if it has a low ping time and doesn’t drop packets, when it comes to professional audio like AES67, you can’t help but uncover the differences. AES67 was designed for transmission over short distances, meaning extremely low latency and low jitter. However, there are ways to deal with this.

Nicolas Sturmel from Merging Technologies is working as part of the AES SC-02-12M working group, which has been defining the best ways of working to enable easy use of AES67 on the WAN since the summer. The aims of the group are to define what you should expect to work with AES67, explain how you can improve your network connection and give guidance to manufacturers on further features needed.

WANs come in a number of flavours: a fully managed WAN, like many larger broadcasters have, which is entirely under their control; WANs operated under SLA by third parties, which provide less control but may present a reduced operating cost; and, the lowest-cost option, the internet.

He starts by outlining the fact that AES67 was written expecting short links on a private network that you can completely control, which causes problems when using the WAN or internet, with long-distance links on which your bandwidth or choice of protocols can be limited. If you’re contributing into the cloud, then you have an extra layer of complication on top of the WAN: virtualised computers are another place where jitter and uncertain timing can enter.

Direct link

The good news is that you may not need to use AES67 over the WAN. Ask whether you really need AES67’s precise timing (for lip-sync, for example), PCM quality and latencies from 250ms down to as little as 5 milliseconds, or whether other protocols such as ACIP would do, he explains. The problem is that any ping on the internet, even to somewhere fairly close, can easily have a round trip time varying between, say, 16 and 40ms. This means you’re guaranteed 8ms of one-way delay, but any one packet could be as late as 20ms. This variation in timing is known as Packet Delay Variation (PDV).
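
Working through those numbers (a simple illustration assuming a symmetric path):

```python
# A round trip that varies between 16 ms and 40 ms implies, on a symmetric path,
# a one-way delay of at least 8 ms, a worst case of about 20 ms, and so roughly
# 12 ms of packet delay variation for a receive buffer to absorb.
rtt_min_ms, rtt_max_ms = 16, 40

one_way_min_ms = rtt_min_ms / 2   # 8 ms: the delay you are "guaranteed"
one_way_max_ms = rtt_max_ms / 2   # 20 ms: how late a single packet might arrive
pdv_ms = one_way_max_ms - one_way_min_ms

print(f"One-way delay {one_way_min_ms:.0f}-{one_way_max_ms:.0f} ms, PDV ~{pdv_ms:.0f} ms")
```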

Not only do we need to find a way to transmit AES67, but also PTP. The Precision Time Protocol has ways of coping with jitter and delay, but these don’t work well on WAN links, where the delay in one direction may be different from the delay in the other. PTP also isn’t built to deal with the higher delay and jitter involved. PTP over the WAN can be done and is a way to deliver a service, but using a GPS receiver at each location is a much better solution, only hampered by cost and one’s ability to see enough of the sky.

The internet can lose packets. Given a few hours, the internet will nearly always lose packets. To get around this problem, Nicolas looks at using FEC, whereby you constantly send redundant data. FEC can send up to around 25% extra data so that if any is lost, the extra information can be used to determine the lost values and reconstruct the stream. Whilst this is a solid approach, computing the FEC adds delay and the extra data being constantly sent adds a fixed uplift to your bandwidth needs. For circuits that have very few issues, this can seem wasteful, but a fixed percentage can also be advantageous for circuits where a predictable bitrate is much more important. Nicolas also highlights that RIST, SRT and ST 2022-7 are other methods that can work well; he talks about these at greater length in his talk with Andreas Hildebrand.
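
To make the idea concrete, here is a minimal, illustrative row-parity sketch (not the scheme of any particular product or standard): sending one XOR parity packet per block of four data packets costs the 25% overhead mentioned above and lets the receiver rebuild any single lost packet in the block.

```python
# Toy XOR-parity FEC: 1 parity packet per BLOCK data packets (25% overhead for
# BLOCK = 4); any single missing packet in a block can be reconstructed.
from functools import reduce

BLOCK = 4  # data packets per FEC block (all assumed to be the same length here)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets: list[bytes]) -> bytes:
    """XOR all data packets in the block to form the parity packet."""
    return reduce(xor_bytes, packets)

def recover(received: dict[int, bytes], parity: bytes) -> dict[int, bytes]:
    """If exactly one packet of the block is missing, rebuild it from the rest."""
    missing = [i for i in range(BLOCK) if i not in received]
    if len(missing) == 1:
        received[missing[0]] = reduce(xor_bytes, received.values(), parity)
    return received

# Example: packet 2 is lost in transit but is recovered via the parity packet.
packets = [bytes([i]) * 8 for i in range(BLOCK)]
parity = make_parity(packets)
arrived = {i: p for i, p in enumerate(packets) if i != 2}
assert recover(arrived, parity)[2] == packets[2]
```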

Nicolas finishes by summarising that your solution will need to be sent over unicast IP, possibly in a tunnel, with each end locked to GNSS, high buffers to cope with jitter and, perhaps most importantly, the output of a workflow analysis to find out which tools you need to deploy to meet your actual needs.

Watch now!
Speaker

Nicolas Sturmel
Network Specialist,
Merging Technologies