Video: Is IP Really Better than SDI?

Is SDI so bad? With the industry as a whole avidly discussing and developing IP technology, all the talk of the benefits of IP can seem like a dismissal of SDI. SDI served the broadcast industry very well for decades, so what’s suddenly so wrong with it? Of course, SDI still has a place and even some benefits over IP. Whilst IP is definitely a great technology to take the industry forward, there’s nothing wrong with using SDI in the right place.

Ed Calverley from Q3Media takes an honest look at the pros and cons of SDI. He isn't afraid to explain where SDI fits better than IP, making this a very valuable video for anyone who has to choose technology for a small or medium project. Whilst many large projects nowadays are best done in IP, Ed looks at why that is and, perhaps more importantly, what's tricky about making it work, highlighting the differences when doing the same project in SDI.

This video is the next in IET Media's series of educational videos and follows on nicely from Gerard Phillips' talk on Network Design for uncompressed media. Here, Ed recaps the reasons SDI has been so successful and universally accepted in the broadcast industry, as well as looking at SDI routing. This grounding is essential for understanding the benefits and compromises that come with the move to IP.

SDI is a unidirectional technology, something which makes it pleasantly simple but, at scale, makes life difficult in terms of cabling. Not only is it unidirectional, it can only really carry one video at a time. Before IP, this didn't seem much of a restriction, but as densities have increased, cabling has often been a limiting factor on the size of equipment – not unlike the reason 3.5mm audio jacks have started to disappear from some phones. Moreover, anyone who's had to plan an expansion of an SDI router by adding a second one has come across the vast complexity of doing so. Physically it can be very challenging, and it will involve tie-lines, which bring a management overhead of their own and take up valuable I/O that could have been used for new inputs and outputs but is instead needed for tying the routers together. Ed uses a number of animations to show how IP significantly improves media routing.
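
To make the tie-line penalty concrete, here's a toy calculation. All the numbers are hypothetical, purely for illustration; real router sizing is far more nuanced:

```python
# Toy illustration (hypothetical numbers): linking two SDI routers with
# tie-lines sacrifices I/O that could otherwise carry new sources/destinations.

ROUTER_SIZE = 128          # each router is 128x128 (inputs x outputs)
TIE_LINES_EACH_WAY = 16    # ports reserved for tie-lines in each direction

# A single 128x128 router: any input can reach any output, no management needed.
single_router_inputs = ROUTER_SIZE

# Two routers tied together: each router gives up ports to the tie-lines...
usable_inputs = 2 * (ROUTER_SIZE - TIE_LINES_EACH_WAY)

# ...and only 16 signals can cross between routers at any one time, so which
# source reaches which far-side destination now needs active management.
print(f"One router:  {single_router_inputs} inputs, full any-to-any routing")
print(f"Two routers: {usable_inputs} inputs, but only "
      f"{TIE_LINES_EACH_WAY} simultaneous cross-router routes each way")
```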

In the second part of the video, we start to look at the pros and cons of key topics including latency, routing behaviour, virtual routing, bandwidth management, UHD and PTP. With all this said, Ed concludes that IP is definitely the future for the industry but, on a project-by-project basis, we shouldn't dismiss the advantages SDI still has, as it could well be the right option.

Watch now!
Speakers

Ed Calverley
Trainer & Consultant
Q3Media.co.uk
Moderator: Russell Trafford-Jones
Exec Member, IET Media Technical Network
Editor, The Broadcast Knowledge
Manager, Services & Support, Techex

Video: Remote Production in the Cloud for DR and the New Normal

How does NDI fit into the renewed interest in working remotely, operating broadcast workflows remotely and moving those workflows into the cloud? Whilst SRT and RIST have ignited imaginations over how to reliably ingest content into the cloud, an MPEG AVC/HEVC workflow doesn't make sense due to the latencies involved. NDI is a lightly compressed technology with latency low enough to make cloud workflows feel almost immediate.

Vizrt’s Ted Spruill and Jorge Dighero join moderator Russell Trafford-Jones to explore the challenges the pandemic has thrown up and the practical ways in which NDI can meet many of the needs of cloud workflows. We saw in the talk Where can SMPTE ST 2110 and NDI co-exist? that NDI is a tool to get things done, just like ST 2110, and that both have their place in a broadcast facility. This video takes that as read and looks at the practical abilities of NDI both in and out of the cloud.

Taking the form of a demo followed by extensive Q&A, this talk covers latency, running NDI in the cloud, networking considerations such as layer 2 and layer 3 networks, ease of discovery and routing, contribution into the cloud, use of SRT and RIST, comparison with JPEG XS, speed of deployment and much more!

Click to watch this no-registration, free webcast at SMPTE
Speakers

Jorge Dighero
Senior Solutions Architect,
Vizrt
Ted Spruill
Sales Manager-US Group Stations,
Vizrt
Moderator: Russell Trafford-Jones
Editor, TheBroadcastKnowledge.com
Director of Education, Emerging Technologies, SMPTE
Manager, Support & Services, Techex

Video: Reliable, Live Contribution over the Internet

For so long we’ve been desperate for a cheap and reliable way to contribute programmes into broadcasters, but it’s only in recent years that using the internet for live-to-air streams has been practical for anyone who cares about staying on-air. Add to that an increasing need to get live video into, and out of, cloud workflows, and it’s easy to see why there’s so much energy going into making the internet a reliable part of the broadcast chain.

This free on-demand webcast co-produced by The Broadcast Knowledge and SMPTE explores the two popular open technologies for contribution over the internet, RIST and SRT. There are many technologies that pre-date those, including Zixi, Dozer and QVidium’s ARQ, to name but three. However, as the talk covers, it’s only in the last couple of years that the proprietary players have come together with other industry members to work on an open and interoperable way of doing this.

Russell Trafford-Jones, from UK video-over-IP specialist Techex, explores this topic, starting from why we need anything more than a bit of forward error correction (FEC) and moving on to understanding how these technologies apply to networks other than the internet.

This webcast looks at how SRT and RIST work, and their differences and similarities. SRT is a well-known protocol created and open sourced by Haivision which predates RIST by a number of years. Haivision have done a remarkable job of explaining to the industry the benefits of using the internet for contribution as well as proving that top-tier broadcasters can rely on it.

RIST is more recent on the scene: a group effort from companies including Haivision, Cobalt, Zixi and AWS Elemental, to name just a few of the main members, with the aim of making a vendor-agnostic, interoperable protocol. Despite the group being only three years old, Russell explains the two specifications it has already delivered, which bring RIST broadly up to feature parity with SRT, and the group is closing in on 100 members.

Delving into the technical detail, Russell looks at how ARQ, the retransmission technology fundamental to all these protocols, works, as well as how to navigate firewalls, the benefits of GRE tunnels and much more!
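
As a rough illustration of the NACK-based ARQ idea these protocols share, here's a minimal sketch. It's not either protocol's actual wire format or API, just the core mechanism: the receiver spots gaps in sequence numbers and asks for only the missing packets to be resent:

```python
# Minimal sketch of NACK-based ARQ, as used (in far more refined form) by
# SRT and RIST: the receiver detects gaps in sequence numbers and asks the
# sender to retransmit just the missing packets from its buffer.

class ArqSender:
    def __init__(self):
        self.seq = 0
        self.buffer = {}          # seq -> payload, kept for retransmission

    def send(self, payload):
        packet = (self.seq, payload)
        self.buffer[self.seq] = payload
        self.seq += 1
        return packet             # in reality: sent over UDP

    def retransmit(self, missing_seq):
        return (missing_seq, self.buffer[missing_seq])

class ArqReceiver:
    def __init__(self):
        self.expected = 0
        self.received = {}

    def on_packet(self, packet):
        seq, payload = packet
        self.received[seq] = payload
        # Report any gap so the sender can resend (a NACK).
        missing = [s for s in range(self.expected, seq) if s not in self.received]
        self.expected = max(self.expected, seq + 1)
        return missing

# Simulate a lost packet: p1 never arrives.
sender, receiver = ArqSender(), ArqReceiver()
p0, p1, p2 = sender.send(b"I"), sender.send(b"II"), sender.send(b"III")
receiver.on_packet(p0)
nacks = receiver.on_packet(p2)          # gap detected -> NACK for seq 1
for seq in nacks:
    receiver.on_packet(sender.retransmit(seq))
assert receiver.received[1] == b"II"    # recovered without resending everything
```

In practice, of course, both protocols add timing, buffering and congestion considerations on top of this basic request-resend loop, which is exactly the detail Russell digs into.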

The webcast is free to watch with no registration required.

Watch now!
Speakers

Russell Trafford-Jones
Manager, Support & Services, Techex
Director of Education, Emerging Technologies, SMPTE
Editor, The Broadcast Knowledge

On Demand Webinar: The Technology of Motion-Image Acquisition

A lot of emphasis is put on the tech specs of cameras, but this misses a lot of what makes motion-image acquisition an art form as much as a science. To understand the physics of lenses, it’s vital we also understand the psychology of perception. And to understand what ‘4K’ really means, we need to understand how the camera records the light and how it stores the data. Getting a grip on these core concepts allows us to navigate a world of mixed messages where every camera manufacturer, from webcam to phone, from DSLR to cinema, is vying for our attention.

In the first of four webinars produced in conjunction with SMPTE, Russell Trafford-Jones from The Broadcast Knowledge welcomes SMPTE Fellows Mark Schubin and Larry Thorpe to explain these fundamentals, providing a great intro for those new to the topic and filling in some blanks for those who have heard it before!

Russell will start by introducing the topic and exploring what makes some cameras suitable for some types of shooting, say, live television, and others for cinema. He’ll talk about the place of smartphones and DSLRs in our video-everywhere culture. Then he’ll examine the workflows needed for different genres, which drive the definitions of these cameras and lenses; if your live TV show is going to be seen two seconds later by three million viewers, that will determine many features of your camera that digital cinema doesn’t have to deal with, and vice versa.

Mark Schubin will be talking about lighting, optical filtering, sensor sizes and lens mounts. Mark spends some time explaining how light is created and made up, whereby the ‘white’ we see may comprise thousands of wavelengths of light, or just a few. The type of light can therefore be important for lighting a scene, and knowing about it important for deciding on your equipment. The sensors that receive this light are also well worth understanding. It’s well known that there are red-, green- and blue-sensitive pixels, but less well known is that there is a microlens in front of each one. Granted, the lens we think about most is the pricey one, but it is just one among several million. Mark explains why these microlenses are there and the benefits they bring.
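
To picture those red-, green- and blue-sensitive pixels: most single-sensor cameras arrange them in a colour filter array, commonly the Bayer pattern. The sketch below assumes an RGGB Bayer layout purely for illustration; the video itself goes deeper than this:

```python
# Minimal sketch of a Bayer colour filter array, a common single-sensor
# arrangement of red-, green- and blue-sensitive photosites. Each photosite
# (sitting behind its own microlens) sees only one colour; full RGB per pixel
# is later reconstructed by interpolating neighbours ("demosaicing").

def bayer_filter_colour(row, col):
    """Colour seen by the photosite at (row, col) in an RGGB Bayer mosaic."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print a 4x4 corner of the mosaic. Note green appears twice as often,
# roughly matching the eye's greater sensitivity to green light.
for row in range(4):
    print(" ".join(bayer_filter_colour(row, col) for col in range(4)))
```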

Larry Thorpe, from Canon, will take on the topic of lenses, starting from the basics of what we’re trying to achieve with a lens and working up to explaining why we need so many pieces of glass to make one. He’ll examine the important aspects of a lens which determine its speed and focal length. Prime and zoom are important types of lens to understand, as each represents a compromise. Furthermore, we see that zoom lenses take careful design to ensure that focus is maintained throughout the zoom range, also known as tracking.

Larry will also examine the outputs of the cameras, the most obvious being the SDI out of the CCU of broadcast cameras and the raw output from cinema cameras. For film use, maintaining quality is usually paramount so, where possible, nothing is discarded, hence the creation of ‘raw’ files, so called because they record, as close as practical, the actual sensor data received. The broadcast equivalent is predominantly Y’CbCr with 4:2:2 colour subsampling, meaning the sensor data has been interpreted and processed into full-colour pixels, converted from RGB, and half the colour detail discarded. This still looks great for many uses, but when you want to put your image through a meticulous post-production process, you need the complete picture.
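
Some back-of-envelope arithmetic shows what “half the colour detail” means in practice. This is purely illustrative; real signal formats add blanking, audio and ancillary data on top:

```python
# Back-of-envelope data for one 1920x1080 10-bit frame, illustrating why
# 4:2:2 subsampling discards half the colour information (illustrative only).

WIDTH, HEIGHT, BIT_DEPTH = 1920, 1080, 10
pixels = WIDTH * HEIGHT

# 4:4:4 - every pixel carries a full set of three 10-bit samples.
bits_444 = pixels * 3 * BIT_DEPTH

# 4:2:2 - full luma (Y') per pixel, but chroma (Cb and Cr) is sampled at
# half the horizontal resolution: two chroma samples per pair of pixels.
bits_422 = pixels * BIT_DEPTH + 2 * (pixels // 2) * BIT_DEPTH

luma_bits = pixels * BIT_DEPTH
print(f"4:4:4 frame: {bits_444 / 1e6:.1f} Mbit")   # ~62.2 Mbit
print(f"4:2:2 frame: {bits_422 / 1e6:.1f} Mbit")   # ~41.5 Mbit
print(f"Chroma data kept: {(bits_422 - luma_bits) / (bits_444 - luma_bits):.0%}")
```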

The SMPTE Core Concepts series of webcasts is free to all and aims to help individuals deepen their knowledge. This webinar is produced in collaboration with The Broadcast Knowledge which, by covering a new video or webinar every day, helps empower everyone in the industry by offering a single place to find educational material.

Watch now!
Speakers

Mark Schubin
Engineer and Explainer
Larry Thorpe
Senior Fellow,
Canon U.S.A., Inc.
Russell Trafford-Jones
Editor, The Broadcast Knowledge
Manager, Services & Support, Techex
Exec Member, IET Media