Video: It all started with a Reddit post…

A lively conversation today on updating workflows, upskilling staff, when to embrace the cloud…and when not to. Sparked by a discussion on Reddit, this conversation brings together Sasha Zivanovic, CEO of Canadian service provider Nextologies, and Robert Nagy, co-founder of nxtedition. The discussion, hosted by Adam Leah, starts by tackling the question of how to deal with legacy workflows. The initial disagreement comes from their two approaches. Robert’s pragmatic view is that legacy workflows can be functional or dysfunctional, and the decision on whether to start again or transition depends on whether your current workflow runs without constant human intervention. Sasha agrees that dysfunctional workflows, ones that fall apart if key people are away, need to be dismantled and reworked at the earliest opportunity. Otherwise, he feels that education is key: teaching people how to use the new technologies available and how to create good, robust workflows on which you can really base your future business.

Indeed, for Sasha education is the key because, in his words, ‘there is no 1-800 Amazon’. Being progressive and moving your workflow into the cloud may be the right way forward, but the cloud providers are only providing infrastructure, so if any little thing doesn’t work, you will need your own staff to understand it and resolve the problem. Even big players who have access to named engineers will still have far too many smaller issues that they must deal with themselves so that their named resources at the cloud provider can work on the higher-priority problems and designs. Moreover, a lack of education is more likely to lead people simply to go with what’s easy, namely making something work with free or low-cost hardware and software. Sasha’s point isn’t that free things are bad, but that solutions based on getting OBS up and running are often not robust and may accept more compromises, such as latency or image quality, than needed.

Robert and Sasha go on to discuss the question of what quality is good enough, advising against superfluous quality as much as against workflows that under-spec the stream. Quality needs to come down to your brand, the video’s context and the technical capability of the workflow. On the latter, both Robert and Sasha point out the folly of demanding that archives and contribution happen in the ‘house format’, such as 25Mbps. Such bitrates may make a lot of sense on-prem, but for streaming or some cloud workflows they are counterproductive and don’t deliver a better result to the viewer. Your brand does need to be considered in order to set a lower bar for the content, but usually the venue of your video is more important, agree Robert and Sasha, where a YouTube Story warrants a different quality to a Vimeo post or a long-form OTT asset.

The larger concern raised in this conversation is the ‘bifurcation’ of the market. Looking at this from a service provider’s point of view, Sasha sees that the tech companies have increased the size of the market, which is positive. But with that come problems. The ease of access to the cloud makes it easier for small players to participate, but there is still a high end of the market where tier-1 broadcasters play; they do benefit from the cloud, but it still requires a high investment in time and design to build, along with high opex. This doesn’t mean there is no overall cost-benefit for those broadcasters; often there is, and sometimes it’s not cost they are optimising for. It’s the gap that concerns Sasha, where those not engaging like tier-1 broadcasters tend to gravitate to the bottom end of the market, which has much lower revenues than before. Whilst The Broadcast Knowledge would suggest this is where Sasha can prove the worth of his company, anchoring the bottom of the market at a low cost does reduce the opportunities for companies such as Nextologies to charge enough to cover costs and remain competitive. Robert and Sasha both agree that success with clients nowadays is achieved through partnering with them and following, helping and encouraging them on their journey. Such a long-term design or product partner is worth more than any single workflow.

Watch now!
Speakers

Sasha Zivanovic
CEO
Nextologies
Robert Nagy
Lead Developer & Co-founder,
nxtedition
Moderator: Adam Leah
Creative Director,
nxtedition

Video: JT-NM ProAV Technology Roadmap for IPMX – Panel Discussion

Building on our coverage of IPMX to date, we see that the push to create a standard for IP in the ProAV market has been growing in momentum. With activity now in AIMS, AMWA, VSF and SMPTE, it’s important to bring the thinking together and have a central strategy, which is why the JT-NM have released a roadmap. This starts by defining what is meant by ProAV: “The market for audiovisual (AV) communication equipment used in professional, industrial, commercial, and retail environments as a means to communicate with people.” As today’s panel notes, this is a wide definition and helps us understand why this is such a different proposition compared to the related ST 2110 and NMOS work for the broadcast market. There are lots of silos in the ProAV space, with many solutions developed to cater to just one or two. This makes requirements capture difficult, has led to the fragmentation seen to date, and is partly why strong manufacturers tend to be the ones pushing the market in a certain direction, in contrast to the broadcast market where strong, early adopters set the direction for vendors.

The roadmap itself sets out the aims of IPMX, for instance that it is secure from the start, that it will scale and integrate with 2110/AMWA broadcast installations, and that it can be a software-only solution. Phase 1 of the roadmap identifies existing standards and specifications which underpin the three IPMX tenets of security, media and control: NMOS IS-10 for access authentication and encryption, the relevant 2110 standards, NMOS IS-04, IS-05 and IS-07, and the ability to use EDIDs. Phase 2 then adds HDCP support, support for ProAV audio formats, and enhanced control such as audio mapping (IS-08) and legacy camera control via RS-232, while phase 3 will bring in media compression for WAN links, error correction techniques and closed captioning & subtitling. For control, it will add USB HID, and a training and certification scheme will be launched.
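
Since phase 1 leans on the existing NMOS specifications for its control plane, here is a minimal, hedged sketch of what one of those building blocks looks like in practice: listing senders and receivers from an NMOS IS-04 Query API. The registry address and the use of the `requests` library are assumptions for illustration only, not anything mandated by the roadmap.

```python
# Minimal sketch: listing senders and receivers from an NMOS IS-04 Query API.
# The registry address and API version are assumptions for illustration; a real
# IPMX/NMOS deployment would typically discover the registry (e.g. via DNS-SD).
import requests

REGISTRY = "http://registry.example.local:8080"   # hypothetical registry
QUERY_API = f"{REGISTRY}/x-nmos/query/v1.3"

def list_resources(resource_type: str):
    """Fetch all resources of a given type, e.g. 'senders' or 'receivers'."""
    response = requests.get(f"{QUERY_API}/{resource_type}", timeout=5)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for sender in list_resources("senders"):
        print(f"Sender:   {sender.get('label')} ({sender.get('id')})")
    for receiver in list_resources("receivers"):
        print(f"Receiver: {receiver.get('label')} ({receiver.get('id')})")
```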

The panel concludes by discussing how IPMX is very much at home with live production which, of course, should help it dovetail well into the broadcast space. The panel sees IPMX as simplifying the implementation of a 2110-like infrastructure, which should allow easier and quicker installations than 2110 projects, seen as larger and higher risk. IPMX could, it’s suggested, be used as an initial step into IP for broadcasters who want to understand what they need to do organizationally and technically to adopt IP, ahead of perhaps deploying 2110 systems. But the technology is seen as going both ways, allowing broadcasters to more readily adopt compressed workflows (whether JPEG XS or otherwise) and allowing ProAV players to bring uncompressed workflows more easily into productions that would benefit.

Watch now!
Speakers

Karl Paulsen
CTO,
Diversified
Andrew Starks
Director of Product Management,
Macnica Americas, Inc.

David Chiappini
Chair, Pro AV Working Group, AIMS
Executive Vice President, Research & Development,
Matrox Graphics Inc.

Richard Friedel
Executive Vice President, Technology & Broadcast Strategy,
21st Century Fox

Video: The Future Impact of Moore’s Law on Networking

Many feel that Moore’s law has lost its way when it comes to CPUs since we’re no longer seeing a doubling of chip density every two years. This change is tied to the difficulty in shrinking transistors further when their size is already close to some of the limits imposed by physics. In the networking world, transistors are bigger, which is allowing significant growth in bandwidth to continue. In recent years we have tracked the rise of 1GbE, which made way for 10GbE, 40GbE and 100 Gigabit networking. We’re now seeing general availability of 400Gb with 800Gb firmly on near-term roadmaps as computation within SFPs and switches increases.

In this presentation, Arista’s Robert Welch and Andy Bechtolsheim explain how 400GbE interfaces are made up, give insight into 800GbE and talk about deployment possibilities for 400GbE both now and in the near future, harnessing built-in multiplexing. It’s important to realise that the high-capacity links we’re used to today, of 100GbE and above, are delivered by combining multiple lower-bandwidth links, known as lanes: 4x25Gb lanes give a 100GbE interface and 8x50Gb lanes provide 400GbE. The route to 800GbE, then, is to increase the number of lanes or bump the speed of the lanes. The latter is the chosen route, with 8x100Gb lanes in the works for 2022/2023.
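
As a quick illustration of that lane arithmetic, here is a small sketch; the function name and structure are ours, purely for illustration of the combinations mentioned in the talk.

```python
# Quick check of the lane arithmetic described above: an Ethernet interface's
# headline rate is simply the lane speed multiplied by the number of lanes.
def interface_rate_gbps(lane_speed_gbps: int, lane_count: int) -> int:
    return lane_speed_gbps * lane_count

# Lane combinations mentioned in the talk:
combinations = [
    (25, 4),    # 4 x 25Gb lanes  -> 100GbE
    (50, 8),    # 8 x 50Gb lanes  -> 400GbE
    (100, 8),   # 8 x 100Gb lanes -> 800GbE (expected 2022/2023)
]

for speed, count in combinations:
    print(f"{count} x {speed}Gb lanes = {interface_rate_gbps(speed, count)}GbE")
```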

One downside of using lanes is that you will often need to break them out into individual fibres, which is inconvenient and erodes the cost savings. Robert outlines the work being done to bring dense wavelength division multiplexing (DWDM) into SFPs so that multiple wavelengths are sent down one fibre rather than using multiple fibres. This allows a single fibre pair to be used, greatly simplifying cabling and maintaining compatibility with the existing infrastructure. DWDM is very powerful as it can deliver 800Gb over distances of more than 5000km, or 1.6Tb over 1000km. It also allows you to have full-bandwidth interconnects between switches. Long-haul SFPs with DWDM built in are called OSFP-LS transceivers.

Cost per bit is the religion at play here, with the hyperscalers keenly buying into 400Gb technology because it is only twice, not four times, the price of the 100Gb technology it’s replacing. The same is true of 800Gb. The new interfaces will run the ASICs faster and so will need to dissipate more heat. This has led to two larger form factors, the OSFP and QSFP-DD. The OSFP is a little larger than the QSFP, but an adaptor can be used to maintain QSFP form-factor compatibility.

Andy explains that 800Gb Ethernet has been finished by the Ethernet Technology Alliance and is going into 51.2T silicon which will allow channels of native 800Gb capacity. This is somewhat in the future, though, and Andy says that in the way 25G has served us well for the last 5 years, 100G is where the focus is for the next 5. Andy goes on to look at what a future 800G chassis might look like, saying that in 2U you could expect 64 800G OSFP interfaces, which could provide 128 400G outputs or 512 100G outputs with no co-packaged optics required.
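
To make those chassis numbers concrete, here is a small sketch checking that the quoted port counts all correspond to the same 51.2Tb/s of switching capacity; the constant and loop are ours, for illustration only.

```python
# Sanity check of the 2U chassis figures quoted above: a 51.2Tb/s switch ASIC
# divides evenly into each of the port counts Andy mentions.
ASIC_CAPACITY_GBPS = 64 * 800          # 64 x 800G OSFP ports = 51,200 Gb/s (51.2T)

for port_speed in (800, 400, 100):
    ports = ASIC_CAPACITY_GBPS // port_speed
    print(f"{ports} x {port_speed}G ports from {ASIC_CAPACITY_GBPS / 1000:.1f}Tb/s")
# -> 64 x 800G, 128 x 400G, 512 x 100G
```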

Watch now!
Speakers

Robert Welch
Technical Solutions Lead,
Arista
Andy Bechtolsheim
Chairman, Chief Development Officer and Co-Founder,
Arista Networks

Video: Enhanced Usage of the RIST Protocol to Address Network Challenges

The Reliable Internet Stream Transport (RIST) is an open specification from the Video Services Forum which allows for reliable transmission of video, audio and other data over lossy links. It does this by retransmitting any lost packets, which the receiver hopes to receive before its receive buffer is exhausted. A seemingly simple but powerful feature of RIST is the ability for multiple links to be bonded together to deliver to a single receiver. In this video, Adi Rozenberg explains the many ways to use this flexible functionality. If you’re new to RIST, check out this SMPTE primer or this intro from AWS.

Adi starts by outlining the basic functionality which allows a sender, using multicast or unicast, to set up multiple links to a destination. Each of these links will be managed by an RTCP channel. This setup allows for a number of strategies to deliver content.

RIST supports a number of output modes. In the standard mode, packets are passed through without modification. Header conversion can be added, however, which allows the destination IP, UDP port and source IP to be changed. There are also modes determining whether a link carries stream data, only retransmitted packets, or both; as in most similar protocols, the default is that a link carries both the stream data and the retransmitted data. Lastly, it’s possible to define the percentage of traffic that normally goes down each path, which then adjusts if one or more links go down.
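
To make those per-link options more concrete, here is a minimal sketch of how they might be represented; the class and field names are our own invention for illustration and are not taken from the RIST specification or any particular implementation.

```python
# Illustrative data model for the per-link options described above. The class
# and field names are hypothetical, not from the RIST spec or any real stack.
from dataclasses import dataclass
from enum import Enum

class LinkPayload(Enum):
    DATA_ONLY = "data"              # carries the stream only
    RETRANSMIT_ONLY = "retransmit"  # carries retransmitted packets only
    BOTH = "both"                   # default: stream data plus retransmissions

@dataclass
class RistLink:
    destination_ip: str
    udp_port: int
    payload: LinkPayload = LinkPayload.BOTH
    weight_percent: float = 100.0   # nominal share of traffic on this link
    rewrite_headers: bool = False   # header conversion (dest IP/port, source IP)

# Three links sharing one stream 45/45/10, as in the load-share example below.
links = [
    RistLink("198.51.100.10", 1968, weight_percent=45),
    RistLink("198.51.100.11", 1968, weight_percent=45),
    RistLink("198.51.100.12", 1968, weight_percent=10),
]
```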

Adi outlines the following systems:

  • Stream transmission over three links with retransmissions sent over any link
  • Dynamic load share with three links carrying 45%, 45% & 10% of the load respectively. This cuts down on bandwidth compared to the first option, which needs 300% of the stream bandwidth.
  • Use of three links where the third takes retransmission traffic only.

These systems allow for use cases such as splitting the video bitrate between two or more links, or having a low-bandwidth backup link that normally carries 3% of the traffic but can burst up to 100% if the main link fails. This works well where the main delivery is satellite RF and the backup is a cloud-provided IP feed whose cost is driven by egress charges; conversely, if the backup link is the one that is metered, such as a 4G cellular link, the 3% can sit on that while DSL handles the main delivery.
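
As a rough illustration of how those nominal shares might be rebalanced when a link fails, here is a small sketch that renormalises the surviving links’ weights; it is a simplification for illustration, not the logic of any specific RIST implementation.

```python
# Sketch of redistributing nominal traffic shares when a link fails: the
# surviving links are renormalised so they still carry 100% between them.
# This is a simplification, not the behaviour of any particular RIST stack.
def redistribute(weights: dict[str, float], failed: set[str]) -> dict[str, float]:
    surviving = {name: w for name, w in weights.items() if name not in failed}
    total = sum(surviving.values())
    return {name: 100.0 * w / total for name, w in surviving.items()}

# Main RF path carrying 97%, low-cost IP backup normally at 3%.
weights = {"rf_main": 97.0, "ip_backup": 3.0}
print(redistribute(weights, failed=set()))         # {'rf_main': 97.0, 'ip_backup': 3.0}
print(redistribute(weights, failed={"rf_main"}))   # {'ip_backup': 100.0}
```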

Watch now!
Speaker

Adi Rozenberg
CTO & Co-Founder,
VideoFlow