San Francisco Video Tech welcomes Haluk Ucar talking about live video streaming. How do you encode multiple resolutions/bitrates efficiently on CPUs and maximise the number of channels per machine? Is there value in managing multiple encodes centrally? And how can we balance CPU use against video quality (VQ)?
Haluk discusses a toolset for Adaptive Decisions and looks at Adaptive Segment Decisions. Here he discusses the relationship between IDR frames and frequent Scene Changes.
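To make the IDR/scene-change tension concrete: encoders like x264 will by default insert extra keyframes at scene cuts, which can break segment alignment across renditions when cuts are frequent. A common mitigation (a sketch under assumed settings, not Haluk's configuration) is to pin the IDR cadence and disable scene-cut keyframes:

```shell
# Sketch: force an IDR every 2 seconds and disable x264's scene-cut
# keyframe insertion, so segment boundaries stay aligned across renditions
# even in fast-cutting content. Filenames and bitrate are placeholders.
ffmpeg -y -i in.mp4 \
  -c:v libx264 -b:v 3000k \
  -force_key_frames "expr:gte(t,n_forced*2)" \
  -x264-params scenecut=0 \
  out.mp4
```

The trade-off is the one the talk touches on: suppressing scene-cut IDRs keeps segments regular, but the first frames after a cut may cost more bits or quality.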
Haluk covers a lot and finishes with a Q&A. So if you have an interest in Live Streaming, then Watch Now!
A great ffmpeg how-to from Jan Ozer followed by cloud deployment advice from RealEyes Media.
Starting from the basics of the ffmpeg command line and working up to HLS packaging, Jan Ozer offers advanced alternatives alongside the familiar commands.
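As a flavour of the end point of that progression, here is a minimal HLS packaging invocation, a generic sketch rather than one of Jan's exact commands; filenames, bitrates and segment length are assumptions.

```shell
# Encode to H.264/AAC and package as a VOD HLS playlist with 6-second
# segments. (Hypothetical filenames and bitrates.)
ffmpeg -y -i in.mp4 \
  -c:v libx264 -b:v 1500k -c:a aac -b:a 128k \
  -f hls -hls_time 6 -hls_playlist_type vod \
  -hls_segment_filename 'seg_%03d.ts' out.m3u8
```

From here, the "advanced alternatives" are mostly about adding renditions, a master playlist, and tighter rate control on top of this skeleton.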
By taking control of your own encoding and packaging, you can greatly reduce cost while maintaining the adaptability and agility to meet your needs now and in the future. When working with cloud encoding, there are several transcoding and packaging options, and the APIs for these options will change over time. David Hassoun and Jun Heider, from RealEyes Media, talk about how to build a more dynamic cloud encoder that can use the best tool for a specific job by decoupling the tools from the core application, as well as how to mix and match multiple operations concurrently on a single encoding task. Operations include WebVTT and AAC sidecar manifests, DASH assets, metadata, video quality, and stream muxing/demuxing. This session covers some of the strategies they’ve used to handle dynamic cloud encoding and packaging for live and VOD delivery.
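Several of the operations listed can be driven by a single decoupled tool invocation. As one illustrative sketch (not RealEyes' pipeline; filenames and settings are assumptions), ffmpeg can emit DASH assets and a WebVTT sidecar from the same source:

```shell
# Sketch: produce DASH assets (MPD + segments) from one input.
# (Hypothetical filenames; '-map 0:a?' maps audio only if present.)
ffmpeg -y -i in.mp4 \
  -map 0:v -map 0:a? -c:v libx264 -c:a aac \
  -f dash -seg_duration 4 out.mpd

# Sketch: extract subtitles as a WebVTT sidecar, if the input carries any.
# ffmpeg -i in.mp4 -map 0:s:0 -c:s webvtt subs.vtt
```

The decoupling the talk describes means the core application only hands a job description to whichever tool does this best today, so commands like these can be swapped out as APIs and tools evolve.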