One of the main reasons we started the SEGMENTS conference was that we saw a gap in the industry: no one was talking about operational issues. Industry events focused either on the technology stack or on business models and the market. So we felt that the SVTA
Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), and other forms of Extended Reality (XR) are beginning to drive deeply immersive experiences. For example, AR/VR have reached a broad range of applications across industry verticals including sports & entertainment, healthcare, automotive, aviation,
As technology advances, so too does the way we experience entertainment. Artificial Intelligence (AI) has become an integral part of how we access and interact with our favorite films and shows. AI is increasingly being used to help viewers navigate their entertainment experiences, improve content
As the streaming tech stack has grown in complexity, ensuring a high-quality viewing experience has become increasingly challenging. Part of that is the nature of a distributed architecture frequently assembled from third-party services and technologies, as well as simply more moving parts. But part of it is also a lack of well-established standards that meet the specific needs of streaming content providers. The combination of these two issues results in operational “blind spots” when tracing issues through the workflow. Because of the critical importance of data within streaming operations, any blind spots can significantly increase the Mean Time to Diagnose (MTTD) and, more importantly, the Mean Time to Resolve (MTTR). When these operational streaming metrics go up, Quality of Experience (QoE) and viewer satisfaction go down, often resulting in lost revenue and/or increased churn. There is a need, then, for a standardized approach to tracing data across the different vendors in the streaming video technology stack, with an obvious starting point: CDNs.
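To make the tracing idea concrete, here is a minimal sketch of cross-vendor request correlation. It assumes a hypothetical `X-Trace-Id` header (not a published standard) that the edge CDN originates and every downstream tier propagates unchanged, so that logs from different vendors can be joined on one ID:

```python
import uuid

# Hypothetical header name for illustration only; a real deployment would
# use whatever field the participating CDNs standardize on.
TRACE_HEADER = "X-Trace-Id"


def origin_request(headers: dict) -> list[tuple[str, str]]:
    # Mid-tier/origin hop: propagate the same ID unchanged so its log
    # entries correlate with the edge's.
    return [("origin", headers[TRACE_HEADER])]


def edge_request(headers: dict) -> tuple[str, list[tuple[str, str]]]:
    # Edge CDN hop: originate a trace ID if the client didn't send one,
    # then forward the request (simulated here) with the header attached.
    headers = dict(headers)
    headers.setdefault(TRACE_HEADER, str(uuid.uuid4()))
    log = [("edge", headers[TRACE_HEADER])]
    log += origin_request(headers)
    return headers[TRACE_HEADER], log


if __name__ == "__main__":
    trace_id, log = edge_request({})
    # Every hop logged the same ID, so an operator can trace one viewer
    # request through the workflow instead of hitting a blind spot.
    assert all(tid == trace_id for _, tid in log)
    print(log)
```

With a shared ID in every vendor's logs, diagnosing a playback failure becomes a single join across log sources rather than a manual hunt, which is exactly where MTTD and MTTR improve.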