In the first of a series of industry think pieces, we at Sencore want to start by digging into the concept of Internet Protocols. Why are there so many, what do they achieve, and what’s stopping us from moving towards one harmonized standard across the industry?
Of all the sci-fi dreams that have been realized in the past century – from space travel to self-driving cars to lifelike robots – the one thing we still seem to be far from achieving is harmonized standards. Visions of the future never dwelled on the idea as particularly spectacular – they just assumed everything would work together. After all, why wouldn’t it? Think of the Enterprise and its ability to communicate via video chat with a Klingon warship, thousands of miles across space, with no message saying, ‘codec not supported’ and no jitter or signal loss (unless it happened to be pertinent to the plot).
In the broadcast industry, interoperability remains one of our biggest hurdles. Different standards and approaches to various networking tasks continue to proliferate, with no sign of convergence in sight.
But is harmonization what we need? Or is there good reason to maintain variation in the range of network protocols that we retain as an industry? We think that’s a question worthy of further exploration…
The great IP debate
The issue of ‘how to do things’ (i.e. what methods, protocols, standards and workflows to deploy) begins right at the top, with the question of IP. There are arguments to say that the adoption of IP isn’t so much a debate as an inevitability. But, in essence, the first of our interoperability wars is waged not at the nitty-gritty level, but over the big picture. RF, DTT, Satellite, Cable, IPTV, OTT: just how should the industry be getting content to its audiences? Even at the production level, there’s the raging SDI-versus-IP debate.
At Sencore we’d argue the writing is on the wall: IP is the inevitable future in both production and distribution spaces. Those that haven’t made the transition yet are held back by one of two things: fear or finances. One of our missions at Sencore is to make sure that neither need be an obstacle – providing knowledge and guidance along with solutions designed to reduce overall OpEx.
The network as a wedding cake
It’s not a perfect metaphor, but what we’re trying to convey is the idea that networks operate in four layers: application, transport, network, and link (or, under ISO’s OSI model, seven layers: physical, data link, network, transport, session, presentation, and application).
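To make the cake a little more concrete, here’s a rough sketch (in Python, purely for illustration) of where some of the protocols discussed in this piece sit within the four-layer model. The groupings are indicative rather than definitive; purists will note that several of these protocols blur the boundaries.

```python
# A simplified, illustrative mapping of the four-layer model to familiar
# protocols from the broadcast world. Placement is indicative, not exhaustive.
LAYERS = {
    "application": ["HLS", "MPEG-DASH", "RTMP", "RTP"],  # what the player/encoder speaks
    "transport":   ["TCP", "UDP"],                       # how the bytes are carried end to end
    "network":     ["IP"],                               # addressing and routing
    "link":        ["Ethernet", "Wi-Fi"],                # the physical/local hop
}

for layer, protocols in LAYERS.items():
    print(f"{layer:<12} {', '.join(protocols)}")
```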
It is of course at the transport layer where the first major protocol battle occurs, normally waged between UDP (User Datagram Protocol) and TCP (Transmission Control Protocol); it is upon these that streaming protocols rest. The central difference is the ‘handshake’: TCP has the client and server establish a connection and acknowledge everything that is sent, whilst UDP simply fires datagrams into the network and moves on. The net result is that TCP delivers packets reliably and in order, whilst UDP gets things across much more quickly and without throttling itself to suit bandwidth constraints, but with a significant possibility of a real mess arriving at the other end. The difference in outcome is captured by the classic jokes: we’d tell you a TCP joke, but we’d have to keep repeating it until you acknowledged you got it; we’d like to tell you another UDP joke, but you might not get it, and we don’t really care…
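For the more hands-on reader, here’s a minimal sketch using Python’s standard socket library that shows the difference in practice: TCP won’t send a byte until the handshake completes and everything is acknowledged, whilst UDP just fires and forgets. The host and port are placeholders.

```python
import socket

HOST, PORT = "198.51.100.10", 5000  # placeholder address and port

# TCP: the three-way handshake happens inside connect(), and every byte sent
# is acknowledged, retransmitted if lost, and delivered in order.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp_sock:
    tcp_sock.connect((HOST, PORT))        # blocks until the handshake completes
    tcp_sock.sendall(b"hello, reliably")  # delivered in full, or an exception is raised

# UDP: no handshake, no acknowledgements, no ordering. sendto() returns as soon
# as the datagram is handed to the network; whether it arrives is not our problem.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp_sock:
    udp_sock.sendto(b"hello, hopefully", (HOST, PORT))
```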
Crossing the Stream
With the basics of UDP and TCP established, it’s time to get down to the real challenge: choosing a streaming protocol. These can effectively be divided into two camps, the ‘traditional’ and the ‘HTTP adaptive’ (plus a couple of emerging newcomers).
In the traditional camp, there are RTMP (Real-Time Messaging Protocol) and RTP (Real-time Transport Protocol). These protocols support low-latency streaming, but they rarely have much native support in the browsers and cell phones that constitute their final destination. RTMP used to enjoy broad support via the Flash video player, but Flash left our collective browsers at the end of 2020. However, RTMP still plays a significant role in contribution. RTP, meanwhile, remains the dominant player in this space, with MPEG-TS over IP (ST 2022-2) and Forward Error Correction (ST 2022-1) providing an important step in correcting the underlying UDP tendency of throwing packets at the wall and hoping they stick…
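As a back-of-the-envelope illustration of what ‘MPEG-TS over RTP/UDP’ looks like on the wire, here’s a simplified Python sketch that wraps groups of transport stream packets in an RTP header and sends them out. It deliberately omits the ST 2022-1 FEC repair streams, bitrate pacing and the other details a real sender needs; the address, SSRC and input file are made up for the example.

```python
import socket
import struct

DEST = ("233.0.0.1", 5004)   # placeholder multicast group and port
PAYLOAD_TYPE = 33            # static RTP payload type for MPEG-2 TS
SSRC = 0x53454E43            # arbitrary stream identifier for this example
TS_PACKET = 188              # MPEG-TS packets are always 188 bytes
TS_PER_DATAGRAM = 7          # 7 x 188 = 1316 bytes, fits a typical 1500-byte MTU

def rtp_header(seq: int, timestamp: int) -> bytes:
    # 12-byte RTP header: version 2, no padding/extension/CSRC, marker bit clear.
    return struct.pack("!BBHII", 0x80, PAYLOAD_TYPE, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, SSRC)

def send_ts_file(path: str) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq, timestamp = 0, 0
    with open(path, "rb") as ts:
        while chunk := ts.read(TS_PACKET * TS_PER_DATAGRAM):
            sock.sendto(rtp_header(seq, timestamp) + chunk, DEST)
            seq += 1
            timestamp += 2_100  # illustrative increment on the 90 kHz RTP clock
            # A real sender would also pace packets to the stream's bitrate
            # and generate ST 2022-1 FEC packets on separate ports.

send_ts_file("example.ts")  # hypothetical input file
```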
Then we’ve got the range of HTTP adaptive protocols, which include MPEG-DASH and Apple’s HLS, amongst others. These use adaptive bitrate streaming and work to maximise the viewer experience within the constraints of what’s available, (largely) regardless of connection, bandwidth or the software or device being used.
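To show where the ‘adaptive’ part comes from, here’s an illustrative sketch that assembles a minimal HLS master playlist offering the same content at three quality levels; the bitrates, resolutions and variant playlist names are invented for the example. The player measures its throughput as it downloads segments, picks whichever rendition it can sustain, and switches as conditions change.

```python
# Illustrative only: a minimal HLS master playlist offering three renditions.
RENDITIONS = [
    # (bandwidth in bits/s, resolution, variant playlist) - all values are examples
    (800_000,   "640x360",   "low/index.m3u8"),
    (2_500_000, "1280x720",  "mid/index.m3u8"),
    (6_000_000, "1920x1080", "high/index.m3u8"),
]

lines = ["#EXTM3U"]
for bandwidth, resolution, uri in RENDITIONS:
    lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
    lines.append(uri)

print("\n".join(lines))
```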
Lots of these standards do impressive things and are great for covering the ‘last mile’. But you’ll notice one common factor amongst them: the presence of some big brand names. And that brings with it its own set of problems…
There is more than one way to bake a cake
So why are there so many competing ways of doing things? Well, the reasons are complex and multifaceted, but two key themes generally emerge, one of which is a slightly more charitable interpretation than the other.
The charitable: One could surmise from the above discussion of various protocols and standards that they all carry different strengths and weaknesses and are each suited to different tasks. The deployment of different protocols and standards grants more flexibility in balancing the traditional quality/speed/reliability triangle and matching it to the needs of the data and audience in question.
The uncharitable: Profit! For some time, parts of the industry operated on the idea that developing proprietary mechanisms for processing, moving and interacting with data was a nifty way of locking customers into their products for life. This can be seen in the majority of the HTTP-based protocols listed above. And it’s a real problem that limits the capacity of the broadcast industry to progress as a whole.
But the times they are a-changing…
This somewhat cynical proprietary attitude has – we’re glad to say – largely abated these days. Where standards are being developed, the work tends to be done by open-source alliances (as seen with SRT) or open standards groups (as seen with RIST). These communities of industry professionals put in the effort to standardize because they genuinely think it provides a source of advantage. For everyone.
Yes, the buzzword of the day is increasingly ‘interoperability’, if not yet quite ‘harmonization’: ensuring that everything we create can talk to everything else through some mediated dictionary, even if each device speaks a different language. The more enlightened players within the industry (and yes, we humbly count ourselves at Sencore amongst them) are eager to stress the idea of ‘competitive collaboration’: a recognition that while we would all like to achieve profit growth, it’s no longer a zero-sum game that requires the elimination of competitors. Instead, the best way to achieve that growth is for the industry to move forwards as a whole – and there’s certainly capacity for that as the market’s appetite for content continues to grow.
As a result, there’s an increasing focus on making devices ‘talk’ the same language – which has always been at the heart of the Sencore mission, whether it be encoding and decoding, sending SDI over IP, or monitoring complex networks that integrate an array of data streams, data types and transmission methods.
So even if there is a range of complex, competing and confusing standards and protocols underlying video delivery these days, a combination of the industry’s changing mindset and the products at the heart of Sencore’s offerings means that broadcasters can still maintain a seamless, interconnected network without the headache of negotiating competing standards and contradictory workflows. In conclusion, our mission is, and remains, to make this process more intuitive and easier to accomplish. Watch this space for the big steps we’re about to take in achieving that.