BBC

Beyond Streams and Files - Storing Frames in the Cloud

End-to-end IP broadcast is becoming reality, so we're looking to the cloud for what's next.

Published: 20 March 2018

We've been talking about end-to-end IP production for some time now, and it's finally becoming reality. Manufacturers are getting on board, IBC had an IP showcase, and BBC Wales is getting an IP core. Our Lightweight Live demo at IBC 2017 also showed what IP production could look like in the cloud, taking R&D's previous IP Studio work and running it in AWS. Now we're starting to think about what truly "cloud-fit" production might look like.

We think this means breaking our flows of media down into small objects, such as a frame of video or a frame's worth of audio. These objects are then stored in an object store along with their identity, timestamp and other metadata. We can process objects in parallel, and even use serverless computing such as AWS Lambda, which opens up opportunities for flexible production. To begin with, however, we need a way to store our objects, and we've been experimenting with Amazon S3.
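As a concrete picture of what "a flow broken into objects" might look like, here is a minimal sketch. The field names (`flow_id`, `timestamp`, `payload`) and the key scheme are illustrative assumptions, not the actual IP Studio data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MediaObject:
    """One small unit of a media flow, e.g. a single video frame."""
    flow_id: str      # identity of the flow this object belongs to
    timestamp: float  # presentation time of the first frame, in seconds
    payload: bytes    # the media data itself

    def key(self) -> str:
        """Derive an object-store key from identity and timestamp."""
        return f"{self.flow_id}/{self.timestamp:.3f}"

obj = MediaObject(flow_id="camera-1", timestamp=12.040, payload=b"\x00" * 64)
print(obj.key())  # camera-1/12.040
```

Because the key is derived from identity plus timestamp, any consumer that knows the flow and the time range it wants can compute the keys to fetch, without a central lookup for the common case.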

Object Store Diagram

Our Media Object Experiments

The BBC already uses S3 as a media object store in our online distribution pipeline. Chunks of video are taken from our broadcast encoders and uploaded while a programme is being broadcast. When the programme ends, the chunks are extracted and encoded for distribution on iPlayer - the BBC's Lead Architect Stephen Godwin has talked about this previously.

We wanted to know whether we could upload objects fast enough to keep up with real-time uncompressed HD video streams. To find out, we built some simple test software that uploads objects filled with random data from several cloud compute instances, simulating the upload of real media objects. The software recorded the start and end time of each object upload, from which we calculated the average upload rate. Automation scripts started the cloud compute instances, configured them, ran the tests and shut them down again, with no user input required.
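The core of such a benchmark is very small. Below is a sketch of the timing harness, with the upload step injected as a function so it can be pointed at any store; the bucket and key names in the comment are made up for illustration, not taken from our actual test software:

```python
import os
import time

def measure_upload(upload, object_size):
    """Time one upload of `object_size` random bytes.

    `upload` is any callable taking the bytes to send.
    Returns the achieved rate in Mbit/s.
    """
    data = os.urandom(object_size)  # random payload, like a media object
    start = time.monotonic()
    upload(data)
    elapsed = time.monotonic() - start
    return (object_size * 8 / 1e6) / elapsed

# Against S3 the upload callable might wrap a PUT (assuming boto3):
#   s3 = boto3.client("s3")
#   rate = measure_upload(
#       lambda data: s3.put_object(Bucket="test-bucket", Key="obj-0", Body=data),
#       object_size=500 * 1024 * 1024)
```

Averaging many such measurements per object size, across several instances, gives the upload-rate curves discussed below.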

s3 object graph

We found objects needed to be fairly large (of the order of 500MB to 2GB) to reach upload rates good enough for real-time uncompressed video, and that object size was the largest driver of upload speed. However, larger objects come with a penalty: they take longer to form in real time. For example, if one object holds the equivalent of five seconds of video, at least five seconds pass before the object can even start uploading. Combining this with the upload time gives us the latency: the time between an object's first frame arriving and the object being available in the store. Smaller objects therefore mean lower latency.
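That latency relationship can be written down directly. The figures in the example are illustrative, not measured results:

```python
def object_latency(object_seconds, video_mbps, upload_mbps):
    """Latency from an object's first frame arriving to it being in the store:
    the time to accumulate the object in real time, plus the time to upload it.
    """
    object_megabits = object_seconds * video_mbps
    return object_seconds + object_megabits / upload_mbps

# e.g. a 5-second object of 750 Mbit/s video over a 1500 Mbit/s upload:
# 5 s to form + 2.5 s to upload = 7.5 s total
print(object_latency(5, 750, 1500))  # 7.5
```

Note the trap: upload speed grows with object size, but formation time grows too, so the lowest-latency object size is a balance between the two rather than "as big as possible".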

Parallelism

We need an upload rate of around 750Mbps for the smallest uncompressed video format we work with (HD 8-bit), rising to 1.5Gbps for "production quality" (HD 10-bit padded). Looking at the graph, that means using 2GB objects and having no margin for variation in S3's performance. However, there is another way: we can run multiple compute instances uploading in parallel and combine their upload rates.
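As a rough sanity check on where figures of that magnitude come from, raw video bitrate is just pixels × bytes per pixel × frame rate. The assumptions below (1080p25, simple byte-per-pixel packing) are ours for illustration; the article's exact figures will also depend on subsampling, padding and transport details:

```python
def video_mbps(width, height, bytes_per_pixel, fps):
    """Raw (uncompressed) video bitrate in Mbit/s."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

# 4:2:2 8-bit HD at 25 fps averages 2 bytes/pixel:
print(round(video_mbps(1920, 1080, 2, 25)))  # 829
# 10-bit padded out to 16 bits per component, 4:2:2: 4 bytes/pixel:
print(round(video_mbps(1920, 1080, 4, 25)))  # 1659
```

Either way, the order of magnitude is hundreds of megabits to over a gigabit per second, per stream, which is what makes single-instance uploads so marginal.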

Object Store Diagram parallel

This gives us some flexibility: we can reduce the latency by adding more instances, but we'll have to pay more for running the servers.
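The sizing arithmetic for the parallel approach is straightforward. The per-instance rate below is an assumed placeholder, not a measured S3 figure:

```python
import math

def instances_needed(video_mbps, per_instance_mbps):
    """Minimum number of parallel uploaders to sustain a given video rate."""
    return math.ceil(video_mbps / per_instance_mbps)

# Illustrative only: if one instance sustains 400 Mbit/s to the store,
# "production quality" HD at 1500 Mbit/s needs four instances in parallel.
print(instances_needed(1500, 400))  # 4
```

Each extra instance lets us use smaller objects (lower latency) at the same aggregate rate, which is exactly the cost-vs-latency tradeoff plotted below.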

This cost-vs-latency tradeoff is one of the key design parameters of our storage system, and is plotted for various object sizes and video formats below.

Cost vs Latency in various formats

Scalability

Another area of interest is scalability: how many video streams can we add before S3 starts to slow down? To test this, we steadily increased the number of EC2 compute instances writing into S3, up to 100. The per-host average upload rate barely changes as the number of instances increases, while the cumulative upload rate across all hosts grows steadily with the number of hosts. This trend continued up to 20Gbps at our 100-host maximum.

s3 cumulative graph

We took a brief look at some of the other parameters we can adjust in our client software (which is built on a standard AWS client library), and found that the default settings mostly give the best performance. We also tested various EC2 instance types; for our use case, the "c5.large" had by far the best performance for the price.

What Next

Now that we've shown the idea is feasible, the next step is to build a prototype. That means building something to manage the metadata for our objects: which flow they belong to and the time range they represent. We also plan to make our object store immutable; once an object is written it cannot be updated. Any changes therefore have to be handled in our metadata management, but this lets us achieve our stored-by-default workflow.
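To make the metadata idea concrete, here is a sketch of an index mapping flows and time ranges to object keys, with the immutability rule enforced at write time. The class and method names are our own invention for illustration, not the prototype's design:

```python
class FlowIndex:
    """Maps (flow, start time) to the object covering that time range.

    The object store itself is immutable, so "updating" a flow means
    writing new objects and new index entries, never rewriting old ones.
    """

    def __init__(self):
        self._entries = {}  # (flow_id, start) -> (end, object_key)

    def add(self, flow_id, start, end, object_key):
        entry = (flow_id, start)
        if entry in self._entries:
            raise ValueError("objects are immutable; refusing to overwrite")
        self._entries[entry] = (end, object_key)

    def lookup(self, flow_id, t):
        """Return the key of the object covering time t, or None."""
        for (fid, start), (end, key) in self._entries.items():
            if fid == flow_id and start <= t < end:
                return key
        return None

idx = FlowIndex()
idx.add("camera-1", 0.0, 5.0, "camera-1/0.000")
print(idx.lookup("camera-1", 2.5))  # camera-1/0.000
```

In a real system this index would itself live in a database rather than memory, but the shape of the problem is the same: the media is write-once, and all mutability is pushed into the metadata layer.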

We'll use the prototype to validate some of our tests with real media content, and continue to build the other components of our system. We'll also carry out read tests to check we can get the objects back out again, and use these experiments to inform the design of our experimental on-premise cloud.


