Client-side Video Tricks for IIIF

Published: 2016-10-18 19:55 +0200

I wanted to push out these examples before the IIIF Hague working group meetings and I’m doing that at the 11th hour. This post could use some more editing and refinement of the examples, but I hope it still communicates well enough to see what’s possible with video in the browser.

IIIF solved a lot of the issues with working with large images on the Web. None of the image standards or Web standards were really developed with very high resolution images in mind. There’s no built-in way to request just a portion of an image; usually you’d have to download the whole image to see it at its highest resolution. Image tiling works around this limitation of image formats by downloading only the portion of the image that is in the viewport, at the desired resolution. IIIF has standardized how to make requests for tiles, and image servers have implemented it. Dealing with high resolution images in this way seems like one of the fundamental issues that IIIF has helped to solve.
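The tiling workflow above can be sketched as a URL builder. Under the IIIF Image API URI template ({base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}), a viewer requests only the region it needs; the base URL and identifier here are made up for illustration:

```javascript
// Sketch of how a tiled viewer forms IIIF Image API requests.
// Region is "x,y,w,h" in source pixels; size "w," scales to a target width.
function iiifTileUrl(base, id, { x, y, w, h }, width) {
  return `${base}/${id}/${x},${y},${w},${h}/${width},/0/default.jpg`;
}

// Request just the 512x512 tile at (1024, 2048), scaled to 256px wide:
const url = iiifTileUrl('https://example.org/iiif', 'page1',
  { x: 1024, y: 2048, w: 512, h: 512 }, 256);
// → https://example.org/iiif/page1/1024,2048,512,512/256,/0/default.jpg
```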

This differs significantly from the state of video on the Web. Video came to the Web more recently; previously Flash was the predominant way to deliver video within HTML pages. Since there was already so much experience with video on the Web before HTML5 video was specified, it was probably much clearer from the beginning what was needed and how video ought to be integrated. Video formats also provide a lot of the functionality that was missing from still images, so when video came to HTML it included many more features right from the start than images did.

As we’re beginning to consider what features we want in a video API for IIIF, I wanted to take a moment to show what’s possible in the browser with native video. I hope this helps us to make choices based on what’s really necessary to be done on the server and what we can decide is a client-side concern.

Crop a video on the spatial dimension (x,y,w,h)

It is possible to crop a video in the browser. There’s no built-in way to do this, but because of how video is integrated into HTML and all the other APIs that are available, cropping can be done. You can see one example below where the image of the running video is snipped and copied to a canvas of the desired dimensions. In this case I display both the original video and the canvas version. We do not even need to have the video embedded on the page to play it and copy its images over to the canvas; the full video could have been completely hidden and this still would have worked. While no browser implements them yet, spatial media fragments could let a client know what region is desired.
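As a rough sketch of that canvas copying (parseXywh is a hypothetical helper for the spatial media fragment syntax, and drawCrop is my own thin wrapper around the canvas drawImage call, not a standard API):

```javascript
// Read a (not yet browser-implemented) spatial media fragment like
// #xywh=160,120,320,240 to learn which region of the video is wanted.
function parseXywh(url) {
  const m = /#xywh=(\d+),(\d+),(\d+),(\d+)/.exec(url);
  if (!m) return null;
  const [x, y, w, h] = m.slice(1).map(Number);
  return { x, y, w, h };
}

// Copy just the crop region from the current video frame onto the canvas.
function drawCrop(ctx, video, { x, y, w, h }) {
  ctx.drawImage(video, x, y, w, h, 0, 0, w, h);
}

const crop = parseXywh('video.mp4#xywh=160,120,320,240');
// In a browser: video.addEventListener('timeupdate',
//   () => drawCrop(canvas.getContext('2d'), video, crop));
```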

Also, in this case I’m only listening for the timeupdate event on the video and copying over the cropped portion of the video image when it fires. That event only triggers a few times a second (depending on the browser), so the cropped video does not display as many frames as it could. This could be improved upon with a simple timer or a loop that requests an animation frame.
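A minimal sketch of that animation-frame loop, with the scheduler injected so the drawing logic can be exercised outside a browser (startCropLoop and its parameters are my own names, not a standard API):

```javascript
// Redraw the crop on every animation frame instead of only on timeupdate.
// raf is a scheduler like requestAnimationFrame; draw copies one frame.
function startCropLoop(raf, draw, isPlaying) {
  function tick() {
    if (!isPlaying()) return; // stop copying frames when playback stops
    draw();                   // e.g. ctx.drawImage(video, ...)
    raf(tick);                // schedule the next copy
  }
  raf(tick);
}

// In a browser:
// startCropLoop(cb => requestAnimationFrame(cb),
//   () => drawCrop(ctx, video, crop), () => !video.paused);
```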

Something similar could be done solely by creating a wrapper div around a video. The div has the desired dimensions with overflow hidden, and the video is positioned relative to the div to produce the desired crop.
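A sketch of that wrapper-div approach; the dimensions and offsets here are arbitrary:

```html
<!-- The wrapper clips the video to a 320x240 window; the negative offsets
     choose which region of the video shows through. -->
<div style="width: 320px; height: 240px; overflow: hidden; position: relative;">
  <video src="video.mp4" controls
         style="position: absolute; left: -160px; top: -120px;"></video>
</div>
```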

This is probably the hardest one of these to accomplish with video, but both of these approaches could probably be refined and developed into something workable.

Truncate a video on the temporal dimension (start,end)

This is easily accomplished with a Media Fragment added to the end of the video URL. In this case it looks like this: http://siskel.lib.ncsu.edu/SCRC/ua024-002-bx0149-066-001/ua024-002-bx0149-066-001.mp4#t=6,10. The video will begin at the 6 second mark and stop playing at the 10 second mark. Nothing here prevents you from playing the whole video or any other part of it, but what the browser does by default could be good enough in lots of cases. If this needs to be a hard constraint, it ought to be pretty easy to enforce with JavaScript: the user could still download the whole video, but any particular player could maintain the constraint on time. What’s nice with video on the Web is that the browser can seek to a particular time and doesn’t even need to download the whole video to start playing from any moment, since it can make byte-range requests. And the server-side piece can just be a standard web server (Apache, nginx) with some simple configuration. This kind of “seeking” isn’t possible with images without a smarter server.
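For a player that wants to treat the fragment as a hard constraint, a sketch might parse the #t= fragment and clamp playback in a timeupdate handler (parseTemporalFragment is a hypothetical helper, not a built-in):

```javascript
// Read a temporal media fragment (#t=start or #t=start,end) from a URL.
function parseTemporalFragment(url) {
  const m = /#t=([\d.]+)(?:,([\d.]+))?/.exec(url);
  if (!m) return null;
  return { start: Number(m[1]), end: m[2] ? Number(m[2]) : Infinity };
}

const t = parseTemporalFragment(
  'http://siskel.lib.ncsu.edu/SCRC/ua024-002-bx0149-066-001/ua024-002-bx0149-066-001.mp4#t=6,10');
// t.start === 6, t.end === 10

// In a browser, a player could keep playback inside the range:
// video.addEventListener('timeupdate', () => {
//   if (video.currentTime < t.start) video.currentTime = t.start;
//   if (video.currentTime >= t.end) video.pause();
// });
```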

Scale the video on the temporal dimension (play at 1.5x speed)

HTML5 video provides a JavaScript API for manipulating the playback rate, so this functionality could be included in any player the user interacts with. There are some limitations on how fast or slow the audio and video can play together, but there’s a larger range for how fast or slow just the images of the video can play. This will also differ based on the browser and computer specifications.

This video plays back at 3 times the normal speed:

This video plays back at half the normal speed:
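Both examples come down to setting playbackRate on the media element. A sketch, with an assumed clamp range since the actually supported rates vary by browser:

```javascript
// Set a video's playback rate, clamped to an assumed safe range.
// The 0.25–5 bounds are illustrative, not a standard limit.
function setRate(video, rate, min = 0.25, max = 5) {
  video.playbackRate = Math.min(max, Math.max(min, rate));
  return video.playbackRate;
}

// In a browser:
// setRate(document.querySelector('video'), 3);    // 3x speed
// setRate(document.querySelector('video'), 0.5);  // half speed
```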

Change the resolution (w,h)

If you need to fit a video within a particular space on the page, a video can easily be scaled up and down on the spatial dimension. While this isn’t always very bandwidth friendly, it is possible to scale a video up and down and even do arbitrary scaling right in the browser. A video can be scaled with or without maintaining its aspect ratio. It just takes some CSS (or applying styles via JavaScript).
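A sketch of computing a scaled size that fits within a box while preserving aspect ratio (fitWithin is an illustrative helper; setting both style dimensions directly gives arbitrary, non-proportional scaling):

```javascript
// Scale intrinsic video dimensions to fit inside a box, keeping the
// aspect ratio; returns pixel dimensions to apply as CSS.
function fitWithin(videoW, videoH, boxW, boxH) {
  const scale = Math.min(boxW / videoW, boxH / videoH);
  return { width: Math.round(videoW * scale),
           height: Math.round(videoH * scale) };
}

const size = fitWithin(640, 480, 320, 320);
// size → { width: 320, height: 240 }

// In a browser: Object.assign(video.style,
//   { width: size.width + 'px', height: size.height + 'px' });
```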

Rotate the video

I’m not sure what the use case within IIIF is for rotating video, but you can do it rather easily. (I previously posted an example which might be more appropriate for the Hague meeting.)
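A sketch of rotating a video with a CSS transform; the 90 degree angle is arbitrary:

```html
<!-- CSS transforms rotate the rendered video without touching the file. -->
<video src="video.mp4" controls
       style="transform: rotate(90deg); transform-origin: center;"></video>
```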

Use CSS and JavaScript safely, OK?

Conclusion

Two of the questions I’ll have about any feature being considered for IIIF A/V APIs are:

  1. What’s the use case?
  2. Can it be done in the browser?

I’m not certain what the use cases for some of these video transformations would be, but I would like to be presented with them. Even if there are use cases, what are the reasons why they need to be implemented on the server rather than client-side? Are there feasibility issues that still need to be explored?

If there are use cases for some of these and the decision is made that they are a client-side concern, I am interested in the ways in which the Presentation API and Web Annotations can support those use cases. How would you let a client know that a particular video ought to be played at 1.2x the default playback rate? Or that the video (for some reason I have yet to understand!) needs to be rotated when it is placed on the canvas? In any case, I wonder to what extent deciding that something is a client-side concern might affect the Presentation API.