This post shows how to make video taken at normal speed look like it was taken with a high speed camera!
In this post, click on a picture to see the full size version.
To do this work, I have used AVISynth and VirtualDub. I keep trying to use commercial packages but always fall back to these two because of their almost limitless power! You can get VirtualDub from here: http://www.virtualdub.org/ and AVISynth from here: http://avisynth.org/.
Both programs manipulate video and sound data, and both run on Windows. AVISynth is a video scripting tool: you write scripts to tell it how to process video. VirtualDub is a visual tool in which you can see the video input and output as you process it. The scripting side of AVISynth makes it more powerful for complex processing, and the visual side makes VirtualDub more powerful for editing. Put the two together and the world is your oyster!
What Is Synthetic Slow Motion?
A normal camcorder runs at somewhere around 20-30 frames per second. European ones run at 25 frames per second (mine, for instance). On top of this, they are normally interlaced. This means that in each frame, only every other line is actually updated, which reduces the amount of information by half. What I am aiming to do here is take these 25 half-frames per second and make the footage look like it was taken at 250 full frames per second, then slow those 250 frames per second back down to 25 frames per second to make a 10:1 slow motion.
Combing out your interlacing...
Converting interlaced video to non-interlaced is a huge and complex subject. There is a simple solution though - just use Smart Deinterlace. VirtualDub has a plugins directory. Here is mine (at the time of writing this post): plugins. Simply unzip this in your VirtualDub install directory and you will get all the VirtualDub filters I am currently using, including Smart Deinterlace. Then you can simply use Smart Deinterlace with its default settings. However, the default settings use frame blending to deinterlace areas of the image which are changing a lot between frames. I find that this gives a slight 'ghosting' effect. I prefer to set the deinterlacer to 'edge directed interpolate':
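If you are curious about what a deinterlacer actually does to the pixels, here is a toy Python sketch (my own illustration, not Smart Deinterlace's real algorithm). The crudest approach rebuilds each missing line by averaging the lines above and below it; 'edge directed interpolate' is smarter, averaging along detected edges instead of straight down the columns.

```python
import numpy as np

# Toy deinterlacer: keep the even lines (one field) and rebuild the
# 'missing' odd lines by averaging their vertical neighbours.
# (Illustrative only - Smart Deinterlace's 'edge directed interpolate'
# mode interpolates along detected edges, not just vertically.)

def deinterlace_field(frame: np.ndarray) -> np.ndarray:
    out = frame.astype(float).copy()
    h = out.shape[0]
    for y in range(1, h, 2):                      # the missing lines
        below = out[y + 1] if y + 1 < h else out[y - 1]
        out[y] = (out[y - 1] + below) / 2          # average above and below
    return out

frame = np.arange(16, dtype=float).reshape(4, 4)
frame[1::2] = 0   # simulate a field with the odd lines missing
print(deinterlace_field(frame))
```

Running this on the tiny 4x4 'frame' shows the zeroed odd lines replaced by plausible in-between values.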
Getting rid of the shakes
You might think that the next step is to do the synthetic slow motion. However, there is another, pretty important step to take first. Before doing any complex processing of video, it is a good idea to remove as much camera shake and rotation as possible. Fortunately, VirtualDub has a plugin (included in my download) which is truly amazing at removing camera shake and rotation. You have to do this because the synthetic slow motion is going to work on the differences between one frame and the next. You want those differences to be due only to objects that move, not to the camera moving. That way, the motion interpolation has the best chance of making a realistic effect rather than being 'confused' by global motion.
Deshaker uses motion vectors created by a video analysis pass to remove shake and rotation in a video processing pass. That sounds kind of complex, but the idea is simple. It looks at the difference between each frame and the next. If it finds blocks in the frames' images that look similar, it assumes they were from the same object in the field of view. It takes the difference in position of a number of such blocks and uses a bit of mathematics to work out whether the objects were moving or the camera was moving. It then records how much it 'thinks' the camera moved. This is the 'analysis pass'. Once it has recorded all the camera movements for your video, you do a 'processing pass' where it moves the image in each frame the opposite way to the way it thinks the camera moved, thus removing the camera movement. This can leave borders on the image where the edges are 'missing'. Deshaker can fill these in with an average of previous and future frames. This filling in makes the borders almost impossible to see, so the image becomes magically 'deshaken'.
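To make the analysis pass a little less mysterious, here is a toy Python sketch of global motion estimation (my own simplification - the real Deshaker matches many blocks at once and also solves for rotation and zoom, not just translation):

```python
import numpy as np

# Toy global-motion estimator: find the integer (dx, dy) shift that best
# aligns frame B back onto frame A by brute-force search over a small window.
# (Hypothetical sketch; Deshaker fits a full translation/rotation/zoom model
# from many block matches.)

def estimate_shift(a: np.ndarray, b: np.ndarray, max_shift: int = 3):
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # undo a candidate camera movement and measure the mismatch
            shifted = np.roll(np.roll(b, -dy, axis=0), -dx, axis=1)
            err = float(np.abs(a - shifted).mean())
            if err < best_err:
                best, best_err = (dx, dy), err
    return best  # how far the 'camera' moved between the two frames

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(np.roll(a, 2, axis=0), 1, axis=1)  # camera 'moved' by (dx=1, dy=2)
print(estimate_shift(a, b))                    # recovers (1, 2)
```

The processing pass is then just applying the opposite of each recorded shift to each frame.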
To use Deshaker, just add it as a filter after Smart Deinterlace. You then set it up using the default settings and ensure that you have set it to 'pass 1'; you do this by making sure the button at the top of the 'pass 1' panel is pressed in the setup window:
You are not actually going to produce any output video in pass 1. The purpose is only to analyse the video to work out the global motion vectors. So, once you have Deshaker set up, you need to run a video analysis pass. You do this from the 'File' menu of VirtualDub:
Second Pass Deshaking
Now that we have done the first pass, we do the actual deshaking. Go back to the Video/Filters list, click on Deshaker, then click on configure. Now you can set Deshaker to run its second pass. I normally set the option to fill in the borders with previous and following frames, but I set these to 10, not the default 30. This does make the picture go out of synchronisation with the sound. You can correct this later; however, in our case we are doing slow motion, so the sound is going to be thrown away anyhow!
Once you have set up the second pass of the deshaker, it is time to actually create an AVI (video) file. I like to make a new AVI file after each significant processing step. This makes going back and making changes and/or editing much easier. Granted, you will need a lot of disk space to do this! If you have less than around 100Gig of free space, please go invest in an external USB2 drive. These are very cheap nowadays and do the job perfectly. I am lucky enough to work in the computer industry and currently use a 1TByte external, but the 250Gig ones from your local computer store or online are just fine.
A note about AVI compression
Whilst working on video, it is really sensible to use AVI files to store the video; you might convert the video to something else - like MPEG - as the last step. AVI is just a 'container' format. It is quite simple and easily recognised by a wide range of software. By 'container', I mean that it does not specify the way the video is encoded in the file, only how the encoded video is stored. So, when you create an AVI file, you need to specify how the video will be encoded.
Video encoding performs two tasks. It gives a way of defining how to turn the data in the encoded video back into a watchable moving image, and, optionally, it defines how to compress the video. Most digital video is defined as separate channels for the black and white part (luma) and the colour part (chroma). These are stored as pixels. Often there are fewer chroma pixels than luma pixels, because humans see colour variation less clearly than luma variation. So, for example, if you have 640x480 resolution video with heavily subsampled chroma, each frame may contain 307200 bytes of luma information and 76800 bytes of chroma information. This 384000 bytes (or 375 KBytes) of data adds up very quickly at 25 frames per second. So, there is a big benefit in not encoding the video as 'raw'.
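To put some numbers on that, here is a quick Python sanity check of the raw data rate, assuming the YV12 layout used later in this post (a full-resolution luma plane plus two quarter-resolution chroma planes - other subsampling schemes give somewhat different chroma totals):

```python
# Rough per-frame and per-second data sizes for raw 640x480 YV12 video.
# YV12 = one full-size luma plane + two chroma planes at half width
# and half height each.
width, height, fps = 640, 480, 25

luma_bytes = width * height                       # 307200 bytes
chroma_bytes = 2 * (width // 2) * (height // 2)   # 153600 bytes (U + V)
frame_bytes = luma_bytes + chroma_bytes           # 460800 bytes per frame

bytes_per_second = frame_bytes * fps              # ~11 MB of raw video/second
print(luma_bytes, chroma_bytes, frame_bytes, bytes_per_second)
```

At over 11 megabytes per second, an hour of raw footage would be close to 40 gigabytes - hence the interest in compression.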
Compression is a mechanism to reduce the amount of data required to store a length of video. Compression takes two forms: lossless and lossy. Lossy compression uses models of how humans perceive moving pictures to estimate what information can be removed from the video without making it look worse (or at least, without making it look too bad). Lossy compression can do amazing things, like reducing the amount of data in encoded video by a factor of ten without a human watcher being able to see any difference (well - almost). However, lossy compression is 'cumulative'. If you lossy compress video, then lossy compress the already compressed video, the losses rapidly add up and the video degrades. It is worse than you might think, because the process of lossy compression makes the resultant video more difficult to recompress! This is where lossless compression comes in.
Lossless compression reduces the amount of data in the encoded video without reducing the amount of information. It works a bit like this: 'aaaaab' could be written 'a*5b' (a times 5, then b) and, in so doing, take up less space. There is a great encoder/decoder (codec) called HuffYUV (often just 'huffy') which does this for video. It is fast enough, and good enough at compression, to make it the best choice (better than 'raw') for storing AVIs which you are working on. You can find the binary for HuffYUV here.
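Here is the 'a*5b' idea written out as a toy Python run-length codec. This is an illustration of the principle only - HuffYUV itself is more sophisticated (it predicts each pixel from its neighbours and Huffman-codes the prediction error) - but the key property is the same: decompression gives back exactly what went in.

```python
# Toy run-length encoder/decoder in the spirit of 'aaaaab' -> 'a*5b'.
# Lossless: decoding the encoded form always reproduces the input exactly.

def rle_encode(data: str) -> list[tuple[str, int]]:
    """Collapse runs of repeated characters into (char, count) pairs."""
    runs: list[tuple[str, int]] = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((ch, 1))               # start a new run
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    return "".join(ch * count for ch, count in runs)

encoded = rle_encode("aaaaab")
print(encoded)               # [('a', 5), ('b', 1)]
print(rle_decode(encoded))   # 'aaaaab' - nothing was lost
```

Note that run-length coding only wins when the data actually has runs - which is why real codecs first transform the video into a form with lots of redundancy to exploit.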
Now for the slow-mo magic with AVISynth!
LoadPlugin("C:\Documents and Settings\AJT\Desktop\AV Stuff\AVISyth\plugins\mvtools.dll")
source = AVISource("C:\My Record\standing-randori.avi",false)
oSource=source
source=ConvertToYV12(source)
source=AssumeFPS(source,25)
backward_vec = source.MVAnalyse(isb = true, truemotion=true, pel=2, idx=1) # we use explicit idx for faster processing
forward_vec = source.MVAnalyse(isb = false, truemotion=true, pel=2, idx=1)
cropped = source.crop(4,4,-4,-4) # by half of block size 8
backward_vec2 = cropped.MVAnalyse(isb = true, truemotion=true, pel=2, idx=2)
forward_vec2 = cropped.MVAnalyse(isb = false, truemotion=true, pel=2, idx=2)
fSource=source.MVFlowFps2(backward_vec,forward_vec,backward_vec2,forward_vec2,num=250,idx=1,idx2=2)
fSource=AssumeFPS(fSource,25)
return fSource
AVISynth is a video scripting program. The script is stored in a file with the .avs file extension. This is a special extension which video viewing programs understand. For example, when you open an .avs file with VirtualDub, it (via some magic to which I am not privy) launches AVISynth and uses AVISynth's output as its video input.
The script above first loads the awesome mvtools plug-in. It then reads in an AVI file which I have created using VirtualDub - just like in this example. The second parameter to the source reading function AVISource tells it not to bother with any sound. This is going to be slow motion, so the sound track makes no sense!
Some AVISynth plug-ins and functions require the mix of colour and luma to follow one particular format out of the many that are possible for video. I have changed the format to YV12 because this works with the functions I am using here. The final 'preparation' step is to set the incoming frame rate to 25 FPS. It does not really matter what you set it to here; it is what you set it to at the end of the script that matters to the output.
The next block of script, which says loads of stuff about forward and backward, is where the magic happens. Mvtools uses motion detection between frames to create 'motion vectors'. These represent the best estimate of the motion of blocks in the frames' images. This is imperfect because images are not actually made up of blocks - but it is better than nothing (see later). From these motion vectors, mvtools can work out what frames in-between the existing frames would look like by moving the blocks of the image along the vectors.
MVFlowFps2 is where this inter-frame interpolation goes on. I have set this function to create 250 frames out of every 25 frames. This is because I have assumed the input FPS to be 25 and set the output to 250. Mvtools then makes 9 interpolated frames between each pair of real frames. Each interpolation involves moving the image blocks a bit further along the motion vectors. To make this into slow motion, I then set the frame rate back down to 25 fps. If the input frame rate is something different, then you should set the values to match. E.g. if you have an input of 30 fps, set the first AssumeFPS to 30, num to 300, and the last AssumeFPS to 30.
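As a rough picture of what the interpolation is doing, here is a toy Python sketch (my own simplification - real mvtools works with dense per-block vectors, sub-pixel accuracy and occlusion handling): each synthetic frame places a block part-way along its motion vector between two real frames.

```python
# Toy model of motion-compensated frame interpolation.
# A block at `start` in one real frame, with motion `vector` to the next
# real frame, is placed a fraction of the way along that vector in each
# of the synthetic in-between frames.

def interpolated_positions(start, vector, steps):
    """Block positions for `steps` synthetic frames between two real frames.

    start  -- (x, y) of the block in the first real frame
    vector -- (dx, dy) motion to the next real frame
    steps  -- in-between frames (9 for the 25 fps -> 250 fps case)
    """
    x, y = start
    dx, dy = vector
    return [(x + dx * k / (steps + 1), y + dy * k / (steps + 1))
            for k in range(1, steps + 1)]

# A block moving 10 pixels to the right between two real frames,
# with 9 synthetic frames in between:
print(interpolated_positions((100, 50), (10, 0), 9))
```

Played back at the original frame rate, those 9 extra frames per real frame are exactly what stretches one second of motion into ten.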
The final step is to save the .avs file and then open it from VirtualDub. VirtualDub then sees the .avs file as a video source with 10 times as many frames as the original. You can then set the compressor to whatever you want (e.g. XviD for MPEG-4) and save the final slow motion video.
Comparing Motion Interpolation With Motion Blur
Here I have taken a normal camcorder video of a rather cool Judo move (not done by me - I wish!) and slowed it down 10 times. I have taken the output and used ffmpeg (on my Linux box) to create an MPEG1 video at 320x240 of the result. I have used this format so it is practical to download from the Internet.
I have placed two copies of the video here. One uses the traditional 'motion blur' to fill in the missing frames when creating a slow motion effect. The other uses the motion compensation technique discussed here. I think you can clearly see just how much more realistic the motion compensation version is. Further, the motion compensation version makes it much easier to actually see what is going on.