Why is the animate() routine coded in that way? As you probably know, video RAM is much slower than "normal" RAM, so it's advisable to keep VRAM blits to a minimum. Drawing a sprite on the screen (meaning in VRAM) and then clearing the background behind it is not very fast. This example uses a different method which is much faster, but requires a bit more memory.
First the buffer is cleared (it's a normal memory BITMAP), then the sprite is drawn onto it, and when the drawing is finished this buffer is copied directly to the screen. The end result is a single VRAM blit instead of clearing the background on the screen and drawing the sprite over it. It's a good method even when you have to restore a background, and of course it completely removes any flickering.
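As a minimal sketch of this buffering scheme, assuming the Allegro 4 API (create_bitmap, clear_bitmap, draw_sprite, blit); the placeholder sprite, resolution and coordinates are illustrative and not taken from the example itself:

   #include <allegro.h>

   int main(void)
   {
      BITMAP *buffer, *sprite;
      int x = 0;

      allegro_init();
      install_keyboard();
      install_timer();
      set_color_depth(16);
      if (set_gfx_mode(GFX_AUTODETECT, 640, 480, 0, 0) != 0)
         return 1;

      /* placeholder sprite: a filled circle on a masked background */
      sprite = create_bitmap(32, 32);
      clear_to_color(sprite, bitmap_mask_color(sprite));
      circlefill(sprite, 16, 16, 15, makecol(255, 0, 0));

      /* memory buffer the same size as the screen */
      buffer = create_bitmap(SCREEN_W, SCREEN_H);

      while (!key[KEY_ESC]) {
         clear_bitmap(buffer);                 /* clear in normal RAM...      */
         draw_sprite(buffer, sprite, x, 200);  /* ...and draw in normal RAM   */
         blit(buffer, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H);  /* one VRAM blit */
         x = (x + 2) % SCREEN_W;
         rest(10);
      }

      destroy_bitmap(sprite);
      destroy_bitmap(buffer);
      return 0;
   }
   END_OF_MAIN()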
When you use a big background (e.g. 800x600) and draw something on it, it's wise to keep a copy of the background somewhere in memory and restore the damaged areas from this "virtual background". When blitting from VRAM in SVGA modes, the drawing routines probably have to switch banks on the video card, and I don't have to remind you how slow that is.
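Continuing the sketch above (same hypothetical buffer and sprite names), restoring from such a memory copy could look like this inside the drawing loop; background, old_x and old_y are assumed to have been set up elsewhere, with background filled once at startup:

   /* repair only the rectangle the sprite covered last frame,
      using the "virtual background" kept in normal RAM */
   blit(background, buffer, old_x, old_y, old_x, old_y, sprite->w, sprite->h);

   /* draw the sprite at its new position and remember where it went */
   draw_sprite(buffer, sprite, x, y);
   old_x = x;
   old_y = y;

   /* single VRAM blit at the end of the frame */
   blit(buffer, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H);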
Note that on modern systems the above isn't true anymore, and you usually get the best performance by caching all your animations in video RAM and doing only VRAM->VRAM blits, so that there are no RAM->VRAM transfers at all. Usually such transfers can also run in parallel on the graphics card's processor, costing virtually no main CPU time. See the exaccel example for a demonstration of this.
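A sketch of that approach, assuming Allegro 4's create_video_bitmap(); this is an illustration rather than the exaccel code itself, and whether the VRAM->VRAM copy is actually accelerated depends on the graphics driver:

   /* once, at startup: try to cache the sprite in spare video memory */
   BITMAP *vsprite = create_video_bitmap(sprite->w, sprite->h);
   if (vsprite)
      blit(sprite, vsprite, 0, 0, 0, 0, sprite->w, sprite->h);

   /* every frame: a pure VRAM->VRAM copy that the card can often do
      by itself, without a RAM->VRAM transfer driven by the CPU */
   if (vsprite)
      blit(vsprite, screen, 0, 0, x, y, vsprite->w, vsprite->h);
   else
      blit(sprite, screen, 0, 0, x, y, sprite->w, sprite->h);  /* fallback to memory blit */

In real code you would probably also check gfx_capabilities for flags such as GFX_HW_VRAM_BLIT before relying on the copy being hardware accelerated.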