RE: Single buffer + flush vs double buffer + swap
Double buffering is used to prevent flicker or "tearing" when significant parts of the scene are moving. If the scene is mostly static and only moves occasionally (as in a CAD application), single buffering is going to look fine.
The cause of the flicker is that the framebuffer is read constantly to generate the RGB signal for the monitor, asynchronously to the program. So when you draw into it, it's unpredictable which parts of your drawn objects appear first. Say the output is currently at the center of the screen (the beam of a CRT is passing the midpoint of the display) and you draw a rect covering the whole screen at that moment: the bottom half of the rect appears first, then after the screen finishes refreshing, the video blanks for a few milliseconds and begins refreshing at the top again, and finally the top half of the rect appears. This happens faster than you can think, but your eyes see it as flicker. To get around that you can have two framebuffers: a current one that is being shown on screen, and a future one that you do your drawing into. When you've finished drawing a frame (all the objects that should be shown for the next moment in time), you tell the graphics hardware to swap buffers at the next refresh. What you see on the monitor is the complete new frame, without any flickering or artifacts.
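The swap scheme above can be sketched as a tiny simulation. All names here (DoubleBuffer, draw, swap) are hypothetical; a real program would call something like glXSwapBuffers or SwapBuffers and the exchange would happen in hardware at the vertical refresh.

```python
class DoubleBuffer:
    """Two framebuffers: 'front' is scanned out to the monitor,
    'back' is the one the program draws into."""

    def __init__(self, size):
        self.front = [0] * size  # currently shown on screen
        self.back = [0] * size   # receives all drawing

    def draw(self, index, value):
        # Drawing only ever touches the back buffer, so the monitor
        # never sees a half-finished frame.
        self.back[index] = value

    def swap(self):
        # Done at the next refresh: the buffers exchange roles.
        self.front, self.back = self.back, self.front

fb = DoubleBuffer(4)
fb.draw(0, 255)   # invisible so far: it went to the back buffer
fb.swap()         # now the completed frame is the one on screen
```

After the swap, `fb.front[0]` is 255 and the old front buffer (still blank) becomes the new drawing target.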
Note that in OpenGL (and graphics workstations like SGIs generally) double-buffering is in hardware—the memory in the graphics unit is divided into two buffers. Some other computers perform something they call "double-buffering" in software, which is rather different. They use a backing store in main memory to draw into, and then BLT the whole thing into display memory during the vertical blanking time. This was used by PC games in the VGA era, for example. It avoids tearing but puts more load on the system bus than the hardware approach does.

If your graphics hardware lacks support for double-buffering, this is an alternative. Another is to use the framebuffer directly but only write during the vertical blank: you have less time to calculate your output, but if you can do it quickly there is no tearing. Yet another option is to "race the beam": keep track of the current output position and order your drawing so you only write below that point. In the extreme you can do without a framebuffer at all and just output data directly as the monitor scans it out. The Atari VCS did this; it requires cycle-exact control over instruction timing.
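The software backing-store approach amounts to one big copy per frame, timed to the blanking interval. A minimal sketch, with wait_for_vblank() as a hypothetical stand-in for whatever the platform provides (on VGA hardware, for instance, programs polled a status register):

```python
# Off-screen buffer in main memory and the memory the monitor reads,
# sized here like VGA mode 13h (320x200, one byte per pixel).
backing_store = bytearray(320 * 200)
display_memory = bytearray(320 * 200)

def wait_for_vblank():
    # Placeholder: real code would poll the hardware until the
    # vertical blanking period begins.
    pass

def present():
    # Copy the whole finished frame at once, while nothing is being
    # scanned out, so no partially drawn frame is ever visible.
    wait_for_vblank()
    display_memory[:] = backing_store

backing_store[0] = 17  # draw freely into the backing store...
present()              # ...then blit it during the blank
```

The cost is that the entire frame crosses the system bus every refresh, which is the extra load mentioned above.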
Because double-buffering requires the graphics subsystem to configure its memory as two display buffers, it is only available when the display mode is set to a configuration that supports it.
There is also triple-buffering (a third buffer so the renderer never stalls waiting for a swap) and, for stereo graphics modes, quad-buffering. The reason quad-buffering is needed is what happens when the software hasn't finished drawing a new frame by the time the display refresh happens: in mono (single perspective view) graphics, the active buffer can be left as it is, and the effect is that you just "drop a frame" (the previous frame is output twice and the new frame is delayed by one cycle). In stereo (two perspective views) this does not work, since the left perspective buffer must always be used on even cycles and the right buffer on odd cycles. So four buffers are used instead: a current and a future left buffer, and a current and a future right buffer.
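The quad-buffer bookkeeping can be sketched like this (buffer names and the scanout/swap helpers are illustrative, not a real API): left frames go out on even refresh cycles, right frames on odd ones, and each eye has its own front/back pair so a late frame can be dropped for one eye without disturbing the other.

```python
# Four buffers: a front (shown) and back (drawn-into) pair per eye.
buffers = {
    "left":  {"front": "L0", "back": "L1"},
    "right": {"front": "R0", "back": "R1"},
}

def scanout(cycle):
    # Frame-sequential stereo: even cycles show the left eye,
    # odd cycles the right eye.
    eye = "left" if cycle % 2 == 0 else "right"
    return buffers[eye]["front"]

def swap(eye):
    # Swap one eye's pair when its new frame is ready.
    b = buffers[eye]
    b["front"], b["back"] = b["back"], b["front"]

assert scanout(0) == "L0" and scanout(1) == "R0"
swap("left")  # new left frame ready; the right eye just repeats its
              # previous frame (a dropped frame for that eye only)
assert scanout(2) == "L1" and scanout(3) == "R0"
```

With only two buffers, a late frame would force the wrong eye's image onto the screen; the four-buffer arrangement keeps the even/odd alternation intact no matter which eye falls behind.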