Overview:
Rasterization generates a stream of source pixels from graphic primitives, which are combined with destination pixels in the frame buffer. The term frame buffer originates from the early days of raster graphics, when it referred to a bank of memory that contained a single image, or frame. As computer graphics evolved, the term came to encompass the image data as well as any ancillary data needed during rendering.
In Direct3D, the frame buffer encompasses the currently selected render target surface and depth/stencil surfaces. If multisampling is used, additional memory is required but is not explicitly exposed as directly manipulable surfaces.
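If you want to see what Direct3D currently considers the frame buffer, you can ask the device for those surfaces directly. A minimal sketch, assuming an initialized Direct3D 9 device in a pointer named pDevice (the name is ours, not from the original text):

// Assumes #include <d3d9.h> and an initialized IDirect3DDevice9* pDevice.
IDirect3DSurface9* pRenderTarget = NULL;
IDirect3DSurface9* pDepthStencil = NULL;

// The render target surface receives the color output of rasterization.
pDevice->GetRenderTarget(0, &pRenderTarget);

// The depth/stencil surface holds per-pixel Z and stencil values.
pDevice->GetDepthStencilSurface(&pDepthStencil);

// The device AddRef()s the surfaces it hands back, so release them when done.
if (pRenderTarget) pRenderTarget->Release();
if (pDepthStencil) pDepthStencil->Release();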
After rasterization, each source pixel contains an RGB color, an associated transparency value in its alpha channel, and an associated depth in the scene in its Z value. The Z value is a fixed-point quantity produced by rasterization. Fog may then be applied to the pixel before it is incorporated into the render target. Fog application blends the pixel's color value from rasterization, but not its alpha value, with a fog color computed as a function of the pixel's depth in the scene.
Fog, also referred to as depth cueing, can be used to diminish the intensity of an object as it recedes from the camera, placing more emphasis on objects closer to the viewer.
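In Direct3D 9, this kind of fog can be switched on with a handful of render states. A hedged sketch (the fog distances and color here are arbitrary values chosen for illustration):

// Assumes an initialized IDirect3DDevice9* pDevice.
float fogStart = 50.0f;   // distance at which fog begins to blend in
float fogEnd   = 200.0f;  // distance at which the fog color fully takes over

pDevice->SetRenderState(D3DRS_FOGENABLE, TRUE);
pDevice->SetRenderState(D3DRS_FOGCOLOR, D3DCOLOR_XRGB(128, 128, 128));
pDevice->SetRenderState(D3DRS_FOGTABLEMODE, D3DFOG_LINEAR);

// D3DRS_FOGSTART/FOGEND take the bit pattern of a float, hence the casts.
pDevice->SetRenderState(D3DRS_FOGSTART, *(DWORD*)&fogStart);
pDevice->SetRenderState(D3DRS_FOGEND,   *(DWORD*)&fogEnd);

With D3DFOG_LINEAR, a pixel's color is untouched closer than fogStart, fully replaced by the fog color beyond fogEnd, and blended in between; its alpha value is left alone, as described above.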
After fog application, pixels can be rejected on the basis of their transparency, their depth in the scene, or by stenciling operations. Stencil operations allow arbitrary regions of the frame buffer to be masked away from rendering, among other things. Unlike the alpha and depth values produced for each pixel during rasterization, the stencil value associated with the source pixel is obtained from a render state.
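Both kinds of rejection map onto render states in Direct3D 9. The following sketch rejects pixels whose alpha falls below a threshold and masks rendering to regions where the stencil buffer holds a particular value (the reference values are illustrative, not prescriptive):

// Assumes an initialized IDirect3DDevice9* pDevice.

// Alpha test: reject source pixels whose alpha is below 0x80.
pDevice->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
pDevice->SetRenderState(D3DRS_ALPHAREF, 0x80);
pDevice->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);

// Stencil test: only draw where the stencil buffer already equals 1.
// Note that the reference value is a render state, not per-pixel data.
pDevice->SetRenderState(D3DRS_STENCILENABLE, TRUE);
pDevice->SetRenderState(D3DRS_STENCILREF, 1);
pDevice->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_EQUAL);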
Double buffering is a concept you need to be familiar with before moving on. When data goes through the rendering pipeline, it does not exit the other end straight to your screen. As you know, the rendering pipeline takes in 3D data and outputs pixels. These pixels are written to a rectangular grid of pixels known as the "frame buffer".
Eventually, the frame buffer is displayed on the monitor. The problem is that exactly when the frame buffer gets displayed on the monitor is not completely in your control. Imagine you want to draw a scene of a town with a few people roaming around. You need various 3D models to create a believable town scene: a few buildings, some houses, some shops, different people models, maybe a few props like benches and lamp posts, and then the model of the ground to put all the stuff on. Now you're happily sending data to your graphics card and everything is going fine. You send it house one, house two, the shop, a few people, but then, before you can send in the data that represents another person, your graphics card decides to give the frame buffer to the monitor.
And what do you get on the screen?
You get a scene of a town that has a few people displayed, a few buildings, half a human (because the frame buffer was sent to the monitor before you finished sending the entire data for the human figure you were rendering), and no streets or props (because you haven't sent that data in yet).
This is obviously no good for business. So we need a way to counter this problem. This is where double buffering comes in. The trick is to have two frame buffers. One called the front buffer, and the second one called the back buffer.
The front buffer is the one that is always displayed on your screen, not the back buffer. The back buffer is used as the rectangular grid that the rendering pipeline outputs the pixels to. So all your rendering goes straight to the back buffer. Only when you tell D3D to move the back buffer to the front buffer will your scene be displayed on the monitor. And by the time you tell D3D to move the back buffer to the front, you will have already finished rendering the entire scene. So using the example above, when you are sending the data for the human 3D model and your system displays the frame buffer, instead of seeing an incomplete scene you will see whatever is already on the front buffer (the last completed frame, or an empty frame if nothing has been presented yet).
Then you can continue rendering the rest of the scene to the back buffer and move the back buffer to the front buffer when you’re done.
The figure above shows this process. You keep on sending data through the pipeline and it outputs pixels to the back buffer at the other end. Only when you tell D3D to switch the buffers will D3D take the data that is in the back buffer and put it in the front buffer.
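In Direct3D 9 terms, "telling D3D to move the back buffer to the front" is the Present call. Here is what a typical frame looks like, sketched with an assumed device pointer named pDevice:

// Assumes an initialized IDirect3DDevice9* pDevice, created with at least
// one back buffer (e.g. BackBufferCount = 1 in D3DPRESENT_PARAMETERS).

// Wipe the back buffer (and depth buffer) from the previous frame.
pDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
               D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);

pDevice->BeginScene();
// ... draw the whole town here: buildings, people, streets, props ...
pDevice->EndScene();

// Only now does the finished image become visible: Present moves the
// back buffer to the front. Until this call, the monitor keeps showing
// whatever was presented last frame.
pDevice->Present(NULL, NULL, NULL, NULL);

Everything between Clear and Present happens out of sight on the back buffer, which is exactly why the half-rendered human from the earlier example never reaches the screen.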
Depth Buffers:
A depth buffer, also known as a z-buffer (because the z-axis usually represents depth), stores a depth value for every pixel so the device can work out which objects are in front of which. A depth buffer also has a certain level of accuracy associated with it, just like in C++ you can have a "float" data type with 32 bits of floating-point precision or a "double" data type with 64 bits of floating-point precision.
When we talk about "16" or "32" bits per pixel for a depth buffer, we are talking about the precision the device has when deciding how to arrange our objects front to back. The higher the bit depth, the more accurate this arrangement is. This accuracy can come at a cost of performance though, so make sure you try testing the scene using both.
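The depth buffer's bit depth is chosen when the device is created. A sketch of the relevant fields of D3DPRESENT_PARAMETERS, assuming the rest of device creation is handled elsewhere (try both formats and compare, as suggested above):

// Assumes the usual Direct3D 9 device creation; only the depth-related
// fields are shown here.
D3DPRESENT_PARAMETERS d3dpp;
ZeroMemory(&d3dpp, sizeof(d3dpp));
d3dpp.Windowed = TRUE;
d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;

// Ask Direct3D to create and manage a depth buffer for us.
d3dpp.EnableAutoDepthStencil = TRUE;
d3dpp.AutoDepthStencilFormat = D3DFMT_D16;    // 16 bits of depth precision
// d3dpp.AutoDepthStencilFormat = D3DFMT_D24S8; // 24-bit depth + 8-bit stencil

// After the device is created, depth testing still has to be enabled:
// pDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);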