MSDN Home > MSJ > February 1998


Get Fast and Simple 3D Rendering with DrawPrimitive and DirectX 5.0

Ron Fosner

DirectX attempts to obviate the need for multiple software flavors. Two features are key: a common API software developers can write to that forces hardware manufacturers to write the API-hardware driver code, and a rich hardware emulation layer that raises the least common denominator.

This article assumes you're familiar with DirectX, COM

Code for this article: drawPrimitive.exe (137KB)
Ron Fosner, who lives and breathes 3D graphics, runs Data Visualization, a 3D graphics consulting group specializing in creating fast OpenGL and Direct3D applications. You can reach him at

One of the cool things that's happened in the last two years has been the widespread adoption of 3D graphics. Whether it's viewing business graphics, virtual reality, 3D Web sites, or just playing Quake, 3D is rapidly becoming a standard feature in many applications. This Christmas should see the largest number of 3D accelerated video cards ever—an estimated 42 million 3D graphics chips will be sold in 1997. If you've never considered 3D important simply because most 3D applications run slowly on your PC, then you might want to check out the cards that are coming out. Adding a 3D accelerator card, which you can find for $200 or less, can make those 3D applications run 3 to 10 times faster. Look for video cards with 3D chipsets from ATI, 3Dfx, 3Dlabs, NVIDIA, and S3.
Of course, there's a catch to this speed. Typically 3D applications had specific ports to various hardware—thus there'd be a software-only version, an ATI version, a 3Dfx version, and so on. This made it a real nightmare for software developers who had to write all these different flavors—or, more typically, pick only two or three and let everyone else run the software-only version. This can turn a 3D graphics accelerator into nothing more than just dumb video memory.
The latest 3D interface from Microsoft aims to change all that, with a brand new interface for 3D object creation called DrawPrimitive. I'm going to cover the new features that can be found in DirectX® 5.0 Direct3D®—features that greatly improve the usability of Direct3D and make it easier than ever to create 3D graphics.
Microsoft originally targeted the game development community with DirectX. Lately, it's been broadening its reach to include a more general multimedia audience. DirectX is Microsoft's attempt to obviate the need to write all these software flavors. Two features are key: a common API that software developers can write to that forces the hardware manufacturers to write the API-hardware driver code, and a rich hardware emulation layer that raises the least common denominator. For example, typically most 3D programs had their own transformation code—code that would compute how a 3D object would look from the current viewpoint. The output from the transformation code is what would get written to the video buffer as pixels. Direct3D, the 3D graphics component of DirectX, has this layer built in so that you no longer have to be an expert in matrix math to do 3D graphics (although it still doesn't hurt!).
If you're familiar with the Direct3D Immediate Mode version of the Direct3D API, then you're probably aware of the controversy surrounding it. There has been a huge on-going debate just about everywhere that 3D graphics is discussed—Internet newsgroups, magazines, graphics conferences; even newspapers have gotten into the fray. Direct3D Immediate Mode has been repeatedly compared to another, more mature 3D API, OpenGL, and found to be lacking, mostly by proponents of OpenGL. By mature, I refer to the fact that OpenGL has a 10-year history on SGI hardware-accelerated machines. Microsoft initially took a rather standoffish attitude toward acknowledging the problems of Direct3D and Direct3D Immediate Mode in particular, much to the annoyance of the Direct3D users. Lately, however, starting with DirectX 5.0, Microsoft has taken steps to address the previously denied shortcomings of Direct3D. It is not too surprising that some of these changes make Direct3D Immediate Mode look much more like OpenGL—a backhanded acknowledgment that perhaps the lessons of 3D hardware acceleration that SGI has learned from the last 10 years and placed in the OpenGL API are better than the software-only rendering engine that made up the origins of Direct3D. It's interesting to note that both OpenGL and Direct3D are supported in Windows® 95 and Windows NT®, and that Windows 98 and Windows NT 5.0 will support them both natively.
In fact, the debate surrounding Direct3D and OpenGL has spurred innovation in both APIs and forced hardware vendors to start coming out with driver support for both. It's good news for programmers in the sense that no matter what flavor of API you like, both are going to get better and better. Indeed, at the 1997 Computer Game Developer's Conference, practically every video board vendor had support for Direct3D and OpenGL, a huge change from the previous year. These are the cards that are starting to show up now. But to take advantage of the features in these new cards, you'll need to know how to program for them. This is where DirectX, and Direct3D in particular, come in.
If you've attempted programming in Direct3D Immediate Mode, you're probably familiar with something called execute buffers. An execute buffer is essentially a memory location that you would load up with a series of either state changes (for example, lighting or viewing changes) or primitive construction instructions (vertex information). Programming execute buffers is not a task for novices: let's just say that the pains associated with learning how to use execute buffers are both potent and plentiful.
Adding to the programming misery, Microsoft provided almost no documentation about how to construct the execute buffers aside from some sample programs, which used a collection of C macros to insert information into these buffers. And these macros were not really very useful for creating real programs. There was no information about the optimal execute buffer size or how to place information in the buffers that wasn't provided by the samples (for example, how to use multiple textures in a single execute buffer). If you placed the wrong information in an execute buffer and then passed it to Direct3D to process, your program would most certainly crash. This alone drove many away from even considering Direct3D as a viable 3D API, since most folks find trial-and-error programming sprinkled with frequent reboots not that productive or enjoyable. Those who did manage to figure out Direct3D execute buffers usually ended up tossing out those heinous macros and wrapping them in a C++ class.
With DirectX 5.0, Microsoft has finally acknowledged that execute buffers were a "mistake" and rectified the situation with the introduction of better-organized samples, more robust code, and, of course, the DrawPrimitive methods. It's a source of amusement to the OpenGL proponents that DrawPrimitive looks a lot like OpenGL's primitive construction interface. In reality this isn't that surprising, since the goal of OpenGL is to provide a robust, narrow 3D API that makes it easy to implement as a driver to a hardware accelerator. This actually makes it nice for developers, since once you learn one API's method of constructing primitives, it's straightforward to port the code to the other.
The biggest advantage of DrawPrimitive is that it makes constructing a primitive much easier than using execute buffers. It's still a bit tedious, but remember, this is a low-level 3D API. You wouldn't be here if you weren't interested in eking out every last bit of speed and control over your application. Once you construct your primitive, you then pass it off to the API where, depending upon the hardware, it might be accelerated. Unless you're intimately familiar with Direct3D primitives, you should read the section on primitives. While the concept of primitives is a simple one, there are subtleties in their construction that are easy to miss.


First, let's go over the basics. Probably the biggest surprise that newcomers to 3D graphics get is the fact that you have to construct everything yourself, vertex by vertex. It seems tedious, and it is if you have to do it yourself. While writing a .X file or .3ds file parser is beyond the scope of this article, once you understand the vertex-list/index-list structures that are pretty common to these file formats, writing your own is relatively straightforward. In fact, Nigel Thompson has put together an article about making your own .3ds-to-DrawPrimitive reader that's scheduled to appear in the next MSDN CDs.
Primitives in Direct3D Immediate Mode come in six styles, built on three base primitive types: points, lines, and triangles. If you've never done any 3D primitive construction before, you might wish for a higher-level means of object construction. Remember, though, that this is as close to the metal as you can get without writing your own hardware driver. You're trying to get away from the actual hardware-specific interface and take a minimal step above that interface with DirectX. Thus, you're forced to break down your models into very simple primitives—in this case lines and triangles—and then feed these to the hardware interface. There are routines available that will take a complex primitive made of an n-sided polygon or a surface description and tessellate it for you—break it up into a sequence of triangles representing the polygon or surface. You can then pass these to the API. But in any event, you'll be passing only point, triangle, or line information to DrawPrimitive.
Figure 1.
The first two primitive types I'll look at are point and line primitives. If you look at Figure 1 you can see how these primitives are constructed. The vertex numbers indicate the order in which the vertexes are passed to DrawPrimitive. You may be wondering why there are two line primitives. This is actually a crucial point that I'll come back to later on.
The next type of primitive is the most interesting, the triangle primitive. Figure 2 shows the three types of triangle primitives that DrawPrimitive supports. Again, the numbered vertexes indicate how DrawPrimitive expects these primitives to be constructed. Now I'm going to cover an important point about DrawPrimitive's primitives; this applies to all 3D APIs in general. If you examine the triangle list primitive and compare it to the other two triangle primitives, you'll notice that one triangle list can be used to construct the other two. What you should also notice is that both the triangle strip and triangle fan primitives require fewer triangles to be specified than if you used the triangle list. This is the key point—you don't ever want to duplicate vertex information if you don't have to.
Figure 2  Triangle Primitives

Every vertex that is entered will have to be put through translation, projection, and lighting calculations. That's a dozen or more floating-point operations for each vertex. If the vertexes are shared, then there's absolutely no reason to perform that operation on these shared vertexes. That's why these shared-vertex primitives are provided. In a model that contains hundreds or thousands of shared vertexes, the time saved in avoiding redundant calculations can make the difference between a fast rendering scene and one that slows your rendering to a crawl. If you get only one thing out of this article, it's that you should strive to use shared vertex primitives, even if it means reordering the construction of your objects.
Now that I've told you that you should always try to reuse your vertexes, I'll explain when you can't do it. The only time you can reuse a vertex is if everything—and I mean everything—about the vertex is shared. Not just location in 3-space, but also color, texture coordinates, materials, and vertex normals. Everything about a vertex has to be the same in order to share a vertex. This is sometimes a problem for novice 3D programmers, but realize that there's more to a vertex than just x, y, and z information. When you specify a vertex, you're creating a lot of associated information with that vertex. All that information is the vertex. If all that information is the same, then you can reuse that vertex. In the "boids" example below, you'll find some reasons when you might not want to reuse vertexes.

Execute Buffers versus DrawPrimitive

To see the difference between execute buffers and DrawPrimitive, look at the code in Figure 3. This code is just for demonstration and shouldn't be considered anything other than illustrative. The initial part of the code is just the cube data, which is shared by both examples. The execute buffer code is C-based, and is essentially copied out of one of the DirectX 3 sample applications. The DrawPrimitive code is C++-based and is more representative of what you'd see in the DirectX 5.0 samples. I've cut out some things (like texturing and lighting) that were in the original example, just to make it easier to focus on the differences.
There are three steps to using DrawPrimitive in your program. The first is getting the necessary interfaces and devices. You get access to DrawPrimitive through the new DirectDraw® IDirect3D2 interface. Assuming that you've got a DirectDraw object, the following line

 dd->QueryInterface(IID_IDirect3D2, (void**)&Direct3D);
creates a new Direct3D object. This is just what you did previously in DirectX 3 code, but with a new interface specified by IID_IDirect3D2. The new DirectX 5.0 device model has the Direct3D device as a separate object from a DirectDraw surface.
The IDirect3D2 interface gives access to this new functionality. IDirect3D2 is used to find or enumerate the types of devices supported. It identifies devices by unique CLSIDs. There are typically multiple Direct3D devices with different capabilities (some software, some hardware), but each supports the same set of interfaces. With DirectX 5.0, you now specify the CLSID to identify which type of device object you want. The CLSID (obtained from IDirect3D2::FindDevice or IDirect3D2::EnumDevices) is then used in a call to the IDirect3D2::CreateDevice method to create a device. These device objects support both the original IDirect3DDevice and the new IDirect3DDevice2 interfaces. Unlike in DirectX 3, you cannot call QueryInterface on these objects to retrieve an IDirectDrawSurface interface. Instead, you must use the IDirect3DDevice2::GetRenderTarget method.
You may notice that the relationship between Direct3D and DirectDraw has changed with DirectX 5.0, since the DirectDraw object now encapsulates both the DirectDraw and Direct3D states. When you create a DirectDraw object and then use the IDirectDraw2::QueryInterface method to obtain an IDirect3D2 interface, the reference count of the DirectDraw object is 2. This means that the lifetime of the Direct3D driver state is the same as that of the DirectDraw object. That is, releasing the Direct3D interface does not destroy the Direct3D driver state—that state is not destroyed until all references to that object (both DirectDraw and Direct3D references) have been released. Therefore, if you release a Direct3D interface while holding a reference to a DirectDraw driver interface, and then query the Direct3D interface again, the Direct3D state will be preserved.
This is different from previous versions of DirectX, where a Direct3D device was aggregated off a DirectDraw surface. In these versions of DirectX, the IDirect3DDevice and IDirectDrawSurface were two interfaces to the same object. A given Direct3D object supported multiple 3D device types. The IDirect3D interface was used to find or enumerate the device types. The IDirect3D::EnumDevices and IDirect3D::FindDevice methods identified the various device types by unique interface IIDs, which were then used to retrieve a Direct3D device interface by a call to QueryInterface on a DirectDraw surface. The lifetimes of the DirectDraw surface and the Direct3D device were identical, since the same object implemented them. The reason for this change is that the previous architecture did not allow the programmer to change the rendering target of the Direct3D device, which is now possible through the IDirect3DDevice2::SetRenderTarget method.
The next step is to actually get a Direct3D device. As before, you execute a line of code with the Direct3D object to get the desired device.

 Direct3D->CreateDevice( IID_IDirect3DxxxDevice,
                         pBackBuffer, &Direct3DDevice );
The parameter IID_IDirect3DxxxDevice is the identifier for the new device. This can be IID_IDirect3DHALDevice, IID_IDirect3DMMXDevice, IID_IDirect3DRampDevice, or IID_IDirect3DRGBDevice. The DirectDraw HEL (hardware emulation layer) supports the creation of texture, mipmap, and z-buffer surfaces. Because of the tighter integration of DirectDraw and Direct3D in DirectX 5.0, a DirectDraw-enabled system always provides Direct3D support—at the very least in software emulation. Therefore, the DirectDraw HEL exports the DDSCAPS_3DDEVICE flag to indicate that a surface can be used for 3D rendering. DirectDraw drivers for hardware-accelerated 3D display cards should use this flag to indicate the presence of hardware-accelerated 3D.
Once you have your Direct3D device, you can set the state. You can use the new interfaces to set the render state, lighting state, viewport, transforms, and so on.

So far, coding isn't much different from previous versions of DirectX, except that you don't need execute buffers to change the state. This alone is a huge improvement.
The last step is to actually create and render the primitive. The new Direct3D interfaces allow you to construct a primitive by specifying the primitive type and then the vertexes. No execute buffers, no pointers to keep track of. Very simple. To create a simple square using the simplest new DrawPrimitive interface, the code would look something like this:

 Direct3DDevice->Begin( DIRECT3DPT_TRIANGLEFAN, DIRECT3DVT_VERTEX, 0 );
     // these are DIRECT3DVT_VERTEX variables
     Direct3DDevice->Vertex(  vertex0  );
     Direct3DDevice->Vertex(  vertex1  );
     Direct3DDevice->Vertex(  vertex2  );
     Direct3DDevice->Vertex(  vertex3  );
 Direct3DDevice->End( 0 );
This is the simplest interface found in IDirect3DDevice2. Notice that I didn't even mention DrawPrimitive. The IDirect3DDevice2::Begin/Vertex/End paradigm is the simplest primitive construction method. The only Direct3D method you can legally call between calls to IDirect3DDevice2::Begin and IDirect3DDevice2::End is IDirect3DDevice2::Vertex. This method makes it easy to construct primitives and to try out the new interface.
Three more interfaces for primitive construction are found in the IDirect3DDevice2 interface: IDirect3DDevice2::BeginIndexed, IDirect3DDevice2::DrawPrimitive, and IDirect3DDevice2::DrawIndexedPrimitive. Collectively, these four methods are referred to as the DrawPrimitive interfaces. If you are constructing a simple object, you might use the Begin/Vertex/End method. But for more complex objects, or objects where you already have an array of vertexes in the proper order (for example, a triangle strip), you'd use the IDirect3DDevice2::DrawPrimitive interface. Using it is even simpler than the previous example, since it assumes that the vertex data is in order:

 Direct3DDevice->DrawPrimitive( DIRECT3DPT_TRIANGLEFAN, DIRECT3DVT_VERTEX,
                                (LPVOID)vertexArray, numVertices, 0 );
This is a pretty simple way to construct an object, although it might look a little confusing at first. That code behaves the same as this code:

 Direct3DDevice->Begin( DIRECT3DPT_TRIANGLEFAN, DIRECT3DVT_VERTEX, 0 );
     for ( int i=0; i<numVertices; i++ )
         Direct3DDevice->Vertex(  vertexArray[i]  );
 Direct3DDevice->End( 0 );
It's preferable to use DrawPrimitive rather than the individual vertex method because you only need to pass in a pointer to the array of data, not every individual vertex.
The next new primitive interface is perhaps the most powerful and, not too surprisingly, is also the most complicated. One of the problems that you might have noticed with both of the previous methods is that you must provide duplicate vertex information if the vertexes are not quite in the correct order for a collection of primitive types. Look at the wireframe cube shown in Figure 4. The cube has six faces, with each face containing four vertexes. Thus, using the Begin/Vertex/End method for a list of triangles, there would have to be (6 faces)(2 triangles/face)(3 vertexes/triangle) = 36 vertexes specified. Thirty-six seems like a lot just to specify a cube. Simplify by using two triangle fan primitives, picking one corner as the first fan's origin, and then using the three sides that join at that point as the leaves of the fan.
Figure 4
In Figure 4, vertex 0 could be the fan origin, with vertexes 1, 2, 3, 4, 5, 6, and then vertex 1 again to close the fan around it. You'd then do the same for the opposite side of the cube, using vertex 7 as the origin of the second triangle fan with its own closed rim of six vertexes. In this case there'd be (2 fans)(8 vertexes/fan) = 16 vertexes, a reduction of more than half in the information you need to supply. Now you see why I stress using shared vertexes! You know that a cube is made up of 8 vertexes; the problem is that you need to create a primitive using a list of vertexes that are not in any particular order. This is where DrawIndexedPrimitive comes in.
DrawIndexedPrimitive takes the same arguments as DrawPrimitive, but also takes a list of indexes that it uses to look up the vertexes, rather than assuming they're in the correct order. Figure 5 shows how the vertex list and the index list are used to process vertexes. The biggest advantage of DrawIndexedPrimitive is that you frequently get model information in this format. Most modeling programs save models by using a vertex list followed by a vertex index list. While it may seem to be an extra step to do the lookup, remember that each vertex in the primitive goes through multiple matrix transformations each frame. It's much simpler to look up an already transformed vertex than to take a duplicate vertex and run it through a few dozen floating-point calculations, particularly when you've got a model made up of thousands of vertexes, all of them shared between two or more primitive shapes.
Figure 5  DrawIndexedPrimitive Vertex Processing

The last method is a combination of Begin and DrawIndexedPrimitive. The IDirect3DDevice2::BeginIndexed method defines the start of a primitive based on indexing into an array of vertexes, just as you did with DrawIndexedPrimitive. Instead of calling the Vertex interface as you would if you were using Begin, you use the Index method to specify the index into the vertex array. Below, I use an array of the indexes into the vertex array to specify the order of vertexes, just as with DrawIndexedPrimitive.

 Direct3DDevice->BeginIndexed( DIRECT3DPT_TRIANGLEFAN, DIRECT3DVT_VERTEX,
                               vertexArray, numVertices, 0 );
     for ( int i=0; i<numVertices; i++ )
         Direct3DDevice->Index(  vertexArrayIndex[i]  );
 Direct3DDevice->End( 0 );
Well, that's essentially what DrawPrimitive is, and how you'd use it. Now, how would you actually use it in a program? Microsoft has done a great job in converting most of its Immediate Mode sample programs to DrawPrimitive, and since everyone who's interested in DrawPrimitive can get the SDK from Microsoft's Web site, I've decided to use one of their sample programs in this article. You can get the SDK from the Web sites noted at the end of the article. Before I jump into that code, take a look at the parameters that you can use in the IDirect3DDevice2::Begin call in Figure 6. And you should examine the SDK documentation, for there are some really interesting capabilities hinted at there.


The SDK program that I'm going to use as a sample is called boids, which is either how a Brooklynite would pronounce "birds," or, more likely, a cross between "birds" and "droids." Boids is a really cool program, one that shows that all the effort Microsoft has been spending snatching up graphics talent is producing results. Seeing a reference to a SIGGRAPH paper in a Microsoft example is certainly a far cry from the old "we'll do it our way" attitude that used to predominate.
Boids is a program that models "flocking" behavior—in this case the flocking behavior of birds, but I suspect that if you modeled fish it would work just fine. In the program, various delta shapes (da "boids") congregate and start flocking, with the viewpoint following the center of the flock as it moves around the landscape. The landscape is essentially a flat plane with a random pattern, with the occasional sphere. The spheres and the boids are rendered using DrawPrimitive, and they provide an excellent study example. The boids are pretty simple, while the sphere is rendered from a dynamically generated data set and has the extra complexity of texture coordinates as well.
Figure 7 Boid
A boid is a simple delta shape that looks much like an arrowhead. There's much trickery going on in these deceptively simple shapes, tricks that make them appear visually much richer. Figure 7 shows a view of a boid from three different angles. There are 16 vertexes in each boid, and they are used in the construction of 10 triangles. If vertexes weren't reused, then there would be a total of 30 vertexes instead of just 16. Figure 8 shows the initialization of the boid data structures for both the vertexes and the vertex indexes. The top and bottom of the boid are made up of three triangles consisting of five shared vertexes. The rear of the boid contains six vertexes that make up four triangles. Figure 9 shows how the vertexes that make up the rear of the boid are used to construct the triangles.
Figure 9  Boid Rear
The whole key to using DrawPrimitive is to recognize that most 3D objects are composed of many shared vertexes. You normally don't want the object to have "gaps" in its exterior, so having triangles that share vertexes is a prerequisite to creating smooth surfaces. You may also be wondering if it would be possible to share more vertexes in the boid model—after all, there's no difference among most of the vertexes. In fact, there are just seven vertexes with different x, y, z, values. Could you further simplify the model? Well, the answer is yes, with a few restrictions.
If you examine the vertex information, you'll notice that all of the vertexes differ in either location or in the vertex normal. In fact, you'll notice that all of the "duplicated" vertexes differ in their normal values. This is a very important point. When you say "vertex," you mean not only position, but any other type of information associated with that particular vertex. Positional data is the most obvious type of vertex information, but remember there's also texture coordinates, vertex normals, color information, edge flags, and so on, that may be associated with a particular vertex. So if any of these values are different, then you'll have to specify an entirely new vertex value to account for these differences.
Figure 10 Boid
In fact, this is used to great advantage in the boids program. You'd expect a shape made up of 10 triangles to show some blockiness, but if you look at the left image in Figure 10 you'll notice that the top and rear of the boid look very smooth—nothing indicates that each side is made up of three flat surfaces. Compare this with the right image, which clearly shows the facets of each individual surface. The right image is flat-shaded (that is, it uses only one normal for each triangle), while the left image is smooth-shaded (one normal for each vertex is used). Smooth shading works because the averaged vertex normals produce a gradual lighting transition across the surface.
Figure 11 shows two views of the shape of the rear of a boid (looking edge-on). On the left side, you can see that each surface has its own normal. Where the surfaces meet there would be a sharp change in the reflective angle of the surface when lighting calculations are performed. In other words, what you'd see would be three distinct surfaces making up the rear of a boid. On the right side you can see a representation of the actual values that are used with each vertex. Notice that the two vertexes in the center have normals associated with them that are averaged values of the normals of the surfaces that share the particular vertex.
Figure 11  Boid Normals

Since lighting values are interpolated between vertexes of triangles, performing your own averaging of surface normals between adjacent triangles gives the effect of one smooth, curving surface when lighting calculations are performed between these vertexes. Since the colors of the triangles are the same, the only visible difference is due to lighting effects. By performing this interpolation of normals between triangles, you extend the within-triangle vertex interpolation that Direct3D performs for you and make the entire rear surface appear to be one smooth, curved surface. This effect is also used on the top and bottom surfaces to make them appear smooth. This is an important and powerful trick that you can use to reduce the complexity of your models by making sharp-edged corners appear smooth and continuous.


The spheres in the scene also have some interesting properties. The spheres have many more vertexes than the boids and are generated by an equation. This is one of those interesting points in 3D graphics where you dredge up some memory of a long-forgotten math class and get excited that, yes, you really will be able to use the equation of a sphere at some point in your life. To refresh your memory, the equation of a sphere of radius r, centered at the origin, is x² + y² + z² = r². Or, in spherical coordinates: x = r sin θ cos φ, y = r sin θ sin φ, z = r cos θ, where φ is the angle from the x axis in the xy plane, and θ is the angle from the z axis.
Now, the easiest way to get a nice sphere is to simply pick the number of lines of latitude and longitude. For 8 lines of latitude (east-west lines), you'd subdivide θ's 180° range into 8 parts, giving 22.5° increments. Likewise, for 16 lines of longitude, the 360° range of φ gives 16 increments of 22.5°. Looping through these values gives you the points you need.
One word of caution when constructing models from equations: don't trust the math to get it exactly right. In other words, if you've already calculated sin (0°), then don't trust your for loop to calculate the final increment for sin(360°). More likely it'll be something like sin(359.99999°), and thus you'll have a tiny but sometimes very noticeable gap in your sphere. Take advantage of shared vertexes, and simply reuse the coordinates calculated for sin(0°).
Now that the details are out of the way, Figure 12 shows you the code that is used to generate the sphere. Note that the north and south pole coordinates are hardcoded. You can see that taking the equations I've given above and transforming them into actual usable code isn't that hard. The tricky part is actually generating the vertex indexes to connect up the correct vertexes. It may occur to you that a sphere object could be made up of two types of primitives, a triangle fan for the top and bottom cap, and a series of triangle strips for each band between lines of latitude.
However, if you examine the code in Figure 12 closely, you'll see that it looks like the coder took the easy way out. Instead of determining what makes up the triangle fan and strip components, the sphere is constructed entirely from individual triangles. While this makes it easy to generate the primitive (and you end up with one long list of triangles instead of 2+MESH_SIZE primitives to render), you do end up with a more complicated primitive to render, which might slow things down.
Remember that to reuse a vertex, everything about that vertex must be shared. In the case of the texture being wrapped around the sphere, the texture appears on both the front and the back of the sphere. This means the vertexes where the texture starts and ends would have to be specified twice: once to begin the texture heading off one side of those vertexes, and again to complete the texture coming around from the other side. These seams in the texture would require the same positional data but different texture coordinates. If you were mapping an image of the world onto the sphere, you'd only need to duplicate the vertexes on the single seam where the texture's east edge meets its west edge. Using individual triangles eliminates a lot of this bookkeeping, so in retrospect it's understandable why the code uses triangles instead of strips or fans.
If you look at Figure 12, you can also see where the texture coordinates are being generated, with the texture being flipped when the coordinates reach the 180° mark. Note that the surface normal is the same as the positional point. A moment's thought will assure you that for a unit sphere these are the correct values for a normal perpendicular to the surface (that is, the x, y, z surface position on a unit sphere is also a unit surface normal at that position). One final note: while this sphere generation code is typical and fairly simple to implement, the triangles produced vary greatly in size, which can cause artifacts when the sphere is in motion. The code also doesn't share vertexes, which is pretty much a requirement for fast rendering. A better approach is to take a unit octahedron (8 sides) or icosahedron (20 sides) and subdivide the faces (a simple operation) until the desired accuracy is reached. This method keeps the triangles nearly uniform in size, and it lets you easily generate triangle strips instead of individual triangles.
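The subdivision approach is simple enough to sketch in a few lines of plain C++. This is an illustration of the technique under stated assumptions, not code from the article: split each face into four by taking edge midpoints, push the midpoints back out to the unit sphere, and recurse. The `Vec3`, `Subdivide`, and `OctahedronSphere` names are hypothetical.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 Normalize(const Vec3& v)
{
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Midpoint of two points, pushed back out onto the unit sphere.
static Vec3 Midpoint(const Vec3& a, const Vec3& b)
{
    return Normalize({(a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2});
}

// Recursively split each triangle into four until depth reaches zero,
// appending the final triangles (three vertexes each) to out.
void Subdivide(Vec3 a, Vec3 b, Vec3 c, int depth, std::vector<Vec3>& out)
{
    if (depth == 0) {
        out.push_back(a); out.push_back(b); out.push_back(c);
        return;
    }
    Vec3 ab = Midpoint(a, b), bc = Midpoint(b, c), ca = Midpoint(c, a);
    Subdivide(a, ab, ca, depth - 1, out);
    Subdivide(ab, b, bc, depth - 1, out);
    Subdivide(ca, bc, c, depth - 1, out);
    Subdivide(ab, bc, ca, depth - 1, out);   // the center triangle
}

// Build a sphere by subdividing the unit octahedron whose 6 vertexes
// lie on the coordinate axes; yields 8 * 4^depth triangles.
std::vector<Vec3> OctahedronSphere(int depth)
{
    const Vec3 px{1,0,0}, nx{-1,0,0}, py{0,1,0},
               ny{0,-1,0}, pz{0,0,1}, nz{0,0,-1};
    const Vec3 rim[4][2] = {{px,py}, {py,nx}, {nx,ny}, {ny,px}};
    std::vector<Vec3> tris;
    for (const auto& e : rim) {
        Subdivide(pz, e[0], e[1], depth, tris);   // 4 upper faces
        Subdivide(nz, e[1], e[0], depth, tris);   // 4 lower faces
    }
    return tris;
}
```

Each level of depth quadruples the triangle count, so two levels already give a 128-triangle sphere with every vertex exactly on the unit sphere; an icosahedron works the same way, just starting from 20 faces.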

Other New DirectX 5.0 Features

There are other new features besides DrawPrimitive in DirectX 5.0. Soon, new motherboards that support AGP (the Accelerated Graphics Port) will be available. Video cards supporting AGP are already shipping from ATI, and other cards will follow. AGP is one way of reducing the time it takes to get information across the memory bus: a double-speed (AGP-1, 66MHz) or quadruple-speed (AGP-2, 133MHz) bus dedicated strictly to the graphics pipeline. On a card/motherboard/OS combination that supports it, AGP lets the video card treat system memory as video memory. There's nothing preventing you from doing this already; in fact, most graphics-intensive programs already store textures and images in system memory. However, AGP treats system memory as video memory transparently, without the programmer having to know anything about AGP.
When you ask how much video memory is available, you'll get an answer like 10MB (depending upon how much memory is free). The driver takes care of managing the memory, and you can use AGP memory just as you would video memory: for textures, blitting, and so on. AGP memory is slower than onboard video memory (but still faster than regular system memory) and it's uncached, so it's not a free ride. It just means that soon you won't have to worry whether the app is running on a 2MB or 4MB video card. AGP is available with DirectX 5.0, but you'll need a video card that supports AGP (such as one from ATI, AccelGraphics, or Rendition), a motherboard with an AGP slot (such as one based on Intel's 440LX or 440BX chipsets), and an operating system (such as Windows 98) that supports AGP.
There are also some new capability bits, shown in Figure 13. As you can see, there are cap bits to detect AGP support, DrawPrimitive support, and support for some new texture features.
The last two items that I'll mention are the ability to dynamically change the rendering target and the viewport. The new IDirect3DDevice2::SetRenderTarget method permits you to easily change the rendering output to a new DirectDraw object. When you change the rendering target, all of the handles associated with the previous rendering target become invalid—you'll have to reacquire all of the texture handles. If you're using ramp mode, you'll need to update the texture handles inside materials by calling IDirect3DMaterial2::SetMaterial. Any execute buffers (which have embedded handles, an excellent reason to avoid them) also need to be updated.
The IDirect3DDevice2::SetRenderTarget method was designed to be used with applications that use DrawPrimitive, especially when those applications do not use ramp mode. If the new render target surface has different dimensions from the old (width, height, or color depth), this method marks the viewport as invalid; it must then be revalidated by calling IDirect3DViewport2::SetViewport to restate viewport parameters that are compatible with the new surface. Be aware that capabilities do not change with changes in the properties of the render target surface: Direct3D has only one opportunity to expose capabilities to the application, and the system cannot expose different sets of capabilities depending on the format of the destination surface. If a z-buffer is attached to the new render target, it replaces the previous z-buffer for the context. Otherwise, the old z-buffer is detached and z-buffering is disabled. If more than one z-buffer is attached to the render target, this method will fail. The new IDirect3DViewport2 interface introduces a closer correspondence between the dimensions of the clipping volume and the viewport than was true for the old IDirect3DViewport interface.
As you can see, there have been substantial changes to improve the functionality of this release of DirectX. What this means is that if you were intimidated by the complexity of using Direct3D Immediate Mode, then you might want to reevaluate DirectX 5.0. With its improved functionality, simpler primitive construction methods, and greatly improved sample code and documentation, DirectX 5.0 may be the release you've been waiting for to start riding the 3D graphics wave.

From the February 1998 issue of Microsoft Systems Journal. Get it at your local newsstand, or better yet, subscribe.

For related information see: Basics of DirectDraw Game Programming.

© 1997 Microsoft Corporation. All rights reserved.
Terms of Use
