GPU Billboards in a Space Scene

Whether you’re building a space-based game or something closer to home, assuming it’s in 3D, you may find yourself needing to render lots of objects that don’t necessarily require complex geometry. They might be objects off in the distance, such as asteroids or even an entire starfield. They could be trees, grass, or other vegetation. Either way, you may not be able to render a fully detailed model for that many objects without bringing your game’s render loop to a crawl.

A common solution is to use billboards. Billboards are simple 2D polygons, usually quadrilaterals (or two joined triangles), each with a texture that represents a 2D version of the object you would otherwise have rendered as a fully detailed model. The key to making this illusion work is that the billboards automatically face in a certain direction so as to appear to be 3D, or “have volume” [Lengyel 12]. They come in three auto-directional flavors: Screen-Aligned, Viewpoint-Aligned, and Axial [Akenine-Möller, Haines, and Hoffman 08], and each one may be more effective under certain circumstances.

Below are some screenshots from my billboard shader in the Unity Editor, which illustrate the difference between Screen-Aligned and Viewpoint-Aligned billboards.

 

A rule of thumb might be to use screen alignment for distant or smaller objects. A starfield, made up of thousands of visible billboards (or more), can be screen-aligned without any noticeable artifacts. Objects that are closer and/or larger should probably be viewpoint-aligned [Lengyel 12].

For objects that will only ever be seen in two of the three dimensions, use axial alignment. Trees are the classic example, as a person walking by or through a forest will only see the trees from ground level, assuming climbing trees is off-limits. One more point: while viewpoint alignment may yield the superior illusion, screen-aligned billboards are the cheapest computationally, so they might be the first choice for rendering mass quantities of billboards.

This blog entry is about rendering billboards on the GPU, in a vertex shader, where you can render thousands of billboards in a single draw call. There is an upper limit to how many billboards you can draw in a single call; in Unity, for example, a mesh can only have 65,000 vertices (a larger model needs to be split into pieces). Since I am using Unity as my rendering application, I adhere to this limit. This vertex shader technique is somewhat wasteful in that it requires four duplicated vertices per billboard, bringing us down to 16,250 billboards per draw call. A newer technique is to use the Geometry Shader to accomplish this, an implementation of which can be found in [Cozzi and Ring 11].

Here is a frame capture of a space scene using the shader for the starfield and nebulae.

Below is a video of my starfield shader with a manually controlled fighter prototype and a camera rig.

 

There are two parts to this technique. The first is setting up the vertex buffer, which is done in the application code. The second part is done in the vertex (and pixel/fragment) shader. One thing I’d like to point out is that there is no single right way to accomplish this; the vertex definition can be tailored to suit your needs. I’ll simply show you the way I’m doing this to accomplish my goal, which is to create a spherical starfield around the player.

Application Code (C#, Unity API)

First I create my star positions like this:

Vector3[] starCenterPositions = new Vector3[numberOfStars];

for (int i = 0; i < numberOfStars; ++i)
{
   starCenterPositions[i] = Random.onUnitSphere * this.radius;
}

Next, I create the Mesh object in a method and fill it in according to my custom vertex definition:

Mesh mesh = new Mesh();
GetComponent<MeshFilter>().mesh = mesh;

// Every position is duplicated as 4 vertices,
// which will get turned into quads by the vertex shader.
int vertexCount = numberOfStars * 4;

// Set the index buffer for defining our quads (2 tris)
int triangleCount = numberOfStars * 6;

float halfScale = 0.5f;

// Define our vertices, triangles, UVs, sizes, and colors.
Vector3[] vertices = new Vector3[vertexCount];
int[] triangles = new int[triangleCount];
Vector2[] uvs = new Vector2[vertexCount];
Vector2[] size = new Vector2[vertexCount];
Color[] colors = new Color[vertexCount];

for (int i = 0; i < numberOfStars; ++i)
{
   // Set vertices
   vertices[i * 4 + 0] = starCenterPositions[i];
   vertices[i * 4 + 1] = starCenterPositions[i];
   vertices[i * 4 + 2] = starCenterPositions[i];
   vertices[i * 4 + 3] = starCenterPositions[i];

   // Set triangles
   triangles[i * 6 + 0] = i * 4 + 0;
   triangles[i * 6 + 1] = i * 4 + 3;
   triangles[i * 6 + 2] = i * 4 + 1;
   triangles[i * 6 + 3] = i * 4 + 3;
   triangles[i * 6 + 4] = i * 4 + 2;
   triangles[i * 6 + 5] = i * 4 + 1;

   // Set uv values from texture
   uvs[i * 4 + 0] = new Vector2(0.0f, 0.0f);
   uvs[i * 4 + 1] = new Vector2(1.0f, 0.0f);
   uvs[i * 4 + 2] = new Vector2(1.0f, 1.0f);
   uvs[i * 4 + 3] = new Vector2(0.0f, 1.0f);

   halfScale = Random.Range(minSize, maxSize) * 0.5f;
   // Set uv2 values for size
   size[i * 4 + 0] = new Vector2(-halfScale, -halfScale);
   size[i * 4 + 1] = new Vector2(halfScale, -halfScale);
   size[i * 4 + 2] = new Vector2(halfScale, halfScale);
   size[i * 4 + 3] = new Vector2(-halfScale, halfScale);

   // Set colors
   // Used to randomize the star colors.
   float randomValue = Random.value;
   float brightness = Random.Range(0.75f, 1.0f);

   // Most stars will be white, but pick up a few red, green, blue, or yellows.
   Color starColor = primaryStarColor * brightness;

   if (randomValue < 0.01)
   {
      // red-ish
      starColor = new Color(1.0f, 0.5f, 0.5f) * brightness;
   }
   else if (randomValue < 0.05)
   {
      // green-ish
      starColor = new Color(0.5f, 1.0f, 0.5f) * brightness;
   }
   else if (randomValue < 0.1)
   {
      // blue-ish
      starColor = new Color(0.5f, 0.5f, 1.0f) * brightness;
   }
   else if (randomValue < 0.3)
   {
      // yellow-ish
      starColor = new Color(1.0f, 1.0f, 0.5f) * brightness;
   }
   starColor.a = 1.0f;

   colors[i * 4 + 0] = starColor;
   colors[i * 4 + 1] = starColor;
   colors[i * 4 + 2] = starColor;
   colors[i * 4 + 3] = starColor;
}

mesh.vertices = vertices;
mesh.triangles = triangles;
mesh.uv = uvs;
mesh.uv2 = size;
mesh.colors = colors;

return mesh;

Note that I’m using the UV2 channel for the size/scale of my stars.

You can see the data duplication in the vertices: all four get set to the same point. This point is the center (world-space) position of each star, pre-generated in the code shown above, that is, randomly positioned onto the surface of a sphere. The index buffer defines each quad as two indexed triangles. The UVs are set assuming counter-clockwise, front-facing polygons.

What goes into UV2 is the interesting part of this (simple) technique. These values represent ‘corners’: vectors by which we will translate the vertex positions in the vertex shader to form our quadrilateral. The first four vertices that go through the vertex shader will be our first quad. Each vertex in this definition is actually a corner, in the order bottom-left, bottom-right, top-right, and top-left, following the UV ordering. Each is moved by the half-scale of the billboard’s size, which could be the texture width/height or whatever scale you want the quad to be.
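To make the expansion concrete, here is a plain-Python sketch (not Unity or shader code) of what the UV2 corner offsets do; the helper name expand_quad is mine, introduced just for this illustration:

```python
# Plain-Python illustration of the UV2 "corner" expansion (not engine code).
def expand_quad(center, half_scale):
    """Return the four corner positions for a billboard centered at `center`.

    Order matches the UVs: bottom-left, bottom-right, top-right, top-left.
    """
    cx, cy, cz = center
    offsets = [(-half_scale, -half_scale),  # bottom-left  (uv 0,0)
               ( half_scale, -half_scale),  # bottom-right (uv 1,0)
               ( half_scale,  half_scale),  # top-right    (uv 1,1)
               (-half_scale,  half_scale)]  # top-left     (uv 0,1)
    # The shader applies these offsets after a view-dependent transform;
    # offsetting in the XY plane here is enough to show the expansion itself.
    return [(cx + ox, cy + oy, cz) for ox, oy in offsets]

quad = expand_quad((10.0, 5.0, 2.0), 0.5)
```

One duplicated center point becomes a unit of four corners, which is exactly what the four duplicated vertices per star are for.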

Now, on to the vertex shader to see how this gets implemented.

Here is my application-to-vertex shader structure, vertex-to-fragment structure, and uniform data:

struct appdata_t
{
   float4 vertex : POSITION;
   float4 color : COLOR;
   float2 texcoord : TEXCOORD0;
   float2 corners : TEXCOORD1;
};

struct v2f
{
   float4 vertex : SV_POSITION;
   fixed4 color : COLOR;
   float2 texcoord : TEXCOORD0;
};

fixed4 _Color;

sampler2D _MainTex;
float4 _MainTex_ST;

 

Vertex Shader for Screen-Aligned Billboards

There are at least a couple of ways to create our quad in the vertex shader for screen-aligned billboards. The goal is to align the quad to the view screen, which means orienting the billboard with the plane of the camera. Both ways make use of the View Matrix.

The first way I’ll demonstrate is to simply transform our quad to View Space, a coordinate system with the camera at the origin looking along the Z-axis, with an Up vector of (0, 1, 0) and a Right vector of (1, 0, 0). If we move our quad’s center position to View Space, then move the corners using the camera’s Up and Right vectors, the math is extremely basic.

It’s trivial to transform our center position to View Space, since we should have a View Matrix (from Unity or from our engine of choice), passed in from the application level as a uniform matrix. Unity gives us a built-in function for this called UnityObjectToViewPos(). Here’s the vertex shader:

v2f vert_screen_aligned_1(appdata_t v)
{
   v2f o;

   // First, project into View-Space
   float4 vpos = float4(UnityObjectToViewPos(v.vertex), 1.0);

   // Expand the four corners and add to the billboard center position.
   vpos.xy += v.corners.xy;

   // Transform each expanded vertex from view to projection/clip.
   o.vertex = mul(UNITY_MATRIX_P, vpos);
   o.color = v.color;
   o.texcoord.xy = v.texcoord.xy;
   return o;
}

So, it’s pretty simple to do this in View Space. As the four duplicated positions stream into the vertex shader, they get transformed to View Space. Each position has a corner associated with it, which is a 2D vector pointing down-left, down-right, up-right, or up-left. It’s just vector addition.

Now we have a quad, aligned with the camera screen, and we do the final step required of our vertex shader, that is, to perform the vertex projection from View Space to Projection Space. Unity gives us the built-in variable for this transform called UNITY_MATRIX_P. Color and UVs are just passed to the fragment shader unaltered.
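As a numeric sanity check of this approach, here is a plain-Python sketch with a toy view matrix (a camera pulled back 10 units, no rotation; these helpers are mine, not the Unity API). After moving the center into view space and offsetting in XY, all four corners share the same view-space depth, which is exactly what makes the quad screen-aligned:

```python
# Plain-Python check (toy matrices, not the Unity API): expanding corners in
# view space keeps every corner at the same view-space depth, so the quad is
# parallel to the screen.

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a column vector (x, y, z, w)."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

# Toy view matrix: camera pulled back 10 units along +Z, no rotation.
view = [[1.0, 0.0, 0.0,   0.0],
        [0.0, 1.0, 0.0,   0.0],
        [0.0, 0.0, 1.0, -10.0],
        [0.0, 0.0, 0.0,   1.0]]

center = (3.0, 4.0, 5.0, 1.0)
vpos = mat_vec(view, center)          # billboard center in view space

h = 0.5                               # half-scale of the quad
offsets = [(-h, -h), (h, -h), (h, h), (-h, h)]
corners = [(vpos[0] + ox, vpos[1] + oy, vpos[2]) for ox, oy in offsets]

depths = {z for _, _, z in corners}   # all four corners share one depth
```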

So, the downside to this simplicity is that we perform two transformations: one to get from Model Space to View Space, and one more to go from View Space to Projection Space. We can lower the cost a bit by doing only what is really necessary, in the next version of this vertex shader. Additionally, this View Space trick doesn’t carry over to Viewpoint-Aligned billboards.

Here is the alternative version of the Screen-Aligned Billboard vertex shader:

v2f vert_screen_aligned_2(appdata_t v)
{
   v2f o;

   // Expand the four corners in model space.
   float4 vCorners = float4(v.corners.x, v.corners.y, 0.0, 0.0);

   // mul(rowVector, M) multiplies by the transpose of M; for the view
   // matrix's rotation, the transpose is the inverse, which orients each
   // corner toward the camera.
   float4x4 m = UNITY_MATRIX_V;

   // Expand each corner and translate to the billboard's center position.
   float4 vpos = mul(vCorners, m) + v.vertex;

   // Transform each expanded vertex to projection/clip.
   o.vertex = UnityObjectToClipPos(vpos);
   o.color = v.color;
   o.texcoord.xy = v.texcoord.xy;

   return o;
}

So, first I create a vCorners vector, then multiply it by the View Matrix (as a row vector, which applies the inverse of the view rotation) and translate by the center position of the billboard. Take care here to determine what your model-world hierarchy actually is; I don’t have any parent node, so model-to-world is just the identity transform. The next statement transforms the expanded quad points from Model Space to Projection Space, as most basic vertex shaders do. Unity gives us the UnityObjectToClipPos() method, replacing the usual model-view-projection (MVP) matrix transform.
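The row-vector multiply deserves a closer look: in HLSL, mul(rowVector, M) is equivalent to multiplying the column vector by the transpose of M, and for a pure rotation the transpose is the inverse. Here is a plain-Python sketch (using a stand-in Y-axis rotation, not Unity’s actual view matrix) confirming the two forms agree:

```python
# Plain-Python check of the mul(rowVector, M) trick. A row vector times M
# equals M-transposed times the column vector, and for a pure rotation the
# transpose is the inverse -- so the corners get the inverse view rotation.
import math

def row_times_mat(v, m):
    """HLSL-style mul(rowVector, M) for 3x3 matrices."""
    return tuple(sum(v[r] * m[r][c] for r in range(3)) for c in range(3))

def mat_times_col(m, v):
    """Conventional M * columnVector for 3x3 matrices."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

a = math.radians(30.0)
# Stand-in for the view rotation: a rotation about the Y axis.
rot = [[ math.cos(a), 0.0, math.sin(a)],
       [ 0.0,         1.0, 0.0        ],
       [-math.sin(a), 0.0, math.cos(a)]]
rot_t = [[rot[c][r] for c in range(3)] for r in range(3)]  # transpose

corner = (0.5, -0.5, 0.0)
via_row = row_times_mat(corner, rot)          # what the shader computes
via_inverse = mat_times_col(rot_t, corner)    # explicit inverse rotation

same = all(abs(x - y) < 1e-9 for x, y in zip(via_row, via_inverse))
```

Because the corner offsets come out rotated by the inverse of the view rotation, the expanded quad ends up parallel to the camera plane once the regular MVP transform is applied.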

Viewpoint-Aligned Billboards

If Screen-Aligned Billboards are not creating the illusion of 3D we need, Viewpoint-Aligned Billboards might help. The closer we get to a screen-aligned quad, the easier it is to see around it and notice that it is flat and fake. To picture viewpoint alignment, consider the typical multi-monitor computer workspace: the larger and closer the two or three monitors are to your face, the more you’d like them angled toward your seated (or better, standing) viewpoint. Here is how you do that for billboards in the vertex shader, using the same data structures, inputs, and outputs as before, except we also need the World Space camera position.

Here is the Viewpoint-Aligned Billboard shader, followed by the explanation.

 v2f vert_cam_pos_normal(appdata_t v)
 {
    v2f o;
    float3 N = -normalize(_WorldSpaceCameraPos - v.vertex);
    float3 A = cross(UNITY_MATRIX_V[1], N);
    float3 B;

    if (length(A) < 0.01)
    {
        // Up x N is nearly zero; use the camera's right vector instead.
       B = normalize(cross(N, UNITY_MATRIX_V[0]));
       A = cross(B, N);
    }
    else
    {
       A = normalize(A);
       B = cross(N, A);
    }


    float4x4 m = float4x4
    (
       float4(A.x, A.y, A.z, 0), // Derived Side
       float4(B.x, B.y, B.z, 0), // Derived Up
       float4(N.x, N.y, N.z, 0), // Forward
       float4(0 , 0 , 0 , 1)
    );

    float4 vCorners = float4(v.corners.x, v.corners.y, 0.0, 0.0);
    float4 vpos = v.vertex + mul(vCorners, m);

    // Transform each expanded vertex to projection/clip.
     o.vertex = UnityObjectToClipPos(vpos);
    o.color = v.color;
    o.texcoord.xy = v.texcoord.xy;
    return o;
 }

 

We do the calculations for this in World Space. We need a vector from the camera position to the vertex, which we will call N. Make sure to normalize it! We then need to generate two more vectors perpendicular to N in order to form a basis for our viewpoint-aligned billboard: a derived Side vector and a derived Up vector. We can’t simply use the camera’s own vectors or we’d be back to screen-aligned billboards, but we can use them to derive the new basis.

The first one is a side vector we can call A. Use the Camera Up vector from the View Matrix; in Unity, we get this from the built-in view matrix variable as UNITY_MATRIX_V[1]. To calculate A, we take the cross product of the Camera Up vector with N, the vector from the camera to the vertex.

One caveat with simply marching forward with this derived side vector A is that it might be in line with N. If they point in (nearly) the same direction, which very well could happen, we will not get good results: the cross product we just calculated will have a length of (close to) zero. So I check whether the length is close to zero.

If we cannot use the A vector due to this issue, we can use the Camera’s Right vector instead. We generate the derived Up vector (B) by taking the cross product of N with the Camera Right vector, which must be valid in this case. We normalize it because we need it for one more cross product: we recalculate the derived Side vector (A) as the cross product of the derived Up vector (B) with N.

Back to the case where N and the first derived side vector (A) were fine (not pointing close to or in the same direction): we normalize A, then calculate B as the cross product of N with A.
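Both cases can be sketched outside the shader. This plain-Python version of the basis construction (the function name billboard_basis is mine, not part of the shader) derives A and B from N and the camera’s up and right vectors, with the same near-zero fallback, and checks that the result is a valid perpendicular basis:

```python
# Plain-Python version of the viewpoint-aligned basis construction
# (billboard_basis is an illustrative name, not something from the shader).

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    length = dot(v, v) ** 0.5
    return tuple(c / length for c in v)

def billboard_basis(n, cam_up, cam_right):
    """Derive side (A) and up (B) vectors perpendicular to the normal N."""
    a = cross(cam_up, n)
    if dot(a, a) ** 0.5 < 0.01:
        # cam_up is (nearly) parallel to N: fall back to the right vector.
        b = normalize(cross(n, cam_right))
        a = cross(b, n)
    else:
        a = normalize(a)
        b = cross(n, a)
    return a, b

n = normalize((1.0, 2.0, 3.0))
a, b = billboard_basis(n, (0.0, 1.0, 0.0), (1.0, 0.0, 0.0))
```

A, B, and N are mutually perpendicular, so they form exactly the kind of orthonormal frame the 4×4 matrix in the shader is built from.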

Now we can create a 4×4 matrix that will orient each billboard to our viewpoint. We also build a simple vector representing the corner for our vertex, just as we did in the second version of the screen-aligned vertex shader, then perform the same math: expand the quad’s corners, translate to the final position, and finally transform the vertex to Projection Space.

Again, the color and UV coordinates pass through to the fragment shader.

Fragment Shader

The fragment shader is as simple or as complex as you want it to be. You can simply sample the texture and return the color, or make some modifications, as I’m doing below with the color defined when filling in the vertex data in the application. Additionally, I multiply in another color, mostly just for prototyping in the scene.

 fixed4 frag (v2f i) : SV_Target
 {
    // sample the texture
    fixed4 col = tex2D(_MainTex, i.texcoord) * i.color * _Color;
    return col;
 }

Note that you could perform vertex animations in the vertex shader or color animations in the fragment shader, such as blinking stars or whatever you want.

Well, that wraps up my blog entry on GPU-rendered billboards. I have been using this technique for a number of years to generate and render starfields and other mass quantities of background ‘stuff’. I adjust the inputs, outputs, and animations to suit my needs for the effect, so this is one example that provides a base to build on. It’s good to be able to implement and test the different billboard alignments, to see what works best, and then weigh the costs and benefits of each.

Thanks for reading,

-David

 

Bibliography

[Akenine-Möller, Haines, and Hoffman 08] Tomas Akenine-Möller, Eric Haines, and Naty Hoffman. “Billboarding.” In Real-Time Rendering Third Edition, Chapter 10.6, pp. 446-455. Boca Raton, FL: CRC Press, 2008.

[Cozzi and Ring 11] Patrick Cozzi and Kevin Ring. “Basic Rendering.” In 3D Engine Design for Virtual Globes, Chapter 9.1, pp. 252-258. Boca Raton, FL: CRC Press, 2011.

[Lengyel 12] Eric Lengyel.  “Billboarding.” In Mathematics for 3D Game Programming and Computer Graphics Third Edition, Chapter 9.3, pp. 254-258. Boston, MA: Course Technology, a part of Cengage Learning, 2012.

 

 
