Welcome to our third Development Blog post.
Simulation games often try to present realistic environments to increase the player’s immersion in the virtual world. In this short article we focus on the technical aspects of rendering an interactive, dynamic vegetation system, which gives “a new life” to the extraction maps in our game.
When a game designer arranges assets and objects on a level, the artistic vision is sometimes constrained by various factors. The rules of computer graphics are absolute: more geometry means worse performance. We reached the point where we achieve an average of 60 FPS at maximum video settings on a high-end PC, but we want to add even more detail to our game, more and more! We had already taken advantage of the classic optimization tweaks (occlusion culling, LODs, baking high-poly meshes into normal maps and so on), so what comes next?
In order to reduce the number of draw calls (commands issued by the CPU to submit data to the graphics card), the Unity engine lets us use geometry instancing. It is a technique in which the device draws multiple copies of the same game object at once. At first glance it does not seem very promising: how can we make a 3D scene look more organic simply by repeating identical pieces of geometry?
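As a starting point, here is a minimal sketch of plain (non-indirect) instancing in Unity C#. The component name, the `quadMesh` and `grassMaterial` fields and the placement logic are our assumptions for illustration; the material’s shader must have “Enable GPU Instancing” turned on.

```csharp
using UnityEngine;

// Minimal sketch: draw many copies of one mesh in a single draw call.
// quadMesh and grassMaterial are hypothetical assets assigned in the Inspector.
public class SimpleGrassInstancing : MonoBehaviour
{
    public Mesh quadMesh;
    public Material grassMaterial;

    const int Count = 1023; // DrawMeshInstanced accepts at most 1023 matrices per call
    Matrix4x4[] matrices = new Matrix4x4[Count];

    void Start()
    {
        for (int i = 0; i < Count; i++)
        {
            // Scatter the instances randomly over a 50 x 50 patch of ground.
            var pos = new Vector3(Random.Range(-25f, 25f), 0f, Random.Range(-25f, 25f));
            matrices[i] = Matrix4x4.TRS(pos, Quaternion.identity, Vector3.one);
        }
    }

    void Update()
    {
        // One draw call renders all 1023 quads.
        Graphics.DrawMeshInstanced(quadMesh, 0, grassMaterial, matrices, Count);
    }
}
```

This variant still passes the per-instance matrices from the CPU every frame, which is exactly the limitation the compute-buffer approach below removes.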
The real power of GPU instancing is hidden in compute buffers. These are abstract one-dimensional data structures that programmers use to run arithmetic calculations in parallel on GPU cores and, optionally, to easily send the results back to the CPU. As we said at the beginning, we handle only an individual item; for example, let’s say it is a single quad constructed from four vertices. In this situation, the CPU only knows that our normal rendering pipeline defines one quad and nothing more. This is where buffers come in: they store additional vertex data (positions, colors) for our virtual quads, which we fill manually depending on our needs.
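A sketch of that round trip in Unity C#: fill a compute buffer on the CPU, run a kernel on the GPU, and optionally read the results back. The `grassCompute` asset, the kernel name `CSMain` and the `_Positions` buffer name are our assumptions, not anything mandated by the engine.

```csharp
using UnityEngine;

// Sketch: CPU fills a buffer, the GPU processes it in parallel, results come back.
// grassCompute is a hypothetical .compute asset with a kernel named "CSMain".
public class BufferRoundTrip : MonoBehaviour
{
    public ComputeShader grassCompute;

    void Start()
    {
        const int count = 1024;
        var positions = new Vector4[count];
        for (int i = 0; i < count; i++)
            positions[i] = new Vector4(i % 32, 0f, i / 32, 1f); // grid of quad origins

        // One float4 per instance: stride = 4 floats * 4 bytes.
        var buffer = new ComputeBuffer(count, sizeof(float) * 4);
        buffer.SetData(positions);

        int kernel = grassCompute.FindKernel("CSMain");
        grassCompute.SetBuffer(kernel, "_Positions", buffer);
        grassCompute.Dispatch(kernel, count / 64, 1, 1); // assuming 64 threads per group

        buffer.GetData(positions); // optionally pull the results back to the CPU
        buffer.Release();          // compute buffers are not garbage-collected
    }
}
```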
Do we want to render dense grass to the west of the river, precisely at place X and place Y? No problem: prepare a texture with encoded world-space coordinates, relative to the current terrain chunk, to serve as a vegetation map. In serious game development, especially when working with 3D landscapes, spatial thinking really helps us design abstract geometric algorithms and solutions. It then becomes easier to imagine that a texture is nothing more than a two-dimensional array of vectors, so we can also add some extra data, such as the impact direction and speed passed by a colliding object. The next step is some black magic: Unity’s built-in C# function Graphics.DrawMeshInstancedIndirect lets us easily configure the instancing setup (we select the material, mesh, buffers, bounds and properties) and immediately executes the whole process. On the GPU side, the HLSL DirectX semantic SV_InstanceID gives us access to a particular quad copy.
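A sketch of that setup in Unity C#. The buffer name `_PerInstanceData`, the `_VegetationMap` property and the instance count are our assumptions; the five-element arguments buffer layout (index count, instance count, start index, base vertex, start instance) is the one DrawMeshInstancedIndirect expects.

```csharp
using UnityEngine;

// Sketch of a DrawMeshInstancedIndirect setup; names like _PerInstanceData
// and vegetationMap are our illustrative assumptions, not engine API.
public class IndirectGrassRenderer : MonoBehaviour
{
    public Mesh quadMesh;
    public Material grassMaterial;   // its shader reads StructuredBuffer _PerInstanceData
    public Texture2D vegetationMap;  // encodes world-space placement per terrain chunk

    const int InstanceCount = 100000;
    ComputeBuffer argsBuffer, dataBuffer;
    Bounds bounds = new Bounds(Vector3.zero, new Vector3(500f, 50f, 500f));

    void Start()
    {
        // Per-instance data that the shader indexes with SV_InstanceID.
        dataBuffer = new ComputeBuffer(InstanceCount, sizeof(float) * 4);
        grassMaterial.SetBuffer("_PerInstanceData", dataBuffer);
        grassMaterial.SetTexture("_VegetationMap", vegetationMap);

        // Indirect args: index count, instance count, start index, base vertex, start instance.
        var args = new uint[5] { quadMesh.GetIndexCount(0), InstanceCount, 0, 0, 0 };
        argsBuffer = new ComputeBuffer(1, args.Length * sizeof(uint), ComputeBufferType.IndirectArguments);
        argsBuffer.SetData(args);
    }

    void Update()
    {
        Graphics.DrawMeshInstancedIndirect(quadMesh, 0, grassMaterial, bounds, argsBuffer);
    }

    void OnDestroy()
    {
        argsBuffer?.Release();
        dataBuffer?.Release();
    }
}
```

Note that the bounds are fixed and generous here; in practice they should enclose the terrain chunk, since the GPU decides per-instance placement and the engine only culls against these bounds.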
The next steps are quite straightforward:
- read the data from the compute buffer;
- generate a pseudorandom number per quad instance, for two reasons: to simulate the foliage wind animation independently and to randomly rotate/scale/offset every quad;
- bend each quad dynamically, depending on the interaction between the player and the grass (quad), and write the changed state back to the buffer;
- for displaced quads, discard alpha channel values below 0.5 and linearly shift the pixel color (for example, to simulate vegetation dirtiness when a quad is lying on the ground).
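The steps above can be sketched on the shader side in HLSL. This is a sketch under the assumption that the per-instance data lives in a StructuredBuffer; the names (`_PerInstanceData`, `Hash`, the packing of the bend amount into `data.w`) and the exact wind/bend math are ours, and the real shader would also handle lighting.

```hlsl
// Sketch of the vertex/fragment stages for the four steps above.
StructuredBuffer<float4> _PerInstanceData; // xyz = quad origin, w = bend amount
sampler2D _MainTex;

// Cheap per-instance pseudorandom hash (a common one-liner, not engine API).
float Hash(uint id)
{
    return frac(sin(id * 12.9898) * 43758.5453);
}

struct VertexInput  { float4 positionOS : POSITION; float2 uv : TEXCOORD0; uint instanceID : SV_InstanceID; };
struct VertexOutput { float4 positionCS : SV_POSITION; float2 uv : TEXCOORD0; float bend : TEXCOORD1; };

VertexOutput Vert(VertexInput v)
{
    VertexOutput o;
    float4 data = _PerInstanceData[v.instanceID];   // 1. read the compute buffer
    float rnd = Hash(v.instanceID);                 // 2. per-instance pseudorandomness

    float3 positionWS = v.positionOS.xyz + data.xyz;
    positionWS.y *= lerp(0.8, 1.2, rnd);            //    random scale per quad
    // Wind sway, strongest at the top of the quad (uv.y == 1), phase-shifted per instance.
    positionWS.x += sin(_Time.y * (1.0 + rnd)) * 0.1 * v.uv.y;
    // 3. bend the quad by the amount the interaction pass wrote back to the buffer.
    positionWS.xz += data.w * v.uv.y;

    o.positionCS = mul(UNITY_MATRIX_VP, float4(positionWS, 1.0));
    o.uv = v.uv;
    o.bend = data.w;
    return o;
}

float4 Frag(VertexOutput i) : SV_Target
{
    float4 col = tex2D(_MainTex, i.uv);
    clip(col.a - 0.5);                              // 4. discard alpha below 0.5
    // Darken flattened quads to fake dirt on grass lying on the ground.
    col.rgb = lerp(col.rgb, col.rgb * 0.6, saturate(i.bend));
    return col;
}
```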
You can see the final effect in the video below. If you have any suggestions, ideas or questions, write to us at firstname.lastname@example.org or visit our Discord server – https://discord.gg/HPbJj9y