Welcome to Tomway's Game of Life! This project is my personal testbed for graphics engineering and other systems programming work. It is a C++ implementation of Conway's Game of Life rendered with Vulkan. Overkill, you say? Probably, but the goal is to have a simple simulation where I can implement real-time rendering techniques, profile and optimize them, then compare them against previous iterations.
As of v1.0, this project is not particularly user-friendly or intuitive. It's still mainly meant to be built and run from source by me and hasn't been tested on any machine but mine. You can find releases of the project here and on my Itch.io page.
Control | Binding |
---|---|
Toggle mouse/move modes | F1 |
Save | F2 |
Load | F3 |
Move | WASD |
Look | Mouse movement |
Step simulation | Space |
Pause/unpause simulation | L |
Reset Application | R |
Tomway's Game of Life is a highly simplified "game engine". As of v1.0 there is no scene graph, no components or game objects, and certainly no editor or tools. The "engine" handles user input to control the camera, the simulation, and the UI. There is ambient music and a few sound effects. I'm not going to detail anything but the simulation and audio here because the other systems are not the point of this project.
The simulation is contained in `simulation_system`. The simulation keeps two frames of state for the game: new and old. Each tick, it updates the new frame's state based on the data in the old one. I also allow "teleporting" between the edges of the board, meaning that a cell on the left edge treats the cells on the right edge as its "left" neighbors, and vice versa.
As of v1.0, the simulation is single-threaded.
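The double-buffered, wrap-around update described above can be sketched roughly as follows. This is illustrative only; `grid_t`, `count_neighbors`, and `step` are hypothetical names, not the project's actual API:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical grid type: true = living cell.
using grid_t = std::vector<std::vector<bool>>;

// Count the eight neighbors of (row, col), wrapping at the board edges so
// the left and right (and top and bottom) edges are adjacent ("teleporting").
int count_neighbors(grid_t const& old_grid, std::size_t row, std::size_t col)
{
    auto const rows = old_grid.size();
    auto const cols = old_grid[0].size();
    int count = 0;
    for (int dr = -1; dr <= 1; ++dr)
    {
        for (int dc = -1; dc <= 1; ++dc)
        {
            if (dr == 0 && dc == 0) continue;
            auto const r = (row + rows + dr) % rows;  // toroidal wrap
            auto const c = (col + cols + dc) % cols;
            count += old_grid[r][c] ? 1 : 0;
        }
    }
    return count;
}

// Write the next generation into new_grid based only on old_grid,
// mirroring the two-frame (new/old) scheme described above.
void step(grid_t const& old_grid, grid_t& new_grid)
{
    for (std::size_t r = 0; r < old_grid.size(); ++r)
    {
        for (std::size_t c = 0; c < old_grid[r].size(); ++c)
        {
            int const n = count_neighbors(old_grid, r, c);
            new_grid[r][c] = old_grid[r][c] ? (n == 2 || n == 3) : (n == 3);
        }
    }
}
```

Keeping two frames means the update never reads a cell it has already overwritten, which is what makes the rule deterministic regardless of iteration order.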
Rendering is done in `render_system`. The important part of the API is a single function: `tomway::render_system::draw_frame`. This function takes an updated transform for the camera, conditionally fetches vertices and transfers them, draws the current vertex buffer, and then draws the UI.
This first implementation was intentionally primitive and simple. My goal was to get something in place as a baseline for additional rendering techniques. Each time the simulation steps, new vertices are generated for every living cell and the entire vertex buffer is updated. The vertices are rendered in chunks no larger than the maximum memory allocation size for the GPU. Each chunk is a single model with no transform - vertices are placed directly in world space. Inefficient, you say? Probably! But the goal for v1.0 isn't to find the most efficient method of rendering, it's to provide a baseline for comparison.
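The chunking step can be sketched as a simple split of the flat vertex list, capping each chunk's byte size at the device limit (in Vulkan terms, something like `VkPhysicalDeviceMaintenance3Properties::maxMemoryAllocationSize`). The `vertex` layout and function name below are placeholders, not the project's real types:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Placeholder vertex; the real layout would carry normals, color, etc.
struct vertex { float x, y, z; };

// Split a flat vertex list into chunks whose byte size never exceeds
// max_alloc_bytes, so each chunk fits in a single GPU allocation.
std::vector<std::vector<vertex>> chunk_vertices(
    std::vector<vertex> const& vertices,
    std::size_t max_alloc_bytes)
{
    std::size_t const max_verts = max_alloc_bytes / sizeof(vertex);
    std::vector<std::vector<vertex>> chunks;
    for (std::size_t i = 0; i < vertices.size(); i += max_verts)
    {
        auto const end = std::min(i + max_verts, vertices.size());
        chunks.emplace_back(vertices.begin() + i, vertices.begin() + end);
    }
    return chunks;
}
```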
The vertices are generated in `cell_geometry`. This class exists to bridge the simulation and the rendering system and keep each unaware of the other's concerns. `cell_geometry` iterates through the list of cells and generates new vertices for each living cell based on a set of hard-coded vertices. The generated vertices are grouped into the chunks mentioned above.
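The core of that idea is stamping a hard-coded vertex template out at each living cell's grid position, producing world-space vertices directly (no per-model transform). A minimal sketch, with hypothetical names and a flat quad standing in for the real template:

```cpp
#include <utility>
#include <vector>

struct vec3 { float x, y, z; };

// Hard-coded template: two triangles forming a unit quad in the XZ plane.
// The real project's template is an assumption here.
constexpr vec3 k_cell_template[6] = {
    {0.f, 0.f, 0.f}, {1.f, 0.f, 0.f}, {1.f, 0.f, 1.f},
    {0.f, 0.f, 0.f}, {1.f, 0.f, 1.f}, {0.f, 0.f, 1.f},
};

// For each living cell (col, row), emit the template offset into world space.
std::vector<vec3> generate_vertices(
    std::vector<std::pair<int, int>> const& live_cells)
{
    std::vector<vec3> out;
    out.reserve(live_cells.size() * 6);
    for (auto const& [col, row] : live_cells)
    {
        for (auto const& v : k_cell_template)
        {
            out.push_back({v.x + static_cast<float>(col),
                           v.y,
                           v.z + static_cast<float>(row)});
        }
    }
    return out;
}
```

Because the offsets are baked into the vertices, every chunk can be drawn with an identity model matrix, at the cost of regenerating and re-uploading everything whenever the simulation ticks.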
Fair warning - as of v1.0 this class is internally a kitchen sink. It is a conscious decision to leave it this way because I believe I will want to significantly restructure it when I start implementing additional rendering methods.
Technique | Implementation |
---|---|
Indexed Draws | None |
Instanced Draws | None |
Antialiasing | None |
Lighting | Single directional, local shading |
Shadows | None |
CPU culling | None |
Level of Detail | None |
I profiled v1.0 with Tracy, NVIDIA Nsight Graphics, and NVIDIA Nsight Systems. This version performs well at rendering static geometry but doesn't handle updating geometry very well. A 600x600 grid with the standard test file (`test/600.json`) and the simulation paused renders at ~2900 FPS with an average of 0.35 milliseconds per frame. Once the simulation is unpaused, the framerate drops to ~2300 FPS and then typically climbs from there as cells die during the simulation.
~2300 FPS for an unpaused simulation is a great number, but the issue is frame time stability. The simulation ticks five times per second, and on those frames the frame time jumps from 0.35ms to ~37ms. This huge jump causes visible jerkiness in movement due to the large delta time for that frame. My profiling shows that this is largely caused by vertex generation and vertex transfer to the graphics card. One solution would be to simulate ahead of time, then generate and transfer vertex chunks one by one over the intermediate frames. This strategy might reduce framerate but should produce a better user experience.
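The amortization idea above could look roughly like this: queue the chunks generated on a tick frame and upload at most one per rendered frame, spreading the ~37ms spike across many frames. All names here are hypothetical, and `transfer_to_gpu` stands in for the real staging-buffer copy path:

```cpp
#include <cstddef>
#include <queue>
#include <vector>

struct chunk { std::vector<float> vertices; };  // placeholder chunk type

class chunk_uploader
{
public:
    // Called once per simulation tick with the freshly generated chunks.
    void enqueue(std::vector<chunk> chunks)
    {
        for (auto& c : chunks) pending_.push(std::move(c));
    }

    // Called once per rendered frame; uploads at most one chunk so the
    // transfer cost is amortized. Returns true if a chunk was uploaded.
    bool upload_one()
    {
        if (pending_.empty()) return false;
        transfer_to_gpu(pending_.front());
        pending_.pop();
        return true;
    }

    std::size_t pending() const { return pending_.size(); }

private:
    // Stand-in for the real vkCmdCopyBuffer / staging-buffer machinery.
    void transfer_to_gpu(chunk const&) {}

    std::queue<chunk> pending_;
};
```

The trade-off is latency: the on-screen board lags the simulation by however many frames the queue takes to drain, which is why the simulation would need to run ahead of the display.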
See the Techniques table above - antialiasing, CPU culling, indexed draws, and LOD would all probably make a difference here in performance or looks. I consider instancing to be a different implementation rather than an improvement to this one.
`main()` is basically a pile of messy glue code. Improving `input_system` to support actions and action callbacks would help.
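An action/callback layer could look something like the sketch below: keys map to named actions, and systems register callbacks against actions instead of polling raw keys in `main()`. Everything here (the `key` enum, `bind`, `on_action`) is an assumption, not the project's current API:

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical key enum; the real project reads SDL scancodes.
enum class key { f1, f2, f3, space, l, r };

class input_system
{
public:
    // Map a physical key to a named action (e.g. f2 -> "save").
    void bind(key k, std::string action) { bindings_[k] = std::move(action); }

    // Register a callback to run whenever the named action fires.
    void on_action(std::string const& action, std::function<void()> cb)
    {
        callbacks_[action].push_back(std::move(cb));
    }

    // Called from the platform event loop on key press; fires the bound
    // action's callbacks, if any.
    void handle_key_down(key k)
    {
        auto const it = bindings_.find(k);
        if (it == bindings_.end()) return;
        for (auto const& cb : callbacks_[it->second]) cb();
    }

private:
    std::unordered_map<key, std::string> bindings_;
    std::unordered_map<std::string,
                       std::vector<std::function<void()>>> callbacks_;
};
```

With this shape, rebinding controls is a data change rather than a code change, and `main()` shrinks to a list of `bind`/`on_action` registrations.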
I want an in-game help screen/button. It's also a PITA to get close to the simulation, so some sort of turbo button that makes you fly fast would be great.
SoLoud doesn't implement the Rule of Five for its WAV audio sources, so they can't be copied around. I ended up creating my own `audio` class that wraps the source and declares `audio_system` as a friend, as do several other handler-type classes for audio. This feels questionable and needs to be revisited.
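The shape of that workaround looks roughly like this. To keep the sketch self-contained, `wav_source` below is a stand-in for the non-copyable SoLoud source; the wrapper owns it on the heap so the wrapper itself is cheaply movable, and the friend declaration is what the text above calls questionable:

```cpp
#include <memory>
#include <string>

// Stand-in for SoLoud's WAV source: not copyable, no Rule of Five.
struct wav_source
{
    wav_source() = default;
    wav_source(wav_source const&) = delete;
    wav_source& operator=(wav_source const&) = delete;
    std::string path;
};

class audio_system;

// Move-only wrapper: owning the source via unique_ptr sidesteps the
// missing copy semantics of the underlying type.
class audio
{
public:
    explicit audio(std::string path)
        : source_(std::make_unique<wav_source>())
    {
        source_->path = std::move(path);
    }

    audio(audio&&) noexcept = default;
    audio& operator=(audio&&) noexcept = default;

private:
    friend class audio_system;  // only the audio system touches the raw source
    std::unique_ptr<wav_source> source_;
};

class audio_system
{
public:
    // Illustrative accessor showing the friend relationship in use.
    std::string path_of(audio const& a) const { return a.source_->path; }
};
```

An alternative worth weighing when this gets revisited: expose a narrow public interface on `audio` instead of friendship, so the handler classes don't all need privileged access.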
The input system only supports the specific buttons I need. As mentioned above, an action system for inputs would help me clean up code elsewhere too.
System | Library | License |
---|---|---|
Windowing and Platform | SDL | Zlib |
Rendering | Vulkan SDK | Multiple |
Audio | SoLoud | Zlib |
File Dialog | Native File Dialog | Zlib |
JSON | RapidJSON | Multiple |
Profiling | Tracy | 3-Clause BSD |
This project is released into the public domain. Please see LICENSE for more info. Vendor software licensing is listed above. Licenses for the individual assets can be found next to the asset files; all assets are public domain.