January 1, 2025
(This blog post is part of a series on my relativistic softbody simulator and raytracer. See the first blog post or project page for added context.)
First quarter of college is over, and it’s time to take stock of my project!
Sadly, I don't yet have a sophisticated raytracer to present to you.
During the last weeks of the quarter, I came to realize what should have been obvious earlier: that simulating particles is far easier than tracking and raytracing their worldlines.
I can simulate basic physics for a few hundred thousand particles in realtime just fine, but I cannot also store each particle’s state at hundreds of points in time.
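To put rough numbers on that, here's a back-of-envelope sketch. All three quantities are illustrative assumptions pulled from the ranges above, not measurements from my simulator:

```rust
// Back-of-envelope memory cost of storing full particle histories.
// Particle count, timestep count, and state size are all assumptions.
fn main() {
    let particles: u64 = 200_000;  // "a few hundred thousand" particles
    let timesteps: u64 = 500;      // "hundreds of points in time"
    let bytes_per_state: u64 = 32; // e.g. position + velocity as two vec4<f32>s
    let total_bytes = particles * timesteps * bytes_per_state;
    println!("{:.1} GiB of history", total_bytes as f64 / (1u64 << 30) as f64);
    // Prints ~3.0 GiB -- and that's before any acceleration structures.
}
```

A few gigabytes just for raw history is already pushing it for a realtime GPU budget.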
I contemplated several solutions and compromises:
- Boundary meshes — convert each set of ground-frame particle states into a 2D boundary mesh, then stitch it together with the previous frames' meshes to create a spacetime triangle mesh. Raytracing triangle meshes is nice, and we get significant memory savings for dense shapes. But we lose all particle density information, and when large bodies explode, the memory requirements go way up.
- Voxels — similar to boundary meshes, but we leverage our relatively granular spatial lookup to identify occupied cells. We compare with previous frames and form boundary meshes, with significant savings on triangle count if we're smart about how we mesh the voxels. We might save enough to create multiple meshes, one per range of particle densities, and so preserve some density information. Still likely not performant unless the voxel resolution is set unacceptably coarse.
- Just store their past states — extremely memory intensive, and I can't use hardware raytracing extensions, since they operate on triangles. But it retains enough information for true volumetric rendering, so that partially transparent bodies may be rendered in all their glory. (See the sketch after this list.)
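To make that last option concrete, here's roughly what storing past states could look like. `ParticleState` and `WorldlineHistory` are hypothetical names, and this layout is just one plausible choice, not my actual code:

```rust
/// Hypothetical snapshot of one particle in the ground frame.
#[derive(Clone, Copy)]
struct ParticleState {
    position: [f32; 3],
    velocity: [f32; 3],
}

/// A fixed window of past frames, overwritten in ring-buffer order.
/// Memory cost is num_frames * num_particles * 24 bytes, which is
/// exactly what makes this option so expensive.
struct WorldlineHistory {
    frames: Vec<Vec<ParticleState>>, // frames[slot][particle]
    frame_times: Vec<f32>,           // ground-frame time of each slot
    head: usize,                     // next slot to overwrite
}

impl WorldlineHistory {
    fn new(num_frames: usize, num_particles: usize) -> Self {
        let zero = ParticleState { position: [0.0; 3], velocity: [0.0; 3] };
        Self {
            frames: vec![vec![zero; num_particles]; num_frames],
            frame_times: vec![0.0; num_frames],
            head: 0,
        }
    }

    /// Record the current ground-frame states, evicting the oldest slot.
    /// `states` must have the same length the history was built with.
    fn push(&mut self, time: f32, states: &[ParticleState]) {
        self.frames[self.head].copy_from_slice(states);
        self.frame_times[self.head] = time;
        self.head = (self.head + 1) % self.frames.len();
    }
}
```

The raytracer would then interpolate between stored slots along each particle's worldline to find where it crosses a ray's backward light cone.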
My heart wants volumetric rendering, and for that I need the worldlines entire. The other compromise plans would likely be inadequate for truly large scenes anyway.
So, our softbodies must be reduced by at least two orders of magnitude, down to a few thousand particles. I probably wouldn't need more than that for a game anyway. I'll also need to implement a separate system for static bodies and for non-deformable moving bodies.
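A sketch of how that split might look as data; these variant and field names are purely illustrative:

```rust
/// Hypothetical split of body kinds for a rebooted engine.
/// Only deformable bodies pay the full per-particle history cost.
enum Body {
    /// Softbody: a few thousand particles with stored worldlines.
    Deformable { first_particle: u32, count: u32 },
    /// Static geometry: one mesh, no history needed at all.
    Static { mesh: u32 },
    /// Non-deformable but moving: one mesh plus a worldline of rigid
    /// transforms, far cheaper than per-particle histories.
    RigidMoving { mesh: u32, transforms: Vec<([f32; 3], [f32; 4])> },
}
```

The appeal is that static and rigid bodies need only one copy of their geometry, so the particle budget goes entirely to the things that actually deform.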
Where does that leave me going into next quarter? Do I continue with this project or find something new to work on?
I’m planning to put this project on pause for now, and investigate something else for my next quarter of 1L.
This codebase could certainly be extended into a proof-of-concept game. But I think that whenever I return to this idea I’ll reboot the codebase, if for no other reason than that Rust’s graphics ecosystem will have had more time to marinate.
I’ll likely begin again with easier static or aloof (non-interacting) born-rigid bodies, and spend more time thinking about the kinds of puzzles I can craft out of this more limited raytracing engine. But that’s for future me.
For now, that’s all.
Stay tuned :)