I’ve kinda just had a thought, and I don’t know if it’s horrible or not, so you tell me.
- 3D terrain in video games commonly starts with a fractal Perlin noise function (a minimal sketch follows this list).
- This doesn’t look very good on its own, so some passes are employed to make it look better, such as changing the scale.
- One such technique is hydraulic erosion via a heavy GPU simulation, which creates riverbeds and folds in the terrain.
- However, hydraulic erosion is VERY slow, and as such isn’t viable for a game like Minecraft that generates terrain in real time. It also doesn’t chunk well.
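For reference, here’s a minimal sketch of that usual starting point, a fractal (“fBm”) heightmap built by summing octaves of Perlin noise. It assumes the third-party `noise` package; the grid size and octave parameters are arbitrary choices, not anything canonical:

```python
# Minimal fractal Perlin ("fBm") heightmap.
# Assumes: pip install noise numpy
import numpy as np
from noise import pnoise2

def fbm_heightmap(size=256, octaves=6, persistence=0.5,
                  lacunarity=2.0, scale=128.0, seed=0):
    """Sum `octaves` layers of Perlin noise into one heightmap."""
    h = np.zeros((size, size), dtype=np.float32)
    for y in range(size):
        for x in range(size):
            h[y, x] = pnoise2(x / scale, y / scale,
                              octaves=octaves,
                              persistence=persistence,
                              lacunarity=lacunarity,
                              base=seed)
    # Normalise to [0, 1] so later passes have a known input range.
    return (h - h.min()) / (h.max() - h.min())

heightmap = fbm_heightmap()
```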
But what if it didn’t have to? Why not train something like a diffusion image model on thousands of pre-rendered high-quality simulations, and then have it transform a function like fractal Perlin noise? Basically “baking” a terrain pass inside a neural network. This would still be slow, but would it really be slower than simulating thousands of rain droplets? It could easily be made deterministic so it tiles across chunk borders, too. You could even train on real-world GIS data.
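A rough sketch of that “baked pass” idea, assuming PyTorch. A tiny convolutional net stands in for a full diffusion model (the shape of the idea is the same: learn heightmap → eroded heightmap), and `fake_erosion_target` is just a blur placeholder so the script runs; the real training pairs would come from the slow offline GPU erosion sim or GIS tiles:

```python
# Learn an erosion-like terrain pass from (input, simulated-output) pairs.
# Assumes: pip install torch
import torch
import torch.nn as nn
import torch.nn.functional as F

class ErosionNet(nn.Module):
    """Tiny conv net as a stand-in for a diffusion model:
    heightmap in, 'eroded' heightmap out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, h):
        # Predict a residual, so the net only learns the *change*
        # erosion makes rather than reproducing the whole heightmap.
        return h + self.net(h)

def fake_erosion_target(h):
    # Placeholder for the slow offline erosion simulation: a 5x5 box
    # blur, purely so this sketch trains end to end.
    k = torch.ones(1, 1, 5, 5) / 25.0
    return F.conv2d(h, k, padding=2)

model = ErosionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    h = torch.rand(8, 1, 64, 64)        # stand-in for fBm heightmap chunks
    target = fake_erosion_target(h)     # would be the GPU sim's output
    loss = F.mse_loss(model(h), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At generation time you'd run the frozen net once per chunk. With a fixed
# seed and a margin of neighbouring-chunk noise as context, the output
# stays deterministic across chunk borders.
```

The appeal is that one forward pass per chunk replaces thousands of droplet iterations, and inference is deterministic by default, which is exactly the property the droplet sim lacks.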
Has this been tried before?


We don’t yet have proof that AI can “imagine” new things; it just interpolates between existing ones. For complex relationships such as realistic fluid/particle dynamics, it also requires billions of inputs before it approximates reasonable outputs, so the cost against a potentially nonexistent ROI timeline just doesn’t add up. It’s made even worse if you already have to run billions of viable simulations just to generate thousands of training samples.
This is why most modern techbro AI relies on massive internet piracy: without training data that’s already readily available (but not efficiently simulated), the algorithms aren’t worth much.
Tangentially, this is why such algorithms have many applications in the medical field: there you generally have access to a large dataset of human-annotated diagnoses that can’t readily be created by a computer.