Procedural Nature

It’s fascinating to follow the progress of procedural nature and landscape rendering, especially because it is such a hard problem: it involves perhaps the most complex geometry one can think of, and a plethora of shapes and materials, all of which must be synthesized parametrically rather than fully premodelled.

I don’t know if this shows the state of the art, but the program behind the pictures you’re about to see (Vue) is apparently used in the movie industry to synthesize (at least quasi-)photorealistic landscapes.


Back in the 90s it was possible for me to run World Construction Set on an Amiga 1200 computer with an FPU accelerator board. It was hopelessly slow, though; had I had the patience to follow through on the first render, I imagine it would have taken weeks.

Enter 2009 and Vue Infinite. Take a look.

  • Stefan Menzel on Vue                                                                        

Things are happening. I look forward not just to real-time raytraced 3D but to real-time raytraced 3D procedural nature in my lifetime.

Then couple this with 3D touch and vision, and UX as we know it today will look hopelessly primitive in comparison.

This entry was posted in Art, Computer Science, Nature, Program, Software, Technical. Bookmark the permalink.

4 Responses to Procedural Nature

  1. fishead says:

    That’s REALLY amazing. And you say this is a real-time render??? Wow. I know there’s some incredible stuff out there, but this is pretty darned realistic. But how is it defined? Is it just a bunch of equations and programming? How does one visualize the environment as it is being built, or do you start from a base framework and then begin manipulations?

    The software I use most often (3ds Max) has some new processing coming out in the next version that brings on-screen rendering and scene development close to final view in real-time, but certainly nothing like this. At least not yet.

  2. xosfaere says:

    No, this is not real-time.

          There is, however, a company that has built an FPGA-based board capable of raytracing an order of magnitude faster than current-generation software raytracers running on eight-processor machines. Next year it will release a second generation of that hardware, pushing speed forward by another order of magnitude.

          “Real-time” is a slippery concept and easily misused, because what does real-time really mean in terms of computational rendering? Nothing can be real-time given an infinitely complex environment, so it all depends on the complexity of the scene. But realism is improving, and computational capacity is growing with Moore’s law, at times even faster, even if no longer in terms of clock speed.

          That current sequential algorithms do not scale with parallel hardware is no mystery, but fortunately raytracing is a so-called embarrassingly parallel problem, which means it can be parallelized down to every pixel, or even every subpixel (anti-aliasing).

          That means that if you have a million processors (or processor cores on a chip), you can render a million pixels at the same time. So if you stepped up from a one-core processor to a megacore processor overnight, you would theoretically get a millionfold performance increase.

          A six-orders-of-magnitude leap doesn’t just happen, of course, but it shows that raytracing is so parallelizable that, in theory, it scales perfectly with parallel hardware.
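To make the per-pixel independence concrete, here is a toy Python sketch of my own (nothing to do with Vue’s actual renderer): the same image rendered serially and through a worker pool produces identical pixels, because no pixel depends on any other.

```python
# Toy illustration of an embarrassingly parallel render: every pixel is an
# independent job. `shade` stands in for tracing one ray.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 64, 48

def shade(pixel):
    # Placeholder "ray": a simple gradient, purely a function of the pixel.
    x, y = pixel
    return (x * 255 // (WIDTH - 1), y * 255 // (HEIGHT - 1), 128)

def render_serial():
    return [shade((x, y)) for y in range(HEIGHT) for x in range(WIDTH)]

def render_parallel(workers=8):
    # A real renderer would fan out over processor cores, GPUs or machines;
    # a thread pool keeps the sketch short. The output is bit-identical to
    # the serial render because no pixel depends on any other.
    pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(shade, pixels))
```

With a million truly independent workers, the same structure would in principle shade a million pixels at once.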

          Now, there is some bad news ahead for manycore as well, because other problem domains don’t scale that well, and there are inherently sequential problems that simply cannot be broken up and tackled independently in a divide-and-conquer manner.

          This problem will become exponentially worse in the future as we see a greater and greater rift between the performance of two classes of algorithms: those that have been parallelized and those that are theoretically unparallelizable.
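The ceiling on that second class is what Amdahl’s law quantifies: if any fraction of the work is inherently sequential, extra cores stop helping long before you run out of them. A quick sketch:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    # Best-case speedup when only `parallel_fraction` of the work
    # parallelizes and the remainder must run sequentially.
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_workers)

# Even 95%-parallel code caps out just under 20x on a million cores,
# while a fully parallel workload (like per-pixel raytracing) scales linearly:
capped = amdahl_speedup(0.95, 1_000_000)   # just under 20
linear = amdahl_speedup(1.0, 1_000_000)    # ~1,000,000
```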

          Raytracing, however, is not such a problem.

          Maybe the best we can hope for is implicit parallelization in languages designed with theoretical purity and parallelization in mind.


          The precise modelling of these environments is not something I’m intimately familiar with.

          They are fractal, procedural and parametric by nature. That means parametric models exist in which the designer helps shape the landscape, but not in complete detail.

          There is no premodelled tree whose complete geometry exists in advance; rather, in the abstract, a function exists, with parameters, for creating near-arbitrary tree geometries. And at the same time a function exists to parametrically model a particular material, like wood.
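As a hypothetical illustration (the names and parameters here are mine, not how Vue actually models trees), such a function can be tiny: a few designer-facing knobs, and the geometry only comes into existence when the function is evaluated.

```python
import math

def tree(x, y, angle, length, depth, spread=0.45, shrink=0.7):
    """Recursively generate branch segments for a toy parametric tree.

    `spread`, `shrink` and `depth` play the role of the knobs a designer
    turns; no geometry is stored anywhere in advance.
    """
    if depth == 0:
        return []
    # End point of this branch segment.
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segment = [((x, y), (x2, y2))]
    # Two child branches, rotated apart and shortened.
    return (segment
            + tree(x2, y2, angle - spread, length * shrink, depth - 1, spread, shrink)
            + tree(x2, y2, angle + spread, length * shrink, depth - 1, spread, shrink))

# A binary tree of depth 6 yields 2^6 - 1 = 63 branch segments.
branches = tree(0.0, 0.0, math.pi / 2, 10.0, depth=6)
```

Change `spread` or `shrink` and you get a different tree from the same function.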

          As you know, this is already in use in shaders in regular 3D software and games. There is even hardware support for shaders that procedurally create textures and geometry, and for tessellating coarsely premodelled geometry (not parametric geometry, but geometry that is not yet fine and detailed): recursive subdivision, or tessellation, depending on the distance to the object, is now hardware-accelerated. I believe tessellation was added in DirectX 11, for instance.

          So even games and gaming hardware have been moving in the direction of procedural modelling: rather than explicitly defining every little detail up front, they define coarse geometries or geometry shaders, and the same with materials, where instead of loading massive texture files onto the acceleration board they can increasingly use hardware shaders.
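In the same spirit, a procedural material is just a function from surface coordinates to colour. This toy wood-grain shader (my own sketch in Python rather than a real shader language, with made-up colour values) replaces a texture file entirely:

```python
import math

def wood(u, v, rings=12.0):
    """Procedural wood-grain colour for a point (u, v) on a surface.

    Concentric rings around the surface centre, the same trick classic
    procedural wood shaders use; no image data is stored anywhere.
    """
    r = math.hypot(u - 0.5, v - 0.5)                      # distance from centre
    t = 0.5 + 0.5 * math.sin(2.0 * math.pi * rings * r)   # ring intensity, 0..1
    light, dark = (199, 154, 102), (112, 66, 20)          # arbitrary wood tones
    return tuple(int(d + (l - d) * t) for l, d in zip(light, dark))

# Evaluate anywhere, at any resolution — the "texture" is just a function.
texel = wood(0.25, 0.25)
```

Because it is a function, it can be sampled at any resolution and never pixelates.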

          Think of water. Do you want to precisely premodel the geometry of an ocean and the millions of iterations it runs through as it moves and ripples? Surely not.

          That is not to speak of the effects of objects moving around in said geometry, say a ship, or a person swimming in it. Nature is dynamic; we can’t possibly hope to design every little detail. So fractal nature is the wave of the future.
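The classic cheap version of this, purely as an illustration of the idea (the wave parameters below are arbitrary): sum a few sine waves, and the ocean surface becomes a function of position and time, with nothing stored and nothing animated by hand.

```python
import math

def ocean_height(x, z, t,
                 waves=((1.0, 0.30, 0.0),   # (frequency, amplitude, phase)
                        (2.3, 0.12, 1.1),
                        (5.1, 0.05, 2.4))):
    """Height of a toy procedural ocean at point (x, z) and time t.

    Summing a handful of sines gives a surface that ripples forever
    without a single premodelled vertex; real water shaders layer many
    more waves (or noise octaves) in exactly this fashion.
    """
    return sum(a * math.sin(f * (x + z) + p + t) for f, a, p in waves)
```

Sample it each frame at whatever resolution the camera needs and the ocean animates for free.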

          But as you’re a 3D artist, I’m just preaching to the choir here. I felt like writing 🙂

  3. fishead says:

    “…felt like writing…” hahahaha, that’s not a response, that’s a whole separate blog entry. Awesome!

  4. xosfaere says:

    See also

    4096 bytes (or less) of assembly and data creates a full audiovisual experience.
