Turning Objects Into Clouds in Blender 5: A Look at Volume Displacement

With Blender 5 introducing its brand-new Grid system, we’ve suddenly unlocked a whole new category of effects that weren’t possible in Blender before, at least not without Houdini-level workflows or a lot of technical gymnastics.
One of the most exciting of these is volume displacement, the ability to take any mesh and distort it into a fully volumetric, cloud-like form.

This is something I’ve personally been waiting for years to explore properly in Blender, and in this post I want to walk through what the technique is, why it’s so interesting, and how you can try it yourself.


What is Volume Displacement?

Traditionally, volumes in Blender have been limited to voxel grids imported from elsewhere (e.g., simulations, OpenVDB files) or simple procedural shaders. With Blender 5’s new Grid data type, we can now generate and modify volumetric data directly in Geometry Nodes.

This means you can:

  • Convert any watertight mesh into a density field
  • Manipulate that field with noise, curl noise, or custom logic
  • Push and pull voxels around to create wispy, cloud-like forms
  • Render the result in Cycles as a fully physical volume
  • Even bake it out as a VDB for use in other software

It’s surprisingly intuitive once you understand the concept, and creatively, it opens up a huge number of possibilities.
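Outside Blender, the core idea is easy to sketch. Below is a minimal NumPy illustration: build a density grid from a simple signed-distance field (a sphere standing in for a watertight mesh), then sample it at noise-displaced positions, so the resulting volume gets distorted rather than the geometry. All function names here are illustrative, not Blender node or API names.

```python
import numpy as np

def sphere_density(p, radius=1.0):
    # Density falls from 1 deep inside the sphere to 0 at the surface.
    d = radius - np.linalg.norm(p, axis=-1)  # signed distance (positive inside)
    return np.clip(d / radius, 0.0, 1.0)

def noise_offset(p, amplitude=0.3, frequency=3.0):
    # Cheap stand-in for a noise field: layered sine waves per axis.
    return amplitude * np.stack([
        np.sin(frequency * p[..., 1]) * np.cos(frequency * p[..., 2]),
        np.sin(frequency * p[..., 2]) * np.cos(frequency * p[..., 0]),
        np.sin(frequency * p[..., 0]) * np.cos(frequency * p[..., 1]),
    ], axis=-1)

# Build a voxel grid of sample positions in [-1.5, 1.5]^3.
n = 32
axis = np.linspace(-1.5, 1.5, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
positions = np.stack([x, y, z], axis=-1)

# "Volume displacement": evaluate the density at displaced positions,
# which smears the cloud around without ever touching a mesh.
density = sphere_density(positions + noise_offset(positions))
print(density.shape)  # (32, 32, 32)
```

In Blender the same three steps (voxelise, offset the sample position with noise, evaluate density) happen on the Grid data type inside Geometry Nodes, with Cycles reading the result directly.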


Watch the Full Tutorial

I’ve put together a detailed video walkthrough covering the entire setup from start to finish:

In the video I cover:

  • What the new Grid system is
  • How to convert a mesh into a density grid
  • How noise fields can drive organic volumetric distortion
  • Why we use curl noise for direction
  • How to optimise the bounding volume for better performance
  • How to preview and light volumes in Cycles
  • How to bake the result to a .vdb file

If you want to understand the technique properly, this is the best place to start.


⚠️ A Quick Warning Before You Dive In

It’s worth mentioning that this technique isn’t for the faint-hearted.
Volume displacement in Blender 5 is both conceptually involved and processor-intensive, especially when working with small voxel sizes. Even on higher-end machines, you may experience slow viewport updates, heavy GPU usage, and long bake times. This workflow leans heavily on Blender’s new Grid system, which is powerful but still demanding, so expect a bit of patience (and a capable PC) to be part of the process.
Still, once everything comes together, the results are absolutely worth it.


Introducing Cloudify: A Modifier That Turns Meshes Into Clouds

After experimenting with the grid workflow, I bundled the entire setup into a neat, easy-to-use modifier called Cloudify.

If you’d like to experiment with mesh-to-cloud transformations without building the node tree yourself, Cloudify offers:

  • A non-destructive modifier
  • Adjustable voxel size, density, expand, and gradient controls
  • Built-in noise shaping (Standard, Voronoi, or Custom)
  • Curl noise displacement
  • Full VDB export using Blender’s Bake node
  • High-quality example scenes
  • A good foundation for your own experiments

It’s very much aimed at users comfortable exploring Blender’s new volumetric tools, and is designed as both a modifier asset and a learning resource.

👉 Try Cloudify Here


Why This Technique Is Exciting

Volume displacement isn’t just a party trick. It has real creative applications:

  • Stylised VFX (dissolves, apparitions, dream sequences)
  • Cloud creatures and volumetric characters
  • Atmospheric props and environmental storytelling
  • Concept art sculpting with fog
  • Scientific and abstract volumetric visualisation
  • Floating “cloud cities” or surreal architecture
  • Logo/introduction animations
  • Exporting VDBs for Houdini, Unreal, or film pipelines

Blender 5’s grid system makes these workflows accessible without requiring a dedicated simulation tool.


Final Thoughts

I’m only just scratching the surface of what Blender’s new Grid system can do.
Volumetric modelling opens the door to entirely new artistic directions: from surreal atmospheric scenes to cinematic VFX and anything in between.

If you’re curious:

  • Watch the tutorial to understand the workflow
  • Try Cloudify if you want a ready-made modifier to experiment with
  • And feel free to ask questions or share your experiments: I’d love to see what people create with this technique.

👉 info@configurate.net

Animating Nebulae with Blender and Midjourney Video

Tutorial Video

Setting aside the controversies around generative AI tools like Midjourney, this post explores a practical workflow for combining Blender with Midjourney to create volumetric nebula animations more quickly than traditional rendering techniques.

Volumetric nebulae can look fantastic in Blender, but animating them can be time-consuming and prone to artifacts in both EEVEE and Cycles. Midjourney, on the other hand, can generate animations quickly — especially flythroughs of cloud-like structures. In this post, I’ll walk through a workflow that combines both tools: using Blender’s Nebula Generator to create a unique still image, and then feeding that into Midjourney to produce animations — including seamless(ish) loops.

Why Use Midjourney for Animation?

Blender’s EEVEE renderer is excellent for creating volumetric still images, but animating those volumes often requires lots of samples to avoid flicker. That translates to long render times, and even then some artifacts can remain. The same goes for Cycles, which is more accurate but takes much longer to render.

Midjourney, however, has been trained on massive amounts of cloud-like images and videos. This makes it surprisingly good at generating flythroughs of nebulae. While you lose some fine control over the camera, you gain speed — producing a short animation in seconds instead of hours.

Blender-rendered still image
Midjourney flythrough video frame

Step 1: Create a Seed Image in Blender

I start with my Nebula Generator add-on in Blender.

  • Tweak noise and lighting parameters to shape the nebula.
  • Adjust the coloring to get the atmosphere you want.
  • Increase the volumetric resolution for higher detail.
  • Render out a single still image.

I confess that I find this stage the most enjoyable – it lets you stay in control of the artistic look before moving into the scarily efficient world of AI-driven animation.

Step 2: Generate a Flythrough in Midjourney

With the Blender still rendered, I switch to Midjourney.

  • Upload the image into the Creation tab.
  • Use it not as a finished still, but as a starting frame for an animation.
  • A simple prompt like “nebula flythrough, NASA image of the day” works well — the phrase flythrough seems to make a big difference.

After hitting generate, Midjourney takes about 30 seconds to produce four short animations. Some will be better than others, but usually at least one is more than workable.

Midjourney interface showing prompt + animation previews

Step 3: Create a Looping Animation

One question I was asked when I first shared this workflow was: Can you make it loop?
The answer is yes — with mixed results.

If you set the end frame equal to the start frame and optionally set the Motion to High, Midjourney will attempt to generate a seamless loop. Sometimes it works beautifully, sometimes it doesn’t. A few retries usually yield at least one good loop.

Start vs end frame comparison, showing matching images

Here’s an example where the nebula flythrough loops smoothly, making it perfect for background visuals.

Other examples

Here are some of the more successful examples using this technique. In some cases, I used the still image as a heavily weighted seed to create another still. The last one was rendered in Houdini with the Redshift renderer, using techniques from a Houdini course I created some time ago.

This last one was created in Houdini and Redshift

Here are a few failures, especially when attempting looped animations – which, in all fairness, would be a challenge for human or machine:

Pros, Cons, and Applications

This Blender + Midjourney workflow offers:

Speed — animations in under a minute.
Uniqueness — still images designed in Blender give your animations a personal touch.
Flexibility — you can prototype quickly, then refine later in Blender if needed.

But there are trade-offs:

⚠️ Less control — you can’t direct the camera as precisely as in Blender.
⚠️ Mixed results — especially with looping animations, some attempts won’t quite work.

Despite this, it’s an excellent way to rapidly prototype or generate atmospheric background sequences.

Wrap Up

By combining Blender’s creative control with Midjourney’s speed, you can create unique nebula animations — both straightforward flythroughs and seamless loops — in a fraction of the time it would take using traditional volumetric rendering alone.

If you’d like to try this workflow yourself, you can check out the Nebula Generator add-on. You don’t have to use it — any still nebula image will work — but it’s a great way to get started.

Have you tried mixing Blender with AI tools like Midjourney? It might feel a little unsettling, but after spending hours rendering animations myself, I must say the results are undeniably impressive.

Nebula Creation Course

Hi everyone,

Building on my popular Nebula Generator in Blender, last year I decided to take things even further and used Houdini to create more sophisticated nebula effects, choosing the Redshift renderer for its speed when rendering 3D volumes. The results are on my ArtStation:

The course itself is the culmination of that year of learning, and after some long hours in the editing room, it has certainly been a labor of love.

So, I present to you the 3D Nebula Creation Course – here is the trailer:

The course videos start by assuming a very basic knowledge of Houdini and Redshift, building in complexity as time goes on.  

The 3 hours of step-by-step 4K videos have sample Houdini Indie files for each stage of my process, including:

  • Creation of customizable cloud setups using VDBs and Volume VOPs.
  • Adding customizable effects such as stars and volumetric gas.
  • Setting up an automated lighting rig.
  • Using meta-ball objects and particles to shape the nebula.
  • Rendering the nebula for both stills and animation.
  • Post processing techniques on the final result.

I hope you’ll find it useful – for more info, screenshots and pre-requisites, visit the course page on Gumroad, and if you have any questions do get in touch.

Using L Systems to model cities

I’d been meaning to research L-systems in Houdini for some time, and wow, had I been missing something. I first came across them in some of Akira Saito’s posts, where he had made some interesting mech-like beings using their organic structure:

A recent tweet from Akira Saito

I’d squinted at them before in Houdini’s L-System documentation but couldn’t really make sense of it. I then came across a great in-depth tutorial (if you’re as geeky as me, it’s well worth the time) by the eloquent houdinikitchen, who covers the theory well with lots of examples:

HoudiniKitchen’s tutorial on L-Systems

The basic idea is that you provide a set of simpl(ish) instructions called Turtle commands that describe a starting state, such as:

F+FF

…which means “Branch up one (F), rotate 90 degrees (+), and then branch up twice (FF)”.

Then you give it more ‘simple’ rules using the same language that alter this basic structure every generation, such as:

F=FF++F++F+F++F-F

…which says “next generation, replace all the Fs in the previous statement with the instructions here”.
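The rewriting itself takes only a few lines of Python. This is just the string-rewriting half of an L-system; the turtle interpretation that turns the string into geometry is a separate step, which Houdini’s L-System node handles for you.

```python
def expand(axiom, rules, generations):
    """Apply L-system rewrite rules to the axiom, one generation at a time."""
    s = axiom
    for _ in range(generations):
        # Replace each symbol with its rule's right-hand side (or keep it as-is).
        s = "".join(rules.get(c, c) for c in s)
    return s

# The example from the post: start with F+FF, rewrite every F each generation.
rules = {"F": "FF++F++F+F++F-F"}
print(expand("F+FF", rules, 1))  # FF++F++F+F++F-F+FF++F++F+F++F-FFF++F++F+F++F-F
print(len(expand("F", rules, 3)))  # strings grow rapidly with each generation
```

Each generation multiplies the number of `F` symbols, which is why even small rule sets blow up into the dense structures shown below.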

This can give rise to some complex plant-like structures like the ones shipped with Houdini:

Basic L-System Tree

But interestingly you can create things like hexagonal structures as well. After a little experimentation I quickly got what looks like a snowflake:

I then tweaked the “Generations” parameter so that Houdini was part-way through a generation, which gives a more distorted structure like this:

The same hexagonal structure but at generation 4.8

I then used my own random 3D Shape Generator node and a Copy to Points node to randomly create building-like structures at each point in the ‘tree’, and got this effect:

Zoomed out…
…zoomed in a bit more….
…and closer

You can download the sample file here – it requires the Shape Generator asset to work, so perhaps you’ll consider supporting me by taking a look at it on Gumroad here:

Blender Add-On: Window Generator

My latest add-on will create many windows at once on the faces of a mesh:

Years ago, I had a very hard time modelling window patterns onto models like this one:


The user wanted more and more windows added to the 3D model, and each time I found myself painstakingly adding each one in a random pattern onto the faces. This took up many hours and days of my time, far more than modelling the overall shape itself. After this painful experience, I thought there surely must be a better way, so when using Blender I decided to create an add-on that would do the job for me.

The add-on has applications beyond modelling spaceships: it’s also useful for quickly adding many windows to architectural buildings.

Features:

  • Select faces and then add a configurable pattern of windows, where the amount of coverage and randomness can be controlled.
  • Control how many windows are mapped across each face and how many are mapped down them.
  • The width and height of the windows can be changed.
  • Different window styles can be created by adding corner bevels, and outer bevels can be added to make the window edges smooth.
  • Option to disable top or bottom bevels to create different effects.
  • Ability to assign a material to the newly created windows by specifying a material slot id.
  • Also assign a lights-off material to give the impression that some window lights are switched off.
  • Introduce further variations by adding a random “jitter” to the width and height of each window.
  • Option to perform edge-split operations for a quick, clean look.
  • The process automatically creates UV seams to aid UV mapping for textures.
  • Windows are mapped from top to bottom of a face by default, but the orientation can be changed to either left-right or front-back.
  • Additional refinement options that will attempt to remove unwanted edges or vertices from the created window patterns.
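To give a feel for the kind of logic involved (this is a sketch of the idea, not the add-on’s actual code), laying out a jittered, partially covered grid of window centres across a face might look like this:

```python
import random

def window_positions(face_w, face_h, cols, rows, jitter=0.1, coverage=1.0, seed=0):
    """Return (x, y) centres for a grid of windows across a face,
    with optional random jitter and partial coverage."""
    rng = random.Random(seed)
    points = []
    for i in range(cols):
        for j in range(rows):
            if rng.random() > coverage:  # randomly skip some windows
                continue
            # Evenly spaced grid cell centre, nudged by a small random jitter.
            x = (i + 0.5) / cols * face_w + rng.uniform(-jitter, jitter)
            y = (j + 0.5) / rows * face_h + rng.uniform(-jitter, jitter)
            points.append((x, y))
    return points

centres = window_positions(4.0, 2.0, cols=8, rows=4, jitter=0.05, coverage=0.7)
print(len(centres))  # at most 32; fewer when coverage < 1
```

The add-on performs the equivalent placement directly on each selected face’s UV space before cutting and bevelling the window geometry.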

If you have a new feature suggestion or feedback on the add-on feel free to contact me through this website or contact me on twitter @markkingsnorth.

Blender Add-on: Plating Generator & Tutorial


I have created a Blender add-on that can quickly create hull plating patterns on an existing mesh. One of the most time-consuming tasks I’ve found is creating an interlocking plating pattern on top of meshes such as spaceship hulls. The most effective manual method is to extrude edge loops over and over; the more detailed you want the mesh to be, the longer it takes.

You can view a tutorial on how to use the add-on here:

Features:

  • Quickly generate a plating pattern from a random seed.
  • The pattern can be generated on a whole quad based mesh or on a sub selection of quad faces.
  • Control the number of grooves cut.
  • Control the depth of the grooves between the plates.
  • Control the thickness of the grooves between the plates.
  • Option to split the edges on a smooth mesh to ensure the edges are cleanly defined.
  • Option to completely remove the grooves and leave the plates intact.
  • No hidden geometry is created; actions are performed directly on the mesh.


The add-on is available for commercial download at BlenderMarket.

Any comments or queries feel free to message me via this page or on my Twitter account.

Blender Nebula Group Node: Tutorial & Download

Download Add-On

Seeing as I don’t have much time to create the images myself, I thought it would be good to share my work on creating nebula effects in Blender as an adjustable group node called “Nebula”, which Blender users can download and use to create different effects.

With the “Nebula” node you can produce a range of effects, such as:

Example nebula renders in blue, purple, orange-green, and dark red

Tutorial

What follows is an overview of how to use the “Nebula” group node, along with a selection of example .blend files containing the node that you can download. I’ve also uploaded some samples from my previous post on the subject.

The tutorial assumes that you have a working knowledge of Blender and how to install a group node in it. If you’re new to Blender, it’s all open source and is definitely worth spending the time learning it.

Here is a screenshot of the Nebula node in Blender’s node editor, set up for Cycles:


The node works by overlaying a series of different noise effects to produce the clouds and the stars.  The clouds are produced by overlaying 3 coloured noise layers, and the ambient large “suns” are layered on top.  Finally, the smaller stars are mixed in.
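The “Screen” blend used to stack those layers is worth understanding: it inverts both inputs, multiplies them, and inverts the result, so the output only ever gets brighter. That’s why overlaid cloud layers accumulate like light rather than darkening each other. A quick sketch of the maths:

```python
def screen(a, b):
    # Screen blend: result = 1 - (1 - a) * (1 - b), applied per channel.
    return 1.0 - (1.0 - a) * (1.0 - b)

# Stacking three cloud layers: order doesn't matter, and the result
# is always at least as bright as the brightest input.
layers = [0.2, 0.5, 0.3]
out = 0.0
for v in layers:
    out = screen(out, v)
print(round(out, 3))  # 0.72
```

The same formula is what Blender’s MixRGB node applies when its blend type is set to Screen.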

Here is a sample node setup that adds the nebula to the background environment in blender cycles (this is also in the sample .blend files):


The options for the Nebula Node are as follows:

Vectors

The first set of options define the position of each layer.  They are separated out because the key to producing a good effect is offsetting the vector (position) of the layers:

  • Small Stars Vector: The position of the small background stars.
  • Large Stars Vector: The position of the larger ambient stars.  Note that these aren’t exactly star-like; they mainly produce the ambient lighting of the nebula.
  • Clouds 1 Vector: position of the first cloud layer.
  • Clouds 2 Vector: position of the second cloud layer.
  • Clouds 3 Vector: position of the third cloud layer.

You will find that offsetting the cloud vectors using Blender’s mapping node will produce different cloud shapes and effects.

Cloud settings

As noted, the key to getting different nebula effects is by adjusting the 3 layers of cloud noise.  Each layer, labelled Cloud 1-3, has the following settings:

  • Color: The individual colour of each cloud layer, mixed with the rest via the Screen blend effect.
  • Mix: How strongly mixed the cloud layer is with the overall effect (Default: 1.0)
  • Scale: The size of the noise in the cloud layer.
  • Distortion: The distortion effect applied to the cloud layer.
  • Detail: The amount of variation in the cloud texture.  Higher levels produce more detail, but be careful not to make the cloud effect too busy.
  • Detail distortion: Applies a distortion effect to the detail.

There are then some more global settings you can play with for the clouds:

  • Cloud darkness: The overall darkness and contrast of the nebula effect.
  • Cloud Dark Start: The position in the noise where the dark parts of the cloud begin.
  • Cloud Light Start: The position in the noise where the light parts of the cloud begin.
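Roughly speaking, the Dark Start and Light Start positions act like the two stops on a colour ramp applied to the noise. Here is a sketch of that mapping (an illustration of the idea, not the node’s exact internals):

```python
def contrast_ramp(noise, dark_start, light_start):
    # Map a raw noise value through a two-stop ramp: fully dark below
    # dark_start, fully light above light_start, linear in between.
    t = (noise - dark_start) / (light_start - dark_start)
    return min(1.0, max(0.0, t))

# Pulling the two stops closer together hardens the cloud edges;
# spreading them apart gives softer, mistier transitions.
print(round(contrast_ramp(0.5, 0.2, 0.8), 3))  # 0.5
```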

Sun and Star settings

The following settings control the intensity and position of the ambient light (suns) and stars.

  • Large Suns Mix: The intensity of the ambient light of the nebula.
  • Small Stars Mix: How much small stars shine through the nebula.
  • Large Sun Scale: The size of the ambient light on the nebula.
  • Large Sun Ramp Pos 1: The position where the light that brightens the nebula starts.
  • Large Sun Ramp Pos 2: The position where the darker part of the ambient light tails off.

…phew! That’s it.

Below are some of the effects you can get using the Nebula Node and the associated .blend file to load into Blender:

nebulanode_blueneb

Download .blend file

Download .blend file

Download .blend file

Download .blend file

Other Nebula Blender files

Finally, here are some .blend files of earlier effects I’ve done using the same principle but not in a group node.  You might find them useful to adjust for your own projects:

Mutara nebula effect

Download .blend file

Red nebula effect

Download .blend file

Horsehead Nebula effect

Download .blend file

And finally… any questions, let me know and I will do my best to respond.