When I first built the Bend Modifier, I almost didn’t release it.
On the surface, it looked too simple – just a way to bend geometry in Blender. Behind the scenes, though, it involved quite a lot of maths: arcs, tangents, limits, and all the fiddly details to make sure it behaved cleanly with tails, pivots, and arbitrary axes. Not as straightforward as some might have you believe.
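To give a flavour of that maths, here’s a minimal sketch of the classic bend mapping in plain Python (my own simplified reconstruction, not the add-on’s actual code), assuming a bend around a single axis with the bend region limited to [0, length] and the tail continuing along the end tangent:

```python
import math

def bend_point(x, y, angle, length):
    """Bend a 2D point by wrapping its x coordinate onto a circular arc.

    Points with x in [0, length] are mapped onto an arc spanning
    `angle` radians; points beyond the limits (the "tail") continue
    in a straight line along the arc's end tangent.
    """
    if angle == 0:
        return x, y                     # zero angle: no bend at all
    radius = length / angle             # arc radius from arc length
    t = max(0.0, min(x, length))        # clamp to the bend limits
    theta = t / radius                  # angle swept at this point
    # Rotate the point around the arc's centre at (0, radius)
    bx = math.sin(theta) * (radius - y)
    by = radius - math.cos(theta) * (radius - y)
    # Tail: extend along the end tangent past the clamped limit
    overshoot = x - t
    bx += math.cos(theta) * overshoot
    by += math.sin(theta) * overshoot
    return bx, by

# A point halfway along a unit-length region bent through 90 degrees
print(bend_point(0.5, 0.0, math.pi / 2, 1.0))
```

The fiddly parts in the add-on are exactly the edge cases this sketch glosses over: pivots, arbitrary axes, and keeping everything well behaved at the limits.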
I worried it might not be enough of a tool to stand on its own.
But since releasing it on Superhive (formerly Blender Market) just the other day, it’s taken off, with over 200 downloads already. It turns out that sometimes the simple, focused tools are the ones people need most. Blender’s built-in Simple Deform has always been a bit awkward (one word: empties), so giving people a true modifier with a proper gizmo and real-world controls filled a gap I wasn’t sure existed until now.
The feedback has been great, from users coming over from 3ds Max who miss its bend modifier, to Blender modelers who just want a clean, UV-safe, stackable bend that doesn’t need extra objects cluttering the scene.
It’s been a reminder for me: even if a tool feels “too simple,” if it removes friction from everyday workflows, it can be valuable.
Here’s an introduction:
What’s next?
My next project is a major overhaul of Curves to Mesh. It’s just come out of testing and early feedback has been really positive. This one is a lot more complex under the hood, but I’ve learned not to underestimate the power of simple presentation paired with solid maths.
Thanks to everyone who’s tried the Bend Modifier so far, it’s been encouraging to see it bend its way into so many hearts so quickly.
When I first built Quad Maker, I wanted it to be a straightforward tool—something inspired by Maya’s Quad Draw but designed to fit neatly into Blender. That philosophy hasn’t changed.
Keeping Quad Maker focused and “boring” has some real advantages:
It stays simple and easy to learn (see the documentation and video to get started).
It stays stable and reliable. Blender’s Python API already gets pushed hard by Quad Maker, so focusing on core features avoids adding unnecessary complexity.
It helps keep development costs under control. Since Quad Maker is a one-off purchase rather than a subscription, I want to make sure that updates add real value without inflating ongoing costs.
So while I am always open to feature requests, I’m careful to only add the ones that really matter. The result is that Quad Maker remains exactly what it was meant to be: a dependable, no-fuss retopology tool.
Setting aside the controversies around generative AI tools like Midjourney, this post explores a practical workflow for combining Blender with Midjourney to create volumetric nebula animations more quickly than traditional rendering techniques.
Volumetric nebulae can look fantastic in Blender, but animating them can be time-consuming and prone to artifacts in both EEVEE and Cycles. Midjourney, on the other hand, can generate animations quickly — especially flythroughs of cloud-like structures. In this post, I’ll walk through a workflow that combines both tools: using Blender’s Nebula Generator to create a unique still image, and then feeding that into Midjourney to produce animations — including seamless(ish) loops.
Why Use Midjourney for Animation?
Blender’s EEVEE renderer is excellent for creating volumetric still images, but animating those volumes often requires lots of samples to avoid flicker. That translates to long render times, and even then some artifacts can remain. The same goes for Cycles, which can be more accurate but takes much longer to render.
Midjourney, however, has been trained on massive amounts of cloud-like images and videos. This makes it surprisingly good at generating flythroughs of nebulae. While you lose some fine control over the camera, you gain speed — producing a short animation in seconds instead of hours.
Blender-rendered still image
Midjourney flythrough video frame
Step 1: Create a Still in Blender
Using the Nebula Generator in Blender:
Tweak noise and lighting parameters to shape the nebula.
Adjust the coloring to get the atmosphere you want.
Increase the volumetric resolution for higher detail.
Render out a single still image.
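If you prefer to script that last step, here’s a minimal bpy sketch, assuming EEVEE with boosted volumetric quality (the Nebula Generator’s own settings live in its panel and aren’t shown here):

```python
import bpy

scene = bpy.context.scene

# Use EEVEE and raise volumetric quality for a cleaner still
# (in Blender 4.2+ the engine identifier is 'BLENDER_EEVEE_NEXT')
scene.render.engine = 'BLENDER_EEVEE'
scene.eevee.volumetric_tile_size = '2'  # smaller tiles = finer volumes
scene.eevee.volumetric_samples = 128    # more steps through the volume

# Render a single frame to a file next to the .blend
scene.render.filepath = '//nebula_still.png'
bpy.ops.render.render(write_still=True)
```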
I confess that I find this stage the most enjoyable – it lets you stay in control of the artistic look before moving into the scarily efficient world of AI-driven animation.
Step 2: Generate a Flythrough in Midjourney
With the Blender still rendered, I switch to Midjourney.
Upload the image into the Creation tab.
Use it not as a finished still, but as a starting frame for an animation.
A simple prompt like “nebula flythrough, NASA image of the day” works well — the phrase flythrough seems to make a big difference.
After hitting generate, Midjourney takes about 30 seconds to produce four short animations. Some will be better than others, but usually at least one is more than workable.
One question I was asked when I first shared this workflow was: Can you make it loop? The answer is yes — with mixed results.
If you set the end frame equal to the start frame and optionally set the Motion to High, Midjourney will attempt to generate a seamless loop. Sometimes it works beautifully, sometimes it doesn’t. A few retries usually yield at least one good loop.
Start vs end frame comparison, showing matching images
Here’s an example where the nebula flythrough loops smoothly, making it perfect for background visuals.
Other Examples
Here are some of the more successful examples using this technique. In some cases, I used the still image as a heavily weighted seed to create another still. The last one was rendered in Houdini with the Redshift renderer, following a Houdini course I created some time ago.
This last one was created in Houdini and Redshift
Here are a few failures, especially when attempting looped animations – which, in all fairness, would be a challenge for human or machine:
Pros, Cons, and Applications
This Blender + Midjourney workflow offers:
✅ Speed — animations in under a minute.
✅ Uniqueness — still images designed in Blender give your animations a personal touch.
✅ Flexibility — you can prototype quickly, then refine later in Blender if needed.
But there are trade-offs:
⚠️ Less control — you can’t direct the camera as precisely as in Blender.
⚠️ Mixed results — especially with looping animations, some attempts won’t quite work.
Despite this, it’s an excellent way to rapidly prototype or generate atmospheric background sequences.
Wrap Up
By combining Blender’s creative control with Midjourney’s speed, you can create unique nebula animations — both straightforward flythroughs and seamless loops — in a fraction of the time it would take using traditional volumetric rendering alone.
If you’d like to try this workflow yourself, you can check out the Nebula Generator add-on. You don’t have to use it — any still nebula image will work — but it’s a great way to get started.
Have you tried mixing Blender with AI tools like Midjourney? It might feel a little unsettling, but after spending hours rendering animations myself, I must say the results are undeniably impressive.
By supporting these add-ons you also support their continual improvement, which is a mega help – and you help out the Blender Foundation too, as a percentage of each sale goes to them.
My current add-ons are:
The Plating Generator – Quickly create a configurable mesh in the form of a fighter jet or starfighter.
Shape Generator – Quickly create and configure countless random shapes for a wide variety of purposes.
Hull Texture – A versatile, reusable base texture for creating irregular plating effects.
The Shipwright – A bundle of the above add-ons combined with a custom setup for automatic sci-fi model generation.
Curves To Mesh – Create and configure mesh surfaces from bezier curves.
Mesh Materializer – Map objects onto another object using its active UV coordinates.
Window Generator – Model many windows at once on the faces of a mesh.
Bevelled Extrude – Create extrusion effects with configurable bevels on the base, corners and tops.
Three years ago I built the 3D Shape Generator to automatically create a variety of unique 3D shapes and objects.
For the latest update I got talking with Marco Iozzi, a professional who has been working in the industry since 1998 as a 3D Artist, Matte Painter and now Concept Artist, with credits on productions like Harry Potter, Elysium and Game of Thrones; he is currently doing concept art and design on Thor: Love and Thunder.
He downloaded the Shape Generator to help him with his own design work.
He pointed me to Sinix Design, whose videos on Design Theory describe the best methods for creating shapes that are appealing and unique.
Sinix spoke about the concept of Big/Medium/Small: the theory that the majority of good designs can be broken down into one large main shape, a medium shape or shapes taking up a smaller amount of the design, and some smaller shapes taking up an even smaller area:
This got me thinking about the Shape Generator: if I could first generate just one large object, automatically scatter medium objects across that, and then scatter smaller random objects across those, I might be onto something.
Once I set this up, it was surprising how many more unique and interesting designs I could quickly generate with the change of a single number.
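If you’re curious what that looks like in code, here’s an illustrative sketch of the idea in Blender Python (not the Shape Generator’s actual implementation): one big cube, medium cubes scattered across its surface, and small cubes scattered across those, all driven by a single seed:

```python
import bpy
import random

def scatter_on(parent, count, size):
    """Scatter simple cubes at random vertices of `parent`'s mesh.

    Illustrative only: each tier of shapes is placed across the
    surface of the tier above it.
    """
    children = []
    for _ in range(count):
        v = random.choice(parent.data.vertices)
        bpy.ops.mesh.primitive_cube_add(
            size=size,
            location=parent.matrix_world @ v.co)
        children.append(bpy.context.object)
    return children

random.seed(42)  # one number drives the whole design

# Big: one large main shape
bpy.ops.mesh.primitive_cube_add(size=4.0)
big = bpy.context.object

# Medium: a handful of shapes scattered across the big one
mediums = scatter_on(big, count=5, size=1.0)

# Small: even smaller shapes scattered across each medium one
for m in mediums:
    scatter_on(m, count=4, size=0.3)
```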
A sample of the generated shapes using the Shape Generator’s Big/Medium/Small settings
A screenshot of the Shape Generator’s Big/Medium/Small settings in Blender (also available in Houdini)
With the help of Marco’s feedback, I’ve also introduced a host of other improvements:
Along with the Big/Medium/Small controls, a new integrated panel allows you to easily revisit and change your settings, which now include more material controls, Boolean settings, and easier-to-install Presets.
A new Iterator function, much like the one I coded with Chipp Walters in KIT OPS SYNTH, allows you to quickly render out countless variations so you can pick a design to take forward.
The new Bake feature allows you to combine all the individual shapes into one, so you can continue to model or sculpt your chosen design to the next level of detail.
Each of these features is covered in the following videos:
I hope you’ll enjoy these updates to the Shape Generator and find it useful in developing your own designs.
It’s often been requested, and here it is – an introductory tutorial on converting a curve network to a mesh in Blender using my Curves to Mesh add-on. Any questions, do let me know… enjoy!
I stumbled across an interesting displacement technique that’s useful for subtle parallax-type effects and ripple distortions. Using it, you can add movement to still images such as clouds, giving them the appearance of motion as the camera travels over them.
The above video talks through the technique, and I’ve put a simple Node Group I developed for it up for download on Gumroad and Blender Market.
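For those who’d rather wire up something similar by hand, here’s a rough sketch of the idea behind the node group (my own simplified version, not the exact group that’s for download): animated 4D noise nudges an image texture’s coordinates to fake parallax and ripples:

```python
import bpy

# Build a material and grab its node tree
mat = bpy.data.materials.new("ParallaxClouds")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

coords = nodes.new("ShaderNodeTexCoord")
noise = nodes.new("ShaderNodeTexNoise")     # drives the distortion
noise.noise_dimensions = '4D'               # the W input lets us animate it
offset = nodes.new("ShaderNodeVectorMath")  # coords + noise = wobbly UVs
offset.operation = 'ADD'
image = nodes.new("ShaderNodeTexImage")     # the still cloud image goes here

noise.inputs["Scale"].default_value = 2.0
links.new(coords.outputs["UV"], noise.inputs["Vector"])
links.new(coords.outputs["UV"], offset.inputs[0])
links.new(noise.outputs["Color"], offset.inputs[1])
# (in practice, recentre the noise around zero and scale it down
#  so the distortion stays subtle)
links.new(offset.outputs["Vector"], image.inputs["Vector"])

# Animate the noise's W value so the ripples drift over time
noise.inputs["W"].default_value = 0.0
noise.inputs["W"].keyframe_insert("default_value", frame=1)
noise.inputs["W"].default_value = 5.0
noise.inputs["W"].keyframe_insert("default_value", frame=250)
```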
The collection contains 271 3D meshes that you can spread across the walls and hulls of your spaceships, stations, futuristic cities and other projects, to quickly make them look far more detailed and complex in your final renders.
The pack contains various objects arranged in folders covering sensors, vents, signs, fuel storage, hatches, scaffolding and many other miscellaneous objects and shapes.
I’m gradually making my Blender Market add-ons available on Gumroad, one by one… next up is the Plating Generator, which adds a range of plating patterns onto an object, as well as the ability to add greeble effects – that is, adding lots of smaller objects onto a larger object’s faces.
KIT OPS is able to apply a wide range of 3D objects (called INSERTs) that can be used to instantly cut and add to existing objects or create standalone ones with the goal of rapidly creating and exploring new designs. Read more about it here.
Chipp Walters, KIT OPS’s creator, had been interested in my generative modelling work in the Plating Generator and Shape Generator, and had a vision for an extension to KIT OPS called KIT OPS SYNTH.
The original requirement for SYNTH was simply to overlay these INSERTs in a grid-like fashion on top of 3D surfaces, but with Chipp’s design-patterns thinking and background at NASA, we are taking it a lot further…
A range of layouts can be used: arrange INSERTs in rows, columns, grids, randomly, and around edges and borders (see the sketch after this list).
Use ‘layers’ of INSERTs to manage and apply groups of INSERTs in one go.
Control the frequency and placement of INSERTs with a variety of parameters: apply padding to the INSERTs, scale them individually, add random rotations.
Use Blender 2.91’s new Booleans feature for more accurate cutting.
Load and save your configurations to share with others or apply later.
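To give a feel for the maths behind the simplest of these layouts, here’s an illustrative sketch of a padded grid (a hypothetical helper, not SYNTH’s actual code) that computes where each INSERT would sit on a rectangular face:

```python
from typing import List, Tuple

def grid_points(width: float, height: float, cols: int, rows: int,
                padding: float = 0.1) -> List[Tuple[float, float]]:
    """Evenly spaced insert positions across a face, inset by padding.

    `padding` is the fraction of each dimension left empty at the
    borders; each INSERT would be placed at a cell centre.
    """
    usable_w = width * (1.0 - 2.0 * padding)
    usable_h = height * (1.0 - 2.0 * padding)
    cell_w, cell_h = usable_w / cols, usable_h / rows
    return [(padding * width + (c + 0.5) * cell_w,
             padding * height + (r + 0.5) * cell_h)
            for r in range(rows) for c in range(cols)]

# A 3x2 grid on a 2.0 x 1.0 face with 10% padding on every side
print(grid_points(2.0, 1.0, cols=3, rows=2))
```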
This has already given rise to a variety of promising results that can be achieved very quickly with SYNTH. Here is a short video of the random layout being applied using a set of INSERT cutters, just by changing the random seed value:
A short video showing SYNTH’s random layout… and this isn’t even the most powerful layout!
Results achieved in just a few seconds.
There’s still plenty of work to do and a good round of testing to be done before the first release, but I thought it would be good to show the progress so far. In the future, Chipp is looking forward to taking SYNTH to the next level with Machine Learning…but that is definitely a blog for another time.