When I first built the Bend Modifier, I almost didn’t release it.
On the surface, it looked too simple, just a way to bend geometry in Blender. Behind the scenes though, it involved quite a lot of maths: arcs, tangents, limits, and all the fiddly details to make sure it behaved cleanly with tails, pivots, and arbitrary axes. Not as straightforward as some might have you believe.
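To give a flavour of the maths, here's a stripped-down sketch – not the modifier's actual code – of the core idea in 2D: points along the bend axis get wrapped around a circular arc, and anything past the bend limit carries on straight along the arc's end tangent (the "tail").

```python
import math

def bend_point(x, y, angle, length):
    """Bend a 2D point (x along the bend axis, y offset from it).

    'angle' is the total bend in radians applied over 'length' units of x;
    points past 'length' continue straight along the end tangent (the tail).
    A simplified sketch: the real modifier also handles pivots and arbitrary axes.
    """
    if abs(angle) < 1e-9:          # no bend requested: leave the point alone
        return x, y
    radius = length / angle        # radius of the arc that gives this bend
    t = min(max(x, 0.0), length)   # clamp to the bent region
    theta = (t / length) * angle   # angle around the arc for this point
    # Position on the arc, offset by y towards/away from the arc centre
    bx = math.sin(theta) * (radius - y)
    by = radius - math.cos(theta) * (radius - y)
    # Tail: whatever overshoots the limit extends along the end tangent
    overshoot = x - t
    return bx + math.cos(theta) * overshoot, by + math.sin(theta) * overshoot
```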
I worried it might not be enough of a tool to stand on its own.
But since releasing it on Superhive (formerly Blender Market) just the other day, it’s taken off with over 200 downloads already. It turns out that sometimes the simple, focused tools are the ones people need most. Blender’s built-in Simple Deform has always been a bit awkward (one word: empties), so giving people a true modifier with a proper gizmo and real-world controls filled a gap I wasn’t sure existed until now.
The feedback has been great, from users coming over from 3ds Max who miss its bend modifier, to Blender modelers who just want a clean, UV-safe, stackable bend that doesn’t need extra objects cluttering the scene.
It’s been a reminder for me: even if a tool feels “too simple,” if it removes friction from everyday workflows, it can be valuable.
Here’s an introduction:
What’s next?
My next project is a major overhaul of Curves to Mesh. It’s just come out of testing and early feedback has been really positive. This one is a lot more complex under the hood, but I’ve learned not to underestimate the power of simple presentation paired with solid maths.
Thanks to everyone who’s tried the Bend Modifier so far, it’s been encouraging to see it bend its way into so many hearts so quickly.
When I first built Quad Maker, I wanted it to be a straightforward tool—something inspired by Maya’s Quad Draw but designed to fit neatly into Blender. That philosophy hasn’t changed.
Keeping Quad Maker focused and “boring” has some real advantages:
It stays simple and easy to learn (see the documentation and video to get started).
It stays stable and reliable. Blender’s Python API already gets pushed hard by Quad Maker, so focusing on core features avoids adding unnecessary complexity.
It helps keep development costs under control. Since Quad Maker is a one-off purchase rather than a subscription, I want to make sure that updates add real value without inflating ongoing costs.
So while I am always open to feature requests, I’m careful to only add the ones that really matter. The result is that Quad Maker remains exactly what it was meant to be: a dependable, no-fuss retopology tool.
“I think I hate [the scientist] because deep down I suspect he may be right. That what he claims is true. That science has now proved beyond doubt there’s nothing so unique…nothing there our modern tools can’t excavate, copy, transfer. That people have been living with one another all this time, centuries, loving and hating each other, and all on a mistaken premise. A kind of superstition we kept going while we didn’t know better.” —Klara and the Sun, Kazuo Ishiguro
The fear that human uniqueness could be reduced to something reproducible captures, I think, much of today’s anxiety around AI.
After my fretful post on the rise of AI in generative art, signs are emerging in Big Tech that the AI juggernaut may be slowing, or at least recalibrating.
Meanwhile, analysis from Epoch AI (via TechCrunch) suggests that gains from reasoning models—particularly those trained with reinforcement learning—could plateau within a year.
I personally find 3D generation tools like TripoAI amazing, although they still act as a springboard for further retopology and refinement, as this feedback suggests. Their recent v3 model release still does not negate the need for these steps.
Taken together, these shifts may hint at the more practical boundaries of AI’s potential.
An AI slowdown may comfort those worried about job displacement. However, I think we will never feel quite as unique as we once did.
Setting aside the controversies around generative AI tools like Midjourney, this post explores a practical workflow for combining Blender with Midjourney to create volumetric nebula animations more quickly than traditional rendering techniques.
Volumetric nebulae can look fantastic in Blender, but animating them can be time-consuming and prone to artifacts in both EEVEE and Cycles. Midjourney, on the other hand, can generate animations quickly — especially flythroughs of cloud-like structures. In this post, I’ll walk through a workflow that combines both tools: using Blender’s Nebula Generator to create a unique still image, and then feeding that into Midjourney to produce animations — including seamless(ish) loops.
Why Use Midjourney for Animation?
Blender’s EEVEE renderer is excellent for creating volumetric still images, but animating those volumes often requires lots of samples to avoid flicker. That translates to long render times, and even then, some artifacts can remain. The same goes for Cycles, which is more accurate but can take much longer to render.
Midjourney, however, has been trained on massive amounts of cloud-like images and videos. This makes it surprisingly good at generating flythroughs of nebulae. While you lose some fine control over the camera, you gain speed — producing a short animation in seconds instead of hours.
Left: a Blender-rendered still image. Right: a frame from the Midjourney flythrough video.
Step 1: Create a Still Image in Blender
Tweak noise and lighting parameters to shape the nebula.
Adjust the coloring to get the atmosphere you want.
Increase the volumetric resolution for higher detail.
Render out a single still image.
I confess that I find this stage the most enjoyable – it lets you stay in control of the artistic look before moving into the scarily efficient world of AI-driven animation.
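If you’d rather script the final render than click through the settings, a minimal sketch looks something like this (the resolution and output path are just example values; the Nebula Generator’s own controls live in its panel):

```python
import bpy

scene = bpy.context.scene

# Example still-image settings – adjust to taste
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "//nebula_still.png"   # saved next to the .blend file

# Render the single frame and write it to disk
bpy.ops.render.render(write_still=True)
```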
Step 2: Generate a Flythrough in Midjourney
With the Blender still rendered, I switch to Midjourney.
Upload the image into the Creation tab.
Use it not as a finished still, but as a starting frame for an animation.
A simple prompt like “nebula flythrough, NASA image of the day” works well — the phrase flythrough seems to make a big difference.
After hitting generate, Midjourney takes about 30 seconds to produce four short animations. Some will be better than others, but usually at least one is more than workable.
One question I was asked when I first shared this workflow was: Can you make it loop? The answer is yes — with mixed results.
If you set the end frame equal to the start frame and optionally set the Motion to High, Midjourney will attempt to generate a seamless loop. Sometimes it works beautifully, sometimes it doesn’t. A few retries usually yield at least one good loop.
Start vs end frame comparison, showing matching images
Here’s an example where the nebula flythrough loops smoothly, making it perfect for background visuals.
Other examples
Here are some of the more successful examples using this technique. In some cases, I used the still image as a heavily weighted seed to create another still. The last one was rendered in Houdini with the Redshift renderer, using techniques from a Houdini course I created some time ago.
This last one was created in Houdini and Redshift
Here are a few failures, especially when attempting looped animations – which, in all fairness, would be a challenge for human or machine:
Pros, Cons, and Applications
This Blender + Midjourney workflow offers:
✅ Speed — animations in under a minute.
✅ Uniqueness — still images designed in Blender give your animations a personal touch.
✅ Flexibility — you can prototype quickly, then refine later in Blender if needed.
But there are trade-offs:
⚠️ Less control — you can’t direct the camera as precisely as in Blender.
⚠️ Mixed results — especially with looping animations, some attempts won’t quite work.
Despite this, it’s an excellent way to rapidly prototype or generate atmospheric background sequences.
Wrap Up
By combining Blender’s creative control with Midjourney’s speed, you can create unique nebula animations — both straightforward flythroughs and seamless loops — in a fraction of the time it would take using traditional volumetric rendering alone.
If you’d like to try this workflow yourself, you can check out the Nebula Generator add-on. You don’t have to use it — any still nebula image will work — but it’s a great way to get started.
Have you tried mixing Blender with AI tools like Midjourney? It might feel a little unsettling, but after spending hours rendering animations myself, I must say the results are undeniably impressive.
The first image I created with Midjourney in 2022 vs the same prompt in 2025: “space opera with planets”
Anything you read about the advent of AI in the world of software and the arts is already out of date. Think of this post as a checkpoint – my thoughts on the state of AI in 2025, something I’ll look back on in five years to see how it all panned out.
My Background
I’m no stranger to the world of computing. I earned my first-class honours degree in Computer Science back in 2005, kept servers alive in Colorado, developed banking systems in London, and spent 15 years in government big data analytics and systems design — from software developer to “software architect.”
That last title always felt a little grand for someone mostly drawing boxes and arrows, figuring out how data flows from one system to the next, whilst others did the harder work of making it happen.
I gave up on that about 4 years ago, which is another story. I decided to revive this blog to chronicle the various random projects that have always filled my time — usually around software and particularly computer graphics, which pulled me into computing in the first place.
Ok, AI, You Can Help
In resurrecting this blog, I even resorted to ChatGPT to take the stress out of bug hunting.
Upgrading the website to PHP 8.x broke it, and instead of poking through long error messages, stack traces and Reddit forums, I fed them into the chatbot. It pointed me towards what the problem might be and guided me to a solution far more quickly. That is a fact I wrestle with – in many ways it is faster than me. It was only my experience that helped me recognise when it was on the right track, letting me hunt down the solution a little more effectively than a new starter might.
A Bubble That Might Burst
With tools like Midjourney, Grok Imagine and a myriad of other AI services out there, it does feel like a bubble that might burst – but, much like the dot com bubble before it, it will likely still leave the world forever changed.
The Universal Approximation Theorem
Underneath the flashy images and videos lies a mathematical idea from the 1980s. The large-scale compute infrastructure of recent years – services like AWS and Azure – has taken this idea from theory to practice.
This idea is the Universal Approximation Theorem: given enough data, ‘neurons’, and time, an AI system can approximate any real-world function – be that the function of a calculator, human language, or image and video generation.
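As a toy illustration of the principle (nothing like how production models are actually trained): a single hidden layer of ‘neurons’, fitted with plain gradient descent in numpy, can be nudged into approximating a function like sin(x), and the more neurons and training steps you give it, the closer it gets.

```python
# Toy Universal Approximation demo: fit a one-hidden-layer network to sin(x).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)   # training data
y = np.sin(x)                                        # the "real-world" function

hidden = 32                                          # number of 'neurons'
W1 = rng.normal(size=(1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, 1))
b2 = np.zeros(1)

lr = 0.01
for step in range(20000):
    h = np.tanh(x @ W1 + b1)        # hidden activations
    pred = h @ W2 + b2              # network output
    err = pred - y
    # Backpropagate the mean squared error
    grad_pred = 2 * err / len(x)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T * (1 - h ** 2)
    grad_W1 = x.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("max error:", np.abs(pred - y).max())  # shrinks as neurons/steps grow
```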
I think this graph outlines this idea well, where over time the AI function ‘learns’ to approximate the real world counterpart the more data it consumes:
Its Limits
This high level concept does, and should, inject a healthy level of fear. Yet it thankfully has constraints:
It will only be as good as the data you feed it or train it on, and that data needs to be labelled with the right expected output. This means we are limited to training it with what we already know. It will, however, make inferences that a regular human may not – which is not to be underestimated…and which could lead to it training itself…
The more data it is fed, the more time and resources it will consume. These are, sadly and/or thankfully, finite.
The more neurons it requires to store the weights that make its brain tick, the more hardware it will need.
So, there will be a limit – where that limit is, I think we are yet to see.
The Artistic Debate
There is a lot of controversy over the use of AI and how it is trained on other artists’ data. Unfortunately, I feel this argument falls down a little when you consider a phrase I heard a lot a few years ago in the artistic community: “Steal like an Artist”.
This phrase goes to the heart of the artistic process. Artists are taught to study others’ work, collecting moodboards before they begin, so they can mix influences into their own creation. It applies to Software Engineers as well: when they start out, they are put on bug fixes and support requests. This trains them, helps them learn their trade.
To me, sadly, this is exactly what large scale AIs are doing, only at scale.
Tools I’m Watching
With all that off my chest, here is a checkpoint of the things I am interested in seeing develop:
Google DeepMind’s Genie 3: Only this morning, prominent artist Rui Huang got Google’s research team to take one of his still images and generate an interactive video from it. This is real-time video generated on the fly: the user moves a controller and the next frames of the video are generated automatically, resulting in a fully interactive environment to explore.
The current limitation is that the video starts to break down the longer the simulation is ‘played’. However, this leapfrogs the need for the traditional 3D creation and simulation required in game development environments like Unreal Engine.
Midjourney Video and its competitors: Midjourney has been around for some time at this point. Its recent releases in video generation have given me pause. Re-animating old photographs of my grandparents can be done with ease:
Re-creating short animations of my old Star Trek 3D models can also be effective – the more accurate the original image, the better:
Tripo AI and Sparc 3D Image-to-3D generation: These and other 3D model generation services have been coming along more dramatically than I anticipated. They take the traditional AI diffusion process and apply it to 3D. The general process evolves a volume “cloud” and iteratively tests that against the input image. This is then remeshed into a 3D surface.
Some limits may appear: these AI systems may not have as much 3D data to draw on as the wealth of 2D images and video out there – but as outlined in this video by Ryuu of Blender Bros fame, they are a good tool for blocking out a design to develop into a finished 3D model. Good topology (edges, faces) and good texture mapping are still an elusive holy grail at this point, and may be too subjective a skill for an AI system to nail effectively – maybe.
Trying Tripo out made me wonder whether I would even have gotten into 3D in the first place had it existed back then. One of the first things that fascinated me as a kid playing top-down 2D shooters was what this sci fi ship might look like in 3D:
I am now able to recreate the basic 3D shape of this ship with Tripo in moments, and long-lost childhood wonderings vanished in an instant. It needed more work, but it was an effective start:
In a funny kind of way, what a time to be alive: the advent of large scale AI services like these opens up possibilities and imaginings that weren’t even a dream you could grasp a few years ago. Yet, it does make you wonder how one might fit in over the next few years.
I think traditional tools like Blender, 3DS Max, Maya and Houdini may become more bespoke, expert tools – for when you need a greater level of accuracy and refinement in 3D modelling than an AI may be able to attain.
Here I used a simple Blender Lattice to modify an AI model into something a bit more interesting using Fit Lattice:
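For anyone who hasn’t used lattices before, here’s a rough, hypothetical bpy sketch of the manual setup that an add-on like Fit Lattice streamlines (the object name is made up, and this is not the add-on’s code): build a lattice around the model, point a Lattice modifier at it, and then move the lattice points to reshape the mesh.

```python
import bpy

obj = bpy.data.objects["AI_Ship"]   # hypothetical name for the imported AI model

# Build a lattice roughly matching the object's bounding box
lat_data = bpy.data.lattices.new("DeformLattice")
lat_data.points_u = lat_data.points_v = lat_data.points_w = 3
lat_obj = bpy.data.objects.new("DeformLattice", lat_data)
bpy.context.collection.objects.link(lat_obj)
lat_obj.location = obj.location
lat_obj.scale = obj.dimensions      # default lattice is a 1x1x1 cube

# Point a Lattice modifier at it; moving lattice points now reshapes the model
mod = obj.modifiers.new(name="Lattice", type='LATTICE')
mod.object = lat_obj
```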
Many of the users of the Blender add-ons I create seem to be gravitating towards manufacturing and 3D printing. This area appears less touched by the tendrils of AI due to the need for more physical and accurate design thinking. I also think CAD-focused tools, where accuracy is key, will live on, perhaps alongside creative AI services.
AI-generated work still requires guidance towards a refined solution. The challenge is that the human experts who can provide that guidance – the people who cut their teeth learning things the hard way – may be in shorter supply.
Conclusions for Now
Looking through the deluge of arguably impressive AI-generated images and video, it’s hard not to feel a little despondent. When I create something with one of these AI tools, I also feel an emptiness: where was the struggle, the effort? The satisfaction you can feel at the end of crafting something?
The human mind is a problem-solving machine that finds a sense of joy in the creation. I think this accounts for the slight emptiness or soullessness that can be felt when using an AI creative system. How all this is reconciled with the inarguable efficiencies that AI presents, I do not know – or whether it even matters.
I think the natural human need to strive will be displaced into something else – I would say hobbies, although money is needed for hobbies…and I think therein lies another issue. The AI that currently fuels my interests may not fuel me.
I don’t think AI will necessarily kill Art – perhaps, when history looks back, this will be seen as a time of massive artistic explosion and exploration. It may, however, redefine what it is to be an Artist…or someone who provides tools to them.
By supporting these add-ons you also support their continual improvement, so it is a mega help – and you help out the Blender Foundation too, as a percentage of each sale goes to them.
My current add-ons are:
The Plating Generator – Quickly create a configurable mesh in the form of a fighter jet or starfighter.
Shape Generator – Quickly create and configure countless random shapes for a wide variety of purposes.
Hull Texture – A versatile, reusable base texture for creating irregular plating effects.
The Shipwright – A bundle of the above add-ons combined with a custom set up for automatic sci fi model generation.
Curves To Mesh – Create and configure mesh surfaces from bezier curves.
Mesh Materializer – Map objects onto another object using its active UV coordinates.
Window Generator – Model many windows at once on the faces of a mesh.
Bevelled Extrude – Create extrusion effects that have configurable bevel effects on the base, corners and tops.
Three years ago I built the 3D Shape Generator to automatically create a variety of unique 3D shapes and objects.
For the latest update I got talking with Marco Iozzi, a professional who has been working in the industry since 1998 as a 3D Artist, Matte Painter and now Concept Artist on productions like Harry Potter, Elysium and Game of Thrones; he is currently doing concept art and design on Thor: Love and Thunder.
He downloaded the Shape Generator to help him with his own design work.
He pointed me at Sinix Design, who provides a set of videos on the topic of Design Theory describing the best methods for creating shapes that are appealing and unique.
Sinix spoke about the concept of Big/Medium/Small: the theory that the majority of good designs can be broken down into one large main shape; a medium shape or shapes taking up a smaller amount of the design; and some smaller shapes taking up an even smaller area:
This got me thinking about the Shape Generator: if I could first generate just one large object, automatically scatter medium objects across that, and then scatter smaller random objects across those, I might be onto something.
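The core of that idea is simple enough to sketch. Here’s a rough, hypothetical version (not the Shape Generator’s actual code, and the object names are made up): pick random faces on the big shape and drop scaled-down copies of the medium and small shapes onto them, aligned to the surface.

```python
import bpy
import random

def scatter(target, source, count, scale):
    """Drop 'count' copies of 'source' onto random faces of 'target'."""
    faces = target.data.polygons
    for _ in range(count):
        face = random.choice(faces)
        copy = source.copy()
        copy.data = source.data.copy()
        # Place the copy at the face centre, aligned to the face normal
        copy.location = target.matrix_world @ face.center
        normal = (target.matrix_world.to_3x3() @ face.normal).normalized()
        copy.rotation_euler = normal.to_track_quat('Z', 'Y').to_euler()
        copy.scale = (scale, scale, scale)
        bpy.context.collection.objects.link(copy)

# Hypothetical objects: one big base shape, then medium and small detail shapes
big = bpy.data.objects["Big"]
scatter(big, bpy.data.objects["Medium"], count=6, scale=0.4)
scatter(big, bpy.data.objects["Small"], count=20, scale=0.15)
```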
Once I set this up, it was surprising how many more unique and interesting designs I could quickly generate with the change of a single number.
A sample of the generated shapes using the Shape Generator’s Big/Medium/Small settings
A screenshot of the Shape Generator’s Big/Medium/Small settings in Blender (also available in Houdini)
With the help of Marco’s feedback, I’ve also introduced a host of other improvements:
Along with the Big/Medium/Small controls, a new integrated panel allows you to easily revisit and change your settings, which now include more material controls, Boolean settings, and easier-to-install presets.
A new Iterator function, much like the one I coded with Chipp Walters in KIT OPS SYNTH, lets you quickly render out countless variations so you can pick a design to take forward.
The new Bake feature allows you to combine all the individual shapes into one so you can continue to model or sculpt your chosen design to the next level of detail.
Each of these features is covered in the following videos:
I hope you’ll enjoy these updates to the Shape Generator and find it useful in developing your own designs.
Building on my popular Nebula Generator in Blender, last year I decided to take things even further, using Houdini to create sophisticated nebula effects and settling on the Redshift renderer for its speed when rendering 3D volumes. The results are on my ArtStation:
The course itself is a culmination of that learning over the year, and after some long hours in the editing room, it certainly has been a labor of love.
The course videos start by assuming a very basic knowledge of Houdini and Redshift, building in complexity as time goes on.
The 3 hours of step-by-step 4K videos have sample Houdini Indie files for each stage of my process, including:
Creation of customizable cloud setups using VDBs and Volume VOPs.
Adding customizable effects such as stars and volumetric gas.
Setting up an automated lighting rig.
Using meta-ball objects and particles to shape the nebula.
Rendering the nebula for both stills and animation.
Post processing techniques on the final result.
I hope you’ll find it useful – for more info, screenshots and pre-requisites, visit the course page on Gumroad, and if you have any questions do get in touch.
It’s often been requested, and here it is – an introductory tutorial on converting a curve network to a mesh in Blender using my Curves to Mesh add-on. Any questions, do let me know….enjoy!
I stumbled across an interesting displacement technique that’s useful for subtle parallax-type effects and ripple distortions. Using it, you can add movement to still images such as clouds, giving them the appearance of drifting as the camera travels over them.
The above video talks about the technique, and I’ve added a simple Node Group I developed for it up for download at Gumroad and Blender Market.
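If you’d rather wire up something similar yourself before downloading the Node Group, here’s a rough sketch of one way to do it (not the exact group I built): a noise texture offsets the image’s texture coordinates, and animating the Mapping node’s location makes that offset drift, giving the still image its ripple of movement.

```python
import bpy

# Build a material that samples a still image through noise-offset UVs
mat = bpy.data.materials.new("CloudParallax")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

coords = nodes.new("ShaderNodeTexCoord")
mapping = nodes.new("ShaderNodeMapping")        # keyframe its Location to drift
noise = nodes.new("ShaderNodeTexNoise")         # soft noise = gentle ripples
ripple = nodes.new("ShaderNodeVectorMath")
ripple.operation = 'SCALE'
ripple.inputs['Scale'].default_value = 0.05     # ripple strength
offset = nodes.new("ShaderNodeVectorMath")
offset.operation = 'ADD'
image = nodes.new("ShaderNodeTexImage")         # the still cloud render goes here

links.new(coords.outputs['UV'], mapping.inputs['Vector'])
links.new(mapping.outputs['Vector'], noise.inputs['Vector'])
links.new(noise.outputs['Color'], ripple.inputs[0])
links.new(coords.outputs['UV'], offset.inputs[0])
links.new(ripple.outputs['Vector'], offset.inputs[1])
links.new(offset.outputs['Vector'], image.inputs['Vector'])
links.new(image.outputs['Color'], nodes['Principled BSDF'].inputs['Base Color'])
```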