AI technology slowdown or wishful thinking for some?

“I think I hate [the scientist] because deep down I suspect he may be right. That what he claims is true. That science has now proved beyond doubt there’s nothing so unique…nothing there our modern tools can’t excavate, copy, transfer. That people have been living with one another all this time, centuries, loving and hating each other, and all on a mistaken premise. A kind of superstition we kept going while we didn’t know better.”
Klara and the Sun, Kazuo Ishiguro

The fear that human uniqueness could be reduced to something reproducible captures, I think, much of today’s anxiety around AI.

After my fretful post on the rise of AI in generative art, signs are emerging in Big Tech that the AI juggernaut may be slowing, or at least recalibrating.

  • OpenAI’s GPT-5 launched with much fanfare, but many users came away underwhelmed, noting incremental improvements rather than a leap forward.
  • Meta has also restructured its AI division, breaking it into smaller departments and reducing staff—moves that rattled investors and knocked confidence in its stock.
  • Meanwhile, analysis from Epoch AI (via TechCrunch) suggests that gains from reasoning models—particularly those trained with reinforcement learning—could plateau within a year.
  • I personally find 3D generation tools like Tripo AI amazing, although they still act as a springboard for further retopology and refinement, as this feedback suggests. Even their recent v3 model release does not negate the need for these steps.

Taken together, these shifts may hint at the more practical boundaries of AI’s potential.

An AI slowdown may comfort those worried about job displacement. However, I think we will never feel quite as unique as we once did.

The Art of Struggle in the Age of AI

The first image I created with Midjourney in 2022 vs the same prompt in 2025: “space opera with planets”

Anything you read about the advent of AI in the world of software and the arts is already out of date. Think of this post as a checkpoint – my thoughts on the state of AI in 2025, something I’ll look back on in five years to see how it all panned out.

My Background

I’m no stranger to the world of computing. I earned my 1st class honours degree in Computer Science back in 2005, kept servers alive in Colorado, developed banking systems in London, and spent 15 years in government big data analytics and systems design — from software developer to “software architect.”

That last title always felt a little grand for someone mostly drawing boxes and arrows, figuring out how data flows from one system to the next, whilst others did the harder work of making it happen.

I gave up on that about 4 years ago, which is another story. I decided to revive this blog to chronicle the various random projects that have always filled my time — usually around software and particularly computer graphics, which pulled me into computing in the first place.

Ok, AI, You Can Help

In resurrecting this blog, I even resorted to ChatGPT to take the stress out of bug hunting.

Upgrading the website to PHP 8.x broke it, and instead of poking through long error messages, stack traces and Reddit forums, I fed them into the chatbot. It pointed me to likely causes and guided me to a solution far faster than I would have managed alone. That is the undeniable fact I wrestle with: in many ways it is faster than me. Only my experience helped me recognise when it was on the right track, letting me hunt down the solution a little more effectively than a new starter might.

A Bubble That Might Burst

With tools like Midjourney, Grok Imagine and a myriad of other AI services out there, it does feel like a bubble that might burst – but, much like the dot-com bubble before it, it will likely still leave the world forever changed.

The Universal Approximation Theorem

Underneath the flashy images and videos lies a mathematical idea from the 1980s. Only recently has large-scale interconnected compute, on cloud platforms like AWS and Azure, taken that idea from theory to practice.

That idea is the Universal Approximation Theorem: given enough data, ‘neurons’ and time, a neural network can approximate just about any real-world function, be that the function of a calculator, human language, or image and video generation.

I think this graph outlines the idea well: the more data it consumes, the more closely the AI function ‘learns’ to approximate its real-world counterpart:
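In the same spirit, here is a tiny runnable sketch of my own (a toy, not how production models are trained): a single hidden layer of tanh ‘neurons’ approximating sin(x), with the fit improving as neurons are added:

```python
import numpy as np

# Toy illustration of the Universal Approximation Theorem: one hidden
# layer of tanh 'neurons' approximating sin(x). Hidden weights are
# random; only the output layer is fitted, which is enough to show the
# approximation improving as neurons are added.

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

for n_neurons in (2, 8, 64):
    W = rng.normal(scale=2.0, size=(1, n_neurons))   # random hidden weights
    b = rng.normal(scale=2.0, size=n_neurons)        # random biases
    h = np.tanh(x @ W + b)                           # hidden layer outputs
    w_out, *_ = np.linalg.lstsq(h, y, rcond=None)    # fit the output layer
    err = np.abs(h @ w_out - y).max()
    print(f"{n_neurons:3d} neurons -> max error {err:.4f}")
```

More neurons, better approximation: the theorem in miniature. The hard part in real systems is finding good weights from data at vast scale.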

Its Limits

This high-level concept does, and should, inject a healthy level of fear. Yet it thankfully has constraints:

  • It will only ever be as good as the data you train it on, and that data needs to be labelled with the right expected outputs. This means we are limited to training it with what we know. It will, however, make inferences that a regular human may not, which is not to be underestimated…and which could lead to it training itself…
  • The more data it is fed, the more time and resources it will consume. These are, sadly and/or thankfully, finite.
  • The more neurons it requires to store the weights that make its brain tick, the more hardware it will need (some rough sums below).
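To put very rough numbers on that last point (the parameter counts here are illustrative, not any particular model):

```python
# Back-of-the-envelope: memory needed just to hold a model's weights,
# assuming 2 bytes per parameter (16-bit floats). Counts are illustrative.
for params in (7e9, 70e9, 1e12):
    gib = params * 2 / 2**30
    print(f"{params / 1e9:>6,.0f}B parameters -> ~{gib:,.0f} GiB of weights")
```

And that is just storing the weights; training multiplies the bill with activations, gradients and optimiser state.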

So, there will be a limit – where that limit is, I think we are yet to see.

The Artistic Debate

There is a lot of controversy over the use of AI and how it is trained on other artists’ work. Unfortunately, I feel this argument falls down a little when you consider a phrase I heard a lot a few years ago in the artistic community: “Steal Like an Artist”.

This phrase goes to the heart of the artistic process. Artists are taught to study others’ work, collecting moodboards before they begin, so they can mix influences into their own creations. The same applies to software engineers: when they start out, they are put on bug fixes and support requests, which trains them and teaches them their trade.

To me, sadly, this is exactly what large-scale AIs are doing, only at industrial scale.

Tools I’m Watching

With all that off my chest, here is a checkpoint of the things I am interested in seeing develop:

  • Google DeepMind’s Genie 3: Only this morning, prominent artist Rui Huang got Google’s research team to take one of his still images and generate an interactive video from it. Genie generates video in real time, on the fly: as the user moves a controller, the next frames are generated automatically, resulting in a fully interactive environment the user can explore.

The current limitation is that the video starts to break down the longer the simulation is ‘played’. However, this leapfrogs the traditional 3D creation and simulation needed in game development environments like Unreal Engine.

  • Midjourney Video and its competitors: Midjourney has been around for some time at this point, but its recent releases in video generation have given me pause. Re-animating old photographs of my grandparents can be done with ease:

…and taking still images of nebula effects I painstakingly created in Houdini with Redshift can now be animated in seconds:

Re-creating short animations of my old Star Trek 3D models can also be effective; the more accurate the original image, the better:

  • Tripo AI and Sparc 3D image-to-3D generation: These and other 3D model generation services have been advancing more dramatically than I anticipated. They take the traditional AI diffusion process and apply it to 3D: broadly, the process evolves a volume “cloud”, iteratively testing it against the input image, before remeshing it into a 3D surface (a toy sketch of the idea appears below).

    Some limitations remain: these systems have far less 3D training data to draw on than the wealth of 2D images and video out there – but, as outlined in this video by Ryuu of Blender Bros fame, they make a good block-out tool on the way to a finished 3D model. Good model topology (edges, faces) and good texture mapping are still an elusive holy grail at this point, and may be too subjective a skill for an AI system to effectively nail – maybe.

    Whilst trying Tripo out, I wondered whether I would even have gotten into 3D in the first place had it existed back then. One of the first things that fascinated me as a kid playing top-down 2D shooters was what this sci-fi ship might look like in 3D:

I am now able to recreate the basic 3D shape of this ship with Tripo in moments, and long lost childhood wonderings vanished in an instant. It needed more work, but was an effective start:

This led me on to some further experiments:

Some I developed into ‘proper’ 3D models using more traditional tools like Blender and Substance 3D Painter:
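To make the “evolve a volume and test it against the image” idea above concrete, here is a toy Python sketch using silhouette carving, a far older and simpler cousin of the learned diffusion these services actually use:

```python
import numpy as np

# Toy 'evolve a volume against an image' sketch (silhouette carving),
# not Tripo's actual pipeline. Start from a solid voxel block, project
# it to 2D, and carve away voxels that fall outside the input silhouette.
# A real system would use many views plus a learned prior, then remesh
# the volume (e.g. with marching cubes) into a 3D surface.

N = 64
yy, xx = np.mgrid[0:N, 0:N]
target = (yy - N / 2) ** 2 + (xx - N / 2) ** 2 < (N / 3) ** 2  # stand-in input image

volume = np.ones((N, N, N), dtype=bool)   # solid block of voxels
silhouette = volume.any(axis=2)           # orthographic projection to 2D
volume[silhouette & ~target, :] = False   # carve voxels outside the silhouette

print(f"voxels kept: {volume.sum()} of {N**3}")
```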

The Future of 3D

In a funny kind of way, what a time to be alive: the advent of large scale AI services like these opens up possibilities and imaginings that weren’t even a dream you could grasp a few years ago. Yet, it does make you wonder how one might fit in over the next few years.

I think traditional tools like Blender, 3DS Max, Maya and Houdini may become more bespoke, expert instruments, reserved for when you need a greater level of accuracy and refinement in 3D modelling than an AI can attain.

Here I used a simple Blender Lattice, set up with Fit Lattice, to modify an AI model into something a bit more interesting:
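For the curious, the setup that Fit Lattice automates looks roughly like this in Blender’s Python API (a minimal sketch with an approximate bounding-box fit; the add-on does a far better job):

```python
import bpy

# Wrap a lattice around the active object and hook it up with a Lattice
# modifier, so moving the lattice's few control points deforms the dense
# AI-generated mesh underneath.

obj = bpy.context.active_object

lat_data = bpy.data.lattices.new("FitLattice")
lat_data.points_u = lat_data.points_v = lat_data.points_w = 3
lat_obj = bpy.data.objects.new("FitLattice", lat_data)
bpy.context.collection.objects.link(lat_obj)

# Approximate fit: a default lattice spans a 1x1x1 cube, so scaling it
# by the object's dimensions matches the bounding box (assuming the
# mesh is roughly centred on its origin).
lat_obj.location = obj.location
lat_obj.scale = obj.dimensions

mod = obj.modifiers.new(name="Lattice", type='LATTICE')
mod.object = lat_obj
```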

Many of the users of the Blender add-ons I create seem to be gravitating towards manufacturing and 3D printing. This area appears less touched by the tendrils of AI, given its need for more physical, accurate design thinking. I also think CAD-focussed tools, where accuracy is key, will live on, perhaps alongside creative AI services.

AI-generated output still requires guidance towards a refined solution. The challenge is that the human experts who can provide that guidance, the ones who cut their teeth learning things the hard way, may be in shorter supply.

Conclusions for Now

Looking through the deluge of arguably impressive AI-generated images and video, it’s hard not to feel a little despondent. When I create something with one of these AI tools, I also feel an emptiness: where was the struggle, the effort? The satisfaction you feel at the end of crafting something?

The human mind is a problem-solving machine that finds joy in creation. I think this accounts for the slight emptiness, or soullessness, that can be felt when using an AI creative system. How all this is reconciled with the inarguable efficiencies AI presents, I do not know – or whether it even matters.

I think the natural human need to strive will be displaced into something else – hobbies, I would say, although money is needed for hobbies…and I think therein lies another issue. The AI that currently fuels my interests may not fuel me.

I don’t think AI will necessarily kill Art – perhaps, when history looks back, this will be seen as a time of massive artistic explosion and exploration. It may, however, redefine what it is to be an Artist…or someone who provides tools to them.

What have I been up to lately…? KIT OPS SYNTH

Original vision for SYNTH (Credit: Chipp Walters)

I was approached a few months ago by Chipp Walters and Masterxeon101, famous for some of the leading tools in 3D modelling, such as KIT OPS and HARD OPS.

KIT OPS can apply a wide range of 3D objects (called INSERTs) to instantly cut into or add onto existing objects, or to create standalone ones, with the goal of rapidly creating and exploring new designs. Read more about it here.
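Under the hood, an INSERT ‘cutting’ into a target comes down to Blender’s Boolean modifier. A minimal sketch (illustrative object names, not KIT OPS’s actual code):

```python
import bpy

# The core mechanic behind an INSERT 'cut': a Boolean modifier that
# subtracts the INSERT's cutter mesh from the target object
# ('UNION' would add instead of cut).

target = bpy.data.objects["Target"]        # hypothetical object names
cutter = bpy.data.objects["InsertCutter"]

mod = target.modifiers.new(name="INSERT Cut", type='BOOLEAN')
mod.operation = 'DIFFERENCE'
mod.object = cutter
mod.solver = 'EXACT'                       # the Exact solver added in Blender 2.91
cutter.display_type = 'WIRE'               # keep the cutter visible but unobtrusive
```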

They had been interested in my generative modelling work in the Plating Generator and Shape Generator, and had a vision for an extension to KIT OPS called KIT OPS SYNTH.

The original requirement for SYNTH was simply to overlay these INSERTs in a grid-like fashion on top of 3D surfaces, but with Chipp’s design thinking and NASA background we are taking it a lot further….

KIT OPS SYNTH early results, being applied with the new KIT OPS BEVEL (Credit: Chipp Walters)

Emerging features include:

  • A range of layouts: arrange INSERTs in rows, columns, grids, randomly, or around edges and borders (the random layout is sketched after this list).
  • Use ‘layers’ of INSERTs to manage and apply groups of INSERTs in one go.
  • Control the frequency and placement of INSERTs with a variety of parameters: apply padding, scale them individually, add random rotations.
  • Use Blender 2.91’s new Booleans feature for more accurate cutting.
  • Load and save your configurations to share with others or apply later.
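As a flavour of how a seeded random layout can work, here is an illustrative sketch of my own (not SYNTH’s actual code); re-rolling the one seed re-rolls every placement at once, which is exactly what the video below shows:

```python
import random

# Scatter INSERT placements across a unit face, reproducibly per seed.
# Padding, per-INSERT scale and random rotation mirror the parameters
# described above; names and ranges here are illustrative.

def random_layout(count, seed, padding=0.05):
    rng = random.Random(seed)
    return [
        {
            "u": rng.uniform(padding, 1.0 - padding),   # position on the face
            "v": rng.uniform(padding, 1.0 - padding),
            "scale": rng.uniform(0.5, 1.0),             # per-INSERT scale
            "rotation": rng.choice([0, 90, 180, 270]),  # random rotation
        }
        for _ in range(count)
    ]

for placement in random_layout(count=4, seed=42):
    print(placement)
```

Because every placement derives from a single seed, a whole design can be reproduced, or shared, as one number plus a saved configuration.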

This has already given rise to a variety of promising results that can be achieved very quickly with SYNTH. Here is a short video of the random layout being applied with a set of INSERT cutters, just by changing the random seed value:

A short video showing SYNTH’s random layout…and this isn’t even the most powerful layout!
Results achieved in just a few seconds.

There’s still plenty of work to do and a good round of testing to be done before the first release, but I thought it would be good to show the progress so far. In the future, Chipp is looking forward to taking SYNTH to the next level with Machine Learning…but that is definitely a blog for another time.

Re-lifeing the Blog

The storefront on Blender Market

It’s been a long road getting from there to here, as they say – I decided to temporarily close this blog a few months ago due to the high level of neglect I’d been giving it, as I have been super busy creating add-ons on the Blender Market site.

Whilst I continue to do that, I think the time’s right to bring it back as I delve even deeper into the realms of Blender, Houdini, 3DS Max and renderers like Octane.

I thought I’d use this blog to share a more permanent record of my work and experiments in the world of CG (whilst still keeping up my Twitter and Youtube presence) along with my continuing development of add-ons and plugins.

I’d also like to get some feedback from you, dear reader – so if there’s anything you’d like me to write about, such as tips n tricks in the world of 3D, let me know.