You Have the Power. Why the democratization of 3D changes the game for boutique agencies.

Tobias Sugar

January 31, 2024 | 8 min read

In the beginning...

I still remember sitting down at my desk years ago while, behind me, my soon-to-be coworker played with the facial expressions of a 3D human head. It was a bit of a mind-bender and a behind-the-curtain moment where I realized the inner workings of this type of animation: how it all actually happened. As neat as it was, it was certainly far removed from the impressive graphics coming out of Hollywood. Fast forward to today, and, all by myself, I’m modeling environments, coloring and lighting them on my home PC, and spitting out renders in a few minutes. The ability to create studio-level output as an individual has arrived. So, what changed? And how has this arrival leveled the playing field for smaller agencies?


Why it used to suck.

Traditional hurdles to 3D development and animation often boiled down to two main issues: tools and time. For starters, the hardware required to create 3D designs and render them was often out of reach. Until recently, desktop machines frequently lacked the power and speed necessary to handle these demands. Top-of-the-line graphics cards and processors were expensive, making it difficult for individuals or agencies to justify the cost. Additionally, creating a desirable outcome often required a team of people, which was hard to staff, given the specialized skills involved.

Achieving the desired end result often meant lots of experimentation, as rendering was a slow process, often at odds with the timeline a client would find acceptable. Storage was another significant challenge. File sizes were enormous, particularly for frame sequences, straining hard drive space and forcing many to resort to render farms, or remote rendering, which were also costly and counterproductive when managing workflows across larger teams. Download and upload times made it difficult to transfer files and collaborate with others. As a result, many designers relied on stylization, such as flat shading or cel shading, to shorten rendering times. But this meant sacrificing photorealism.

So what’s changed?

Enter Blender. “Free 3D software?” I remember thinking to myself, “No way.” The tool was in its infancy and presented itself very much as a community-driven, open-source effort. “There is no way this could be as good,” I thought. I mean, you get what you pay for, right? But in time, I began to experiment.

Real-time rendering. The rendering process I’ve mentioned a few times by now is probably one of the biggest obstacles anyone faces. To render means to turn the wireframe composition in the preview window of any 3D software into a finished, fully colored and lit image. Or an image sequence, for that matter, when dealing with animation.

Just like drawing, rendering in 3D takes geometry and turns it into the finished look.

In Blender, you have two render engines: Eevee, which runs more or less in real time but at reduced quality (in other words, not photorealistic), and Cycles, which leverages full path tracing like the big studio renderers and previews progressively as you work (though the final calculation is still time-intensive). Either way, the flame is lit. It's my prediction that in a few short years, almost all 3D software will deploy fully realized, real-time rendering of some sort.
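To make that concrete, here’s a minimal sketch using Blender’s Python API (bpy) to flip between the two engines and kick off a render. The sample count and output path are just illustrative, and the Eevee identifier varies by Blender version (4.2+ uses 'BLENDER_EEVEE_NEXT'):

```python
# A minimal sketch of switching render engines in Blender's Python API.
import bpy

scene = bpy.context.scene

# Eevee: rasterized, near real-time, less physically accurate.
scene.render.engine = 'BLENDER_EEVEE'

# Cycles: full path tracing; higher fidelity, longer render times.
scene.render.engine = 'CYCLES'
scene.cycles.samples = 128  # more samples means less noise but more time

# Render the current frame to disk (path is illustrative).
scene.render.filepath = '/tmp/engine_test.png'
bpy.ops.render.render(write_still=True)
```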

Unreal Engine.

There are times in one’s career when we have full-circle moments, when we realize that everything we’ve been working towards finally culminates and all the knowledge we’ve gained has a place to thrive. For me, this happened when I discovered Unreal Engine. The original reason I had been attracted to 3D software was that it appeared to be a tool in which, if I could imagine something, anything, I could create it. By way of Blender, I discovered Unreal as a true real-time renderer and, when I dug deeper, it quickly became evident that it was the tool that realized the dream I had so many years ago.

Unreal Engine: Lumen. Optimized geometry and heavy use of normal maps allow massive-scale environments to be handled with ease.

Unreal is a gaming engine by its nature, but its capabilities have a whole other side in the space of animation and previs. Since it uses a different way of visualizing (call it a different type of render engine), it bypassed most of the challenges I’d previously faced. And because it began as a game engine, efficiency was a main driver in its approach. Unreal’s developers closed the “look” gap on top of an engine built to reduce memory weight and enable quick prototyping. That’s completely at odds with traditional tools: inside the walled garden of big Hollywood studios, there were never limits, because you could always add more compute power and pay for it. Here was a tool designed to empower small teams, and from my perspective, that comes down to three key factors.

Level of detail.

Let’s use the real world to help explain. From our perspective, the visual data we take in is concentrated on the subject we’re looking at. For example, look at your hand. The hand is in focus, and your eye produces an effect in your brain similar to what you see in photos: less information is absorbed about the background, which a photo represents as blur. Because there is less information, the detail of an object not in focus can be reduced.

As objects get farther away, their detail and data decrease.


This is referred to as “level of detail” (LOD), and in Unreal, this is precisely how massive environments and detailed models are visualized on systems that previously would have struggled. Very much unlike traditional 3D, objects outside the point of focus actually carry reduced data: simpler geometry, smaller textures, and so on. The engine only calculates what it’s looking at. There’s no need to remove the back faces of models, for example; if the camera doesn’t see it, it’s not a cost. So if the background of a scene is blurry or far away, the LOD drops on those objects based on what the camera picks up, and this can be controlled. This lower-poly approach is balanced by heavy use of normal maps, which fake fine surface detail and make objects read as far denser meshes than they actually are.
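As a toy illustration (my own simplified sketch, not Unreal’s actual implementation), LOD selection boils down to estimating how large an object appears on screen and swapping in a cheaper mesh as it shrinks:

```python
# A toy illustration of level-of-detail selection (not Unreal's actual code):
# pick a cheaper mesh variant as an object shrinks on screen.
from dataclasses import dataclass

@dataclass
class LODLevel:
    mesh_name: str
    triangle_count: int
    screen_size_threshold: float  # fraction of screen height the object covers

# Hypothetical LOD chain for a single prop, densest first.
LOD_CHAIN = [
    LODLevel("rock_LOD0", 50_000, 0.50),  # fills half the screen or more
    LODLevel("rock_LOD1", 12_000, 0.20),
    LODLevel("rock_LOD2", 2_500, 0.05),
    LODLevel("rock_LOD3", 400, 0.0),      # distant speck: a few hundred tris
]

def select_lod(object_radius: float, distance: float, fov_tan: float) -> LODLevel:
    """Estimate on-screen size from distance and pick the matching LOD."""
    screen_size = object_radius / (distance * fov_tan)  # rough projected size
    for level in LOD_CHAIN:
        if screen_size >= level.screen_size_threshold:
            return level
    return LOD_CHAIN[-1]

# A prop 2 m across, 150 m away, with roughly a 45-degree vertical FOV:
print(select_lod(object_radius=1.0, distance=150.0, fov_tan=0.414).mesh_name)
# -> "rock_LOD3": at that distance, a few hundred triangles is plenty
```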

Light is right.

The second part of UE that’s a game changer? Light. There’s no need to leverage specialized render engines like V-Ray or Octane to capture hyperrealistic light simulation. UE’s native lighting is an almost magical simulation of real-world photon behavior, certainly when it comes to real-time results. Drag a light into the scene and, even at its default settings, it behaves as you’d expect. For the first time, you don’t need to do preview renders or area renders to check your thinking about the final look. What you see is what you get, within a very small margin of error. It’s as if the whole idea of a “render” is gone. In many ways, it’s like real life.

Perfect light every time.

On a production line, this is incredibly significant. One of the largest burdens on time is the amount of checking and testing that goes along with any 3D development. I can build a scene with confidence, knowing the default lighting gets me halfway to the look I want. False positives are reduced. What does that mean? Things I expect to look a certain way won’t turn out differently in the final output.
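Part of why those defaults behave so predictably is that physically based lights follow simple real-world laws, chiefly inverse-square falloff. A tiny sketch of the math (the function name is mine, not an engine API):

```python
# Physically based point lights fall off with the inverse square of distance,
# which is a big part of why default lights "just look right."
def irradiance(intensity_candela: float, distance_m: float) -> float:
    """Light received at a surface from a point source (lux = cd / m^2)."""
    return intensity_candela / (distance_m ** 2)

# Doubling the distance quarters the light, just as in the real world:
print(irradiance(1000.0, 2.0))  # 250.0 lux
print(irradiance(1000.0, 4.0))  # 62.5 lux
```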

Camera, action.

Beyond standard 3D, the third thing that makes this approachable is the camera system. Unreal has sets of cameras with lens types built in that behave as you would expect from their real-world counterparts. Aperture, squeeze factor, depth of field, histograms, gamma, bokeh, ISO, shutter speed, and more are all emulated with incredible accuracy.

Real-world cameras bring real-world knowledge into the software.
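Because these controls map onto real photography, real formulas apply. Here are two worked examples (the function names are mine, not UE’s API; the 0.03 mm circle of confusion is the usual full-frame assumption):

```python
# Standard photographic formulas behind the camera controls UE emulates.
import math

def exposure_value(f_number: float, shutter_s: float, iso: float = 100.0) -> float:
    """EV for a given aperture/shutter, normalized to ISO 100 (EV100)."""
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100.0)

def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float = 0.03) -> float:
    """Hyperfocal distance: focus here and everything to infinity stays sharp."""
    return focal_mm**2 / (f_number * coc_mm) + focal_mm

# A 50 mm lens at f/2.8, 1/100 s, ISO 100:
print(round(exposure_value(2.8, 1 / 100), 1))    # ~9.6 EV
print(round(hyperfocal_mm(50, 2.8) / 1000, 1))   # ~29.8 m
```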

So much to say, so little time.

To close, let me say a little about real-time rendering to the traditionalists out there: the term is a slight misnomer. Yes, there is real-time visualization happening as I described; it’s a game engine, after all. But it does have a render feature. You can build a timeline, animate objects as you’d expect, and then “render” that timeline as a frame sequence. This method lets you boost the quality of what you saw in the real-time preview even further, getting even better results while, compared with other tools, simply crushing render time. Frames that would normally take 10-20 seconds to render spit out in 1 or 2 seconds. Those savings add up quickly.
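Some back-of-the-envelope math shows how fast that compounds. Assuming a 30-second spot at 24 fps and the midpoints of the per-frame times above (illustrative numbers, not a benchmark):

```python
# Why per-frame savings compound: a 30-second spot at 24 fps.
frames = 30 * 24                  # 720 frames

traditional = frames * 15         # ~15 s/frame -> 10,800 s
real_time   = frames * 1.5        # ~1.5 s/frame -> 1,080 s

print(f"traditional: {traditional / 3600:.1f} h")   # 3.0 h
print(f"real-time:   {real_time / 60:.0f} min")     # 18 min
```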

UE also has a Path Tracer setting, meaning I could turn UE into a traditional tool that doesn’t leverage the real-time engine (but that’s not what this article is about). It also has simpler modeling features compared to traditional tools. So for me, the ultimate pipeline has come together: I work in UE with Blender as a support component. This powerful combination means I can create fully realized, massive-scale environments and beautiful, naturally lit scenes, all on an RTX 2080 graphics card. The community is huge and supportive, and almost every bump in the road can be solved with some digging on YouTube.

This is the swell animation is currently riding. Agencies like ours now have broad, previously unattainable capabilities. And that means clients finally get access to studio-level output at a fraction of the time and cost. If you’d like to know more about our process or have any questions about 3D animation at Fiction Tribe, please reach out.