AI Renders 3D Models
Written by David Conrad   
Wednesday, 04 June 2025

Could it be that all of that computer graphics you had to learn to implement 3D rendering is obsolete? Is this another example of AI doing just about anything you can think of?

Yes.

A team from Microsoft Research, the College of William & Mary and Zhejiang University has released RenderFormer, which takes a specification of a triangle-based 3D model and its lighting and turns these into a fully rendered 3D scene. Traditionally this would be done using a physics-based rendering engine, which uses the properties of light and surfaces to work out how bright each pixel in the scene should be. Much ingenuity has, over the years, gone into finding approximations and clever ways of working this out, but now it seems we don't need to fret about physics - we can just ask a neural network to do the job.
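This is the sort of calculation a classical renderer performs at every pixel. As a minimal sketch, here is Lambertian diffuse shading - brightness proportional to the cosine of the angle between the surface normal and the light direction. The function name and constants are illustrative, not part of RenderFormer:

```python
import numpy as np

def lambert_brightness(normal, light_dir, albedo=0.8, light_intensity=1.0):
    """Diffuse (Lambertian) brightness: proportional to the cosine of the
    angle between the surface normal and the direction to the light."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * light_intensity * max(np.dot(n, l), 0.0)

# A surface facing the light directly gets full brightness (the albedo):
facing = lambert_brightness(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
print(facing)  # 0.8

# A surface tilted 60 degrees away gets cos(60) = half of that:
tilted = lambert_brightness(np.array([0.0, 0.0, 1.0]),
                            np.array([0.0, np.sqrt(3) / 2, 0.5]))
print(tilted)  # 0.4
```

A production engine layers specular terms, shadows and global illumination on top of this, which is where the "much ingenuity" comes in.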

Back in the early days of AI, it was common to refer to neural networks as "function approximators". This is a point of view that has largely gone out of fashion, but it is still true. We used to think of taking samples from a complex multidimensional function and training a neural network to generate the correct output for each input. The network remembered the points it was trained on, but generalized to produce results for points that were not in the training set.
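The idea can be demonstrated in a few lines - a toy example, not RenderFormer's architecture. A one-hidden-layer network trained on samples of sin(x) learns to return sensible values at points it never saw:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: samples of the "complex" function we want to approximate.
x = rng.uniform(-np.pi, np.pi, (256, 1))
y = np.sin(x)

# A tiny one-hidden-layer network, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(8000):
    h = np.tanh(x @ W1 + b1)                     # forward pass
    pred = h @ W2 + b2
    err = pred - y                               # mean-squared-error gradient
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)             # backpropagate through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Evaluate at a point that was not in the training set.
x_test = np.array([[1.234]])
approx = (np.tanh(x_test @ W1 + b1) @ W2 + b2)[0, 0]
print(approx, np.sin(1.234))  # the two should roughly agree
```

RenderFormer does the same thing at vastly greater scale: the "function" being approximated maps triangles and lights to pixel values.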

This is the approach taken by RenderFormer, but it uses a modern transformer-style neural network - two of them to be exact:

[Figure: RenderFormer's two-stage rendering pipeline]

"RenderFormer follows a two-stage pipeline: a view-independent stage that models triangle-to-triangle light transport, and a view-dependent stage that transforms a token representing a bundle of rays to the corresponding pixel values guided by the triangle-sequence from the view-independent stage. Both stages are based on the transformer architecture and are learned with minimal prior constraints. No rasterization, no ray tracing."
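Structurally, the two stages described above can be sketched as two rounds of attention - self-attention among triangle tokens, then cross-attention from ray-bundle tokens to those triangles. The token dimensions, counts and random weights here are invented for illustration; this is the shape of the pipeline, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64                                   # token dimension (illustrative)

def attention(q, k, v):
    """Scaled dot-product attention, the core of both stages."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Stage 1 (view-independent): triangle tokens attend to each other,
# modelling triangle-to-triangle light transport.
n_triangles = 100
tri_tokens = rng.normal(size=(n_triangles, d))
tri_tokens = attention(tri_tokens, tri_tokens, tri_tokens)   # self-attention

# Stage 2 (view-dependent): each ray-bundle token cross-attends to the
# triangle sequence and is decoded to pixel values.
n_rays = 16 * 16
ray_tokens = rng.normal(size=(n_rays, d))
ray_tokens = attention(ray_tokens, tri_tokens, tri_tokens)   # cross-attention
W_out = rng.normal(size=(d, 3))
pixels = ray_tokens @ W_out              # one RGB value per ray bundle
print(pixels.shape)  # (256, 3)
```

Note that nowhere in this pipeline is a ray intersected with a triangle - everything the network "knows" about light transport lives in the learned weights.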

What is surprising is that the network can extract the "rules" for rendering in this way - no physics, no theories as to how light works, just some examples. It is reasonably successful at rendering scenes that were not in the training set, so whatever it has done, it hasn't just memorized the mappings from triangles to pixels. It seems to have generalized the process and hence must have extracted some rules inherent in the problem.

Some people are commenting that the process doesn't scale well - quadratic in the number of triangles - and standard ray tracing scales better. This probably means that it isn't going to replace game or graphics engines any time soon, but it is another remarkable example of what neural networks can learn.
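The gap is easy to see in rough numbers. Self-attention compares every pair of triangle tokens, so the work grows as n², while a ray tracer using a bounding-volume hierarchy does roughly log₂(n) intersection tests per ray. These are illustrative counts only, ignoring constants:

```python
import math

# Quadratic attention pairs versus logarithmic BVH traversal per ray.
for n in (1_000, 10_000, 100_000):
    attention_pairs = n * n
    bvh_steps_per_ray = math.ceil(math.log2(n))
    print(f"{n:>7} triangles: {attention_pairs:>15,} attention pairs, "
          f"~{bvh_steps_per_ray} BVH steps per ray")
```

Going from 10,000 to 100,000 triangles multiplies the attention work by 100 but adds only a handful of traversal steps per ray - hence the scaling complaint.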

The examples presented include some classic benchmarks such as the Utah teapot:

[Figure: RenderFormer's rendering of the Utah teapot]

There are lots of other examples on the group's website - take a look, they are all interesting.

More Information

RenderFormer

Related Articles

AI Creates Breakthrough Realistic Animation

AI Creates Flintstones Cartoons From A Description

Real-time Face Animation

Create Your Favourite Actor From Nothing But Photos 

Better 3D Meshes Using The Nash Embedding Theorem

3-Sweep - 3D Models From Photos

