Motion-Controlled Razor Crest: An ILM Inspired Project

It's funny how, when you think you know something and then realize you had it completely wrong, your whole world gets flipped again.
It's even funnier because I had all the answers right in front of me! I actually own a copy of Industrial Light & Magic: The Art of Special Effects, and it gives so much insight into the motion-control process. For one, ILM had a base formula for frame rates depending on ship size, along with hints on what to look out for when shooting.
[Attachment: IMG_0041.jpg]

EDIT: Disregard this formula. It has no bearing on what I'm looking for.

"In order to get the maximum f/stop the camera exposures are usually longer than one second each, so a typical shot might take half a day to shoot. With each click of the shutter, the camera advances a fraction of an inch along its sixty-foot rail."

One of the issues I ran into, as evident in my video, is blue spill. When you have a reflective object in front of a blue screen, you often find the blue light from the screen reflected on the model, so keying out your model is tough as hell. It's funny that I'm running into the same issues the ILM folks did back in the day.
 
I was super impressed with their use of projected interactive lighting on the Razor Crest model when filming The Mandalorian. It obviously serves to create realistic reflections, but it probably also helps a little with the spill issue.

Very clever stuff.
 
That formula is really only of use in determining the frame rate for filming an action miniature, one that is in motion, such as a crashing vehicle. It has no bearing on the frame rate of a motion control shot, which is dictated by the shutter exposure time required for the aperture desired for maximum depth of field. The use of a spaceship in the book's example is misleading, as that would generally be shot with motion control, and the formula would not be relevant in that case.

Blue spill is the reason that models with flat paint finishes were common. Shiny objects are generally asking for trouble. With a motion control camera you have the ability to shoot multiple passes, so you don't necessarily have to capture the matte and the beauty pass at the same time.
One of the alternatives to blue (or any other colour) screens is a front-light/back-light matte pass, where the shiny model is filmed against black for the beauty pass. The unlit model is then filmed as a silhouette against a brightly lit (usually white) background. This, however, will not stop the white spill on the model. When I worked in VFX in the late 1980s, one trick we used to alleviate this spill was to cover the shiny model in black tape or some sort of removable paint for the back-light pass. This was prone to the model shifting slightly while being taped over, and then the mattes would not fit. Apogee developed the reverse front-light/back-light matting system, where the model would be painted in a fluorescent coating that was invisible in normal light but would glow under UV light. Against a black background, the glowing model could then be shot as the matte pass.
Another alternative, also developed by Apogee, was to use a front projection screen and project the blue background. This resulted in greatly reduced blue spill, as the light from the screen would reflect back along the vector it arrived on, without all the scatter from a blue backing that could bounce off the model's surfaces.
 
Thank you so much for clarifying this, as I was starting to get confused about how the formula related to the motion control stuff. In that case, I'll still have to find the chart ILM supposedly had regarding the speed of the control rig and the distance between each frame.
 
The speed of the rig should be dictated by the exposure time.
Imagine you are filming a shot in real time with a movie camera mounted on a dolly.
The frame rate is set to the normal speed of 24 frames per second. The camera has a 180 degree shutter.
This means that the exposure time is half of a 24th of a second as the shutter is open for 180 degrees of rotation and closed for the other 180 degrees for every frame. The exposure time therefore is 1/48 of a second.
The grip is pushing the dolly at a constant speed, so each frame gets exposed, and therefore motion blurred, for half the distance travelled per frame.

On your motion control rig you program a simple dolly move that you want to take 5 seconds of screen time.
That is 5 times 24 frames, 120 frames in total.
You set your lens to its smallest aperture because you want maximum depth of field.
This might then require, say, 5 seconds of exposure time per frame.
The camera has to move for 10 seconds per frame, as the shutter needs to be closed for half the time to simulate a real movie camera.
Your motion control rig then has to do the 120-frame move in 10 × 120 seconds = 1,200 seconds, or 20 minutes. During this continuous movement of the rig, the camera takes a 5-second exposure and then waits 5 seconds, repeated 120 times.

Ideally your motion control software should figure all this out. You set your start and end points and any ease-ins or ease-outs, tell it the exposure time, and it figures out the speeds to move all the stepper (or servo) motors on all the axes in sync to achieve the move in the number of frames you want. The software would also control the camera, exposing each frame for the required time at the right synchronised interval, and take control of the focus.

There will naturally be a limit to the maximum speed any of the axes on your rig can move at and how quickly they can stop.
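The timing arithmetic in this post can be sketched in a few lines of Python. This is just an illustration of the reasoning above, not code from any real rig; the function name and the assumption that the shutter-closed dwell mirrors a rotary shutter's closed angle are mine.

```python
def moco_timing(screen_seconds, fps=24, exposure_s=5.0, shutter_deg=180):
    """Work out motion-control move timing from the desired exposure.

    Assumes the closed-shutter dwell is scaled like a rotary shutter:
    with a 180-degree shutter, closed time equals open (exposure) time.
    """
    frames = int(screen_seconds * fps)            # total frames in the move
    closed_s = exposure_s * (360 - shutter_deg) / shutter_deg
    frame_period_s = exposure_s + closed_s        # exposure + dwell per frame
    total_s = frames * frame_period_s             # wall-clock move duration
    return frames, frame_period_s, total_s

# The 5-second shot from the example above:
frames, period, total = moco_timing(5, fps=24, exposure_s=5.0)
print(frames, period, total)  # 120 frames, 10 s per frame, 1200 s (20 minutes)
```

Plugging in a 1/48 s exposure instead recovers the real-time dolly case from the start of the post.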
 
I don't know if it matters for what you're doing, but I noticed your model is not set up at the center of rotation. So whenever the rig rotates, it either raises or lowers the miniature.
 
This might not be directly relevant, but I was musing about the use of Mandalorian-style video walls for model photography. The biggest issue would presumably be keeping model mounts out of the shot if you wanted to capture everything in camera, since there would be no conventional matting-out of the stand arm. If you combined camera and/or model movement with a dynamically changing background, you could increase the apparent motion of the model on camera without actually moving things so much that the model no longer occludes the stand from the camera's view. But then would you still want to do multiple passes, to get separate lighting and exposure passes, or would everything look good if done in one shot?
 
Thank you for the detailed explanation. In terms of software, I personally don't have the budget for the proper packages, since they require specific hardware, which is the main cost deterrent. I'll have to figure it out in Blender, where I'm currently creating the G-code file, and work out the code for the motor speeds and per-frame exposure timing in Python.
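One way the Python side of this could look: converting per-frame axis positions (the kind of data you'd export from Blender, one keyframe per film frame) into constant stepper speeds for each frame interval. Everything here is a hypothetical sketch; the steps-per-millimeter value and function names are illustrative, not from any real rig or from Blender's API.

```python
# Hypothetical sketch: per-frame keyframed positions -> stepper speeds,
# so the rig moves continuously while the camera takes long exposures.
STEPS_PER_MM = 80.0  # depends on your motor and leadscrew (assumption)

def stepper_speeds(positions_mm, frame_period_s):
    """For each frame interval, the constant speed (in steps/s) that
    covers the distance to the next keyframed position in exactly one
    frame period (exposure time plus closed-shutter dwell)."""
    speeds = []
    for a, b in zip(positions_mm, positions_mm[1:]):
        distance_steps = (b - a) * STEPS_PER_MM
        speeds.append(distance_steps / frame_period_s)
    return speeds

# A 3-interval dolly move, 10 s per frame (5 s exposure + 5 s dwell):
print(stepper_speeds([0.0, 2.0, 4.0, 6.0], 10.0))  # [16.0, 16.0, 16.0]
```

From speeds like these you could emit G-code feed rates, checking each against your rig's maximum axis speed as mentioned earlier in the thread.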
 
Been a while! I've decided to start working on V2 of the rig. I'm upping the track length from 3 meters to 6 and will replace all the 3D-printed parts with metal. I've also decided to swap the motor controller for a Gecko G540 so it can interface with Dragonframe.
I was lucky enough to have my rig noticed by ILM, so I got to talk to a few great folks about the rig.

I'll keep this page updated as my design gets fleshed out again.
 
Just to remind everyone, the design of the moco system is not mine (it's Knoll's); I'm merely replicating it. That aside, I'm nearly done rebuilding the CAD model for the camera rig. Everything will be in metric except the stock material used to make the plates. SendCutSend is a really affordable vendor, but they only work in imperial, so I'm opting for all my stock to be either 0.25" or 0.5".

[Attachment: Screenshot 2022-09-13 091145.png]


I'm going to do a couple of passes to clean up the model and refine my BOM to also account for quantities. My hope is to design the camera and miniature rigs to share similar hardware so as to cut down on costs. Even so, this version of the project will be far more expensive, so I'm hoping my college will fund it if I can convince them.
 
I'm looking at building a motion control rig (when finances permit) that follows the boom-and-track layout of the classic motion control rigs. I, too, have been inspired by what ILM did for The Mandalorian, and I would like a miniature mounting rig for my system as well.

For software, I might look into Bottango.
 
Be sure when you're using a program like Bottango that you also account for opening and closing the camera shutter at intervals that create that signature motion blur.
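The shutter timing being described could be sketched as a simple shoot loop: open the shutter for the long exposure, then keep it closed for an equal interval so the motion blur matches a 180-degree shutter. `trigger_exposure` here is a stand-in for however your camera is actually fired (intervalometer, USB tether, GPIO pin); it is not a real API.

```python
import time

def shoot(frames, exposure_s, trigger_exposure):
    """Fire one long exposure per frame, then dwell for an equal
    closed-shutter interval, while the rig keeps moving throughout."""
    for frame in range(frames):
        trigger_exposure(exposure_s)   # shutter open: model blurs with motion
        time.sleep(exposure_s)         # shutter closed: equal dwell for 180 deg

if __name__ == "__main__":
    # Demo with a print stub in place of a real camera trigger.
    shoot(3, 0.01, lambda s: print(f"exposing {s}s"))
```

A real loop would also want to block until the camera confirms the exposure is finished before starting the dwell, rather than trusting the sleep alone.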
 
I think you can modify Bottango, changing some of it to do specific things. Or I might even see if I can find a programmer to create something bespoke.
 
Impressive work! Though if you're seeking a fully-authentic experience, you'll also need to construct a Martian patrol ship from 1990's Spaced Invaders, since John Knoll used a leftover foam stunt model from his first VFX supervisor job to test the rig prior to the Razor Crest's completion. My brother owns the hero miniature, so we could hook you up with excellent reference...

[Attachment: Spaced Invaders patrol ship.jpg]
 
I'm intrigued...
 
Finished up the model mover CAD a while ago. Now I'm in the process of requesting funding from my college. This project will cost more than I could ever imagine...
[Attachment: Untitled.png]


I'm still trying to brainstorm which models I want to produce. Looking back at the Mando experience from Star Wars Celebration, I really want to do the Razor Crest, the Naboo starfighter, and the Jedi ambassador ship, but that seems like a lot. Thoughts?
 
I guess the model will be at 1/72 scale? The Razor Crest has better movement (visual dynamics) with its wings and engines.
The Naboo starfighter doesn't have that wing-movement look (too sleek; it doesn't take up much room in a shot) and couldn't benefit fully, in terms of dynamics, from your rig.
The Jedi ambassador ship could be more interesting in terms of wing/visual dynamics and would fill the screen better. Simpler to build, too. ;)
 