Arnold for 3ds Max
Rendering with Arnold in 3ds Max using the MaxtoA plug-in.

How is Depth of Field calculated in Arnold?

Message 1 of 3
peter.andersson.91
266 Views, 2 Replies


Hi!

I am currently writing a paper on renderers. I made a scene and rendered it with Arnold, RenderMan, Cycles, and LuxCore. What surprised me is that even though they all use the same settings (aperture size, focal distance, etc.), the depth of field in the resulting images appears quite different: the blur varies in size and appearance from image to image. How can this be? How is depth of field calculated in Arnold?


Any help on this matter is much appreciated!

Message 2 of 3

Hello Peter,

Depth of field in Arnold's perspective camera is based on an "ideal" thin-lens model, derived as an extension of an "ideal" pinhole camera, in which the size of and relative distance between the lens elements and the film are abstracted away through direct control of the camera's angular field of view.

The cameras are "ideal" in the sense that, unlike physical cameras, where changing settings such as the focal length or the aperture forces a balancing act in order to preserve other quantities like vignetting, exposure, and field of view, Arnold's built-in cameras were designed so that each camera attribute can be edited independently.

Depth-of-field defocusing can be enabled by setting the "focus_distance" (in world units) and the "aperture" (a radius, in world units) on cameras that support this feature. You can see some examples in the MtoA documentation here: https://docs.arnoldrenderer.com/display/A5AFMUG/Cameras
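To make the thin-lens idea concrete, here is a minimal sketch of how a renderer typically perturbs a pinhole ray to get defocus. This is a generic illustration, not Arnold's actual source; the function name, the camera-space convention (camera looking down +Z), and the uniform disk sampling are all assumptions for the example.

```python
import math
import random

def thin_lens_ray(pixel_dir, focus_distance, aperture_radius, rng=random):
    """Perturb an ideal pinhole ray for depth of field (thin-lens model).

    pixel_dir: direction of the pinhole ray through this pixel, in a
               camera space where the camera looks down +Z.
    Returns (origin, direction) of the defocused ray.
    """
    # Point on the focal plane that the pinhole ray would hit.
    # All lens samples converge here, so it stays perfectly sharp.
    t = focus_distance / pixel_dir[2]
    focal_point = tuple(t * c for c in pixel_dir)

    # Sample a point uniformly on the lens disk (the aperture).
    r = aperture_radius * math.sqrt(rng.random())
    theta = 2.0 * math.pi * rng.random()
    origin = (r * math.cos(theta), r * math.sin(theta), 0.0)

    # New direction: from the lens sample toward the focal point.
    d = tuple(f - o for f, o in zip(focal_point, origin))
    norm = math.sqrt(sum(c * c for c in d))
    direction = tuple(c / norm for c in d)
    return origin, direction
```

With aperture_radius = 0 this degenerates to the pinhole ray; points at exactly focus_distance project to the same spot regardless of where on the lens the ray starts, which is why they stay sharp while everything else blurs.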

Message 3 of 3

Probably the original question should be rephrased, i.e.: since all of these renderers implement DOF as a thin-lens approximation, how can the results be so different? 😉

And the answer probably is: because they are approximations.

Another answer could be that, just as every physical lens has its own DOF/bokeh, so does every virtual one, i.e. they are "implementation-defined". As long as their behaviour is well specified and consistent, it is what it is 🙂
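One concrete source of such differences is worth spelling out: the geometric blur size itself follows from similar triangles in the thin-lens model, but renderers differ in conventions, e.g. whether "aperture" means a radius, a diameter, or an f-stop, and how the blur disk is filtered. A hedged sketch of the textbook formula (not any particular renderer's code):

```python
def circle_of_confusion(aperture_radius, focus_distance, object_distance):
    """Geometric blur radius for a point at object_distance, imaged by a
    thin lens focused at focus_distance (illustrative textbook formula;
    aperture/units conventions vary between renderers).
    """
    # A point off the focal plane projects to a disk whose radius scales
    # with the aperture and with the relative defocus (similar triangles).
    return aperture_radius * abs(object_distance - focus_distance) / object_distance
```

If one renderer interprets the same "aperture" number as a diameter and another as a radius, the blur disk already differs by a factor of two with identical scene settings, before any differences in bokeh sampling or filtering come into play.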
