I'd like to check my understanding of how best to use the rendering API to implement a render view for a third-party DCC app.
- For IPR rendering, start a render in free mode and use CameraCreateRay and AiTrace together to perform manual raytracing.
- For bucket rendering, start a render in camera mode with a custom output driver that writes pixel data into memory accessible by the GUI.
Is this correct, or how else might one get image data out of Arnold in an interactive session? Many thanks.
Solved by Stephen.Blair.
You create an Arnold scene, call AiRender, and your custom display driver writes to the viewer framebuffer. You don't do manual raytracing.
IPR rendering is just one long render session, where you update the existing Arnold scene.
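To make the answer above concrete, here is a pseudocode-style sketch of a custom display driver, based on the method names in the public Arnold driver node API. Only the key methods are shown; a real driver must also define the remaining required methods (driver_extension, driver_needs_bucket, driver_prepare_bucket, driver_process_bucket, node_initialize, etc.), and the framebuffer/GUI hooks here are placeholders, not part of the SDK.

```
#include <ai.h>

AI_DRIVER_NODE_EXPORT_METHODS(GuiDriverMtd);

node_parameters { }

driver_supports_pixel_type
{
   // Accept only RGBA for this sketch
   return pixel_type == AI_TYPE_RGBA;
}

driver_open
{
   // Attach or allocate the framebuffer shared with the GUI (placeholder)
}

driver_write_bucket
{
   int pixel_type;
   const void* bucket_data;
   const char* aov_name;
   // Iterate over the outputs Arnold delivers for this bucket
   while (AiOutputIteratorGetNext(iterator, &aov_name, &pixel_type, &bucket_data))
   {
      // Copy bucket_data into the GUI framebuffer at (bucket_xo, bucket_yo),
      // bucket_size_x by bucket_size_y pixels, then ask the viewer to repaint
   }
}

driver_close
{
   // Notify the GUI that the frame (or progressive pass) is complete
}
```

With such a driver assigned as the output driver in the options node, each call to AiRender delivers buckets straight into memory the viewer can display, with no manual raytracing needed.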
Gotcha. In that case, how does one make the initial render progressive, rather than going straight into bucket rendering?
Do you mean the new experimental progressive mode?
Or the "regular" progressive rendering? Progressive rendering (as in the Arnold Render View) is a sequence of renders with different AA values: create the scene, render it with AA = -3, render it again with AA = -2, then AA = -1, AA = 1, and then the final AA value.
That's perfect. I didn't realize that's how it worked, since you never see the buckets when progressive rendering in Maya. Many thanks.
I don't think SItoA uses the new rendering API, but you can see how it was done in that plugin: