Arnold General Rendering Forum

Questions about the GPU renderer sampling

Message 1 of 2
schneiderTJPNK


Hi,

I am currently testing the GPU version of Arnold, and I have a few questions that I would love to get answered. I am testing on a 2080 Ti, and so far it looks very promising for lookdev when used with the OptiX denoiser. Render times for a Cornell box at 1080p with high bounce counts are down from about 30 minutes on an i7-6800K at 3.8 GHz to about 7 minutes on the 2080 Ti, and the visual quality is almost identical; I even prefer the noise distribution of the GPU render.

So in that regard, I must say the development team has done a great job so far, even though the sampling is currently suboptimal, which brings me to my questions:


Are there plans to implement the same sampling used by CPU Arnold on the GPU?

And if you know, what are the challenges of implementing that sampling approach on the GPU compared to the CPU?

Also, it seems that the variance AOV for the Noice denoiser is not working. Is it not supported at the moment?

Thank you for your answers!

Message 2 of 2

Yes, there are plans, but it's very much a work in progress. There are technical challenges, and it's not just about OptiX, but about how GPU threads work in general.

On the one hand, we'd like to simplify things on the CPU and move away from having so many sample settings (because they are so often set incorrectly). On the other hand, we believe we can fix this on the GPU without making you adjust lots of sampling knobs.

I think the issue with the variance AOV is that it uses an unsupported filter (heatmap).
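
For reference, a workaround might look something like the .ass sketch below, which routes the variance output through Arnold's variance_filter node instead of the heatmap filter. This is only a sketch of the usual "<aov> <type> <filter> <driver>" output-string format; all node names and filenames here are made up for illustration, and I'm not confirming this exact configuration as GPU-supported.

    # Hypothetical .ass sketch: write the beauty AOV through a
    # gaussian filter and its variance through a variance_filter,
    # instead of the unsupported heatmap filter.
    # Node names and filenames are invented for this example.

    gaussian_filter
    {
     name beauty_filter
     width 2
    }

    variance_filter
    {
     name var_filter
    }

    driver_exr
    {
     name beauty_driver
     filename "beauty.exr"
    }

    driver_exr
    {
     name var_driver
     filename "beauty_variance.exr"
    }

    options
    {
     # output strings use the "<aov> <type> <filter> <driver>" format
     outputs 2 1 STRING
      "RGBA RGBA beauty_filter beauty_driver"
      "RGBA RGBA var_filter var_driver"
    }

Writing the variance to its own EXR keeps the sketch to the basic four-token output string rather than relying on any optional layer-name tokens.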



// Stephen Blair
// Arnold Renderer Support
