Hi,
I am currently testing the GPU version of Arnold and have a few questions I would love to get answers to. I am testing on a 2080 Ti, and so far it looks very promising for lookdev when used with the OptiX denoiser. Render times for a Cornell box at 1080p with high bounce counts are down from about 30 minutes on an i7-6800K at 3.8 GHz to about 7 minutes on the 2080 Ti, and the visual quality is almost identical; I even prefer the noise distribution of the GPU render.
In that regard, I must say the development team has done a great job so far, even though the sampling is currently suboptimal, which brings me to my questions:
Are there plans to implement the same sampling used by CPU Arnold on the GPU?
And, if you know, what are the challenges of implementing that sampling approach on the GPU compared to the CPU?
Also, it seems that the variance AOV for the Noice denoiser is not working; is it not supported at the moment?
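For reference, this is roughly how I am setting up the variance output, in case my setup is the problem. It is a minimal sketch using the Arnold 5/6-era Python bindings; the scene path, the driver name, and the trailing layer-name token in the output string are placeholders/assumptions on my part:

```python
# Minimal sketch, Arnold 5/6 Python bindings. "cornell_box.ass" and
# "my_exr_driver" are placeholders for my actual scene and EXR driver;
# the trailing layer-name token in the output string is an assumption
# and may differ between Arnold versions.
from arnold import *

AiBegin()
AiASSLoad("cornell_box.ass", AI_NODE_ALL)

options = AiUniverseGetOptions()
AiNodeSetStr(options, "render_device", "GPU")  # render on the GPU

# variance_filter accumulates the per-pixel sample variance Noice reads.
var_filter = AiNode("variance_filter")
AiNodeSetStr(var_filter, "name", "noice_variance_filter")

# Rebuild the outputs array with one extra entry that writes the beauty
# AOV through the variance filter into its own EXR layer.
old_outputs = AiNodeGetArray(options, "outputs")
count = AiArrayGetNumElements(old_outputs)
new_outputs = AiArrayAllocate(count + 1, 1, AI_TYPE_STRING)
for i in range(count):
    AiArraySetStr(new_outputs, i, AiArrayGetStr(old_outputs, i))
# Output format: "<aov> <type> <filter> <driver> [<layer name>]"
AiArraySetStr(new_outputs, count,
              "RGBA RGBA noice_variance_filter my_exr_driver RGBA_variance")
AiNodeSetArray(options, "outputs", new_outputs)

AiRender(AI_RENDER_MODE_CAMERA)
AiEnd()
```

The same setup renders the variance layer fine on the CPU; on the GPU the layer comes out empty, which is what prompted the question.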
Thank you for your answers!