Hello Arnold Community.
I've been using Arnold for a month now and after hearing about Arnold GPU, I wanted to join in on the hype train.
My specs are as follows:
CPU: Intel i7-860 @ 2.80 GHz
GPU: GeForce GTX 960 4GB
RAM: 12GB
I'm aware my GPU lacks dedicated RTX hardware, and I only have Game Ready drivers installed as of typing this, but I still went ahead and tested both render systems.
I ran these tests with the goal of getting closer to a production-level, noise-free render.
On one hand, I'm surprised my GPU was able to render this at all, and when it did, it seemed much faster at adaptive sampling than the CPU. The only downside is that the VRAM is really small, so I'll have to spend more time optimizing once I actually add more textures and polygons.
Sorry for the double post, but I also ran a second test. This time, I upgraded my drivers (from 419.35 to 419.67) and I repopulated the GPU cache (took around 15 ~ 20 mins).
To make the comparison more straightforward, I also made the settings for both CPU and GPU as identical as possible, and I didn't rotate the sun & sky position.
Once again, my conclusion is that the GPU is actually faster than the CPU when dealing with the same settings, but the GPU struggles a bit more with noise.
Go higher with the max samples on GPU, something like 100. You should compare noise levels, not identical sample counts. And lower the noise threshold.
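The reason max samples and the noise threshold interact like this is how adaptive sampling works: the renderer keeps adding samples to a pixel until its noise estimate drops below the threshold or it hits the max. Here's a rough sketch of that loop (a generic illustration only, not Arnold's actual implementation; all names here are made up):

```python
import random
import statistics

def render_pixel_adaptive(sample_fn, min_samples=4, max_samples=100,
                          noise_threshold=0.015):
    """Keep taking samples until the estimated noise (standard error
    of the mean) drops below the threshold, or max_samples is hit.
    Mirrors the idea behind min/max AA samples plus an adaptive
    threshold, but is only an illustration."""
    samples = [sample_fn() for _ in range(min_samples)]
    while len(samples) < max_samples:
        mean = statistics.fmean(samples)
        stderr = statistics.stdev(samples) / len(samples) ** 0.5
        if stderr <= noise_threshold * max(abs(mean), 1e-6):
            break  # converged: noise estimate is below the threshold
        samples.append(sample_fn())
    return statistics.fmean(samples), len(samples)

# A noisy pixel needs far more samples than a clean one:
random.seed(0)
noisy = lambda: random.uniform(0.0, 1.0)            # high-variance pixel
clean = lambda: 0.5 + random.uniform(-0.01, 0.01)   # low-variance pixel
_, n_noisy = render_pixel_adaptive(noisy)
_, n_clean = render_pixel_adaptive(clean)
```

This is why raising max samples only costs time where the image is actually noisy: clean pixels still stop early, so comparing CPU and GPU at the same noise threshold is fairer than comparing them at the same fixed sample count.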
So I just had my first crash.
I was trying to do a test render with 200 adaptive samples and 30 Camera AA.
Was this just overkill or a legitimate bug?
If you compare the noise levels, you could also optimize the CPU settings for a quicker result.
No need for that many camera AA samples.
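For context on why 30 is overkill: Camera (AA) samples in Arnold are squared, so the per-pixel ray count grows quadratically. A quick sanity check of the arithmetic:

```python
# Camera (AA) samples in Arnold are squared: an AA value of N
# means N * N camera rays are traced per pixel.
def camera_rays_per_pixel(aa_samples: int) -> int:
    return aa_samples * aa_samples

print(camera_rays_per_pixel(6))   # a typical production value: 36 rays/pixel
print(camera_rays_per_pixel(30))  # 900 rays/pixel, 25x the cost
```

So 30 AA traces 900 camera rays per pixel before any secondary (diffuse/specular) rays are even counted, which would explain a crash or an extremely long render on a 4GB card.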
Yeah, I'm only using a minimum of 4 and a maximum of 20 samples with the new AA clamping from HtoA 4.0.1, and I get pretty similar results with the noise threshold set to 0.015 (the default).
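For readers unfamiliar with the AA clamping mentioned above: sample clamping caps the value any single sample can contribute, so one extremely bright ray (a "firefly") can't dominate the pixel average. A minimal sketch of the idea (an illustration, not Arnold's implementation; the names and clamp value are made up):

```python
def clamp_sample(value: float, clamp_max: float = 10.0) -> float:
    """Cap a sample's contribution so a single very bright ray
    cannot blow out the pixel average. Trades some energy loss in
    extreme highlights for much faster noise convergence."""
    return min(value, clamp_max)

samples = [0.4, 0.6, 5000.0, 0.5]  # one firefly among normal samples
raw_avg = sum(samples) / len(samples)
clamped_avg = sum(clamp_sample(s) for s in samples) / len(samples)
```

The clamped average stays close to the surrounding samples instead of being blown out, which is why clamping lets you get away with lower sample counts.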
Looks like the 2080 Ti is about 3x faster than a 20 core Xeon E5 v3.