Arnold for 3ds Max
Rendering with Arnold in 3ds Max using the MaxtoA plug-in.

Adaptive sampling slower than regular sampling

Message 1 of 12
sergeyklesov
2663 Views, 11 Replies

Hi guys,

I've tested Arnold's adaptive sampling in a couple of very different scenes, from a simple sphere on a studio background to an outdoor scene with a lot of depth of field.
I've found that in 99% of cases adaptive sampling makes the render slower rather than faster.
Here are some numbers:
Camera (AA) 5, Adaptive off, all other samples 1: render time 06:04 min
Camera (AA) 3, Adaptive max 5, all other samples 1: render time 07:48 min

In theory adaptive sampling should reduce render times, because the renderer spends less time on noise-free areas and focuses on the noisier ones.
I previously used Octane, and its adaptive sampling worked as expected.
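
To make that expectation concrete, here is the sample-count arithmetic behind the two configurations above (a rough sketch; it assumes Arnold's usual convention that a camera (AA) value of n results in n² camera samples per pixel):

```python
# Rough sample-count arithmetic (assumes camera (AA) value n => n^2 samples per pixel).

fixed_aa = 5
adaptive_min, adaptive_max = 3, 5

fixed_spp = fixed_aa ** 2                 # 25 samples in every pixel
adaptive_spp = (adaptive_min ** 2,        # 9 samples in clean pixels...
                adaptive_max ** 2)        # ...up to 25 in noisy pixels

print(f"Fixed sampling:    {fixed_spp} samples/pixel everywhere")
print(f"Adaptive sampling: {adaptive_spp[0]}-{adaptive_spp[1]} samples/pixel, depending on noise")
```

In the ideal case most pixels stop close to 9 samples, which is where the speedup would have to come from.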

Can someone please clarify this?

Cheers

Message 2 of 12
T0M0X
in reply to: sergeyklesov

I can confirm this. I also stopped using it after spending a whole day testing it with a few scenes; in every scenario I got longer render times with adaptive sampling enabled.

Message 3 of 12
thiago.ize
in reply to: sergeyklesov

What happens if you increase the bucket size to, for instance, 256? In my testing that seems to remove almost all of the overhead.

Also, adaptive sampling is intended to be used with a much higher max AA; in that case you're more likely to see a benefit. For instance, use min AA = 3 (or 4) and max AA = 16, and then control how much work gets done with options.AA_adaptive_threshold.
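
As a concrete illustration of those settings, here is a minimal sketch using the Arnold 5-era Python API (the scene file name and the threshold value are placeholders, and the same options are exposed in the MAXtoA/MtoA Sampling UI):

```python
# Minimal sketch (Arnold 5-era Python API): bigger buckets plus a wide adaptive AA range.
from arnold import *

AiBegin()
AiASSLoad("scene.ass", AI_NODE_ALL)            # placeholder scene file

options = AiUniverseGetOptions()
AiNodeSetInt(options, "bucket_size", 256)      # larger buckets reduce the adaptive overhead
AiNodeSetInt(options, "AA_samples", 3)         # min AA (3 or 4)
AiNodeSetBool(options, "enable_adaptive_sampling", True)
AiNodeSetInt(options, "AA_samples_max", 16)    # much higher max AA
AiNodeSetFlt(options, "AA_adaptive_threshold", 0.05)  # example value; raise it for less work and more noise

AiRender(AI_RENDER_MODE_CAMERA)
AiEnd()
```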

Message 4 of 12
Anonymous
in reply to: sergeyklesov

I agree that it doesn't work like it should.

Message 5 of 12
sergeyklesov
in reply to: sergeyklesov

Arnold team and @Lee Griggs, can you please clarify for us how it should work and what our expectations should be when using adaptive sampling?

Thank you

Message 6 of 12
thiago.ize
in reply to: sergeyklesov

I think I answered this in my reply from yesterday: it describes a workaround for the overhead you're likely encountering and how adaptive sampling is intended to be used.

Ideally in a future release we'll fix the overhead while still letting you use small buckets, but for now using bigger buckets is the only solution I can think of for avoiding this overhead.

Message 7 of 12
sergeyklesov
in reply to: sergeyklesov

@Thiago Ize,

Thank you for the reply. I've tested your suggested workaround with a bucket size of 256 px on a 1080p image, and it's even slower than before. The reason is that with a large bucket size, the areas of the image that need intensive sampling and long render times end up being rendered by only one CPU thread while the rest of the cores sit idle.

In conclusion: in the current version of Arnold for Maya, adaptive sampling should only be used with large images and a bucket size of 256, while for IPR sessions, small images, and small bucket sizes adaptive sampling will slow the render down and we should use regular sampling instead. Am I right?

Cheers

Message 8 of 12
thiago.ize
in reply to: sergeyklesov

Even if there's only a single bucket, all of your cores will, for the most part, work on that bucket. When I tried this on a 300x187, AA_min=3, AA_max=6 render, I actually found that the 512x512 bucket render was faster than the 64x64 bucket render (46s vs 52s), even though that single bucket was bigger than the entire image.

Having said that, there are some inefficiencies when all threads render into a single bucket. For expensive renders like the one above it won't be noticed, but for simple renders (at the extreme, imagine an empty scene) this overhead could well explain what you saw.
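
A quick way to sanity-check whether a given bucket size still leaves enough buckets to keep every thread busy is to count them (plain arithmetic, no Arnold API; the resolutions are the ones mentioned in this thread):

```python
# Count how many buckets an image splits into for a given bucket size.
import math

def bucket_count(width, height, bucket_size):
    return math.ceil(width / bucket_size) * math.ceil(height / bucket_size)

print(bucket_count(300, 187, 64))     # 15 buckets (5 x 3)
print(bucket_count(300, 187, 512))    # 1 bucket (the whole image fits inside it)
print(bucket_count(1920, 1080, 256))  # 40 buckets (8 x 5)
print(bucket_count(5000, 2822, 256))  # 240 buckets (20 x 12)
```

Roughly speaking, as long as there are noticeably more buckets than threads, all cores can stay busy.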

Message 9 of 12
thiago.ize
in reply to: sergeyklesov

Adaptive sampling is great when there is a big range between the min AA and the max AA and the adaptive sampling threshold is set so that most pixels do not end up requiring the max AA samples. If Arnold is making most pixels use the max AA, then adaptive sampling buys you nothing and, as you saw, can even make things slower due to some current inefficiencies. If you would prefer more noise in exchange for faster renders, raise AA_adaptive_threshold; eventually you'll find that adaptive sampling gives you faster renders for similar noise levels.

Message 10 of 12
thiago.ize
in reply to: sergeyklesov

Just to give an example: if you have AA_min=3 and AA_max=5, but Arnold finds that all the pixels require AA=7 in order to hit the AA_adaptive_threshold, then of course adaptive sampling won't help. I suspect something like this is happening for you. You'll have to raise the threshold so that Arnold stops most pixels around AA=3 or 4 in order to get renders faster than fixed AA=5.

What might be helpful is to look at the AA_inv_density AOV to make sure that you do have plenty of pixels finishing before AA_max.

It might help to read https://docs.arnoldrenderer.com/display/A5AFMUG/Adaptive+Sampling
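
For reference, here is one way to write that debug AOV to a file with the Arnold 5-era Python API (a hedged sketch: the filter/driver node names, the output filename, and the choice of a PNG driver are arbitrary, it replaces the normal outputs for the duration of the debug render, and in practice MAXtoA/MtoA expose AOVs through their own UI):

```python
# Sketch: render the AA_inv_density debug AOV to an image (Arnold 5-era Python API).
# Note: this overwrites options.outputs with just the debug AOV.
from arnold import *

AiBegin()
AiASSLoad("scene.ass", AI_NODE_ALL)                    # placeholder scene file

flt = AiNode("gaussian_filter")                        # arbitrary filter choice
AiNodeSetStr(flt, "name", "aa_filter")

drv = AiNode("driver_png")                             # arbitrary driver choice
AiNodeSetStr(drv, "name", "aa_driver")
AiNodeSetStr(drv, "filename", "aa_inv_density.png")

# Outputs are "<aov> <type> <filter> <driver>" strings on the options node.
options = AiUniverseGetOptions()
AiNodeSetArray(options, "outputs",
               AiArray(1, 1, AI_TYPE_STRING,
                       "AA_inv_density FLOAT aa_filter aa_driver"))

AiRender(AI_RENDER_MODE_CAMERA)
AiEnd()
```

The AOV gives a per-pixel view of how many samples were actually taken, so you can see at a glance whether most of the frame is stopping before AA_max.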

Message 11 of 12
sergeyklesov
in reply to: sergeyklesov

Hi Thiago,

Thank you for the replies.

I've done another test and ran into a further issue related to bucket size.
I have a machine with 64 GB of RAM and was trying to render a 5000x2822 px image. With the bucket size set to 256, the render consumed all 64 GB, then gave me a fatal error and stopped. I assume this happens because each bucket requires a lot of RAM, so the render hits the 64 GB limit and that amount isn't enough.
When I reduced the bucket size to 128 it started rendering, but it stopped after 30 minutes with another fatal error.

So for large images, using a large bucket size seems to be impossible, and we're back to the overhead problem with small bucket sizes.
I hope the Arnold team can fix these inconveniences in future releases.

I also came across this article; maybe the team can take inspiration from these third-party developers and make big changes to Arnold's features, as happened with alShaders and the current aiStandardSurface:
https://www.rombo.tools/2019/01/08/arnold-updated-adaptive-sampling/

Message 12 of 12
thiago.ize
in reply to: sergeyklesov

You're right: increased memory use is indeed the main downside of larger buckets. Going from 64x64 to 256x256 currently means Arnold needs to store 16x more data per bucket. If you have lots of AOVs, then yes, this can be an issue. The size of the image itself shouldn't matter much, provided there are more buckets than threads.
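
The 16x figure is just the ratio of pixels per bucket, and the per-AOV memory growth is easy to ballpark (rough illustrative arithmetic only; the bytes-per-pixel figure is a placeholder for a single RGBA float AOV, not Arnold's actual internal layout):

```python
# Rough per-bucket memory growth as the bucket size increases (illustrative only).
def bucket_pixels(bucket_size):
    return bucket_size * bucket_size

print(bucket_pixels(256) / bucket_pixels(64))   # 16.0x more pixels per bucket

bytes_per_pixel = 4 * 4                         # one RGBA AOV at 4 bytes per channel (placeholder)
for size in (64, 128, 256):
    mb = bucket_pixels(size) * bytes_per_pixel / (1024 * 1024)
    print(f"{size}x{size} bucket: ~{mb:.2f} MB per RGBA float AOV")
```

Multiply that by the number of AOVs (and any per-bucket bookkeeping), and it's clear why heavy AOV setups feel the jump from 64 to 256 much more than the image resolution itself.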
