Hi all,
Going through the AtSampler documentation, it's unclear how many samples I should be requesting during init.
My use case is that I need to get one random number per sample (create_camera_ray) call. Does this mean I should be requesting one, then during my evaluate loop calling
AtSamplerIterator* AiSamplerIterator(const AtSampler * sampler, const AtShaderGlobals * sg)
in order to have it grab a fresh sample? Or would this end up with the same sample being used across all rays?
If so, I'm not sure how to determine the correct number to request. Is this scoped to a render bucket, so I'd need to know the bucket size (including neighboring pixels for filtering) and the number of samples per pixel?
The higher level question is just what's the proper way to generate a random number per camera ray sample in Arnold?
Cheers,
Alan.
Solved by ramon.montoya.
Just a hint: you can use any custom sampler you want to get your random numbers 🙂
Hi Max, thanks for the suggestion. Unfortunately, that means giving up the implicit handling of seeds etc. that makes it easy to guarantee reproducibility without locking into a consistent pattern. So I'd rather figure out how to use theirs correctly, since it provides those benefits without my having to re-engineer a solution.
The question to ask is: how many samples will I need per shader_eval call?
So you only need to request a 1-sample sequence; it will be randomized based on when the sample iterator is created, and you will get the same sequence if you create the iterator multiple times from the same rendering state.
Call AiSamplerCreate in node_initialize, and create an iterator every time you enter shader_evaluate (to grab your single sample) and destroy the sampler in node_finish.
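That lifecycle, sketched below against what I believe is the Arnold 5 flavor of the API (where the sampler constructor takes a seed, a sample count, and a dimension count). Treat the exact signatures as assumptions and check them against your SDK headers; this is not a complete, compilable shader:

```cpp
// Sketch only: assumes the Arnold SDK (ai.h) and the standard shader
// node entry-point macros; parameter names are illustrative.
#include <ai.h>

node_initialize
{
    // 1 sample per evaluation, 1 dimension (use ndim = 2 for a 2D sample).
    AtSampler* sampler = AiSampler(/*seed*/ 42, /*nsamples*/ 1, /*ndim*/ 1);
    AiNodeSetLocalData(node, sampler);
}

shader_evaluate
{
    AtSampler* sampler = (AtSampler*)AiNodeGetLocalData(node);

    // A fresh iterator per evaluation; the sequence is decorrelated
    // per pixel, subpixel sample, ray depth, etc. via sg.
    AtSamplerIterator* it = AiSamplerIterator(sampler, sg);
    float rnd[1];
    while (AiSamplerGetSample(it, rnd))
    {
        // rnd[0] is the single random number for this evaluation
    }
}

node_finish
{
    AiSamplerDestroy((AtSampler*)AiNodeGetLocalData(node));
}
```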
One advantage of the API is that AiSampler guarantees a unique sequence of sample points based on the pixel location, subpixel sample, ray-tree depth, etc.
But that might be an issue for you, depending on how your shader works, if you want the same single sample for all shader evaluations and all ray depths for a given camera sample.
Yep, but what I was trying to say is that it doesn't make sense to use an offline iterator-based sampler here (not even sure you can). What are you going to need that single random number for?
This does not work here, as he is 'sampling' from the camera. Also, would you instantiate an iterator just to get one single random number... and on what, actually, without shader globals? 😉
Ahhh okay, gotcha, thank you. I want to be able to use it to introduce spread into the ray sample direction without tying it to the spatial location of the sample in 2D space.
Hi Max,
Thanks for the details on that. So because I need the random sampling in camera create-ray, at that stage no AtShaderGlobals has been constructed, which the AtSampler API requires?
Have I understood that correctly?
Cheers,
Alan.
(Replying as a new answer, but also following up on the thread comments, to make things clearer)
You can create shader globals with AiShaderGlobals, but you should fill out the pixel and sample position in them (from the AtCameraInput).
You can also use the AtCameraInput's lensx and lensy random variables if you don't need them for DOF. They are basically used to perturb a ray's origin/direction, so they might fit your usage well.
I'd start prototyping some stuff with the C++11 PRNG and then eventually switch to some other sampler to see if things get better. Here are some you can easily implement:
Thanks very much. Do you happen to know the distribution of lensx and lensy? Looking at the docs it's unclear whether it's uniform or power-based, and whether that applies to one or both. I'm guessing that x is uniform and y is squared, because y is listed with [0,1)^2 (likely intended for distributing on a circle), but the docs aren't specific, with only lensy having a comment.
They are two independent [0,1) random variables. So you will need to map them to the domain you need in your application.
You'd better print them to see what's up 🙂 and yep, they simply look horrible.
Oh wow - yeah. That's surprising - thanks for checking. Guess I best look into options.
The main problem is that there are recurrent duplicates that come in as triplets of identical numbers for different screen-space coords. They can't really be used as general-purpose random numbers.
0.990799 -> [-0.401042, 0.255208]
0.990799 -> [-0.401040, 0.255208]
0.990799 -> [-0.401042, 0.255210]
0.135284 -> [-0.398958, 0.255208]
0.135284 -> [-0.398956, 0.255208]
0.135284 -> [-0.398958, 0.255210]
Removing those, it looks like a pseudorandom Halton sequence.
You will get properly stratified and distributed samples if you use AtSampler and request 2 samples per camera ray.
Great to know, thank you 🙂 Looks to be working for what I need, btw, so thanks again.
It's what we use internally, so it should work. Remember to set sg->si and sg->t if you are creating your own shader globals; the sampler takes dithering and ray depth into account too.