Initial post here to ask if there are any known gotchas or obvious reasons in Arnold that would explain why a frame rendered CPU-only on an Intel-based machine appears identical to the same frame rendered on an AMD machine, until you bring both into Baselight and the pixel differences become visible.
Is a heterogeneous CPU renderfarm supported by Autodesk Arnold?
And, as a follow-up question: does the Arnold GPU renderer (NVIDIA OptiX) match the Intel- or AMD-based CPU renderer?
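
For reference, the difference can also be quantified outside Baselight. Here is a rough sketch using OpenImageIO's ImageBufAlgo::compare; the filenames and thresholds are placeholders, not values from any particular pipeline:

    // compare_renders.cpp -- hypothetical helper, not part of Arnold.
    #include <OpenImageIO/imagebuf.h>
    #include <OpenImageIO/imagebufalgo.h>
    #include <cstdio>

    int main(int argc, char **argv) {
        if (argc != 3) {
            std::fprintf(stderr, "usage: %s intel.exr amd.exr\n", argv[0]);
            return 2;
        }
        OIIO::ImageBuf a(argv[1]);   // frame rendered on the Intel node
        OIIO::ImageBuf b(argv[2]);   // same frame rendered on the AMD node

        // Per-pixel comparison; the thresholds are arbitrary tolerances
        // for counting "warn" and "fail" pixels.
        auto r = OIIO::ImageBufAlgo::compare(a, b, /*failthresh=*/1e-6f,
                                             /*warnthresh=*/1e-6f);
        std::printf("mean=%g  rms=%g  max=%g at (%d,%d) chan %d, "
                    "%llu pixels over threshold\n",
                    r.meanerror, r.rms_error, r.maxerror,
                    r.maxx, r.maxy, r.maxc,
                    (unsigned long long)r.nfail);
        return r.nfail ? 1 : 0;
    }

The command-line equivalent, oiiotool --diff intel.exr amd.exr, prints similar statistics.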
But apparently there are implementation differences in some instructions that affect precision. AMD mentioned the inverse square root as a well-known difference between Intel and AMD.
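
To make that concrete: the x86 approximate reciprocal square root instruction (RSQRTSS) is only specified to a relative-error bound of at most 1.5 x 2^-12, and each vendor implements its own internal lookup table, so the same input can legitimately return different bit patterns on Intel and AMD. A minimal sketch (plain C++ with SSE intrinsics, not Arnold code) that makes the divergence visible:

    #include <immintrin.h>
    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Expose a float's raw bits so vendor-level differences show up
    // even when the printed decimal values look the same.
    static uint32_t bits(float f) {
        uint32_t u;
        std::memcpy(&u, &f, sizeof u);
        return u;
    }

    int main() {
        for (float x : {2.0f, 3.0f, 7.0f}) {
            // IEEE-754 sqrt and division are correctly rounded, so this
            // value is bit-identical on any conforming CPU.
            float exact = 1.0f / std::sqrt(x);

            // RSQRTSS is only guaranteed to |relative error| <= 1.5*2^-12;
            // the lookup table behind it is vendor-specific, so Intel and
            // AMD may return different bits for the same input.
            float approx = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));

            std::printf("x=%g  exact=%.9g (0x%08x)  rsqrtss=%.9g (0x%08x)\n",
                        x, exact, (unsigned)bits(exact),
                        approx, (unsigned)bits(approx));
        }
        return 0;
    }

Run on an Intel box and an AMD box, the rsqrtss column can differ in the low-order bits while the exact column matches exactly. Note that compilers can substitute the approximate instruction (plus a Newton-Raphson refinement step) for 1.0f / sqrtf(x) under fast-math-style flags, which is one way such per-vendor differences leak into ordinary code.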
If you've got an example that demonstrates the problem, that would be great.
We don't distinguish between AMD and Intel in our code, so the best I can say is that we want GPU to match CPU.
Hi Mike -- you can find the relevant details here:
https://docs.arnoldrenderer.com/display/A5ARP/Supported+Features+and+Known+Limitations