Having this issue:
Bought a new system with a Ryzen 5900X and an RTX 3080 Ti and I'm rendering an animation with it. Not gonna talk about the disappointing improvement in render time (expected a lot more), but the memory usage...
My old setup's usage while rendering was about 7800MB (GTX 1070, 8GB), so I adjusted the scene's textures to that limit. Expecting to have more memory to spend with the new PC, I opened the
exact same scene, populated the GPU cache and started ARV... just to realize that memory usage is about 10GB...!?
To make things worse, during the production render the memory usage increases with each frame until it reaches its limit (12GB) and crashes... error.txt
Yes, I am using .tx, drivers are up to date, latest Arnold version, and no, there aren't any displacements (can't use them anyway; even a single displacement crashes Max right away)...
What's going on???
So, we're talking about CPU memory usage, right?
Hard to say much without more detailed logs. Also, the CPU memory usage goes up after the GPU crash, so those are not valid renders.
Did you try batch rendering on the command line?
No, it's only the GPU's VRAM...
Thought I was clear enough with the tag and GeForce cards.
I cannot know what the GPU usage was per frame, I'd need to see a more verbose log.
When the render starts, there's only 8GB of GPU memory available:
00:00:00 5421MB | GPU 0: NVIDIA GeForce RTX 3080 Ti @ 1665MHz (compute 8.6) with 12288MB (8709MB available) (NVLink:0)
Is this the first frame to render, or were there other frames before it?
How much memory was available at the start for each of those previous frames?
This frame crashes with a generic error. It could be memory, or it could be something else, like a bug, or something in the scene.
00:01:17 13719MB ERROR | [gpu] an error happened during rendering. OptiX error is: Unknown error (Details: Function "_rtContextLaunch2D" caught exception: Encountered a CUDA error: cudaDriver().CuEventSynchronize( m_event ) returned (700): Illegal address, file: <internal>, line: 0) GPU 0 had 8709MB free before rendering started and 2114MB free when crash occurred
If you Ignore Textures, does it render?
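If you have a batch log handy, a rough Python sketch like this could pull the per-frame "available" number out of each GPU banner line (this assumes every frame prints a banner in the same format as the line quoted above):

```python
import re

# Matches the "available" figure in Arnold's GPU banner, e.g.:
# 00:00:00 5421MB | GPU 0: ... with 12288MB (8709MB available) (NVLink:0)
BANNER = re.compile(r"GPU \d+:.*?\((\d+)MB available\)")

def available_per_frame(log_lines):
    """Return the MB reported available at each GPU banner in the log."""
    return [int(m.group(1)) for line in log_lines
            if (m := BANNER.search(line))]

sample = [
    "00:00:00 5421MB | GPU 0: NVIDIA GeForce RTX 3080 Ti @ 1665MHz "
    "(compute 8.6) with 12288MB (8709MB available) (NVLink:0)",
]
print(available_per_frame(sample))  # [8709]
```

If that number shrinks frame over frame, that would point at a leak rather than a scene that is simply too big.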
Had the time now and made a couple of logs with GPU and CPU.
These are logs without textures.
Hi, I had this issue too; memory usage increased with every frame I rendered in batch render. Maybe there's a garbage collector missing here [programmer's perspective].
So far we've been unable to reproduce this GPU memory leak on our end. Could anyone supply a scene and instructions on what we need to do in order to reproduce the GPU memory leak? Thanks!
Don't know if this is of any help for the memory leak issue, but I wonder if this GPU RAM management
is normal?
It's a basic scene with just one object and a light. 4K textures and a 3K-poly table, but the render takes almost half of my memory. Also, downscaling the resolution doesn't seem to have any effect.
FYI: Populate GPU Cache is the only way for me to empty the RAM; flushing textures etc. does nothing.
Example
Watching your video, it doesn't really look like memory is increasing with every frame. It seems a pretty constant 4.9-5GB over all frames?
I tried your scene on my computer and it stayed a flat 2.6GB the whole time. I am running a newer, unreleased version of Arnold, so I don't know how it'd compare with the version you ran.
The good news is that we did find a GPU memory leak, related to skydome_light. If you do have a leak where more memory gets used every frame, let us know if removing skydome lights helps. If it doesn't, then that is a different leak we still need to fix.
@cera kecera
I have the same problem with the memory leak. And now I'm seeing that there is a VRAM usage issue, too. I thought that's just how Arnold GPU works!?
@Thiago Ize
I tried his scene, too, and got similar VRAM usage. Is this fixed with the newest release today?
That's because the video wasn't supposed to show the memory leak issue. I have no idea how to reproduce that bug. But I do have it in my actual project and it's impossible to render a sequence.
Anyway, I have the latest Arnold build and there's no change in GPU RAM; it still consumes more than it should, double the amount compared to your run of the uploaded example.
FYI: Yesterday I did a complete reinstall of the OS, 3ds Max, Arnold, drivers, etc...
Still the same issues. What else could be causing this? Plugins? Corrupted scene? Wrong texture formats???
Need to bring this up, since the over-usage of VRAM persists in the latest MAXtoA version.
What else could be causing this? Plugins? Corrupted scene? Wrong texture formats?
I have the same issue; no matter what the scene is, GPU memory increases until it reaches the total VRAM and Arnold stops rendering. I tried batch and sequence render and got the same problem!
I'm using Arnold 7.1.1 for Maya.
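If anyone wants to confirm a leak numerically: poll VRAM once per frame while the sequence renders (for example with `nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits`) and check whether the readings keep climbing. A minimal sketch; the threshold and "mostly rising" cutoff here are just guesses, not anything from Arnold:

```python
def looks_like_leak(readings_mb, min_growth_mb=100):
    """Heuristic: flag a leak if VRAM usage grows from first to last frame
    by more than min_growth_mb and rises between almost every pair of frames."""
    if len(readings_mb) < 3:
        return False
    growth = readings_mb[-1] - readings_mb[0]
    rising_steps = sum(b >= a for a, b in zip(readings_mb, readings_mb[1:]))
    return growth > min_growth_mb and rising_steps >= 0.8 * (len(readings_mb) - 1)

# A leaking render climbs every frame; a healthy one stays roughly flat.
print(looks_like_leak([4900, 5600, 6400, 7300, 8100, 9000]))  # True
print(looks_like_leak([2600, 2610, 2590, 2600, 2605, 2600]))  # False
```

A steadily climbing series like the first one is the pattern described in this thread; a flat series means the scene is just big, not leaking.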
Hdiab, you should upgrade your Arnold as newer Arnolds have fixed a few memory leaks. Let us know if you're still noticing a leak (latest Arnold is currently 7.1.2.2).
I installed the latest version of Arnold and it didn't render at all (CPU and GPU).
I got these warnings:
00:01:32 2339MB WARNING | [color_manager_ocio] unable to find default OCIO config, expected in C:\Program Files\Autodesk\Maya2023\bin\../ocio/configs/arnold/config.ocio
00:01:32 2352MB WARNING | [mtoa] [translator polymesh] ShadingGroup standardSurface2SG has no surfaceShader input
00:01:32 2352MB WARNING | [mtoa] [translator polymesh] ShadingGroup standardSurface3SG has no surfaceShader input