Eevee slowdown when getting closer to objects

I noticed an issue on my desktop system with an AMD RX 480: when I get close to an object with Eevee, the frame time significantly increases. It's most noticeable if I increase the AO precision. In this case, with a simple cube, the frame time goes from about 2ms to 40+ms when getting close to it.

I was looking through some bug reports to see if I could find something about this topic, but the closest thing I could find was the performance issues on Mac (I'm on a Windows PC).

So I wanted to ask if anyone knows anything about this, and whether it is a known issue or expected behavior.

EDIT: I just tested on my laptop with a 2070 as well, and there I can also see a significant change, from about 2ms to about 6-7ms. Although it's not by any means drastic on this system, it's still clearly visible.

Are you using transparent materials? Because that's normal behavior and there's not much that can be done about it. The closer you get, the more the overdraw increases in terms of screen percentage, and the more processing needs to be done.


No, no, just the default new material without any modifications.

Sorry, I just assumed you were using transparent materials without testing, but it does look like enabling ambient occlusion produces this behavior on my system as well. No idea why, unfortunately.

Well, it really applies to every kind of shader: the more pixels of the display an object occupies, the more pixels of its shader Eevee will need to calculate.

It just gets more evident with expensive materials.

Regarding AO, I think the way it's calculated is specifically based on camera distance, so the behaviour makes sense (but I might be wrong here).

If more of the screen is covered by an object, then rendering will of course be slower. For screen-space AO, memory access patterns get worse as well, which can have a significant performance impact: the AO distance becomes bigger in screen space, and far-away pixel lookups lead to more cache misses.
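To make the "AO distance becomes bigger in screen space" part concrete, here is a minimal sketch assuming a simple pinhole-camera projection (this is an illustration, not Eevee's actual code, and `focal_px` is an assumed value): a fixed world-space AO radius projects to a pixel radius that grows as the camera gets closer, so close-ups force the AO pass to sample over a much wider pixel neighbourhood.

```python
# Sketch: project a fixed world-space AO radius to screen pixels
# under a pinhole-camera model (assumption, not Eevee internals).

def ao_screen_radius_px(world_radius: float, depth: float, focal_px: float) -> float:
    """Approximate screen-space footprint (in pixels) of an AO
    sampling radius at a given camera depth."""
    return world_radius * focal_px / depth

FOCAL_PX = 1000.0  # assumed focal length expressed in pixels
AO_DIST = 0.5      # AO distance in world units

for depth in (10.0, 5.0, 1.0, 0.25):
    px = ao_screen_radius_px(AO_DIST, depth, FOCAL_PX)
    print(f"depth {depth:>5} -> AO radius ~{px:.0f} px")
```

The footprint scales with 1/depth, so moving the camera from 10 units to 0.25 units away multiplies the sampled pixel radius by 40x, and those widely scattered depth-buffer fetches are exactly the cache-unfriendly lookups described above.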

Could you share a .blend file where you can clearly reproduce the problem? That way we can all test with the same scene and reach some kind of conclusion.

Ok, apparently you are talking about ms, so I guess you mean final render time results. I was trying to measure FPS in the viewport.

Edit 2:
Apparently I don't get much of a render time difference as long as the camera is evenly filled by the objects across the entire camera area. It does not depend so much on zooming in/out, but on how much of the camera area is filled with objects, which I think must be correct behavior because of the screen-space techniques Eevee uses.

It does not require a specific scene; the default cube shows it perfectly fine. Here are 2 screenshots showing the scenario. I understand that it would use more performance since it's screen space, but the cost increase seems a bit extreme. The screenshots are with an NVIDIA 2070; on the AMD 480 it goes well up to 40ms, which is really noticeable.
Obviously I can turn down the precision to avoid it, but I was just wondering whether something is wrong that makes it so much more expensive, or whether there is maybe some trick the developers can use to avoid or limit the cost increase :D