-
The structure of the renderer is closely aligned with the existing path taken for CPU and CUDA, and so far makes little use of Apple Silicon’s more distinctive architecture. We do leverage the unified memory architecture to avoid duplicating resources, but there’s much more to do on this front, and we’re keen to see it leaned on in CPU+GPU rendering modes. There is certainly scope to use the Apple Neural Engine for denoising in the viewport too.
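As an illustration of the unified-memory point, here is a minimal sketch in Swift (not the renderer’s actual code): on Apple Silicon the CPU and GPU share physical memory, so a Metal buffer created with `.storageModeShared` can be filled by the CPU and read by the GPU without a separate staging copy. The data and names here are placeholders.

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// Placeholder data standing in for scene resources.
let vertexData: [Float] = [0.0, 1.0, 0.5, -1.0, -1.0, 0.0]

// .storageModeShared maps one allocation into both CPU and GPU address
// spaces on unified-memory hardware. On a discrete GPU the equivalent
// path would typically use .storageModePrivate plus an explicit blit
// from a staging buffer, duplicating the data.
let buffer = device.makeBuffer(
    bytes: vertexData,
    length: vertexData.count * MemoryLayout<Float>.stride,
    options: .storageModeShared
)
```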
-
Correctness has definitely been a focus for us, ensuring we get solid results and a renderer users can rely on. This is not intended to be a tech demo - it is aimed to be a tool that users can use all day, every day. Some of the early R&D we’ve done has resulted in render performance more than doubling over where it is now, but taking these prototypes and productising them is another matter, and takes significant time. The avenue to performance on Apple Silicon is driving the GPU in the way that is most efficient for its architecture. Each GPU architecture is different, though, so we need to drive our GPUs more efficiently without compromising the existing performance on other GPUs.
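To give a concrete flavour of architecture-aware tuning that doesn’t hardcode for one GPU, here is a hedged sketch in Swift, assuming a Metal compute pipeline; the function and parameter names are hypothetical, not the renderer’s API. The idea is to size threadgroups from what the compiled kernel reports on the GPU it is actually running on.

```swift
import Metal

// Hypothetical helper: dispatch a 1D compute workload using launch
// parameters queried from the pipeline rather than fixed constants.
func dispatch(kernel: MTLComputePipelineState,
              encoder: MTLComputeCommandEncoder,
              workSize: Int) {
    // threadExecutionWidth is this GPU's SIMD width; keeping the
    // threadgroup size a multiple of it avoids under-filled SIMD groups.
    let simdWidth = kernel.threadExecutionWidth
    let maxThreads = kernel.maxTotalThreadsPerThreadgroup
    let groupWidth = (maxThreads / simdWidth) * simdWidth

    encoder.setComputePipelineState(kernel)
    // Non-uniform dispatch lets Metal trim the final, partial threadgroup.
    encoder.dispatchThreads(
        MTLSize(width: workSize, height: 1, depth: 1),
        threadsPerThreadgroup: MTLSize(width: groupWidth, height: 1, depth: 1)
    )
}
```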
-
Optimisation is going to be an ongoing effort rather than a task we tackle just once, and I’m hoping we can land some improvements in every release. We have big ambitions.
-
Math is a solid foundation to build upon, but the programming skills on top of that still need to be gained! I certainly appreciate the sentiment. Outside of programming, we always welcome testing and feedback to ensure we’re getting as much coverage as possible and that nothing gets missed, along with efforts to help centralise and coordinate that feedback so it makes its way from the wider community back to us.