I’d like to start a proactive discussion about Apple’s M1 chip and its future use with Blender.
I haven’t seen the same level of interest in the 16-core Neural Engine that Apple included on the M1 as I have in the CPU.
As of now, I’m assuming those extra 16 cores would just sit dormant while working in Blender?
I know DaVinci Resolve has talked about already using them for Smart Masking and Smart Tracking, which apply machine learning to improve the results, and Affinity Photo has mentioned current or future use for masking and resizing images.
What are some of the ways Blender may be able to take advantage of the extra cores?
-Cycles real-time denoising on each sample rather than only at the end, possibly improving the overall result?
-Some kind of ML Smart Masking?
-ML object tracking similar to what DaVinci Resolve is doing?
-ML predictive “tweening” during animation or baking?
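To make the first idea concrete, here is a minimal sketch of the difference between denoising once at the end of a progressive render and denoising the running average after every sample. Everything here is hypothetical: the `toy_denoise` function is just a box blur standing in for a real ML denoiser (which is what would actually run on a Neural Engine), and the render loop is a toy, not Cycles code.

```python
import numpy as np

def toy_denoise(img):
    # Stand-in for an ML denoiser: a simple 3x3 box blur.
    # A Neural Engine would run a learned model here instead.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

def render_progressive(noisy_samples, denoise_every_sample=True):
    # Accumulate samples; optionally denoise the running average
    # after each sample (the "per-sample" idea) so the viewport
    # preview is always clean, instead of denoising only once at
    # the very end of the render.
    accum = np.zeros_like(noisy_samples[0])
    preview = accum
    for i, sample in enumerate(noisy_samples, start=1):
        accum += sample
        average = accum / i
        preview = toy_denoise(average) if denoise_every_sample else average
    if not denoise_every_sample:
        preview = toy_denoise(preview)  # denoise once, at the end
    return preview

# Simulated samples: a flat "ground truth" image plus per-sample noise.
rng = np.random.default_rng(0)
truth = np.ones((8, 8))
samples = [truth + rng.normal(0.0, 0.5, truth.shape) for _ in range(16)]
result = render_progressive(samples)
```

The point of running the denoiser per sample is interactivity rather than final quality: on dedicated ML cores it could be cheap enough to keep the preview denoised at every step without stealing time from the CPU/GPU doing the path tracing.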
I hope this helps spur on some ideas for future development.