Cycles Requests


To avoid too many topics spawning and other discussion being drowned out, please use this topic to ask if or when features will be implemented. The more permanent list of ranked feature requests is here:

Also see here for planned features:

Texture compression at scene translation
Send portions of mip maps per bucket to vram rather than sending all scene textures at once prior to render
Motion blur on Alembic meshes

Until a re-usable VRAM texture cache is implemented (like Redshift's), would it be possible to have some form of texture compression at scene translation? Maybe converting textures to DDS on the fly, for example (at least for textures not connected to normal map nodes).
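To illustrate why compression at translation time would help: block-compressed formats such as BC1 (the common DDS codec) store 4 bits per pixel instead of 32 for uncompressed RGBA, an 8:1 saving. A rough sketch of the arithmetic in plain Python (illustrative numbers only, not how Cycles actually allocates memory):

```python
def texture_vram_bytes(width, height, bits_per_pixel, mip_maps=True):
    """Approximate VRAM footprint of one texture, optionally with a full mip chain."""
    base = width * height * bits_per_pixel / 8
    # A full mip pyramid adds roughly one third on top of the base level.
    return base * (4 / 3 if mip_maps else 1)

# 4K RGBA texture: uncompressed (32 bpp) vs BC1-compressed (4 bpp)
uncompressed = texture_vram_bytes(4096, 4096, 32)
bc1 = texture_vram_bytes(4096, 4096, 4)

print(f"uncompressed: {uncompressed / 2**20:.0f} MiB")  # ~85 MiB
print(f"BC1:          {bc1 / 2**20:.0f} MiB")           # ~11 MiB
```

With an 8:1 saving, the same 700 MB budget mentioned below would fit several gigabytes of source textures.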

I’m not allowed to convert them externally; the client wants the file to contain only JPGs or PNGs.


As the title says (the same as Redshift). The reason is that I’m currently unable to render scenes which contain more than 700 MB of compressed textures, a limit which doesn’t apply to most of the other GPU engines I’ve used.

I love Blender, by the way; it’s incredible the amount of thought that’s gone into the workflow! Brilliant.

Excerpt from the Redshift documentation:

Redshift can successfully render scenes containing gigabytes of texture data. It can achieve that by ‘recycling’ the texture cache (in this case 128MB). It will also upload only parts of the texture that are needed instead of the entire texture. So when textures are far away, a lower-resolution version of the texture will be used (these are called “MIP maps”) and only specific tiles of that MIP map.
Because of this method of recycling memory, you will very likely see the PCIe-transferred figure grow larger than the texture cache size.
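The ‘recycling’ described above is essentially a fixed-budget cache of mip-map tiles with least-recently-used eviction. A minimal sketch of that idea (names and sizes are illustrative, not Redshift's actual implementation):

```python
from collections import OrderedDict

class TileCache:
    """Fixed-budget texture tile cache with LRU eviction.

    Tiles are keyed by (texture, mip_level, tile_x, tile_y); when the
    budget is exceeded, the least recently used tile is dropped.
    """
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.tiles = OrderedDict()      # key -> size in bytes, oldest first
        self.bytes_transferred = 0      # cumulative "PCIe" upload traffic

    def request(self, key, size):
        if key in self.tiles:           # cache hit: mark as most recently used
            self.tiles.move_to_end(key)
            return
        self.bytes_transferred += size  # cache miss: "upload" the tile
        self.tiles[key] = size
        self.used += size
        while self.used > self.budget:  # evict LRU tiles until within budget
            _, evicted = self.tiles.popitem(last=False)
            self.used -= evicted

cache = TileCache(budget_bytes=128 * 2**20)  # 128 MB cache, as in the quote
```

Because evicted tiles may be requested again later, `bytes_transferred` can grow well beyond the 128 MB budget, which matches the behaviour the quote describes.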

Once reserved memory and rays have been subtracted from free memory, the remaining is split between the geometry (polygons) and the texture cache (textures). The “Percentage” parameter tells the renderer the percentage of free memory that it can use for texturing.


Say we are using a 2GB videocard and what’s left after reserved buffers and rays is 1.7GB. The default 15% for the texture cache means that we can use up to 15% of that 1.7GB, i.e. approx 255MB. If on the other hand, we are using a videocard with 1GB and after reserved buffers and rays we are left with 700MB, the texture cache can be up to 105MB (15% of 700MB).
Once we know how many MB maximum we can use for the texture cache, we can further limit the number using the “Maximum Texture Cache Size” option. This is useful for videocards with a lot of free memory. For example, say you are using a 6GB Quadro and, after reserved buffers and rays you have 5.7GB free. 15% of that is 855MB. There are extremely few scenes that will ever need such a large texture cache! If we didn’t have the “Maximum Texture Cache Size” option you would have to be constantly modifying the “Percentage” option depending on the videocard you are using.
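The sizing logic in that excerpt is straightforward to write down. A pure-Python sketch of the formula, using the example figures from the quote (the 512 MB cap is an arbitrary example value, not a Redshift default):

```python
def texture_cache_mb(free_after_buffers_mb, percentage=15, max_cache_mb=None):
    """Texture cache size as described above: a percentage of the memory
    left after reserved buffers and rays, optionally clamped by a hard cap."""
    cache = free_after_buffers_mb * percentage / 100
    if max_cache_mb is not None:
        cache = min(cache, max_cache_mb)
    return cache

print(texture_cache_mb(1700))                    # 2 GB card example -> 255.0
print(texture_cache_mb(700))                     # 1 GB card example -> 105.0
print(texture_cache_mb(5700, max_cache_mb=512))  # 6 GB Quadro, capped -> 512
```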


We’re aware of these types of algorithms; see the discussion here:


Thanks Brecht. I hope someone is able to implement this on the GPU; it’s not really that important on the CPU (comparatively, at least), now that the majority of 3D artists generally have a minimum of 32 GB of system RAM.

Apparently I may be able to use DDS in some way in the meantime. Do you know if there’s an add-on to convert from PNG/JPG/HDR at scene translation time (similar to what FStorm does), or do I have to do the conversion externally and then swap out all of the scene’s textures?


Hi guys,
VFX artist here.

I see Cycles does not support motion blur on Alembic meshes with varying point counts.
I find that strange, since Cycles supports motion blur on Blender fluids and other meshes with varying point counts.

Is it possible to point Cycles to a predefined velocity from an Alembic file, stored as a vertex attribute?
In the Blender manual, under Alembic, it says:

"Blender can be used in a hybrid pipeline. For example, other software, such as Houdini or Maya, can export files to Alembic, which can then be loaded, shaded, and rendered in Blender."

Not having the option to render Alembic files with motion blur, or even to output motion vectors as an AOV, makes Alembic partially useless for a lot of studios.

I don’t know how Blender/Cycles handles/calculates motion blur, but if it’s like most other applications, there should be an option to use a predefined vector (velocity attribute).
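For context, velocity-based motion blur typically works by extrapolating each vertex from its position and stored velocity out to the shutter-open and shutter-close times, instead of matching vertices across frames (which is impossible when the point count changes). A minimal sketch of that extrapolation in plain Python (the centered-shutter convention and units are assumptions, not Cycles internals):

```python
def shutter_positions(positions, velocities, shutter_time, fps=24.0):
    """Extrapolate vertex positions to shutter open/close using a stored
    per-vertex velocity (length units per second), assuming a shutter
    centered on the frame."""
    dt = shutter_time / fps / 2.0   # half the shutter interval, in seconds
    opened, closed = [], []
    for (x, y, z), (vx, vy, vz) in zip(positions, velocities):
        opened.append((x - vx * dt, y - vy * dt, z - vz * dt))
        closed.append((x + vx * dt, y + vy * dt, z + vz * dt))
    return opened, closed

# One vertex at the origin moving 24 units/second along X, 0.5 shutter at 24 fps:
# it smears roughly 0.25 units to either side of its frame-center position.
opened, closed = shutter_positions([(0.0, 0.0, 0.0)], [(24.0, 0.0, 0.0)], 0.5)
print(opened, closed)
```

This is exactly why a velocity attribute is enough: no correspondence between frames is needed, only the current frame's points and their velocities.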

I am very thankful for the work that has been put into implementing Alembic support in Blender so far!

Best regards, Valo


I’ve been using Blender & Houdini for some time. It’s actually good that you mentioned Alembic, because some improvements need to be made. For example, currently you can only import one attribute into Blender via Alembic; it would be nice to get more. About motion blur: I actually didn’t test this, but if you say so, you are probably right!

But if you want to do motion blur in post-production, you can write a velocity attribute to your geometry, and Blender’s Cycles engine will read it. Although you would have to render the whole scene with a material that emits that attribute and use it as a pass.


Hi there,

Yes, I’ve managed to import custom attributes from Houdini into Blender via vertex attributes.
I’m not sure it would be as accurate to render it shadeless compared to an actual motion vector pass. I’m not sure the interpolation would be pixel-accurate.



It’s not currently supported, but would indeed be good to add.


Okay, thanks. Do you think it would be hard to implement, or are there some technical hurdles to overcome? I can’t find any good docs on how Cycles manages motion blur.


It’s not necessarily that hard, just a matter of doing it.


Now that the tile size has less of an impact on performance, would it be possible to add dynamic tile splitting towards the end of the render?

For example, let’s say you use a 128 px tile size with 1000 samples and multiple GPUs. The final one or two tiles could be split up into smaller ones dynamically to keep all of the GPUs working on the last tile. The same goes for CPU cores.

Or even to have larger tiles for the GPU and smaller ones for the CPU when using hybrid rendering.
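The idea can be sketched as a simple recursive split: when fewer tiles remain than there are idle devices, quarter the largest remaining tile until every device has work. A plain-Python illustration (scheduling details are assumptions, not Cycles' actual scheduler):

```python
def split_tile(tile):
    """Quarter a tile given as (x, y, w, h)."""
    x, y, w, h = tile
    hw, hh = w // 2, h // 2
    return [(x, y, hw, hh), (x + hw, y, w - hw, hh),
            (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]

def rebalance(pending_tiles, idle_devices):
    """Split pending tiles until there is at least one per idle device."""
    tiles = list(pending_tiles)
    while 0 < len(tiles) < idle_devices:
        # Split the largest remaining tile to keep the pieces roughly even.
        tiles.sort(key=lambda t: t[2] * t[3], reverse=True)
        tiles = split_tile(tiles[0]) + tiles[1:]
    return tiles

# One 128x128 tile left, four idle GPUs -> four 64x64 sub-tiles:
print(rebalance([(0, 0, 128, 128)], 4))
```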

EDIT: This video shows what I’m talking about:


The plan is to make GPU rendering support multiple small tiles at the same time.


A real relationship between Cycles and EEVEE:

  • Lights need to work the same in both engines: same intensity, same color values, same IES values.
  • Depth of field needs to work the same: same depth distance, same DOF intensity, etc. Right now they are totally different.
  • Light effects like bloom and ambient occlusion (potentially lens flares?) need to behave the same in the two engines, and having bloom and lens flares inside of Cycles could also be really awesome.
  • The same goes for similarities in the look of volumetrics, and also the possibility of using EEVEE-style volumetrics inside of Cycles for optimisation purposes?
  • A solution for displacement, from Cycles displacement to EEVEE? The conversion to a normal map doesn’t work out really well…
    That’s why I’m all in for the introduction of a parallax occlusion node for the two engines.

In general, switching between the two engines needs to be as seamless as possible :grin:


Maybe it would be good to have a link to the roadmap in the first post to avoid having people request features that are already planned?

@betalars Adaptive sampling, for example, is already on the roadmap. And apparently it is quite a hard thing to get working well. I’m not sure about the current situation.


(regarding a withdrawn post by me)
Oh sorry, I didn’t find that. I’m used to looking for a known bugs / planned features thread in software forums and simply didn’t look in the wiki.
Thanks for the RTFM :laughing:


Are there still plans to support VCM (vertex connection and merging) or a similar algorithm anytime in the future? It would be quite useful for complicated lighting situations and caustics.


It’s not actively being worked on, but would be nice to have.


Micro-roughness for the Principled shader, and/or as an independent node, would be nice to have. It can be done with a node group too, so it’s not that important, though.
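For reference, the node-group versions of this effect generally come down to blending the base roughness toward a second value using a mask (a facing term or a fine noise texture). A plain-Python sketch of that blend, with hypothetical names (this is the general mixing idea, not the exact setup from the linked thread):

```python
def mix(a, b, factor):
    """Linear interpolation, like the factor input of a Mix node."""
    return a * (1.0 - factor) + b * factor

def micro_roughness(base_roughness, detail_roughness, mask):
    """Blend a base roughness toward a second value using a mask value
    in [0, 1] (a facing term or noise texture in the node-group version)."""
    return mix(base_roughness, detail_roughness, mask)

print(micro_roughness(0.1, 0.5, 0.25))
```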

Moony explained it pretty well in this thread

Also this paper