Cycles Requests

This is a forum for Cycles developers. To avoid spawning too many topics and drowning out other discussion, use this topic to ask about new features. The recommended website for feature requests is Right Click Select.

There is also a Cycles module page listing planned features.

2 Likes

Until a reusable VRAM texture cache is implemented (like Redshift has), would it be possible to have some form of texture compression at scene translation? For example, converting textures to DDS on the fly (at least for textures not connected to normal map nodes).

I'm not allowed to convert them externally; the client wants the file to contain only JPGs or PNGs.
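The on-the-fly idea could be prototyped outside the renderer first. Here is a minimal sketch of just the selection logic, not anything Cycles actually does: given a VRAM budget, keep halving the resolution of the largest textures until the whole set fits (the function name and the 4-bytes-per-pixel RGBA assumption are mine):

```python
def fit_textures(textures, budget_bytes, bytes_per_pixel=4):
    """Halve the resolution of the largest textures (quartering their
    memory use) until the whole set fits within budget_bytes.

    textures: dict of name -> (width, height); returns name -> (w, h).
    """
    sizes = dict(textures)

    def total():
        return sum(w * h * bytes_per_pixel for w, h in sizes.values())

    while total() > budget_bytes:
        # Pick the texture currently using the most memory.
        name = max(sizes, key=lambda n: sizes[n][0] * sizes[n][1])
        w, h = sizes[name]
        if w <= 1 and h <= 1:
            break  # nothing left to shrink
        sizes[name] = (max(1, w // 2), max(1, h // 2))
    return sizes

# Two 4K RGBA textures (~64 MiB each) against a 100 MiB budget:
# one of them gets halved to 2048x2048 and the set fits.
```

A real implementation would of course recompress the pixels (e.g. to a DDS/BCn format) rather than just downscale, but the budget-driven selection would look similar.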

1 Like

As the title says (same as Redshift). The reason is that I'm currently unable to render scenes which contain more than 700 MB of compressed textures, a limit which doesn't apply to most of the other GPU engines I've used.

I love Blender, by the way; the amount of thought that's gone into the workflow is incredible. Brilliant.

Excerpt from the Redshift documentation:

Redshift can successfully render scenes containing gigabytes of texture data. It can achieve that by 'recycling' the texture cache (in this case 128 MB). It will also upload only the parts of the texture that are needed instead of the entire texture. So when textures are far away, a lower-resolution version of the texture will be used (these are called "MIP maps") and only specific tiles of that MIP map.
Because of this method of recycling memory, you will very likely see the PCIe-transferred figure grow larger than the texture cache size.

Once reserved memory and rays have been subtracted from free memory, the remainder is split between the geometry (polygons) and the texture cache (textures). The "Percentage" parameter tells the renderer the percentage of free memory that it can use for texturing.

Example:

Say we are using a 2 GB videocard and what's left after reserved buffers and rays is 1.7 GB. The default 15% for the texture cache means that we can use up to 15% of that 1.7 GB, i.e. approx. 255 MB. If, on the other hand, we are using a videocard with 1 GB and after reserved buffers and rays we are left with 700 MB, the texture cache can be up to 105 MB (15% of 700 MB).
Once we know the maximum number of MB we can use for the texture cache, we can further limit that number using the "Maximum Texture Cache Size" option. This is useful for videocards with a lot of free memory. For example, say you are using a 6 GB Quadro and, after reserved buffers and rays, you have 5.7 GB free. 15% of that is 855 MB. There are extremely few scenes that will ever need such a large texture cache! If we didn't have the "Maximum Texture Cache Size" option, you would have to be constantly modifying the "Percentage" option depending on the videocard you are using.
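The arithmetic in the excerpt is simple enough to sanity-check in a few lines. The function name is mine; the 15% default and the clamp behavior come from the excerpt above, and the 512 MB clamp in the last call is an arbitrary value I picked for illustration:

```python
def texture_cache_mb(free_after_reserved_mb, percentage=15, max_cache_mb=None):
    """Texture cache size as described in the Redshift excerpt:
    a percentage of the memory left after reserved buffers and rays,
    optionally clamped by a 'Maximum Texture Cache Size'."""
    cache = free_after_reserved_mb * percentage / 100
    if max_cache_mb is not None:
        cache = min(cache, max_cache_mb)
    return cache

print(texture_cache_mb(1700))                     # 2 GB card example -> 255.0
print(texture_cache_mb(700))                      # 1 GB card example -> 105.0
print(texture_cache_mb(5700, max_cache_mb=512))   # 6 GB Quadro, clamped -> 512
```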

1 Like

We're aware of these types of algorithms; see the discussion here:

1 Like

Thanks brecht. I hope someone is able to implement this on the GPU; it's not really that important on the CPU (comparatively, at least), now that the majority of 3D artists generally have a minimum of 32 GB of system RAM.

Apparently I may be able to use DDS in some way in the meantime. Do you know if there's an add-on to convert from PNG/JPG/HDR at scene translation time (similar to what FStorm does), or do I have to do the conversion externally and then swap out all of the scene's textures?
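For the external-conversion route, the swap itself can at least be scripted. A hedged sketch, assuming a matching `.dds` file has already been written next to each source texture (the helper name is mine; inside Blender you would loop over `bpy.data.images` and reassign each `filepath`, as outlined in the comment):

```python
import os

# Formats the client delivers that we would pre-convert to DDS.
CONVERTIBLE = {".png", ".jpg", ".jpeg"}

def remap_to_dds(filepath):
    """Return the path of the pre-converted .dds sibling for a
    PNG/JPG texture, or the original path for anything else
    (HDRs, EXRs, textures feeding normal maps, etc.)."""
    root, ext = os.path.splitext(filepath)
    if ext.lower() in CONVERTIBLE:
        return root + ".dds"
    return filepath

# Inside Blender this would be applied roughly like:
#   import bpy
#   for img in bpy.data.images:
#       img.filepath = remap_to_dds(img.filepath)
#       img.reload()
```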

1 Like

Hi guys,
VFX artist here.

I see that Cycles does not support motion blur on Alembic meshes with a varying point count.
I find it strange, since Cycles supports motion blur on Blender fluids and other meshes with varying point counts.

Is it possible to point Cycles to a predefined velocity from an Alembic file, stored as a vertex attribute?
In the Blender manual, under Alembic, it says:

"Blender can be used in a hybrid pipeline. For example, other software, such as Houdini or Maya, can export files to Alembic, which can then be loaded, shaded, and rendered in Blender."

Not having the option to render Alembic files with motion blur, or even to output motion vectors as an AOV, makes Alembic partially useless for a lot of studios.

I don't know how Blender/Cycles handles and calculates motion blur, but if it's like most other applications, there should be an option to use a predefined vector (velocity attribute).

I am very thankful for the work that has been put into implementing Alembic support in Blender so far!

Best regards, Valo

8 Likes

I've been using Blender and Houdini for some time. It's actually good that you mentioned Alembic, because some improvements need to be made. For example, currently you can only import one attribute into Blender via Alembic; it would be nice to get more. About motion blur: I actually didn't test this, but if you say so, you are probably right!

But if you want to do motion blur in post-production, you can write a velocity attribute to your geometry, and Blender's Cycles engine will read it. Although you would have to render the whole scene with a material that emits that attribute, and use it as a pass.
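The workaround above can be written down as a small node-graph recipe. This sketch only builds a plain description of the graph; the attribute name `velocity` and the function name are assumptions, and inside Blender you would create the same nodes with `node_tree.nodes.new` and connect them with `node_tree.links.new`:

```python
def velocity_pass_graph(attr_name="velocity"):
    """Describe an emission material that visualizes a per-vertex
    velocity attribute, for rendering motion vectors as a pass."""
    nodes = [
        # Reads the per-vertex attribute imported from Alembic.
        {"type": "ShaderNodeAttribute", "name": "vel",
         "attribute_name": attr_name},
        # Emitting the raw vector makes the render *be* the vector data.
        {"type": "ShaderNodeEmission", "name": "emit"},
        {"type": "ShaderNodeOutputMaterial", "name": "out"},
    ]
    links = [
        # (from_node, from_socket, to_node, to_socket)
        ("vel", "Vector", "emit", "Color"),
        ("emit", "Emission", "out", "Surface"),
    ]
    return nodes, links
```

Note the caveat raised below in the thread still applies: a shadeless emission render is resolved per pixel by the usual shading pipeline, so it is not guaranteed to match a true motion vector pass exactly.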

1 Like

Hi there,

Yes, I've managed to import custom attributes from Houdini into Blender via vertex attributes.
I'm not sure rendering it shadeless would be as accurate as an actual motion vector pass. I'm not sure the interpolation would be pixel-accurate.

1 Like

It's not currently supported, but it would indeed be good to add.

Okay, thanks. Do you think it would be hard to implement, or are there some technical hurdles to overcome? I can't find any good docs on how Cycles manages motion blur.

It's not necessarily that hard, just a matter of doing it.

3 Likes

Now that the tile size has less of an impact on performance, would it be possible to add dynamic tile scaling towards the end of the render?

For example, let's say you use a 128 px tile size with 1000 samples and multiple GPUs. The final one or two tiles could be split up into smaller ones dynamically to keep all of the GPUs working on the last tile. The same goes for CPU cores.

Or even have larger tiles for the GPU and smaller ones for the CPU when using hybrid rendering.

EDIT: This video shows what I'm talking about: https://www.youtube.com/watch?v=gAgbJvcncBs
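The idea can be sketched as a tiny scheduler: once fewer tiles remain than there are devices, subdivide the largest remaining tile so every device keeps working. This is a toy model of the request, not Cycles code, and the quadrant split is my simplification:

```python
def split_for_devices(tiles, num_devices):
    """tiles: list of (x, y, w, h) rectangles still to be rendered.
    While there are fewer tiles than devices, split the largest
    remaining tile into four quadrants so no device sits idle."""
    tiles = list(tiles)
    while 0 < len(tiles) < num_devices:
        # Remove the largest tile by area.
        i = max(range(len(tiles)), key=lambda i: tiles[i][2] * tiles[i][3])
        x, y, w, h = tiles.pop(i)
        if w < 2 or h < 2:
            tiles.append((x, y, w, h))
            break  # too small to split further
        hw, hh = w // 2, h // 2
        tiles += [(x, y, hw, hh), (x + hw, y, w - hw, hh),
                  (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]
    return tiles

# One 128x128 tile left with 4 GPUs: becomes four 64x64 tiles.
```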

4 Likes

The plan is to make GPU rendering support multiple small tiles at the same time.

8 Likes

A real relationship between Cycles and Eevee:

  • Lights need to work the same in both engines: same intensity, same color values, same IES values.
  • Depth of field needs to work the same: same depth distance, same DOF intensity, etc. Right now they are totally different.
  • Light effects like bloom and ambient occlusion (potential lens flares?) need to behave the same in the two engines too, and having bloom and lens flares inside Cycles as well could be really awesome.
  • The same goes for volumetrics, in terms of looks, and also the possibility of using the Eevee type of volumetrics inside Cycles for optimization purposes?
  • A solution for converting Cycles displacement to Eevee? The conversion to a normal map doesn't work out really well… that's why I'm all in for the introduction of a parallax occlusion node for the two engines.

In general, switching between the two engines needs to be as seamless as possible :grin:

5 Likes

Maybe it would be good to have a link to the roadmap in the first post to avoid having people request features that are already planned?

https://wiki.blender.org/wiki/Source/Render/Cycles/Roadmap

@betalars Adaptive sampling, for example, is already on the roadmap. And apparently it is quite a hard thing to get working well. I'm not sure about the current situation.

1 Like

(regarding a withdrawn post of mine)
Oh sorry, I didn't find that. I'm used to looking for a known bugs/planned features thread in software forums, and simply didn't look in the wiki.
Thanks for the RTFM :laughing:

Are there still plans to support VCM or a similar algorithm anytime in the future? It would be quite useful for complicated lighting situations and caustics.

1 Like

It's not actively being worked on, but it would be nice to have.

2 Likes

Micro-roughness for the Principled shader, and/or as an independent node, would be nice to have. It can be done with a node group too, so it's not that important, though.

Moony explained it pretty well in this thread

Also this paper
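For reference, the effect such a node group approximates can be stated as a tiny formula: roughness rises toward 1.0 at grazing angles. This is an illustrative sketch only; the blend curve, exponent, and strength are my assumptions, not the formula from the linked paper:

```python
def micro_roughness(base_roughness, facing, strength=0.5, exponent=2.0):
    """Blend base roughness toward 1.0 as the surface turns away
    from the viewer. 'facing' is 0.0 head-on and 1.0 at grazing,
    like the Facing output of Blender's Layer Weight node."""
    grazing = facing ** exponent
    return base_roughness + (1.0 - base_roughness) * strength * grazing

# Head-on, the base roughness is unchanged:
# micro_roughness(0.2, 0.0) -> 0.2
```

In a node group the same thing would be a Layer Weight (Facing) node driving a Mix between the base roughness and 1.0, plugged into the Principled BSDF's Roughness input.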

2 Likes