This is a forum for Cycles developers. To avoid spawning too many topics and drowning out other discussion, use this topic to ask about new features. The recommended website for feature requests is Right Click Select.
Until a reusable VRAM texture cache is implemented (like Redshift's), would it be possible to have some form of texture compression at scene translation? Maybe converting textures to DDS on the fly, for example (at least for textures not connected to normal nodes).
I'm not allowed to convert them externally; the client wants the file to contain only JPGs or PNGs.
As the title says (same as Redshift). The reason is that I'm currently unable to render scenes which contain more than 700MB of compressed textures, a limit which doesn't apply to most of the other GPU engines I've used.
I love Blender, by the way; it's incredible the amount of thought that's gone into the workflow! Brilliant.
Excerpt from the Redshift documentation:
Redshift can successfully render scenes containing gigabytes of texture data. It can achieve that by "recycling" the texture cache (in this case 128MB). It will also upload only the parts of the texture that are needed instead of the entire texture. So when textures are far away, a lower-resolution version of the texture will be used (these are called "MIP maps") and only specific tiles of that MIP map.
Because of this method of recycling memory, you will very likely see the PCIe-transferred figure grow larger than the texture cache size.
Once reserved memory and rays have been subtracted from free memory, the remainder is split between the geometry (polygons) and the texture cache (textures). The "Percentage" parameter tells the renderer the percentage of free memory that it can use for texturing.
Example:
Say we are using a 2GB videocard and what's left after reserved buffers and rays is 1.7GB. The default 15% for the texture cache means that we can use up to 15% of that 1.7GB, i.e. approx 255MB. If, on the other hand, we are using a videocard with 1GB and after reserved buffers and rays we are left with 700MB, the texture cache can be up to 105MB (15% of 700MB).
Once we know how many MB maximum we can use for the texture cache, we can further limit the number using the "Maximum Texture Cache Size" option. This is useful for videocards with a lot of free memory. For example, say you are using a 6GB Quadro and, after reserved buffers and rays, you have 5.7GB free. 15% of that is 855MB. There are extremely few scenes that will ever need such a large texture cache! If we didn't have the "Maximum Texture Cache Size" option you would have to be constantly modifying the "Percentage" option depending on the videocard you are using.
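In other words, the budget in that excerpt is just a percentage of the free memory left after reserved buffers and rays, clamped by the optional maximum. A minimal sketch of the arithmetic (the function name and defaults are mine, not Redshift's):

```python
def texture_cache_budget(free_mb, percentage=0.15, max_cache_mb=None):
    # Percentage of the memory left after reserved buffers and rays,
    # optionally clamped by a "Maximum Texture Cache Size".
    budget = free_mb * percentage
    if max_cache_mb is not None:
        budget = min(budget, max_cache_mb)
    return budget

print(texture_cache_budget(1700))                    # 2GB card  -> 255.0 MB
print(texture_cache_budget(700))                     # 1GB card  -> 105.0 MB
print(texture_cache_budget(5700, max_cache_mb=512))  # 6GB Quadro, clamped -> 512 MB
```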
Thanks, Brecht. I hope someone is able to implement this on the GPU; it's not really that important on the CPU (comparatively, at least) now that the majority of 3D artists generally have a minimum of 32GB of system RAM.
Apparently I may be able to use DDS in some way in the meantime. Do you know if there's an add-on to convert from PNG/JPG/HDR at scene translation time (similar to what FStorm does), or do I have to do the conversion externally and then swap out all of the scene's textures?
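Failing that, the closest stopgap I can picture is an add-on that downscales oversized images right before rendering so the scene fits in VRAM. This is not real texture compression, and the 2048px cap, the handler hook, and the destructive in-place scaling are all just assumptions to illustrate the idea:

```python
import bpy

MAX_DIM = 2048  # assumed cap; tune per scene

def downscale_textures(scene, *args):
    # Destructively shrink any image whose largest side exceeds MAX_DIM.
    # A real add-on would work on copies and restore the originals after
    # the render; this only shows the hook point.
    for img in bpy.data.images:
        if not img.has_data:
            continue
        w, h = img.size
        if max(w, h) > MAX_DIM:
            factor = MAX_DIM / max(w, h)
            img.scale(max(1, int(w * factor)), max(1, int(h * factor)))

bpy.app.handlers.render_pre.append(downscale_textures)
```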
I see Cycles does not have support for motion blur on Alembic meshes with a varying point count.
I find it strange, since Cycles has support for motion blur on Blender fluids and other meshes with a varying point count.
Is it possible to point Cycles to a predefined velocity from an Alembic file, stored as a vertex attribute?
In the Blender manual, under Alembic, it says:
"Blender can be used in a hybrid pipeline. For example, other software, such as Houdini or Maya, can export files to Alembic, which can then be loaded, shaded, and rendered in Blender."
Not having the option to render Alembic files with motion blur, or even to output motion vectors as an AOV, makes Alembic partially useless for a lot of studios.
I don't know how Blender/Cycles handles/calculates motion blur, but if it's like most other applications, there should be an option to use a predefined vector (velocity attribute).
I am very thankful for the work that has been put into implementing Alembic support in Blender so far!
I have used Blender and Houdini for some time. It's actually good that you mentioned Alembic, because there are some improvements to be made… currently you can only import one attribute into Blender via Alembic; it would be nice to get more. About motion blur: I actually didn't test this, but if you say so, you are probably right!
But if you want to do motion blur in post-production, you can write a velocity attribute to your geometry and Cycles will read it. Although you would have to render the whole scene with a material that emits that attribute and use it as a pass.
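Something along these lines; a minimal sketch in Python, assuming the import exposed the attribute under the name "velocity" (use whatever name your export wrote), and rendering to EXR so negative vector components survive:

```python
import bpy

# Build a material that emits the imported velocity attribute, so it
# can be rendered out as a pseudo motion-vector pass.
mat = bpy.data.materials.new("VelocityPass")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

attr = nodes.new("ShaderNodeAttribute")
attr.attribute_name = "velocity"  # assumed attribute name
emit = nodes.new("ShaderNodeEmission")
out = nodes.new("ShaderNodeOutputMaterial")
links.new(attr.outputs["Vector"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])
```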
Yes, I've managed to import custom attributes from Houdini into Blender via vertex attributes.
I'm not sure it would be as accurate to render it shadeless compared to an actual motion vector pass. Not sure the interpolation would be pixel-accurate.
Okay, thanks. Do you think it would be hard to implement, or are there some technical hurdles to overcome? I can't find any good docs on how Cycles manages motion blur.
Now that the tile size has less of an impact on performance, would it be possible to add dynamic tile scaling towards the end of the render?
For example, let's say you use a 128px tile size with 1000 samples and multiple GPUs. The final one or two tiles could be split up into smaller ones dynamically to keep all of the GPUs working on the last tile. Same for CPU cores.
Or even have larger tiles for the GPU and smaller ones for the CPU when using hybrid rendering.
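To illustrate the splitting part, here is a toy sketch (plain Python, not actual Cycles code): when fewer tiles remain than there are idle devices, keep halving the remaining tiles until everyone has work or the tiles get too small.

```python
def split_tile(tile):
    # Halve an (x, y, w, h) tile along its longer axis.
    x, y, w, h = tile
    if w >= h:
        return [(x, y, w // 2, h), (x + w // 2, y, w - w // 2, h)]
    return [(x, y, w, h // 2), (x, y + h // 2, w, h - h // 2)]

def assign_tiles(pending, idle_devices, min_dim=16):
    # Split until there is work for every idle device, or until even the
    # longer axis would drop below min_dim (an assumed lower bound).
    while pending and len(pending) < len(idle_devices):
        tile = pending.pop(0)
        if max(tile[2], tile[3]) // 2 < min_dim:
            pending.insert(0, tile)
            break
        pending.extend(split_tile(tile))
    return list(zip(idle_devices, pending))
```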
Lights need to work the same in both engines: same intensity, same color values, same IES handling.
Depth of field needs to work the same: same focus distance, same DOF intensity, etc. Right now they are totally different.
Light effects like bloom and ambient occlusion (potentially lens flares?) need to behave the same in the two engines too, and having bloom and lens flares inside Cycles could be really awesome.
The same goes for volumetrics in terms of looks, and also the possibility to use Eevee-style volumetrics inside Cycles for optimisation purposes?
A solution for going from Cycles displacement to Eevee? The conversion to a normal map doesn't work out really well…
That's why I'm all in for the introduction of a parallax occlusion node for the two engines.
In general, switching between the two engines needs to be as seamless as possible.
@betalars Adaptive sampling, for example, is already on the roadmap, and apparently it is quite a hard thing to get working well. Not sure about the current situation.
(regarding a withdrawn post of mine)
Oh sorry, I didn't find that. I'm used to looking for a known bugs/planned features thread in software forums and simply didn't look in the wiki.
Thanks for the RTFM
Are there still plans to support VCM or a similar algorithm at some point in the future? It would be quite useful for complicated lighting situations and caustics.
Micro roughness for the Principled shader, and/or as an independent node, would be nice to have. It can be done with a node group too, so it's not that important, though.
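For reference, one way I could imagine the node group working (a sketch, assuming Blender 2.8x where the default Principled node is named "Principled BSDF"; the 0.15/0.4 roughness values are purely illustrative): blend a base roughness toward a higher grazing-angle roughness using the Layer Weight node's Facing output.

```python
import bpy

mat = bpy.data.materials.new("MicroRoughness")
mat.use_nodes = True
nt = mat.node_tree
bsdf = nt.nodes["Principled BSDF"]

lw = nt.nodes.new("ShaderNodeLayerWeight")
mix = nt.nodes.new("ShaderNodeMixRGB")  # used as a scalar lerp here
mix.inputs["Color1"].default_value = (0.15,) * 4  # base roughness (assumed)
mix.inputs["Color2"].default_value = (0.4,) * 4   # grazing roughness (assumed)
nt.links.new(lw.outputs["Facing"], mix.inputs["Fac"])
nt.links.new(mix.outputs["Color"], bsdf.inputs["Roughness"])
```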