AI denoiser help

Hi guys, just as an experiment I'm adding the NVIDIA AI denoiser to Blender's code, and would really appreciate some help.

OK, so I have created the engine.py, ui.py, and properties.py changes needed to add the AI denoiser options to the render settings tab.
The AI denoiser has predefined states: 1) use only the beauty render buffer; 2) beauty buffer and normal; 3) beauty, normal, and albedo; 4) HDR on/off.

I've added this code to blender_session.cpp:

scene->film->use_aidenoise = get_boolean(crl, "use_pass_aidenoise");
scene->film->use_hdr = get_boolean(crl, "ai_hdr");
scene->film->use_albedo = get_boolean(crl, "ai_albedo");
scene->film->use_normal = get_boolean(crl, "ai_normal");
scene->film->ai_denoise_blend = get_float(crl, "ai_denoise_blend");

Then added to blender_sync.cpp

MAP_PASS("ai_result", PASS_AIRESULT);

and

if (get_boolean(crp, "use_pass_aidenoise")) {
	b_engine.add_pass("ai_result", 4, "RGBA", b_srlay.name().c_str());
	Pass::add(PASS_AIRESULT, passes);
}
if (get_boolean(crp, "ai_normal")) {
	b_engine.add_pass("Normal", 3, "XYZ", b_srlay.name().c_str());
	Pass::add(PASS_NORMAL, passes);
}
if (get_boolean(crp, "ai_albedo")) {
	b_engine.add_pass("DiffCol", 3, "RGB", b_srlay.name().c_str());
	Pass::add(PASS_DIFFUSE_COLOR, passes);

Added to kernel_types.h

PASS_AIRESULT,
to the main pass list, and the following fields:

int use_aidenoise;
float ai_denoise_blend;
int ai_result;
int Normal;
int DiffCol;

Added to film.h

bool use_aidenoise;
bool use_hdr;
bool use_albedo;
bool use_normal;
float ai_denoise_blend;

Added to film.cpp

case PASS_AIRESULT:
pass.components = 4;
pass.exposure = true;
break;

and:

case PASS_AIRESULT:
	kfilm->ai_result = kfilm->pass_stride;
	break;

then under Film::Film()

 use_aidenoise = false;

and the parameter:

 kfilm->use_aidenoise = use_aidenoise;

in case I need to add some other buffers for stats etc.

Now Blender compiles fine and everything works as expected. When I activate AI denoise in the render settings I get the extra ai_result buffer,
which is empty with the checker background, as I guess it should be since we haven't copied any data into the empty buffer yet. When I check the boxes for albedo
and normal, those buffers also correctly appear in the render layer passes as they should.

Now what I'm wondering is: where is the best place to actually feed the combined, normal and albedo buffers to the OptiX AI denoiser?

blender_session.cpp seems the best place to me, as it is already where the render start and finish setup is located, as well as all the render/scene/camera change and update
params, and also the combined render pass buffer that I'll want to read from to create the OptiX context denoiser input buffers (and the viewport render buffers too).

In some example code that uses the OptiX AI denoiser it's done in the following way:

// Get our pixel data
std::vector<float> beauty_pixels(b_width * b_height * beauty_roi.nchannels());
input_beauty->get_pixels(beauty_roi, OIIO::TypeDesc::FLOAT, &beauty_pixels[0]);

// Catch optix exceptions
try
{
    // Create our optix context and image buffers
    optix::Context optix_context = optix::Context::create();
    optix::Buffer beauty_buffer = optix_context->createBuffer(RT_BUFFER_INPUT_OUTPUT, RT_FORMAT_FLOAT4, b_width, b_height);
    optix::Buffer albedo_buffer = optix_context->createBuffer(RT_BUFFER_INPUT_OUTPUT, RT_FORMAT_FLOAT4, a_width, a_height);
    optix::Buffer normal_buffer = optix_context->createBuffer(RT_BUFFER_INPUT_OUTPUT, RT_FORMAT_FLOAT4, n_width, n_height);
    optix::Buffer out_buffer = optix_context->createBuffer(RT_BUFFER_INPUT_OUTPUT, RT_FORMAT_FLOAT4, b_width, b_height);


    float* device_ptr = (float*)beauty_buffer->map();
    unsigned int pixel_idx = 0;
    for(unsigned int y=0; y<b_height; y++)
    for(unsigned int x=0; x<b_width; x++)
    {
        memcpy(device_ptr, &beauty_pixels[pixel_idx], sizeof(float) * beauty_roi.nchannels());
        device_ptr += 4;
        pixel_idx += beauty_roi.nchannels();
    }
    beauty_buffer->unmap();
    device_ptr = 0;

    if (a_loaded)
    {
        std::vector<float> albedo_pixels(a_width * a_height * albedo_roi.nchannels());
        input_albedo->get_pixels(albedo_roi, OIIO::TypeDesc::FLOAT, &albedo_pixels[0]);

        device_ptr = (float*)albedo_buffer->map();
        pixel_idx = 0;
        for(unsigned int y=0; y<b_height; y++)
        for(unsigned int x=0; x<b_width; x++)
        {
            memcpy(device_ptr, &albedo_pixels[pixel_idx], sizeof(float) * albedo_roi.nchannels());
            device_ptr += 4;
            pixel_idx += albedo_roi.nchannels();
        }
        albedo_buffer->unmap();
        device_ptr = 0;
    }

    if (n_loaded)
    {
        std::vector<float> normal_pixels(n_width * n_height * normal_roi.nchannels());
        input_normal->get_pixels(normal_roi, OIIO::TypeDesc::FLOAT, &normal_pixels[0]);

        device_ptr = (float*)normal_buffer->map();
        pixel_idx = 0;
        for(unsigned int y=0; y<b_height; y++)
        for(unsigned int x=0; x<b_width; x++)
        {
            memcpy(device_ptr, &normal_pixels[pixel_idx], sizeof(float) * normal_roi.nchannels());
            device_ptr += 4;
            pixel_idx += normal_roi.nchannels();
        }
        normal_buffer->unmap();
        device_ptr = 0;
    }

    // Setup the optix denoiser post processing stage
    optix::PostprocessingStage denoiserStage = optix_context->createBuiltinPostProcessingStage("DLDenoiser");
    denoiserStage->declareVariable("input_buffer")->set(beauty_buffer);
    denoiserStage->declareVariable("output_buffer")->set(out_buffer);
    denoiserStage->declareVariable("blend")->setFloat(blend);
    denoiserStage->declareVariable("hdr")->setUint(hdr);
    denoiserStage->declareVariable("input_albedo_buffer")->set(albedo_buffer);
    denoiserStage->declareVariable("input_normal_buffer")->set(normal_buffer);

    // Add the denoiser to the new optix command list
    optix::CommandList commandList= optix_context->createCommandList();
    commandList->appendPostprocessingStage(denoiserStage, b_width, b_height);
    commandList->finalize();

    // Compile our context. I'm not sure if this is needed given there is no megakernel?
    optix_context->validate();
    optix_context->compile();

    // Execute denoise
    std::cout<<"Denoising..."<<std::endl;
    commandList->execute();
    std::cout<<"Denoising complete"<<std::endl;

    // Copy denoised image back to the cpu
    device_ptr = (float*)out_buffer->map();
    pixel_idx = 0;
    for(unsigned int y=0; y<b_height; y++)
    for(unsigned int x=0; x<b_width; x++)
    {
        memcpy(&beauty_pixels[pixel_idx], device_ptr, sizeof(float) * beauty_roi.nchannels());
        device_ptr += 4;
        pixel_idx += beauty_roi.nchannels();
    }
    out_buffer->unmap();
    device_ptr = 0;

    // Remove our gpu buffers
    beauty_buffer->destroy();
    normal_buffer->destroy();
    albedo_buffer->destroy();
    out_buffer->destroy();
    optix_context->destroy();
                                        
}                                               
catch (const std::exception &e)
{                                               
    std::cerr<<"[OptiX]: "<<e.what()<<std::endl;
    cleanup();                                  
    return EXIT_FAILURE;                        
}

Now, this is the section where I will feed Blender's normal, albedo and combined buffers into the OptiX context:

	// Setup the optix denoiser post processing stage
    optix::PostprocessingStage denoiserStage = optix_context->createBuiltinPostProcessingStage("DLDenoiser");
    denoiserStage->declareVariable("input_buffer")->set(beauty_buffer);
    denoiserStage->declareVariable("output_buffer")->set(out_buffer);
    denoiserStage->declareVariable("blend")->setFloat(blend);
    denoiserStage->declareVariable("hdr")->setUint(hdr);
    denoiserStage->declareVariable("input_albedo_buffer")->set(albedo_buffer);
    denoiserStage->declareVariable("input_normal_buffer")->set(normal_buffer);

    // Add the denoiser to the new optix command list
    optix::CommandList commandList= optix_context->createCommandList();
    commandList->appendPostprocessingStage(denoiserStage, b_width, b_height);
    commandList->finalize();

    // Compile our context. I'm not sure if this is needed given there is no megakernel?
    optix_context->validate();
    optix_context->compile();

    // Execute denoise
    std::cout<<"Denoising..."<<std::endl;
    commandList->execute();

In blender_session.cpp this code is called for the combined buffer:

void BlenderSession::do_write_update_render_result(BL::RenderResult& b_rr,
                                                   BL::RenderLayer& b_rlay,
                                                   RenderTile& rtile,
                                                   bool do_update_only)
{
	RenderBuffers *buffers = rtile.buffers;

	/* copy data from device */
	if(!buffers->copy_from_device())
		return;

	float exposure = scene->film->exposure;

	vector<float> pixels(rtile.w*rtile.h*4);

	/* Adjust absolute sample number to the range. */
	int sample = rtile.sample;
	const int range_start_sample = session->tile_manager.range_start_sample;
	if(range_start_sample != -1) {
		sample -= range_start_sample;
	}

	if(!do_update_only) {
		/* copy each pass */
		BL::RenderLayer::passes_iterator b_iter;

		for(b_rlay.passes.begin(b_iter); b_iter != b_rlay.passes.end(); ++b_iter) {
			BL::RenderPass b_pass(*b_iter);

			/* find matching pass type */
			PassType pass_type = BlenderSync::get_pass_type(b_pass);
			int components = b_pass.channels();

			bool read = false;
			if(pass_type != PASS_NONE) {
				/* copy pixels */
				read = buffers->get_pass_rect(pass_type, exposure, sample, components, &pixels[0]);
			}
			else {
				int denoising_offset = BlenderSync::get_denoising_pass(b_pass);
				if(denoising_offset >= 0) {
					read = buffers->get_denoising_pass_rect(denoising_offset, exposure, sample, components, &pixels[0]);
				}
			}

			if(!read) {
				memset(&pixels[0], 0, pixels.size()*sizeof(float));
			}

			b_pass.rect(&pixels[0]);
		}
	}
	else {
		/* copy combined pass */
		BL::RenderPass b_combined_pass(b_rlay.passes.find_by_name("Combined", b_rview_name.c_str()));
		if(buffers->get_pass_rect(PASS_COMBINED, exposure, sample, 4, &pixels[0]))
			b_combined_pass.rect(&pixels[0]);
	}

	/* tag result as updated */
	b_engine.update_result(b_rr);
}

Now this is the part where I'm thinking of feeding the OptiX input buffer:

	/* copy combined pass */
	BL::RenderPass b_combined_pass(b_rlay.passes.find_by_name("Combined", b_rview_name.c_str()));
	if(buffers->get_pass_rect(PASS_COMBINED, exposure, sample, 4, &pixels[0]))
		b_combined_pass.rect(&pixels[0]);
}

so something like this, with the OptiX setup:

    // Setup the optix denoiser post processing stage
    optix::PostprocessingStage denoiserStage = optix_context->createBuiltinPostProcessingStage("DLDenoiser");
    denoiserStage->declareVariable("input_buffer")->set(Combined);
    denoiserStage->declareVariable("output_buffer")->set(ai_result);
    denoiserStage->declareVariable("blend")->setFloat(blend);
    denoiserStage->declareVariable("hdr")->setUint(hdr);
    denoiserStage->declareVariable("input_albedo_buffer")->set(DiffCol);
    denoiserStage->declareVariable("input_normal_buffer")->set(Normal);

Is this the best way to do this? If not, how, and if not in blender_session.cpp, then where? This is a fun little experiment that I've enjoyed playing with today,
but I'd love some help/pointers from anyone. Once I've got the AI denoiser working on the combined render layer I'll start looking at the real-time viewport.


I don't know anything about OptiX in detail and cannot provide a solution to your problem, but it's so cool that you're working on this! I hope it works out!


You could add this code in render/session.cpp in Session::release_tile. It doesn’t really help to put it in the Blender session rather than the generic one.

However note that since Cycles renders tile based, denoising one tile at a time will give artifacts unless you use one big tile. The existing denoising has a mechanism for retrieving neighboring tiles, and if you want to take advantage of that you should probably integrate it in the same place.

For viewport denoising you do get the whole buffer so it’s easier, and you could put the denoising in Session::tonemap().
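
For reference, the OptiX side of the example earlier in the thread boils down to something like the self-contained helper below, which denoises one contiguous RGBA float buffer in place. This is only a sketch reusing the API calls already shown above; the function itself, and the idea of calling it on the full buffer from Session::tonemap() (or on one big tile from Session::release_tile), are assumptions rather than existing Cycles code.

    #include <optixu/optixpp_namespace.h> // assuming the OptiX 5.x C++ wrapper
    #include <cstring>
    #include <vector>

    // Sketch: denoise a width*height RGBA float buffer in place with the
    // "DLDenoiser" post-processing stage, without albedo/normal guide buffers.
    static void denoise_rgba_in_place(std::vector<float>& rgba, unsigned int width,
                                      unsigned int height, float blend, bool hdr)
    {
        optix::Context optix_context = optix::Context::create();
        optix::Buffer in_buffer = optix_context->createBuffer(RT_BUFFER_INPUT_OUTPUT, RT_FORMAT_FLOAT4, width, height);
        optix::Buffer out_buffer = optix_context->createBuffer(RT_BUFFER_INPUT_OUTPUT, RT_FORMAT_FLOAT4, width, height);

        // Upload the pixels.
        memcpy(in_buffer->map(), rgba.data(), sizeof(float) * 4 * width * height);
        in_buffer->unmap();

        // Same denoiser stage setup as in the example code.
        optix::PostprocessingStage denoiserStage = optix_context->createBuiltinPostProcessingStage("DLDenoiser");
        denoiserStage->declareVariable("input_buffer")->set(in_buffer);
        denoiserStage->declareVariable("output_buffer")->set(out_buffer);
        denoiserStage->declareVariable("blend")->setFloat(blend);
        denoiserStage->declareVariable("hdr")->setUint(hdr ? 1 : 0);

        optix::CommandList commandList = optix_context->createCommandList();
        commandList->appendPostprocessingStage(denoiserStage, width, height);
        commandList->finalize();
        commandList->execute();

        // Read the denoised result back over the input pixels.
        memcpy(rgba.data(), out_buffer->map(), sizeof(float) * 4 * width * height);
        out_buffer->unmap();

        in_buffer->destroy();
        out_buffer->destroy();
        optix_context->destroy();
    }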


Couldn't denoising be triggered once the render has finished?

In fact I would love to have that option for the actual Blender denoiser; the feeling is that denoising tile by tile is slower.

Cheers.


I can also imagine that the “AI” needs to work on the complete render result to correctly “understand” caustics and other scattering effects that produce a lot of noise.

Cheers


I also wonder if it is possible to put the denoiser in a compositing node, so it can be mixed with the noisy render if some noise is desired for the project, like for example in an atomic war scene.


Hey Brecht,

I've looked today at session.cpp and Session::release_tile.

Am I missing something? I assumed there would already be a system for reading the buffers needed, but I couldn't find anything (I could easily be missing something here, as I don't know Blender's code base well and am only a learner coder).

So I've added the ability to read out render buffers from session.cpp.

This is what I'm thinking of adding:

blender_session.h

I then added this:

void read_render_tile(RenderTile& rtile);
====================================================

void write_render_result(BL::RenderResult& b_rr,
	                     BL::RenderLayer& b_rlay,
	                     RenderTile& rtile);
void write_render_tile(RenderTile& rtile);
void read_render_tile(RenderTile& rtile);

==================================================== 

then added bool do_read_only:

void do_write_update_render_tile(RenderTile& rtile,
	                             bool do_update_only,
	                             bool do_read_only,
	                             bool highlight);

=====================================================

blender_session.cpp

then added bool do_read_only:

void BlenderSession::do_write_update_render_tile(RenderTile& rtile,
	                                             bool do_update_only,
	                                             bool do_read_only,
	                                             bool highlight)

then added this:

if (do_read_only) {
	/* copy each pass */
	BL::RenderLayer::passes_iterator b_iter;

	for (b_rlay.passes.begin(b_iter); b_iter != b_rlay.passes.end(); ++b_iter) {
		BL::RenderPass b_pass(*b_iter);

		/* find matching pass type for the buffer read-out */
		PassType pass_type = BlenderSync::get_pass_type(b_pass);
		int components = b_pass.channels();

		/* In the original get_pass_rect we also have exposure and sample, but as we're just
		 * copying the image data I don't think exposure or sample will be needed. */
		rtile.buffers->read_pass_rect(pass_type, components, (float*)b_pass.rect());
	}
	end_render_result(b_engine, b_rr, false, false, false);
}
else if (do_update_only) {
else if (do_update_only) {

then add:

void BlenderSession::read_render_tile(RenderTile& rtile)
{
	do_write_update_render_tile(rtile, false, true, false);
}

and change:

void BlenderSession::write_render_tile(RenderTile& rtile)
{
	do_write_update_render_tile(rtile, false, false);
}

to:

void BlenderSession::write_render_tile(RenderTile& rtile)
{
	do_write_update_render_tile(rtile, false, false, false);
}

then under BlenderSession::update_render_tile I add:

 	if(!b_engine.is_preview())
		do_write_update_render_tile(rtile, true, false, highlight);
 	else
		do_write_update_render_tile(rtile, false, false, false);
 }

 and then under BlenderSession::render() add this:


 /* set callback to read out render results */
	session->read_render_tile_cb = function_bind(&BlenderSession::read_render_tile, this, _1);

and then:

	/* clear render buffer read callback */
	session->read_render_tile_cb = function_null;


============================================================================
	
buffers.h

then added:

bool read_pass_rect(PassType type, int components, float *pixels);

============================================================================

buffers.cpp

then added:

bool RenderBuffers::read_pass_rect(PassType type, int components, float *pixels)
{
	if (buffer.data() == NULL) {
		return false;
	}

	int pass_offset = 0;

	for (size_t j = 0; j < params.passes.size(); j++) {
		Pass& pass = params.passes[j];

		if (pass.type != type) {
			pass_offset += pass.components;
			continue;
		}

		float *out = buffer.data() + pass_offset;
		int pass_stride = params.get_passes_size();
		/* Not sure if Brecht would want more than just buffer width/height here,
		 * and maybe the full tile params. NEED FEEDBACK */
		int size = params.width*params.height;

		assert(pass.components == components);

		for (int i = 0; i < size; i++, out += pass_stride, pixels += components) {
			for (int k = 0; k < components; k++) {
				out[k] = pixels[k];
			}
		}
		return true;
	}

	return false;
}
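
As a sanity check on the stride handling above, here is a small standalone illustration (made-up numbers, not Cycles code) of how one pass is copied through the interleaved per-pixel buffer, stepping by pass_stride on the buffer side and by components on the pixel side:

#include <cstdio>
#include <vector>

/* Standalone illustration only: 2 pixels, pass_stride of 7 floats per pixel
 * (e.g. 4 for Combined + 3 for Normal), copying the 3-component pass that
 * starts at offset 4. */
int main()
{
	const int width = 2, height = 1;
	const int pass_stride = 7;
	const int pass_offset = 4;
	const int components = 3;

	std::vector<float> buffer(width * height * pass_stride, 0.0f);
	std::vector<float> pixels = {0.1f, 0.2f, 0.3f, 0.4f, 0.5f, 0.6f};

	float *out = buffer.data() + pass_offset;
	const float *in = pixels.data();
	for (int i = 0; i < width * height; i++, out += pass_stride, in += components)
		for (int k = 0; k < components; k++)
			out[k] = in[k];

	/* Pixel 0's pass values now sit at buffer[4..6], pixel 1's at buffer[11..13]. */
	printf("%.1f %.1f %.1f\n", buffer[11], buffer[12], buffer[13]);
	return 0;
}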

===========================================================

session.h

add:

function<void(RenderTile&)> read_render_tile_cb;

===========================================================

session.cpp

add:

if(read_render_tile_cb) {
	/* This will read any passes needed as input for render buffer access. */
	read_render_tile_cb(rtile);
	rtile.buffers->buffer.copy_to_device();
}
else {
	/* This will tag tile as IN PROGRESS in blender-side render pipeline,
	 * which is needed to highlight currently rendering tile before first
	 * sample was processed for it. */
	update_tile_sample(rtile);
}

Is this OK, or is adding this functionality overkill? To be honest, having a method to read render buffers seems handy no matter what, so I'm thinking of creating a .diff to upload, as this could be helpful for other things, like if people want to build add-ons that do post-processing on render buffers, etc.


This is difficult to follow, please use diffs:
https://wiki.blender.org/wiki/Tools/Patches

I have trouble understanding what this code is trying to do, I’m not sure why the Blender session is involved at all.

Yep, I wasn't sure at all if this was what I needed. What I'm trying to do is just access the render buffers for Combined, Normal and DiffCol so I can pass them into the OptiX denoiser from session.cpp Session::release_tile.

When I looked at the render tile write and update functions I wasn't sure if there was a way to read those Combined, Normal and DiffCol buffers from within session.cpp, so the code above was meant to add read-buffer access to do_write_update_render_tile.

I thought that this would then create output buffers of all the requested passes, but with only the image data and none of the other settings like exposure and samples.

if (do_read_only) {
	/* copy each pass */
	BL::RenderLayer::passes_iterator b_iter;

	for (b_rlay.passes.begin(b_iter); b_iter != b_rlay.passes.end(); ++b_iter) {
		BL::RenderPass b_pass(*b_iter);

		/* find matching pass type for the buffer read-out */
		PassType pass_type = BlenderSync::get_pass_type(b_pass);
		int components = b_pass.channels();

		/* In the original get_pass_rect we also have exposure and sample, but as we're just
		 * copying the image data I don't think exposure or sample will be needed. */
		rtile.buffers->read_pass_rect(pass_type, components, (float*)b_pass.rect());
	}
	end_render_result(b_engine, b_rr, false, false, false);
}

I looked at your update patch for texture baking to do this, but applied it to render buffers.

I thought that would then make passing the buffers to the OptiX denoiser in session.cpp easier, but I'm guessing that's a mistake.

How would you have passed Combined, Normal and DiffCol to the OptiX buffers in Session::release_tile? A few lines of code showing how you would do this would be a great help.

Have I just overcomplicated everything? Should it be as simple as just using rtile.buffers->copy_to_device()?


Do something like this:

float exposure = scene->film->exposure;
int sample = rtile.sample;

rtile.buffers->get_pass_rect(PASS_COMBINED, exposure, sample, 4, pixels, "Combined");
rtile.buffers->get_pass_rect(PASS_NORMAL, exposure, sample, 3, normal_pixels, "Normal");
...
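
Putting that together with the OptiX example from earlier in the thread, the tile-side part could look roughly like this. It is only a sketch: it assumes a get_pass_rect() overload with the arguments as in the snippet above, and the resulting arrays would then go through the same upload and DLDenoiser steps as the example code.

/* Sketch only: pull the passes for this tile into contiguous float arrays,
 * ready to be uploaded into the OptiX input buffers. */
float exposure = scene->film->exposure;
int sample = rtile.sample;

vector<float> combined_pixels(rtile.w * rtile.h * 4);
vector<float> normal_pixels(rtile.w * rtile.h * 3);
vector<float> albedo_pixels(rtile.w * rtile.h * 3);

rtile.buffers->get_pass_rect(PASS_COMBINED, exposure, sample, 4, &combined_pixels[0], "Combined");
rtile.buffers->get_pass_rect(PASS_NORMAL, exposure, sample, 3, &normal_pixels[0], "Normal");
rtile.buffers->get_pass_rect(PASS_DIFFUSE_COLOR, exposure, sample, 3, &albedo_pixels[0], "DiffCol");

/* From here the example code applies: copy these into FLOAT4 OptiX buffers
 * (padding the 3-component passes to 4 floats per pixel), bind them to the
 * DLDenoiser stage, execute, and copy the output back into the tile's
 * combined / ai_result pass. */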

Yeah, I was massively overcomplicating this. Cheers Brecht.

Did you get it working? I would be interested to see/test the code.


What's the status on this? I'm really interested in seeing OptiX as part of 2.8x. If it's on a separate branch, how do I access it, and is there an up-to-date diff?

The OptiX denoiser won’t be a part of 2.80 due to license incompatibility.

Are you sure about that?

Yes, I am sure about that.

Well that's sad 🙁

I believe Ton's tweet was about OptiX ray tracing, not the OptiX denoiser.

Can it be in a stand-alone program though, since Cycles is not GPL?

Theoretically yes, practically that makes no sense for Blender.
