GSoC 2019: Fast Import and Export

Is there any possibility of improving DXF/DWG import in this GSoC? I'm asking because DXF/DWG files usually contain a lot of data, and they import so slowly that they sometimes even hang Blender.
The same thing happens with FBX.

(It may not be in the goals for this GSoC; I'm just asking.)

Cheers!


You can create data structures that can be read by both C++ and Python, so you don't have to convert anything.


Not currently planned. Maybe at the end, but anyone else is welcome to use the building blocks.


> You can create data structures that can be read by C++ and Python.

Can you? ctypes seems to be for C data structures only; C++ has no standard ABI, and the importers will be class-based. It's also questionable whether you can use native C++ data structures from Python at all; I assume bpy requires wrapped Python objects. And even if you could, I don't see why you would want to: you still add overhead for the context switches and more complexity, and I don't think you'd save much time. If you're already rewriting the core logic of an importer, why not just port it entirely?
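
To make concrete what ctypes actually needs: it can only map plain C layouts exposed through `extern "C"` entry points, so any structure shared this way ends up looking like the minimal sketch below (all names hypothetical, not from the branch):

```cpp
#include <cstdint>

extern "C" {

// A plain C layout: the only kind of structure ctypes can map directly.
// C++ classes with virtual functions or STL members have no stable ABI
// that Python could consume.
struct ExportedVert {
  float co[3];
  float normal[3];
};

// Hypothetical accessor the Python side would load via ctypes.CDLL.
// Returns a pointer to an internal array and writes its length to *r_count.
const ExportedVert *exporter_get_verts(int32_t *r_count)
{
  static ExportedVert verts[2] = {
      {{0.0f, 0.0f, 0.0f}, {0.0f, 0.0f, 1.0f}},
      {{1.0f, 0.0f, 0.0f}, {0.0f, 0.0f, 1.0f}},
  };
  *r_count = 2;
  return verts;
}

}  // extern "C"
```

The Python side would mirror this with a `ctypes.Structure` whose `_fields_` match the C layout, and anything C++-specific would have to be flattened into such PODs first, which is exactly the conversion step in question.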

Thank you SO much for addressing this issue! I work with large vertex-colored point clouds in .PLY format, and the vast majority of my time is spent brewing coffee while Blender imports the results. I’ve made basic .PLY import/export modules in C++ for my own OpenGL project, and while naive and crude, they still run way faster than Python.


On win64, with 6 levels of subdivision applied to Suzanne:

| Implementation | Time (size of resultant file) |
| --- | --- |
| Python | 123.8 s (340 MB) |
| C++ (without duplicates) | 27.2 s (299 MB) |
| C++ (with duplicates) | 47.0 s (600 MB) |

It seems fishy that I get the opposite results for the with- and without-duplicates cases. Are you sure about those times?

Also, when swapping out fstream for FILE*:

| Implementation | Time (size of resultant file) |
| --- | --- |
| C++ (without duplicates) | 14.2 s (299 MB) |
| C++ (with duplicates) | 21.4 s (600 MB) |
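
For anyone curious, the swap is essentially the kind of change sketched below (made-up function names, not the exporter's actual code). Each `<<` goes through locale-aware formatting machinery, while `fprintf` into a buffered `FILE*` is typically much cheaper for bulk text output:

```cpp
#include <cstdio>
#include <fstream>

// iostream version: one locale-aware formatting pass per << operator.
static void write_vert_iostream(std::ofstream &fs, float x, float y, float z)
{
  fs << "v " << x << " " << y << " " << z << "\n";
}

// cstdio version: a single fprintf call per line.
static void write_vert_cstdio(FILE *fp, float x, float y, float z)
{
  std::fprintf(fp, "v %f %f %f\n", x, y, z);
}
```

And `fprintf` is usually not the end of the line either: formatting into a local char buffer with `snprintf` and flushing with `fwrite`, or enlarging the stream buffer via `setvbuf`, tends to buy a bit more.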

Yes, I was going to give this as a suggestion as well: don't use iostreams. Like, never use them… for anything.

That's strange… And yeah, the file streams were clearly a temporary solution, but I didn't expect them to be so bad in this case.

The profiler tipped me off when it highlighted all the `fs <<` lines as hotspots.

When writing performance-sensitive code, always profile… profile… profile.

It's not open source itself, but if you're on Linux, Intel makes VTune available for free to open-source developers.

Also, I'm not sure how you are testing, but starting the GUI over and over and clicking on stuff really eats into your dev time.

```
blender -b /home/you/suzanne_obj_export.blend --python-expr "import bpy; bpy.ops.wm.obj_export_c(filepath='cube.obj')"
```

That may speed things up for you (it's also better for the profiler if you can exclude the time where you're just clicking about or waiting for shaders to compile).

| Implementation | Time (Jun 14) | Time (Jun 19) |
| --- | --- | --- |
| C++ (without duplicates) | 27.2 s | 19.4 s |
| C++ (with duplicates) | 47.0 s | 25.0 s |

I just got an idea, though I don't know how feasible it is: could the export use all cores? For example, if you export many objects, each object could be assigned to its own core.

You can't just throw more threads at a problem and assume it'll be faster. That works for CPU-bound problems, but for I/O-bound ones it rarely works out all that well.
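
If threading were to help at all here, it would be with a pattern along these lines (a rough sketch with hypothetical names, not the project's code): parallelize the CPU-bound formatting of each object into an in-memory buffer, and keep the I/O-bound file write serial.

```cpp
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

// Stand-in for real scene data.
struct Object {
  float x, y, z;
};

// CPU-bound part: format one object into a text buffer.
static std::string format_object(const Object &ob)
{
  char line[64];
  std::snprintf(line, sizeof(line), "v %f %f %f\n", ob.x, ob.y, ob.z);
  return std::string(line);
}

static void export_objects(const std::vector<Object> &objects, FILE *fp)
{
  std::vector<std::string> buffers(objects.size());
  std::vector<std::thread> workers;

  // Parallel: one worker per object (a real exporter would use a thread pool).
  for (size_t i = 0; i < objects.size(); i++) {
    workers.emplace_back([&buffers, &objects, i] { buffers[i] = format_object(objects[i]); });
  }
  for (std::thread &t : workers) {
    t.join();
  }

  // Serial: the actual file write stays on one thread, in order.
  for (const std::string &buf : buffers) {
    std::fwrite(buf.data(), 1, buf.size(), fp);
  }
}
```

Even then, the win is bounded by how much of the total time is formatting rather than disk I/O.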

I was doing that already, yes 🙂

I realized I posted this on the wrong thread, but here is what I'm getting:

| Scene | 661c98ca146 (base) | b9e10ba1b62 (fprintf) | d08fbfacb21 (no boost::iterator_facade) | d08fbfacb21 (with duplicates) |
| --- | --- | --- | --- | --- |
| Default scene | 201 ms | 31.4 ms | 1 ms | 1 ms |
| Took | 46.1 s | 43.1 s | 40.9 s | 11.6 s |

I still find it weird that removing the duplicates is still faster for you.

Yeah, that's a mess-up on my part; I switched the numbers by accident when typing them into the table. Here's the actual output:

```
Totals: 2051367 2015232
Sizes: 8060928 8060928
Took 19413ms
Totals: 0 0
Sizes: 0 0
Took 25001ms
```

So, it turns out I was using a debug build for testing by mistake. With a release build, the same test (6-subdiv Suzanne) takes 8.5 seconds.


That's more like it! The results I posted were off release builds, so no gains to be had there, sadly.

So, is this project dead?

There's still a steady stream of commits, so I don't think so?

Hmm, OK. So is there a reason why there are no more weekly reports?