OK, I have a basic DAW Sequencer working: the notes are read, their position determines the note frequency, their length determines the note duration, and the output sound can either be written to the VSE or played as a sound animation:
So, having downloaded a “fart” sound from the internet, now whenever the little empty is at Z = 1 and was not there on the previous frame, it makes an, err, ummm, farting noise from my speakers… Oh, the things you have to do to amuse yourself when you get to my age and are told by your government to isolate yourself.
BTW Mrs. C has told me she is self-isolating away from the kitchen - not good!
I have a node with three properties and an execute routine that concatenates them into a list, which is returned by a return statement. So far so good. If I plug in a node that reads the output using an operator, I see the correct return value. If instead I plug in a node that reads the incoming socket, I get only what is in the first property - frustrating, to say the least.
Then I want to output two variables from another node into two output sockets. Whatever I do, I always get the same value in both sockets, even though my return statement says `return var_1, var_2`, or words to that effect.
So, clearly I’m missing something in my node or node tree definitions, but I don’t know what. The code is all here - maybe not the latest, but it shows my tree definition in the __init__.py file.
Can someone point me in the right direction, please - some worked examples or explanations of how to define a tree, nodes, sockets, etc., how to get multiple output sockets, and how to correctly read input sockets?
Believe me, I have tried repeatedly to solve this, but without putting this right I can take the project no further and will have to abandon it.
OK, thanks for the help - I solved the multiple-outputs problem by using a single socket and passing a dictionary of values through it; the connected node can then extract whatever it wants. In other words, I never did find out how to send data to two output sockets, but no worries, this dictionary method works in all situations for me. I used a dictionary so I can extract, for example, a sound, because the dictionary key for a sound would be “sound”, etc.
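For anyone hitting the same wall, the dictionary-through-one-socket idea can be sketched in plain Python like this. All the names and keys here (`note_node_execute`, `"note_name"`, etc.) are illustrative stand-ins, not the actual Clockworx code:

```python
# Sketch of passing one dictionary payload through a single socket,
# instead of trying to drive two separate output sockets.

def note_node_execute():
    """Producer node: bundle several values into one payload dict."""
    return {
        "note_name": "A4",
        "note_duration": 0.5,      # duration in beats/seconds - your choice
        "sound": "fart.wav",       # whatever object or path the sound node needs
    }

def read_from_payload(payload, key):
    """Consumer node: pull out only the value it cares about."""
    return payload.get(key)       # None if the producer didn't supply that key

payload = note_node_execute()
note = read_from_payload(payload, "note_name")
```

The nice property is that every socket carries the same type (a dict), so one producer can feed any number of different consumers, each picking its own key.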
I also used the same method to get Note Data from that node, i.e. it passes keys like “note_name”, “note_duration”, etc.
I remain “utterly clueless” as to what the base node is supposed to do; I cannot decipher any meaning from the examples I quoted. Oh well, such is life…
Clock, your work is really cool. When Blender 2.90 has particle nodes and when more of Blender is node-based, is your plan to try and get Clockworx nodes integrated into the main branch of Blender? I am a former concert pianist and played in jazz groups, and it would be great to be able to have an official Blender version that I could hook up to my MIDI keyboard and do cool stuff with.
These two cubes are being animated in Z axis scale by the nodes here, fed from the sound input of my Mac.
I just need to work out how to stop “dumb users” connecting sockets that don’t match - any clues, anyone? I cannot find a reference to this, but then again I have asked this before and nobody answered me…
Ahah! I have sorted it so sockets only connect to others of the same kind, other than Generic sockets, which can connect to anything else!
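One known way to do this in Blender is to prune mismatched links from the node tree’s `update()` callback. Here is a minimal sketch of that idea - the socket `bl_idname` strings are hypothetical, and `prune_bad_links` is my own helper name, meant to be called from inside Blender:

```python
def sockets_compatible(from_idname, to_idname, generic="cm_GenericSocket"):
    """A link is allowed if both ends have the same socket type,
    or if either end is the Generic socket (hypothetical idname)."""
    return from_idname == to_idname or generic in (from_idname, to_idname)

def prune_bad_links(tree):
    """Call from the node tree's update() inside Blender: remove any
    link whose two socket types are not compatible."""
    for link in list(tree.links):                 # copy - we mutate the collection
        if not sockets_compatible(link.from_socket.bl_idname,
                                  link.to_socket.bl_idname):
            tree.links.remove(link)
```

The effect is that a “dumb user” can still drag an invalid link, but it vanishes immediately on the next tree update.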
So, whilst I was laying under my old lady doing a repetitive task that required little brain power, I solved the issue. Now before you all get the wrong end of the stick, I should point out that by “my old lady” I am referring to my glider and obviously not Mrs. Clockmender (some people’s minds…tut, tut), I hope that is all clear now.
Speaking of which, or is that witch? When I got home I explained to Mrs. C. how I was going to sort the problem, to which she replied that what I had come up with was in fact the correct solution, and that she had not told me earlier because she felt it would be better for me to work it out for myself. This is presumably what has happened here also…
As a side issue I also found out how to automatically colour code the nodes based upon what they do:
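For the record, Blender nodes expose `use_custom_color` and `color` properties, so colour coding by function can be as simple as a category-to-colour lookup set in each node’s `init()`. The categories and RGB values below are illustrative only:

```python
# Hypothetical category-to-colour map; the real add-on's categories may differ.
CATEGORY_COLORS = {
    "input":  (0.3, 0.5, 0.3),   # greenish for inputs
    "output": (0.5, 0.3, 0.3),   # reddish for outputs
    "filter": (0.3, 0.3, 0.5),   # bluish for processing nodes
}

def category_color(category):
    """Colour for a node category, with a grey fallback."""
    return CATEGORY_COLORS.get(category, (0.4, 0.4, 0.4))

def apply_category_color(node, category):
    """Call inside the node's init() in Blender to tint it by function."""
    node.use_custom_color = True
    node.color = category_color(category)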
Successfully accomplished: you can specify the sound file, the maximum frequency range, the number of frequency splits (I use harmonic splits based upon the n/12th root of 0.5) and a host of other factors to bake a sound to the controls’ F-Curves.
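My reading of the “n/12th root of 0.5” splits is equal-tempered semitone steps, i.e. band edge n sits at max_freq × 0.5^(n/12), so every 12 splits drop exactly one octave. A small sketch of that, under that assumption (the helper name is mine):

```python
def harmonic_band_edges(max_freq, splits):
    """Frequency band edges stepping down from max_freq by
    equal-tempered semitones: edge n = max_freq * 0.5 ** (n / 12).
    (One interpretation of the 'n/12th root of 0.5' harmonic splits.)"""
    return [max_freq * 0.5 ** (n / 12) for n in range(splits + 1)]
```

With 12 splits the lowest edge is exactly half the top frequency, which is what makes the splits musically (harmonically) spaced rather than linear.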
These controls can then be used to animate objects. I have not done that in a project yet, but I already have the necessary nodes in place - they are the ones I use to animate from MIDI controls.
PS. As a bonus, I also worked out how to switch the context to different editors…
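For anyone curious, switching an area to a different editor is done by assigning to `Area.type` in the Blender Python API. A minimal sketch (the helper name is mine; the commented usage must run inside Blender):

```python
def switch_editor(area, editor_type='NODE_EDITOR'):
    """Switch an existing Blender area to another editor type,
    e.g. 'NODE_EDITOR', 'SEQUENCE_EDITOR', 'VIEW_3D'."""
    area.type = editor_type

# Usage inside Blender, e.g. flip the first area of the active screen:
# import bpy
# switch_editor(bpy.context.screen.areas[0], 'SEQUENCE_EDITOR')
```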
Soon! I have not resolved the issue of rendering an animation: currently the nodes do not execute between frames when you render an animation, although if I render a frame, advance the Timeline, then render the next frame, they are actioned. So I have two options:
Work out how to “fudge” the Blender Render Animation function into working.
Write a simple Render Animation routine where the Timeline is advanced and each frame is rendered and the image saved to disc - this I can do quite easily (I hope).
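Option 2 can be sketched with standard bpy calls: advance the frame with `Scene.frame_set()` (which fires the frame-change handlers, so the nodes execute), then render a still per frame. The path helper and function names are mine:

```python
def frame_filepath(base, frame):
    """Per-frame output path, e.g. '/tmp/anim_0007.png' (naming is mine)."""
    return f"{base}{frame:04d}.png"

def render_animation(scene, base, start, end):
    """Manual render loop (run inside Blender): step the Timeline one
    frame at a time so the nodes execute, then render each still."""
    import bpy   # only available inside Blender
    for frame in range(start, end + 1):
        scene.frame_set(frame)                       # nodes run on frame change
        scene.render.filepath = frame_filepath(base, frame)
        bpy.ops.render.render(write_still=True)      # render and save this frame
```

The stills can then be assembled into a movie in the VSE or any external tool.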
I have one more thing to do before I make a video, which I will record with my camera so you can see the MIDI controller in action; then I will make a compilation of the various bits, like sound production, DAW, Live MIDI, Sound Bake, Sound Animate, etc. That thing is to build the DAW notes from a MIDI file - I have all the code I need in various bits, the ones that bake a MIDI file to controls.
So, a little patience please and we will get there. There is no manual for this stuff; I am having to experiment all the time to get things to work, and sometimes it takes hours to get a “simple” thing working.
PS. Screen-capture videos on my Mac are not going to work, as “Mac” wants to record at 60 fps and that taxes the CPU too much for this stuff…