Really struggling with selection

Hi, I am having a very hard time understanding selection. Really struggling with this one. I can’t seem to find a way to reliably report the current selection, be it meshes, transforms, or geometry components.

For example (and this is just an example; I am looking for a generic way to access the selection, irrespective of context, component, or object type): say I’ve selected a vertex. The selection isn’t correctly reported while I am in Edit Mode, where the selection is visible. I can get it reported by entering Object Mode, but then the selection isn’t visible or interactive, so the vertices don’t actually appear to be selected. This is via mesh.vertices[n].select.

On top of that, I can’t even ask what’s selected. I have to guess by looping through all the vertices I think might be selected and querying each individually.

I know I am doing this wrong. I’ve been googling around a bit, and it’s very confusing, especially because a lot of the information out there predates a bunch of changes in 2.80, so it’s mostly wrong or outdated now. I am coming from Maya if that’s any help. I expect I am not the first person to have this problem, so pointing me to some discussion or documentation on the subject would probably suffice in lieu of a novel answer here. Thanks!

Edit: to emphasise that the vertex example is only one example.


Sorry, I meant to move this to the Python section; the “Blender Development” section is aimed at questions regarding the Blender C/C++ core codebase.


Have a look here:

So e.g. this should work:

import bpy

obj = bpy.context.active_object
mode = obj.mode
# we need to switch from Edit Mode to Object Mode so the selection gets updated
bpy.ops.object.mode_set(mode='OBJECT')
selectedVerts = [v for v in obj.data.vertices if v.select]
for v in selectedVerts:
    print(v.index, v.co)
# back to whatever mode we were in
bpy.ops.object.mode_set(mode=mode)

Thank you, I did read that thread. I didn’t reply there, because I was told that it’s not a forum and replies are not allowed.

What I am asking is: is there a generalised concept of what is selected? How can there not be, when the UI displays what’s selected in various places?

Take this example. For one, it will only work for vertices. For two, it requires you to switch to a mode where the selection is not visible. For three, it requires additional logic to keep track of the current mode. And it’s quite long.

In another program it’s ls -sl, and that is all.


Not sure if I get your question completely. I have also only been using Blender since 2.8, and I don’t know all the “historical” reasons why it grew this way. But there is BMesh; it’s not winged-edge, but something along those lines (I would have to take a look if you want to know exactly what it is). That is the data structure built underneath to be able to answer topological queries. This Object Mode back-and-forth is the moment when that structure gets synced with the simpler mesh representation.

The historical stuff is one thing, but I’m wondering why whatever gathers the selection for GL can’t also be queried for its contents, so you don’t have to jump through all these extra hoops. Or how an application could be this advanced without having access to what’s selected persistently available through a simple and uniform query. Or, if there really is a concept of selection, why is there no selection object? This seems pretty ominous for more extensive development, from someone new to the application, coming from the commercial 3D world.


Your scripts run in Python. Blender is an application developed in C/C++. So first, whatever you have access to in your scripts is limited to what the API offers. In contrast, nearly all of the OpenGL code is driven by the C side of Blender (though there is some access from Python). That’s the reason why.

Regarding selections, I don’t think an encapsulating class is mandatory for the concept of selection. But e.g. this Object Mode / Edit Mode switching does indeed feel very unfinished. And even if I don’t miss such an encapsulating class, I wouldn’t mind if it were there. I try not to participate in discussions about what is professional enough or not, but yes, the Python API definitely needs more attention; there’s definitely room for improvement.

As I said, I also switched around the time 2.8 came out, and I know what you mean. It’s hard to discuss such things: much criticism gets ignored, partially because quite a few people confuse constructive criticism with just being rude, and also because there are quite a few fans who defend anything, no matter what. Commercial ecosystems feel more user-oriented here.


I am aware that Blender is not written in Python. What I mean is: why isn’t the logic used to send the selection for drawing on screen also packaging it for delivery to the Python API?

It may not be necessary for selection to be an object (although that would be very functional). Nothing is strictly necessary. But still, this is very basic: there doesn’t seem to be a way to actually query the selection generically and reliably. The example above is pretty convoluted, and it only works with one class of selection.

I could write my own selection class that methodically looks through all the states which might be present, and the different classes of objects which might be selected, but it’d be slow, for one. For two, it’d be bad, because I am a novice to Blender. And for three, it’s unlikely that I’m the first or even the 100th developer to encounter this, so there must be some in-depth discussion of how selection in Blender works which will illuminate what’s going on here.

It would be super cool if someone could point me to that.

I didn’t develop it and haven’t looked at its code. With Python it’s as described. If you find that example convoluted, I doubt you’ll ever find what you’re looking for. There are all kinds of collections available and kept updated; it’s simply part of the basic structure. I see that you are hoping for high-level abstractions, but they don’t exist, especially not in the form you hope for. Objects and collections are treated separately, as mesh components are; that’s just how it is.

It’s up to you whether you want to work with the mesh structure or the BMesh structure; if you use the mesh structure, you have to go to Object Mode first, as described.
Otherwise use bmesh:
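A minimal sketch of the BMesh route (my own example, assuming an active mesh object that is currently in Edit Mode; bmesh.from_edit_mesh and the per-element select flags are the standard API here, and this only runs inside Blender):

```python
import bpy
import bmesh

# Assumes the active object is a mesh currently in Edit Mode
obj = bpy.context.active_object

# Wrap the edit-mesh in a BMesh; no mode switching required
bm = bmesh.from_edit_mesh(obj.data)

# The same .select flag exists on verts, edges, and faces
selected_verts = [v for v in bm.verts if v.select]
selected_edges = [e for e in bm.edges if e.select]
selected_faces = [f for f in bm.faces if f.select]

print(len(selected_verts), len(selected_edges), len(selected_faces))
# Note: do not free a BMesh obtained via from_edit_mesh; Blender owns it
```

The upside over the mesh-structure route is that the selection stays visible and interactive while you query it.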

Have a look at Top Level Mesh Editing API in the design document. But that’s not really going to make things easier for you.

Right, I am hoping that someone who does know will step in and answer. It’s okay if you don’t.

I don’t agree that a method to query what’s selected falls into the category of ‘high level abstraction’.


It seems really important for you to know why it’s not there.

No, an encapsulating class managing all kinds of selections, including objects and collections, would be an abstraction, as opposed to how it’s currently handled. I didn’t use that term to imply it isn’t useful.

I’d like to know why it’s not there because I want to understand what is happening. Accepting an unsatisfactory status quo without understanding why isn’t productive.

If this thread is the whole story, it isn’t currently “handled”. For example, if I want to know which edge I have selected, I have to leave the mode where I can see and interact with the selection, and then write several lines of code specific to edges rather than to some other kind of selection. This is entirely dissimilar to other 3D applications, so I am assuming that this isn’t actually the whole story. It sounds like you don’t understand what’s happening either, which is fine, but maybe let someone reply who does?


No, you simply didn’t understand what I already told you. And it seems you didn’t take any time to learn anything about it beforehand. You obviously don’t understand that a special data structure is built on entering Edit Mode, and that for performance reasons the decision was made not to sync it permanently with the simpler mesh structure. So work with the one structure or the other. I posted the links you asked for, but you say anything you’d write like that would be bad. OK, that’s how it is then. Besides that, this is just a feature request, which is best placed on Right-Click Select.

Thanks, I appreciate your input.


I haven’t looked closely at the Blender code regarding selection, but in my experience the Blender Python API is just a thin layer on top of the underlying C or C++ code. So when it comes to (say) the set of selected vertices there probably is no explicit object representing that, as I suspect each vertex has an associated flag, just like there’s a select attribute in Python. Meaning: selection is represented implicitly by the per-vertex flag, which is why looping through the set of vertices to get the selection is the way to go. But then again, I might be wrong :wink:
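To illustrate that flag-based representation (a toy stand-in in plain Python, not Blender’s actual API): when selection lives as a boolean on each element rather than in a separate collection, “what is selected?” can only be answered by scanning all elements.

```python
from dataclasses import dataclass

# Toy stand-in for Blender's mesh vertices: selection is stored as a
# per-element boolean flag, not as a separate "selection object"
@dataclass
class Vert:
    index: int
    select: bool = False

verts = [Vert(i) for i in range(5)]
verts[1].select = True
verts[3].select = True

# The only way to "ask what's selected" is to scan the flags
selected = [v.index for v in verts if v.select]
print(selected)  # → [1, 3]
```

This is exactly the looping pattern discussed above: cheap to store and to toggle, but there is no ready-made answer to the query without a scan.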

Now, there could be an extra operation in the Python API to make it easier to extract that selection, even in edit-mode, but apparently there isn’t one. I guess this is an example of how open source differs from commercial software. If someone wants a certain feature badly enough and is capable of writing it themselves, then such a patch might get submitted to the Blender source repositories and end up in Blender. I recently had a minor feature I was missing (background rectangles on text items in the video sequencer) get accepted this way. There simply isn’t a marketing/sales department for Blender actively monitoring customer satisfaction, driving the roadmap, etc. The contacts the Blender Institute and devs have with the larger studios obviously have influence in this respect, but it’s not the same.

Anyways, more concretely, when in edit-mode you can use a BMesh object and loop through that to get the selection, e.g. in the Python console:

# Having a mesh in edit-mode, with some vertices selected
>>> import bmesh
>>> o = bpy.context.active_object
>>> m = o.data
# Create a BMesh object, mesh m needs to be in edit-mode
>>> bm = bmesh.from_edit_mesh(m)
# Retrieve the selection
>>> [v for v in bm.verts if v.select]
[<BMVert(0x7f6260aae590), index=0>, <BMVert(0x7f6260aae5c8), index=1>]
# After changing the vertex selection on the mesh (still in edit-mode!)
>>> [v for v in bm.verts if v.select]
[<BMVert(0x7f6260aae600), index=2>, <BMVert(0x7f6260aae638), index=3>, <BMVert(0x7f6260aae670), index=4>, <BMVert(0x7f6260aae6e0), index=6>]
# When done clean up the BMesh object
>>> bm.free()
>>> bm
<BMesh dead at 0x7f6262c16f00>

To match up the indices shown with the mesh you need to enable Developer Extras under Preferences > Interface > Display, and then in the viewport overlay settings (the intersecting balls in the top right) enable Indices under Developer. Yes, very convoluted. But it probably also reflects that this isn’t a widely used feature (or people just give up trying to get it to work). There might also be better ways to do this that I’m not aware of. Perhaps there’s an add-on that makes this easier, although a quick Google search just now didn’t turn one up.

Edit: just noticed this overlaps quite a bit with the answers from @Debuk, but perhaps gives you a few more concrete things to use


Nice post @PaulMelis. Thank you.

This resonates with my code-spelunking experiences, going on about two years now. I think that probably sums up @CMK_blender’s pain points as well. My past scripting experience with Maya was largely pleasant - my humble opinion here, but I had the sense the Maya scripting environment was meant for me. Blender’s Python API, on the other hand, is also meant to support a separation of concerns between the user interface - largely implemented in Python - and core Blender. Oh! yes. People can write add-ons against this Python interface as well. I’m being a bit sarcastic here, but the sense is that the Python API is not quite meant for me, but, hey - presto! - I can use it if I want to!

At the end of the day, there are (my guess) about a thousand animators wanting decent frame rates so they can effectively judge their work for every script writer wanting an API that’s meant for them. But who was Blender really written for, anyway? Animators or script writers?

I should bow out, now, as this entire thread is trending toward a feature request better situated at RCS. But for those who would wish otherwise, the Python API is what it is because it serves multiple purposes; it is not entirely (wholly?) geared to my own preferences for ease-of-use, much as I would wish otherwise. Script writing ease-of-use is not a priority, and I’m OK with that, given who the application was written for and understanding that not everything can be high priority. I humbly work with what is there, as many, many people here seem able to do, and successfully too.


I agree with you on this point. But to be fair, the Python API really has improved over the years, although there’s still quite a lot of room for improvement. My pet annoyance is that certain operations are only available as operators that run as if they were called from the UI (e.g. due to a button press). For example, if you want to apply a boolean operation to two meshes you have to script it using a modifier. There’s no Python API call boolean_intersect(mesh1, mesh2) that simply operates on two meshes without worrying about the UI context.
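For instance, a scripted boolean currently looks something like the sketch below, modifier and all. The modifier type and enum values are the real API as far as I know, but the object names "Cube" and "Sphere" are placeholders, and this only runs inside Blender with a suitable scene:

```python
import bpy

# Placeholder scene objects; substitute your own names
a = bpy.data.objects["Cube"]
b = bpy.data.objects["Sphere"]

# No direct boolean_intersect(a, b) call exists; instead, add a
# Boolean modifier to one object and point it at the other...
mod = a.modifiers.new(name="tmp_bool", type='BOOLEAN')
mod.operation = 'INTERSECT'
mod.object = b

# ...then apply it via an operator, which depends on the UI context
# (the active object must be the one carrying the modifier)
bpy.context.view_layer.objects.active = a
bpy.ops.object.modifier_apply(modifier=mod.name)
```

The modifier setup itself is plain data access, but the final apply step goes through bpy.ops and therefore inherits the context requirements of a button press.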

In the beginning Blender was probably mostly written for the same folks that were using it, or at least there was quite a bit of overlap. Looking at the history of the Python API, it was introduced with 2.10 in December 2000. The immediate use, judging by the release notes, seems to have been to allow scripting in the game engine part of Blender. Since then it has been retargeted to allow defining the UI fully (quite a daring feat) and accessing all kinds of internals. All in all, I’d say it grew organically into what it is now.


Thank you, Paul. It must be noted that the Python API has improved and is improving, and perhaps I was unfairly suggesting that the API is a dark corner of Blender that nobody cares about and that is gathering dust. That certainly is not the case. It has gotten a lot harder to crash Blender by dropping a bomb on it through the Python API. Instead, very frequently now, I get tracebacks telling me that RNA properties have changed and my references are stale. My motivation for the post was to manage expectations for newcomers. Of course, I wish for a friendlier API. Who wouldn’t? But if wishes were horses, beggars would ride…


God yes, coming from decades in commercial software where it’s possible to do this, I cringe at the thought of it, because it’s incredibly fragile and prone to breaking. It’s very bad practice which generally only people who are new to scripting resort to.

This is part of why I started this thread. I’m relatively new to Blender, and so am bumping around in the dark, and I felt like doing it this way was exactly the same sort of thing, e.g. the selection-by-looping-over-specific-components approach. It is exactly what I came up with myself, but it felt so limited, convoluted, and wrong that I figured there had to be another way.

Again, shocking to learn there is not.


This is so much like my dive into Blender. Right away I couldn’t stand how clunky selection behavior was in general, so I set out looking for a way to make it behave in a way that felt more intuitive and comfortable to me. I ran into a lot of the issues you’re talking about, where just getting Blender to tell you what’s selected became a massive headache.

I basically started writing my own selection operators in python and got decently far along before 2.8 came along and broke all my scripts. I started re-writing them for 2.8 but then they’d get broken again.

Then I realized that writing these operators for objects and BMesh was not enough, because there are completely new operators for every different object type: curves, lights, armatures. Nothing’s unified. And behaviors are inconsistent from one object type to the next.

It became such a massive undertaking I just gave up.

Here’s some of the progress I made with a few places I wound up finding selection code to try and study:

And here’s another thread going over some of the hoops you have to jump through to get bmesh selections if you haven’t seen it already: