I've noticed that when rendering a shadeless plane (an emission shader textured with an image), using 1 sample in Cycles gives a cleaner result than 2 samples, and increasing the sample count further just seems to tend back toward the 1-sample result.
My assumption is that with only 1 sample, the ray goes through the exact center of each pixel, while with any higher count the rays are jittered following whichever semi-random pattern Cycles uses (and the results are averaged), which is why 1 sample would give a pixel-perfect result for a shadeless material. Is that correct?
This behavior makes perfect sense, and from what I understand it's how most renderers handle it. The main reason is that certain render passes/elements must be rendered pixel-perfect, without antialiasing: for example, the motion vector/velocity pass or the world position pass.
If you antialiased those, the values on the antialiased edges would be meaningless in-between values, since antialiasing is ultimately just an averaging of surrounding pixel values: a motion vector partway between two surfaces' motions, or a world position that lies on neither surface. If you then used those passes in any compositing package, the values on those antialiased edges would produce artifacts.
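To make this concrete, here is a toy Python sketch (the numbers are made up for illustration, not taken from any renderer) of what happens when an edge pixel's motion vector gets averaged between a moving foreground and a static background:

```python
# Hypothetical edge pixel half-covered by a foreground object moving
# 5 px to the right, half by a static background.
fg_motion = (5.0, 0.0)   # foreground motion vector
bg_motion = (0.0, 0.0)   # background motion vector

# Antialiasing blends the two by coverage, just like it blends colors:
coverage = 0.5
edge_motion = tuple(coverage * f + (1.0 - coverage) * b
                    for f, b in zip(fg_motion, bg_motion))

# The blended vector (2.5, 0.0) describes a motion that belongs to
# neither surface, so a compositor's vector blur would smear it wrongly.
print(edge_motion)  # (2.5, 0.0)
```

The same logic applies to a world position pass: averaging two positions across an edge yields a point floating in space between the two surfaces.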
So, the way most renderers provide pixel-perfect render passes without antialiasing is to always shoot the very first sample pass as a perfect grid, with each ray going through the exact pixel center. (If you randomly jittered the rays already on the first pass, you'd get noisy edges and an equally wrong result.) Once that first pass is done, the render elements/passes with antialiasing disabled are simply not updated by any further samples.
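That scheme can be sketched in a few lines of Python. This is a toy 1D "renderer" sampling a hypothetical step texture, not Cycles' actual code; the point is that the first sample always lands on the pixel center, and extra jittered samples soften edge pixels by averaging:

```python
import random

def render_pass(width, samples, jitter):
    """Sample a 1D step 'texture' (black for u < 0.6, white above)
    across `width` pixels, averaging `samples` rays per pixel."""
    image = []
    for px in range(width):
        total = 0.0
        for s in range(samples):
            # First sample always goes through the pixel center;
            # additional samples (if jittering) are random within the pixel.
            offset = 0.5 if (s == 0 or not jitter) else random.random()
            u = (px + offset) / width
            total += 1.0 if u >= 0.6 else 0.0
        image.append(total / samples)
    return image

random.seed(0)
# 1 sample, pixel centers only: a hard 0/1 edge, no in-between values.
print(render_pass(4, 1, jitter=False))   # [0.0, 0.0, 1.0, 1.0]
# Many jittered samples: the edge pixel averages to a partial value.
print(render_pass(4, 256, jitter=True))  # edge pixel is between 0 and 1
```

A pass with antialiasing disabled would simply keep the first printed result and ignore all subsequent jittered samples, which is why it stays identical to a 1-sample render.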