EOS R1 Autofocus: What Sets It Apart from the EOS R5 Mark II?

It’s not a guess, it's optics. In-focus subjects have no phase difference, out-of-focus subjects do, and that difference is proportional to the magnitude of the defocus. The phase difference is a vector quantity – it has magnitude and direction, so the AF system can calculate how far and in which direction to move the focal plane. That’s the fundamental principle of phase detect AF.

In a DSLR, the dedicated AF sensor has lines of pixels to accommodate the spread of the phase difference. In a DPAF sensor, separated pixels are used for the same purpose. The separation is not that large a distance for on-sensor PDAF (a few pixels apart), whereas in a DSLR AF system a secondary image-forming lens sits some distance in front of the AF sensor, which means greater physical separation of the phases.
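
To make the idea concrete, here is a rough toy sketch of the left/right correlation principle – not Canon's algorithm, just the general idea, assuming only numpy; the scene, the shift model and all numbers are made up for illustration:

```python
# Toy sketch of the phase-detect principle (not Canon's algorithm).
import numpy as np

def make_scene(n=200):
    """A simple 1-D intensity profile with a couple of edges."""
    x = np.arange(n)
    return 1.0 * (x > 60) + 0.5 * (x > 140)

def sub_aperture_views(scene, defocus_px):
    """Model the 'left' and 'right' views as shifting in opposite
    directions, in proportion to the (signed) defocus."""
    return np.roll(scene, defocus_px), np.roll(scene, -defocus_px)

def estimate_shift(left, right, max_shift=10):
    """Find the signed relative shift that best aligns the two views."""
    errors = [np.sum((np.roll(left, -s) - right) ** 2)
              for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errors)) - max_shift

scene = make_scene()
for defocus in (-4, 0, 3):
    left, right = sub_aperture_views(scene, defocus)
    # The recovered shift scales with the defocus, and its sign tells the
    # system which way to drive focus (here it is 2x the per-view shift).
    print(defocus, estimate_shift(left, right))
```

An in-focus scene gives zero shift; defocus in either direction gives a shift whose sign and size can be turned into a lens-drive command.
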
OK – after looking at the diagram you posted (thanks) and thinking about it further, it does make sense to me. How it's implemented is a detail, and it would be fascinating to know the specifics and see results showing front/back sensing efficiency and IQ loss, which should be considerable.

But I now contend that it's not "phase difference" that's being sensed at all, and thus the name is misleading. It is instead sensing the different direction of photons coming from the left vs. right of the front of the camera lens. The different angle of travel is what's being sensed. This is not the phase. If you had an (ideal) wide laser beam with perfectly parallel travel and a flat plane of phase-synchronized intensity, and it reached an (ideal) camera lens which had it perfectly focused (at infinity), then the photons hitting the edge of the lens would have a much further distance to travel to the focus point compared to those hitting the center of the lens. The extra travel can be measured in many wavelengths of light (e.g., green). These waveforms would indeed interfere with each other and would start cancelling each other's strength at the (ideal) focal point. But that's not what is being sensed here – that would be better termed "geometric difference" detection, or maybe something like baseline distance detection.

Or, if I'm completely wrong on this (I often am), then as Roseanne Roseannadanna once said, "Never Mind" :unsure:
 
But I now contend that it's not "phase difference" that's being sensed at all, and thus the name is misleading. It is instead sensing the different direction of photons coming from the left vs. right of the front of the camera lens. The different angle of travel is what's being sensed. This is not the phase. If you had an (ideal) wide laser beam with perfectly parallel travel and a flat plane of phase-synchronized intensity, and it reached an (ideal) camera lens which had it perfectly focused (at infinity), then the photons hitting the edge of the lens would have a much further distance to travel to the focus point compared to those hitting the center of the lens. The extra travel can be measured in many wavelengths of light (e.g., green). These waveforms would indeed interfere with each other and would start cancelling each other's strength at the (ideal) focal point. But that's not what is being sensed here – that would be better termed "trigonometric difference" detection, or maybe baseline distance detection.

But, if I'm completely wrong on this (I often am), then as Roseanne Roseannadanna often said, "Never Mind" :unsure:
Sorry, but you’re completely wrong on this. It’s still AF based on detection of the phase difference in out-of-focus light. A DPAF sensor is just detecting that phase difference on a smaller scale (microns instead of millimeters), because it uses microlenses directly in front of each pixel and the two sub-pixels behind them, instead of a secondary lens well separated from the pixel lines to accentuate the phase difference and dedicated lines of pixels on a separate PDAF sensor.
 
I posted this in another thread. Took the R1 out for a spin at a local fun run and it worked really well AF-wise. Had maybe 50 shots (out of about 6k) that were not fully in focus, and much of that was me and/or just multiple runners close together while I was trying to get them all. Really impressed so far.
 
It’s not a guess, it's optics. In-focus subjects have no phase difference, out-of-focus subjects do, and that difference is proportional to the magnitude of the defocus. The phase difference is a vector quantity – it has magnitude and direction, so the AF system can calculate how far and in which direction to move the focal plane. That’s the fundamental principle of phase detect AF.

In a DSLR, the dedicated AF sensor has lines of pixels to accommodate the spread of the phase difference. In a DPAF sensor, separated pixels are used for the same purpose. The separation is not that large a distance for on-sensor PDAF (a few pixels apart), whereas in a DSLR AF system a secondary image-forming lens sits some distance in front of the AF sensor, which means greater physical separation of the phases.
I may have written my comment in a misleading way – I know that principle (being a physicist). I was just guessing how big the distance is between the dual-pixel "parts" on the sensor that Canon combines for phase detection in the case of strong de-focusing, to get better signal quality. In principle, the only limit for this spread would be the complete dual-pixel sensor area. That might be even more efficient than the outer lines of pixels on a classic (D)SLR AF sensor that were dedicated to this purpose. So we were talking about the same thoughts, though my quick comment obviously wasn't written as clearly as it should have been.
 
I may have written my comment in a misleading way – I know that principle (being a physicist). I was just guessing how big the distance is between the dual-pixel "parts" on the sensor that Canon combines for phase detection in the case of strong de-focusing, to get better signal quality. In principle, the only limit for this spread would be the complete dual-pixel sensor area. That might be even more efficient than the outer lines of pixels on a classic (D)SLR AF sensor that were dedicated to this purpose. So we were talking about the same thoughts, though my quick comment obviously wasn't written as clearly as it should have been.
Sorry, but it seems that you are indeed guessing, because you don't understand the relevant principles. Being a physicist doesn't confer complete understanding of all of physics upon you, any more than being a biologist confers a complete understanding of biology upon me.

In principle, there are multiple factors that limit the maximum distance between pixels used to detect the phase difference. That limit is so far short of the complete sensor area that I literally laughed out loud when reading your statement. The primary limitation is that the light used to drive the AF calculation is coming from a specific part of object space. That part of object space has a corresponding small area of image space; on a DPAF sensor those areas are represented as 'AF points' (most DPAF sensors have several thousand of them, though I suspect that is a computational limit and not an optical one). The other limiting factor is that as the light becomes more defocused and the phases are spread further apart, the intensity of the light reaching any one pixel decreases, so with greater defocus, sensitivity becomes limiting. (Another major factor is the maximum aperture of the lens, but I'll skip that for now, to avoid additional confusion.)

The greater spread of the AF lines on a dedicated PDAF sensor in a DSLR is a different matter; with that system, there are secondary lenses in the optical path set a few mm from the AF sensor that separate the incoming light, spreading it across the paired sensor lines that are separated on the sensor. This seems to be a major point you're missing – widely (relatively) separated line pairs are possible on a dedicated PDAF sensor only because of secondary lenses that are not present in front of the image sensor used for DPAF.

The bottom line is that DPAF focus calculations are driven by the phase difference across a relative handful of adjacent pixels on the image sensor. Faster lenses enable a somewhat wider spread of the phase difference, but we're still talking only about a very small distance (perhaps two relative handfuls of pixels) even with an f/1.2 lens.
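
For a sense of scale, a back-of-the-envelope sketch (my own simplified thin-lens geometry – the 50 mm lens, 4.4 µm pixel pitch and 50 µm defocus below are arbitrary, illustrative numbers, not Canon's):

```python
# Rough geometry only: for a thin lens, the lateral separation between the
# two half-pupil image patches is roughly
#     disparity ≈ pupil_baseline * defocus / image_distance,
# where pupil_baseline is the distance between the centroids of the two
# pupil halves. All numbers are illustrative assumptions.
from math import pi

focal_length_mm = 50.0   # assume a 50 mm lens focused near infinity (image distance ≈ f)
pixel_pitch_um = 4.4     # ballpark full-frame pixel pitch

def disparity_in_pixels(f_number, defocus_um):
    pupil_radius_mm = (focal_length_mm / f_number) / 2
    # The centroid of a uniform half-disc sits 4R/(3*pi) from the centre,
    # so the two half-pupil centroids are ~0.85R apart.
    baseline_mm = 2 * (4 / (3 * pi)) * pupil_radius_mm
    disparity_um = baseline_mm * defocus_um / focal_length_mm  # mm cancels
    return disparity_um / pixel_pitch_um

for f_number in (1.2, 2.8, 5.6):
    px = disparity_in_pixels(f_number, defocus_um=50)
    print(f"f/{f_number}: ~{px:.1f} px of separation for 50 um of defocus")
```

Even at f/1.2 this works out to only a few pixels of separation for a modest defocus, consistent with the 'handful of pixels' point above; stopping down shrinks it further.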
 
(most DPAF sensors have several thousand of them, though I suspect that is a computational limit and not an optical one)
Since the release of the R3 I have been wondering about these numbers:
R5 about 5900 and R6 about 6000 – still the highest numbers among Canon FF bodies.
R3 about 4700, R6 mkII about 4800, R5 mkII about 5800, and the body with the best AF, the R1, about 4300.
 
Sorry, but it seems that you are indeed guessing, because you don't understand the relevant principles. Being a physicist doesn't confer complete understanding of all of physics upon you, any more than being a biologist confers a complete understanding of biology upon me.

In principle, there are multiple factors that limit the maximum distance between pixels used to detect the phase difference. That limit is so far short of the complete sensor area that I literally laughed out loud when reading your statement. The primary limitation is that the light used to drive the AF calculation is coming from a specific part of object space. That part of object space has a corresponding small area of image space; on a DPAF sensor those areas are represented as 'AF points' (most DPAF sensors have several thousand of them, though I suspect that is a computational limit and not an optical one). The other limiting factor is that as the light becomes more defocused and the phases are spread further apart, the intensity of the light reaching any one pixel decreases, so with greater defocus, sensitivity becomes limiting. (Another major factor is the maximum aperture of the lens, but I'll skip that for now, to avoid additional confusion.)

The greater spread of the AF lines on a dedicated PDAF sensor in a DSLR is a different matter; with that system, there are secondary lenses in the optical path set a few mm from the AF sensor that separate the incoming light, spreading it across the paired sensor lines that are separated on the sensor. This seems to be a major point you're missing – widely (relatively) separated line pairs are possible on a dedicated PDAF sensor only because of secondary lenses that are not present in front of the image sensor used for DPAF.

The bottom line is that DPAF focus calculations are driven by the phase difference across a relative handful of adjacent pixels on the image sensor. Faster lenses enable a somewhat wider spread of the phase difference, but we're still talking only about a very small distance (perhaps two relative handfuls of pixels) even with an f/1.2 lens.
I wish you didn't have to "put down" those like justacanonuser or me in your responses. It's unnecessary, and somewhat cruel.

Regarding the issue of "phase detection", each of the two sub-pixels is just a bucket collector of photons. It collects a count of the photons that hit it. Each of those photons has an EM phase (e.g., a sine wave for the electric field and a cosine wave for the magnetic field). At the point the photon reaches the receptor, the EM phase is a random value of the two sine/cosine waves. The bucket cannot sense/detect the actual phase of the photon. You just have a left bucket and a right bucket of random-phase photons, and those buckets contain different amounts of photons based on the light coming from the left or right side of the main camera lens. The left & right side of the lens is not a "phase". Maybe you should tell us what you think the "phase" is (and don't just say "optics tell us this"). Tell us exactly what the "phase" is that you're talking about?
 
Since the release of the R3 I have been wondering about these numbers:
R5 about 5900 and R6 about 6000 – still the highest numbers among Canon FF bodies.
R3 about 4700, R6 mkII about 4800, R5 mkII about 5800, and the body with the best AF, the R1, about 4300.
As have I, and I have no idea how Canon determines that, but practically I doubt it matters. For automatic selection (which includes tracking), they all use just over 1000 AF points; the higher number is for manual selection.
 
I wish you didn't have to "put down" those like justacanonuser or me in your responses. It's unnecessary, and somewhat cruel.
So, in your opinion, being told that you are wrong about something and that you don't know everything is a cruel put-down? Ok, snowflake (and yes, that was a put-down, but it seems rather germane. Still, I suppose I should apologize to you...sorry). People should recognize that they don't know everything. I certainly do, and being corrected is an opportunity to learn.

@justaCanonuser stated that he understands the principle, being a physicist, then went on to suggest that the limit of the pixel spread for DPAF in detecting a phase difference is the full DPAF area of a sensor, e.g., 36mm for a FF DPAF sensor with 100% AF coverage. That suggestion clearly demonstrates that he does not understand the principles involved. I guess you'd prefer that message to be sugarcoated. With respect, that's not my problem. Wrong is wrong.

Regarding the issue of "phase detection", each of the two sub-pixels is just a bucket collector of photons. It collects a count of the photons that hit it. Each of those photons has an EM phase (e.g., a sine wave for the electric field and a cosine wave for the magnetic field). At the point the photon reaches the receptor, the EM phase is a random value of the two sine/cosine waves. The bucket cannot sense/detect the actual phase of the photon. You just have a left bucket and a right bucket of random-phase photons, and those buckets contain different amounts of photons based on the light coming from the left or right side of the main camera lens. The left & right side of the lens is not a "phase". Maybe you should tell us what you think the "phase" is (and don't just say "optics tell us this"). Tell us exactly what the "phase" is that you're talking about?
The term 'phase' in phase-detect AF refers to the two 'phases' of the image separated by a lens and spread across a pair of detectors. It's a convention, not a formal term defining a property of light in the sense of quantum mechanics.

When someone talks about the phases of the moon, do you wonder if they mean the moon might be solid, liquid, gas or plasma?

The lens can be a microlens over two sub-pixels (as in DPAF) or the secondary image-forming lenses in front of paired sensor lines in a DSLR PDAF system. In either case, since the image is split, there are two 'phases'. For convenience, the terms left and right are used, though for DPAF other detectors have 'up' and 'down' orientations (e.g. in the R1; and in Canon's QPAF patent there are four orientation pairs illustrated: left/right, up/down, and both diagonal orientations).

[Attached image: PDAF.png]
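
As an aside, a toy sketch of why having more than one split orientation helps – my own illustration with numpy, not the QPAF implementation: a left/right pair measures the shift along rows and is blind to detail that varies only vertically, so you measure along whichever orientation actually has contrast.

```python
# Toy multi-orientation disparity check (illustration only, not Canon's code).
import numpy as np

def disparity_1d(a, b, max_shift=6):
    """Signed shift that best aligns two 1-D signals."""
    errors = [np.sum((np.roll(a, -s) - b) ** 2)
              for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errors)) - max_shift

def measure(view_a, view_b, axis):
    """Average per-line disparity: axis=1 uses rows (left/right pair),
    axis=0 uses columns (up/down pair)."""
    lines = zip(view_a.T, view_b.T) if axis == 0 else zip(view_a, view_b)
    return float(np.mean([disparity_1d(a, b) for a, b in lines]))

# Scene with only a horizontal edge: no horizontal contrast at all.
scene = np.zeros((32, 32))
scene[16:, :] = 1.0
shift = 3  # pretend defocus separates the two views by 3 px, vertically
up, down = np.roll(scene, shift, axis=0), np.roll(scene, -shift, axis=0)

contrast_h = np.abs(np.diff(scene, axis=1)).sum()  # ~0: a left/right pair is blind here
contrast_v = np.abs(np.diff(scene, axis=0)).sum()  # large: an up/down pair can measure
axis = 0 if contrast_v > contrast_h else 1
print(measure(up, down, axis))  # recovers 2*shift using the up/down orientation
```

The diagonal pairs shown in the QPAF patent would presumably extend the same idea to more orientations.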

As a side note, the original patent for PDAF was from Honeywell in the mid-1970s (US Patent 4,002,899):
[Attached image: Visitronic.png]
 
So, in your opinion, being told that you are wrong about something and that you don't know everything is a cruel put-down? Ok, snowflake (and yes, that was a put-down, but it seems rather germane. Still, I suppose I should apologize to you...sorry). People should recognize that they don't know everything. I certainly do, and being corrected is an opportunity to learn.

@justaCanonuser stated that he understands the principle, being a physicist, then went on to suggest that the limit of the pixel spread for DPAF in detecting a phase difference is the full DPAF area of a sensor, e.g., 36mm for a FF DPAF sensor with 100% AF coverage. That suggestion clearly demonstrates that he does not understand the principles involved. I guess you'd prefer that message to be sugarcoated. With respect, that's not my problem. Wrong is wrong.


The term 'phase' in phase-detect AF refers to the two 'phases' of the image separated by a lens and spread across a pair of detectors. It's a convention, not a formal term defining a property of light in the sense of quantum mechanics.

When someone talks about the phases of the moon, do you wonder if they mean the moon might be solid, liquid, gas or plasma?

The lens can be a microlens over two sub-pixels (as in DPAF) or the secondary image-forming lenses in front of paired sensor lines in a DSLR PDAF system. In either case, since the image is split, there are two 'phases'. For convenience, the terms left and right are used, though for DPAF other detectors have 'up' and 'down' orientations (e.g. in the R1; and in Canon's QPAF patent there are four orientation pairs illustrated: left/right, up/down, and both diagonal orientations).

[Attached image: PDAF.png]

As a side note, the original patent for PDAF was from Honeywell in the mid-1970s (US Patent 4,002,899):
[Attached image: Visitronic.png]

Thank you, neuro. You just stated this:
"The term 'phase' in phase-detect AF refers to the two 'phases' of the image separated by a lens and spread across a pair of detectors. It's a convention, not a formal term defining a property of light in the sense of quantum mechanics."

Thanks! So it's not phase as "phase" is defined as a property of light – it's a "convention". Thanks for the clarification!

Oh, as far as calling me "snowflake", I'll take it, and grant you the one you're often known as: "mean wiseass!".
 
Sorry, but it seems that you are indeed guessing, because you don't understand the relevant principles. Being a physicist doesn't confer complete understanding of all of physics upon you, any more than being a biologist confers a complete understanding of biology upon me.

In principle, there are multiple factors that limit the maximum distance between pixels used to detect the phase difference. That limit is so far short of the complete sensor area that I literally laughed out loud when reading your statement. The primary limitation is that the light used to drive the AF calculation is coming from a specific part of object space. That part of object space has a corresponding small area of image space; on a DPAF sensor those areas are represented as 'AF points' (most DPAF sensors have several thousand of them, though I suspect that is a computational limit and not an optical one). The other limiting factor is that as the light becomes more defocused and the phases are spread further apart, the intensity of the light reaching any one pixel decreases, so with greater defocus, sensitivity becomes limiting. (Another major factor is the maximum aperture of the lens, but I'll skip that for now, to avoid additional confusion.)

The greater spread of the AF lines on a dedicated PDAF sensor in a DSLR is a different matter; with that system, there are secondary lenses in the optical path set a few mm from the AF sensor that separate the incoming light, spreading it across the paired sensor lines that are separated on the sensor. This seems to be a major point you're missing – widely (relatively) separated line pairs are possible on a dedicated PDAF sensor only because of secondary lenses that are not present in front of the image sensor used for DPAF.

The bottom line is that DPAF focus calculations are driven by the phase difference across a relative handful of adjacent pixels on the image sensor. Faster lenses enable a somewhat wider spread of the phase difference, but we're still talking only about a very small distance (perhaps two relative handfuls of pixels) even with an f/1.2 lens.
Thanks for the thorough explanation – now I get it, and I am happy that I made you laugh ;). Bottom line for me: I shouldn't write comments on such topics when I don't have enough time to really read the background material (I have too much quantum information stuff to digest intellectually at the moment to really take time for other topics).
 