Canon EOS R1 to have a version of a DGO sensor?

Learn Japanese, fly to Tokyo, and infiltrate the company HQ. :)

I'm always skeptical of people who claim to have inside information like that.
Now what if I say that I just happen to have DEEP STATE-LIKE super-secrets from people who CAN give me such information? Would it SCARE YOU to find out that Sony is quaking in its boots right now, because Apple might be moving away from Sony sensors to ON Semiconductor or even Canon image sensors for its upcoming 64 megapixel / DCI-8K video smartphones, DSLRs and Cinema cameras, with on-board LIDAR/SONAR/Infrared 3D scanning technology for 64-bit-wide per-pixel RGB colour + Z-axis depth recording for AR/VR applications?

Would you believe CANON is now ALSO quaking in its boots, because Apple looks like it has just bought Sigma's Foveon stacked-RGB-photosite image sensor technology AND will have Sigma be the OEM for the camera bodies and lenses of the upcoming Apple 3D imaging lineup?

And would it scare you that Samsung is introducing 120 megapixel and 200 megapixel image sensors into their S-series super-smartphones, super-tablets and DSLR-like camera bodies in 2025 that will give Canon AND Apple a good run for their money in terms of taking over the overall consumer and professional imaging market?

Would it OUTRAGE you that Microsoft was (or STILL IS?) seriously looking into creating a separate-but-combined merger-of-equals super-company with SONY, a hardware, software and infotainment powerhouse? Either the PlayStation OR the Xbox division would have to be sold off due to U.S. federal antitrust issues, and the INITIALLY interested buyer of the off-loaded gaming console division is Comcast/Universal.

Did you know that Disney is looking for a NEW financial partner to fund its theme park and online media expansions and that it is WILLING to sell itself off to the highest bidder and that right now Microsoft and Sony are the TWO NAMES being bandied about as the potential merger suitors?

AND for the final kicker, it looks like Fuji (aka Fujinon) is looking into BUYING ALL of Blackmagic to round out its cinema camera body lineup AND its video processing hardware/software lineup, so it can effectively compete against Sony and Panasonic with a fully-rounded-out, single-branded lineup of still photo and cinema cameras, prime and zoom lenses, video editing software, hardware image processors and video networking! It's looking to catch up with Arri in the Hollywood cinema space, and is ALSO introducing DSLR-like medium format cameras with BIG SENSORS to take over the space currently occupied by the Arri Alexa and Sony Venice! The consumer market will ALSO be sent, from digital heaven, a multi-camera set of NEW full frame and medium format DSLR-like bodies starting at $2,000 USD and topping out under $7,000 USD, with advanced Hollywood-level image recording, image processing and networking capabilities (i.e. 64-bit colour and DCI-8K video at 120 fps!). No more $11,000 USD cameras, because from now on it will be Blackmagic pricing merged with PRECISE Fuji colour science!

So if YOU WANT SECRETS HERE RIGHT NOW then you just got them above!

 
  • Sad
  • Love
Reactions: 1 users
Upvote 0
I do not believe that's true. Where are you getting that information? As was pointed out earlier, parallax would be an issue with using the DPAF architecture to drive DGO. The Canon white paper makes no mention of it.

DGO makes a lot of sense for video, because it simply captures more frames and blends them together.

How that works in stills, though, is the $1000 question.
 
  • Like
Reactions: 1 user
Upvote 0
DGO makes a lot of sense for video, because it simply captures more frames and blends them together.

How that works in stills, though, is the $1000 question.
From that white paper, the exposure time isn't doubled, so there will be no ghosting. It doesn't actually capture additional frames.

But the readout time is doubled. Which probably means DGO won't work in high speed continuous and won't be available in electronic shutter mode.

Time will tell.
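
If the per-row readout time really does double, the hit to electronic-shutter burst rates is easy to ballpark. A toy calculation, with made-up numbers (assumptions, not Canon specs):

```python
# Toy estimate of how a doubled sensor readout squeezes e-shutter burst rates.
# All numbers are illustrative assumptions, not Canon specifications.

rows = 4000            # assumed number of sensor rows
row_time_us = 1.25     # assumed per-row readout time, in microseconds

single_scan_ms = rows * row_time_us / 1000   # 5.0 ms full-sensor scan
dgo_scan_ms = 2 * single_scan_ms             # both gain reads per row

# The scan time caps the electronic-shutter frame rate (ignoring processing):
print(f"single gain: {single_scan_ms:.1f} ms/scan -> max ~{1000 / single_scan_ms:.0f} fps")
print(f"DGO:         {dgo_scan_ms:.1f} ms/scan -> max ~{1000 / dgo_scan_ms:.0f} fps")
```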
 
  • Like
Reactions: 1 users
Upvote 0
DGO makes a lot of sense for video, because it simply captures more frames and blends them together.

How that works in stills, though, is the $1000 question.
Canon’s Dual Gain Output (DGO) is equivalent to Arri’s Dual Gain Architecture. Both gain levels are captured simultaneously. The R1 is rumored to have an exceptionally fast rolling shutter. Perhaps this super fast rolling shutter was needed to support the extra overhead of reading out both gain stages at once?

If this rumor is true we might be looking at the first mirrorless camera with a real-world 15+ stops of dynamic range. I imagine Canon will rate it at 17 to 18 stops. The smaller and older DGO sensor in the C70 is rated at 16 stops by Canon; CineD measured the C70 at 12.8 stops at SNR = 2, i.e. almost 13 real-world stops.
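
For context on what a rating like that means: CineD counts usable stops down to a signal-to-noise ratio of 2. Here's a toy model of the idea with assumed sensor values; it's my own simplification, not CineD's actual chart-based procedure:

```python
import math

# Toy model of a 'usable stops at SNR = 2' rating (not CineD's actual method).
# Assumed sensor parameters, purely illustrative:
full_well = 60000.0   # electrons at saturation
read_noise = 3.0      # electrons RMS

stops = 0
signal = full_well
while True:
    noise = math.sqrt(signal + read_noise**2)   # shot noise + read noise in quadrature
    if signal / noise < 2.0:                    # below the SNR = 2 threshold: stop counting
        break
    stops += 1
    signal /= 2.0                               # step down one stop

print(f"usable stops at SNR = 2: ~{stops}")     # ~13 with these assumptions
```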
 
Upvote 0
From that white paper, the exposure time isn't doubled, so there will be no ghosting. It doesn't actually capture additional frames.

But the readout time is doubled. Which probably means DGO won't work in high speed continuous and won't be available in electronic shutter mode.

Time will tell.

It can't be that simple though, or everyone would do it.

Think of it another way:

How do we capture highlights? We use a faster shutter speed so the pixel well does not overflow.

How do we capture shadows? We use slower shutter speeds so the pixel well fills up further above the noise floor.

If the pixel could handle all 16 EV, staying above the noise floor without overflowing (clipping), you wouldn't even need DGO; it would simply handle the entire exposure in one shot. So reading the same data twice with two different amplifications still wouldn't make up for the data not being there in the first place.

That's the part that I go hmmm over.
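
The 'pixel well' framing can be put into numbers: a single readout's engineering dynamic range is just full-well capacity over the noise floor. A quick sketch with illustrative values (not any real Canon sensor):

```python
import math

# Single-exposure dynamic range from well capacity and noise floor.
# Both values are assumptions for illustration:
full_well = 50000.0    # electrons the photosite holds before clipping
noise_floor = 3.0      # electrons RMS of read noise

dr_stops = math.log2(full_well / noise_floor)
print(f"single-readout DR: {dr_stops:.1f} stops")   # ~14.0 stops

# A 16 EV scene needs a 2**16 = 65,536:1 ratio in a single shot:
print(f"needed for 16 EV: {2**16:,}:1 (this pixel: {full_well / noise_floor:,.0f}:1)")
```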
 
  • Like
Reactions: 1 users
Upvote 0
It can't be that simple though, or everyone would do it.

Think of it another way:

How do we capture highlights? We use a faster shutter speed so the pixel well does not overflow.

How do we capture shadows? We use slower shutter speeds so the pixel well fills up further above the noise floor.

If the pixel could handle all 16 EV, staying above the noise floor without overflowing (clipping), you wouldn't even need DGO; it would simply handle the entire exposure in one shot. So reading the same data twice with two different amplifications still wouldn't make up for the data not being there in the first place.

That's the part that I go hmmm over.
This is why the ARRI dual gain feature seems like magic as well: they patented the obvious parts but not the magic ones :)
 
  • Like
Reactions: 1 user
Upvote 0
It can't be that simple though, or everyone would do it.
It requires additional circuits in the sensor design to control switching between the high and low gains. Tbh I don't know why this design isn't widely adopted. It looks like it increases the readout time, which is probably one of the reasons.
If the pixel could handle all 16 EV, staying above the noise floor without overflowing (clipping), you wouldn't even need DGO; it would simply handle the entire exposure in one shot. So reading the same data twice with two different amplifications still wouldn't make up for the data not being there in the first place.

That's the part that I go hmmm over.

From those diagrams, each sub-pixel (photodiode) is read twice per exposure. The second time it has the same accumulated charge, so yes, at first glance, there is no additional information to be read.

However, this DGO design is not about increasing the information capacity of the photosites. It's about decreasing the noise in the shadows, that is, lowering the noise floor. Even though the pixels themselves may hold the shadow information, when a DGO sensor applies the 'saturation priority' setting (low gain), the deep shadows get lost in the ADC noise. The 'noise priority' setting (high gain) amplifies everything except the ADC noise, so the total noise referred to the signal is lower, even though the highlights may be clipped.

It's explained in that white paper, pages 3-4. The low gain setting has noise = sqrt(N1^2 + N2^2); the high gain setting has noise = sqrt(N1^2 + (N2/G)^2), where N1 is the pre-ADC noise, N2 is the ADC noise and G is the gain.

So in the high gain setting the noise is lower and the shadows are cleaner, and when that output is combined with the low gain output, you get cleaner data overall.
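
Plugging illustrative numbers into those two formulas (my values, not the white paper's) shows how much the high-gain path lowers the floor:

```python
import math

# Input-referred read noise for the two gain paths, per the formulas above.
# N1 = pre-ADC (analog) noise, N2 = ADC noise, G = gain; all values assumed.
N1, N2, G = 2.0, 6.0, 8.0   # electrons, electrons, unitless

low_gain_noise = math.sqrt(N1**2 + N2**2)           # ~6.3 e-
high_gain_noise = math.sqrt(N1**2 + (N2 / G)**2)    # ~2.1 e-

print(f"low gain noise:  {low_gain_noise:.2f} e-")
print(f"high gain noise: {high_gain_noise:.2f} e-")
# Shadow improvement, in stops, from the cleaner high-gain read:
print(f"shadow improvement: {math.log2(low_gain_noise / high_gain_noise):.1f} stops")
```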
 
  • Like
Reactions: 1 users
Upvote 0
Is it possible that the magic lies in the electronic shutter? For instance, the sensor may in fact have 48 MP, and for the DGO functionality half of the pixels are shielded during the first part of the exposure (low gain) and the other half (high gain) during the second part of the exposure.
Only guessing. :)
 
Upvote 0
Is it possible that the magic lies in the electronic shutter? For instance, the sensor may in fact have 48 MP, and for the DGO functionality half of the pixels are shielded during the first part of the exposure (low gain) and the other half (high gain) during the second part of the exposure.
Only guessing. :)
No, that's not the case, as follows from the more detailed Canon white paper that @neuroanatomist posted above: https://www.canonrumors.com/forum/t...-a-version-of-a-dgo-sensor.43725/post-1000080

If you need a very simplified explanation: the sensor pixels (photosites) are exposed normally, but when the sensor converts the captured light (as electric charge) into digital numbers, it sequentially applies two different gains, low and high, then uses the low-gain output for the highlights and the high-gain output for cleaner shadows. Cleaner shadows increase the dynamic range.

In normal sensors, the gain is typically controlled through ISO. So if we were to simplify the explanation even further, DGO is like applying, say, an ISO 100 ("low gain") and an ISO 800 ("high gain") setting to the same capture, and then combining the results.

Why ISO 800 (higher gain) gives cleaner shadows is explained in that white paper above, but that gets technical. There's also an explanation on photonstophotos (also quite technical): https://www.photonstophotos.net/Gen...ographic_Dynamic_Range_Shadow_Improvement.htm

By 'cleaner' shadows I mean the following: you shoot the very same scene at the very same f-stop and shutter speed at ISO 100 and ISO 800, then take the ISO 100 image and push the 'exposure' slider in Lightroom by 3 stops and compare it to the unmodified ISO 800 image. In most cameras, including the R5, the ISO 800 image will have cleaner shadows (also see the link above).

(However the R5 is ISO-invariant from ISO 800, so if you apply the above procedure to images shot at ISO 800 and 6400, you won't see a big difference in the shadows.)
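
To make the 'combining the results' step concrete, here's a minimal per-pixel merge sketch. The blend logic is my own toy version; Canon hasn't published how the DGO combination actually weights the two reads:

```python
import numpy as np

def merge_dgo(low, high, gain=8.0, knee=0.8):
    """Toy per-pixel merge of two gain readouts (not Canon's actual algorithm).

    low  -- low-gain readout of the exposure, normalized to [0, 1]
    high -- high-gain readout of the SAME exposure, 'gain' times brighter,
            so it clips for anything above 1/gain of full scale
    """
    high_lin = high / gain                              # bring high gain onto the low-gain scale
    shadows = np.where(high < 1.0, high_lin, low)       # prefer cleaner high-gain data where unclipped
    w = np.clip((low - knee) / (1.0 - knee), 0.0, 1.0)  # fade to low-gain data near saturation
    return (1.0 - w) * shadows + w * low

# Usage with fake data: the same scene read through two gains, with a little noise.
rng = np.random.default_rng(0)
scene = np.array([0.01, 0.05, 0.10, 0.30, 0.60, 0.90])
low = np.clip(scene + rng.normal(0, 0.02, scene.size), 0, 1)
high = np.clip(scene * 8.0 + rng.normal(0, 0.02, scene.size), 0, 1)
print(merge_dgo(low, high))
```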
 
  • Like
Reactions: 1 users
Upvote 0
But there are many situations where the scene DR is not so high that 10 stops can't capture it, and many others where the scene DR is far too great for a single image even at 16 stops.
You're certainly right as far as that goes, but say we have a 16-stop scene and are upgrading from a 12- to 13-stop camera. In some cases it's simply no help whatsoever, I get that, but I don't think that's the majority of 16-stop scenes.

You may still cut the portion of the frame that sits in Zones I and IX (black shadow and totally blown-out white) from, say, 10% to 5%. Or you may be able to move a very bright subject from Zone VIII to VII (from not quite blown out but with no detail, to having detail), or a really dark one from II to III (from black plus noise, to nearly black with features).

Most 16-stop scenes will be improved with the new camera: some the tiniest amount, and some totally saved. I can't quantify how many, or how much improvement, of course, but you seem to be arguing that only a small X% will be improved hugely. Even if that's true, wouldn't you happily dump a couple grand into improving X% of your shots hugely?

Secondly, no improvement happens in a vacuum. A little better DR, plus a little better AF, plus a little better weather sealing, plus a little better finder: it starts adding up to a significantly better camera overall, even if no single area gets a significant improvement.
 
  • Like
Reactions: 2 users
Upvote 0
WE MUST be able to record at 16-bits per RGB colour channel along with a fourth Z or Depth channel for RGB+D imaging which uses millimetre wave, infrared, RADAR or SONAR scanning to provide a distance from sensor
Depth can also be inferred from AF and parallax, though not in all possible scenes...

It'd also be great to have mid- and deep-IR channels, allowing production of two different B&W IR images, plus a false-color IR image, from any photo.
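
As an aside, four 16-bit channels (R, G, B plus depth) come to exactly 64 bits per pixel. A hypothetical packing, purely for illustration (not any real camera format):

```python
import struct

# Hypothetical 64-bit RGB+D pixel: 16 bits each for R, G, B and a depth (Z) channel.
# This layout is invented for illustration, not any real camera format.
def pack_rgbd(r, g, b, depth):
    """Pack four 16-bit values (0..65535) into one 8-byte pixel."""
    return struct.pack("<4H", r, g, b, depth)

def unpack_rgbd(pixel_bytes):
    return struct.unpack("<4H", pixel_bytes)

px = pack_rgbd(40000, 52000, 12000, 3210)   # depth could be quantized to, say, millimetres
print(len(px), "bytes per pixel =", len(px) * 8, "bits")   # 8 bytes = 64 bits
print(unpack_rgbd(px))
```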
 
  • Like
Reactions: 1 user
Upvote 0
If the pixel could handle all 16 EV, staying above the noise floor without overflowing (clipping), you wouldn't even need DGO; it would simply handle the entire exposure in one shot. So reading the same data twice with two different amplifications still wouldn't make up for the data not being there in the first place.

That's the part that I go hmmm over.

There is no data at all in the first place.
Data exists only after the ADC, and the gain determines what goes into the ADC.

I'm guesstimating about a stop more in shadows, but that estimate is based on older sensors.
 
Upvote 0
Well, I've owned the R3 since launch. Great camera. But compared to my A9III it feels a generation behind. The A9III is the superior camera in just about every measure. It has the fastest, most accurate AF of any camera available today. So I guess they ironed out the bugs.
I absolutely disagree. I've owned an R3 since launch and am constantly shooting projects with the a9 III and a1.
Sony's AF is, I'd say, on par but in no way better: on par for sports in good light, but in bad/low light it does not shine at all.
Pre-capture on the a9 III is so poorly executed that it is basically unusable. If you shoot sports you are constantly (like every second) half-pressing the shutter button to focus (or using BBF), and you end up with hundreds or thousands (at 120 fps) of images within minutes.
And no, you don't have time to erase sets after each press of the AF button.

Maybe the sensor is better, or the video modes (I don't do video), but as far as stills go, I always pick up the R3 before the Sony for sports or fast-paced events.
 
  • Like
Reactions: 1 user
Upvote 0
It requires additional circuits in the sensor design to control switching between the high and low gains. Tbh I don't know why this design isn't widely adopted.

Because it requires a sensor with sufficient image quality achievable with this additional circuitry design, sufficient sensor bandwidth for two parallel readouts, and immense processing power in the box to handle it.

Check the size of the Alexa Mini LF.
That's a 13 MP sensor.
 
Upvote 0