The Canon Cinema C80 is coming this week

The specified effective pixel count is lower than the total pixel count on all the spec lists I've seen, but going from 27 MP to 19 MP is a much bigger delta than normal, and that does suggest there's something else going on with the C400 and C80.

Exactly. The horizontal difference is fairly typical, but the vertical difference is abnormal and is what brings it down to 17:9. That extra area has to be serving some unknown purpose or leaving room for open gate. At mass-production volumes, you lose a lot of money packing larger sensors than you need onto a single wafer.

Canon also has a history of adding significant features via firmware without advertising them at launch (e.g., adding internal raw to the C70). I'm not saying open gate is going to happen, but it certainly seems like a possibility.
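Back on the pixel math: as a back-of-envelope check (a sketch in Python; the 27 MP / 19 MP figures come from the spec lists above, everything else is derived from them), the spare pixels are roughly what a 4:3 open gate at the same width would need:

```python
import math

# From the spec lists: ~19 MP effective at 17:9, ~27 MP total (assumed figures).
effective_px = 19e6
total_px = 27e6
aspect = 17 / 9

# Approximate effective raster from area and aspect ratio.
height = math.sqrt(effective_px / aspect)  # ~3172 rows
width = aspect * height                    # ~5991 columns

# If the unused pixels sit above/below the active area at the same width,
# the whole die is roughly this tall:
die_height = total_px / width              # ~4507 rows

print(f"effective raster ~{width:.0f} x {height:.0f}")
print(f"implied die aspect ~{width / die_height:.2f}:1")  # ~1.33, i.e. 4:3
```

Which lines up suspiciously well with a 4:3 open-gate-capable die, though it proves nothing on its own.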
 
You can, but with a 2x anamorphic you will lose much of the frame height, which a) severely limits framing and is not really the proper usage, and b) may be fine as personal fun but is incompatible with any professional delivery standard.

For a 16:9 imager, a 1.33x anamorphic works properly.
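A quick sanity check of those squeeze factors (nothing camera-specific assumed; the desqueezed aspect is just the sensor aspect times the squeeze factor):

```python
# Desqueezed aspect ratio = sensor aspect ratio * anamorphic squeeze factor.
print(f"16:9 imager, 1.33x: {(16 / 9) * 1.33:.2f}:1")  # ~2.36, close to 2.39 scope
print(f"16:9 imager, 2x:    {(16 / 9) * 2.00:.2f}:1")  # ~3.56, far wider than scope
print(f"4:3 imager,  2x:    {(4 / 3) * 2.00:.2f}:1")   # ~2.67, crops mildly to 2.39
```

Hitting 2.39:1 from a 16:9 imager with a 2x lens means discarding a large slice of the recorded height, which is exactly the loss described above.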



You've specified that wrong.

In digital, the anamorphic image gets squeezed in height, not unsqueezed and upscaled in width, which would generate pixels that were never recorded.

Horizontal resolution stays the same; the vertical gets downscaled/oversampled.

Example: 4096x3112 goes to 4096x1716 (the DCI 4K scope container).

Scroll down under "resolutions".


The only cases I can see where it might make sense to keep the vertical and upscale the horizontal would be if the anamorphic recording was lower resolution than the delivery, like 2K/1080p for 4K/UHD, or for doing prints.
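A minimal sketch of that vertical path using Pillow (the file names are placeholders; 4096x1716 is the DCI 4K scope container from the example above):

```python
from PIL import Image

# A 2x-anamorphic frame recorded on a tall raster: 4096 x 3112.
frame = Image.open("anamorphic_frame.tif")   # hypothetical input file
w, _ = frame.size

# Keep every recorded column; resample only the vertical axis down to the
# scope container height instead of inventing new horizontal pixels.
scope = frame.resize((w, 1716), resample=Image.Resampling.LANCZOS)
scope.save("scope_frame.tif")
```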

Now you know.

NO! NO! NO! In the Academy Format (aka 1.89:1), the anamorphic lens squeezes ONLY on the horizontal for 2.4:1! How do I know this? 30+ years in broadcast video and CINEMA PRODUCTION in Vancouver, Canada, working on MAJOR movies and commercial shoots shot in cinema anamorphic aspect ratios (e.g., Arri Alexa FF and Alexa 65, and even Arriflex 35 film and Panaflex 70mm at 2.4:1)!

The Alexa I usually use shoots at 4608 by 3164 pixels, which, when the proper 2.4:1 anamorphic lens is attached, does NOT squeeze the vertical. I do notice it BOWS the image vertically just a tiny bit though! The 2.4:1 super-widescreen image is OPTICALLY recorded onto the Alexa's actual 1.45:1 aspect ratio sensor at full native resolution. When put through the post-production pipeline, VFX and editing get a file that consists of UNSQUEEZED, original, uncompressed 11,014 by 3164 pixel RGBA/YCbCrA video frames that were resized using the high-end but slow Lanczos-3 or Lanczos-5 frame-resampling algorithms. These editors and VFX artists USUALLY have dual-CPU AMD EPYC super-workstations with MASSIVE hard drive arrays to be able to EDIT uncompressed RGBA/YCbCrA video files in real time.

After that, all editing and colour grading on the timeline (most Hollywood movie editors use Avid or Heavyworks/Lightworks for cinema editing!) is done at the full UNSQUEEZED resolution, and the result is exported at the final 2.4:1 super-widescreen ratio (which is actually a 2.39:1 aspect ratio!) BUT THEN squeezed down horizontally and vertically to 4096 by 2160 pixels for final encrypted audio/video file distribution.

The movie theatre uses a DIGITAL stream that was downscaled from full resolution to the resolution of its 4K cinema projector, which is TYPICALLY 4096 by 2160 pixels (i.e. the 1.89:1 Academy aspect ratio!). For 2.4:1 display, most cinema projectors have a lens attachment, which takes about 10 minutes to put on, so the projector can OPTICALLY DESQUEEZE the Academy 1.89:1 ratio back out to 2.4:1 super wide screen. You DO LOSE image quality, BUT that is more of a projector resolution issue!

On the LATEST commercial DCI-capable Barco, Christie, Sony and NEC cinema projectors that can do DCI 8K resolution at 8192 by 4320 pixels, they no longer bother putting the OPTICAL DESQUEEZE LENS on the projector and simply project a video file that was exported at the original 2.39:1 super-widescreen resolution of 8192 by 3,426 pixels. Little image quality is lost that way!
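For what it's worth, the resolutions quoted in this post are consistent, to within a pixel or two of rounding, with a plain 2.39x factor (a sketch reproducing the arithmetic only, not endorsing either workflow):

```python
# Reproducing this post's figures from a 2.39 squeeze factor.
for width in (4608, 8192):
    desqueezed = round(width * 2.39)  # stretch the columns out
    scope_rows = round(width / 2.39)  # or keep the width and size rows to scope
    print(f"{width} wide -> desqueezed ~{desqueezed}, scope height ~{scope_rows}")

# Prints ~11013 and ~3428; the post quotes 11,014 and 3,426, a rounding
# difference of a pixel or two either way.
```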

Now You Know!

V
 
It doesn't work that way.

You need 4:3 sensor-area recording for 2x anamorphic, otherwise you lose full image height. And you don't "unsqueeze" and expand the digital image into a wider one, imagining new pixels in post; you "squeeze" it vertically at the same width.

"Unsqueeze" as in stretching the image happens optically with anamorphic images and anamorphic lens projection. Not in digital domain.
It depends on the SPECIFIC aspect ratio of the anamorphic lens (usually 2.4:1, which is actually 2.39:1) when put on a specific camera with a specific NATIVE resolution. Most HIGH-END cinema cameras can now record at the native resolution, such as the Arri Alexa LF (4448 x 3096 pixels), which records unresized to the native MXF file format in Open Gate mode.

That means the lens would squeeze a super-wide-screen 2.4:1 image onto the native 4448 x 3096 pixel Arri MXF open-gate video file at the native camera resolution. The main post-production editor usually uses an Avid or Heavyworks/Lightworks editing system to work on a NATIVELY DESQUEEZED uncompressed RGB file at 10,630 by 3096 pixels, on which all VFX and post will be done. ONLY AT FINAL EXPORT will the image be re-squeezed and scaled down to the 1.89:1 cinema aspect ratio file size of 4096 by 2160 pixels for final encrypted distribution.
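The numeric chain described there, as a sketch (resolutions from the post; the 2.39 factor is implied by its 10,630 figure):

```python
OPEN_GATE_W, OPEN_GATE_H = 4448, 3096  # Arri Alexa LF open gate, per the post
FACTOR = 2.39                          # squeeze factor implied by the figures

working_w = int(OPEN_GATE_W * FACTOR)  # 10630: the desqueezed working width
print(f"working frame for VFX/post: {working_w} x {OPEN_GATE_H}")
print("final export container:      4096 x 2160")  # per the post
```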

It is up to the MOVIE THEATRE to put a DESQUEEZE LENS on the projector to WIDEN the image back out to the 2.4:1 aspect ratio of the super-widescreen production. There are VERY FEW Hollywood cinema super-widescreen productions ...BUT... I do notice here in Vancouver, Canada (the 3rd-largest movie and commercial production centre in North America after LA and New York!) that a LOT of commercials AND business-oriented video displays are going for the anamorphic super-wide-screen look and are WILLING TO PAY DEARLY for it!

Most of these ultra-high-end 2.4:1 super-widescreen commercials and business productions are shooting on the medium-format-sensor Arri Alexa 65 or Panavision's DXL2 8K camera with the RED Monstro sensor, which uses 16 bits of colour across 35 million pixels at up to 60 fps. The lenses are HUGE and HEAVY, with a camera setup that needs a very time-consuming AND EXPENSIVE production process to "film"!

V
 
I was referring to the statement, "The same goes with the R5 and R5 m2 comparison, as current tests already proved it," and using the photographic dynamic range values for those two cameras provided by Bill Claff (photonstophotos.net). Of course, that difference may not be representative of the C80 and C70, and from what I've read, DR in video is different anyway (not something I am concerned with).

The C80/C400 should have superior overall performance to any R5 model for motion use, but no proper stills support, although grabbing frames from 12-bit RAW is very usable.
In terms of DR, I'm assuming 12-bit RAW is a limiting factor.
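On the 12-bit point, a rough sketch of why bit depth caps DR for a linear encoding (real cameras are noise-limited before this ceiling, and log encodings pack more scene stops into the same bits, so treat it only as an upper bound):

```python
import math

# A linear n-bit container spans at most log2(max_code) stops between
# the smallest nonzero code value and full scale.
for bits in (12, 14, 16):
    ceiling = math.log2(2**bits - 1)
    print(f"{bits}-bit linear: ~{ceiling:.1f} stops ceiling")
```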
 
NO! NO! NO! In the Academy Format (aka 1.89:1), the anamorphic lens squeezes ONLY on the horizontal for 2.4:1!


The Alexa I usually use shoots at 4608 by 3164 pixels, which, when the proper 2.4:1 anamorphic lens is attached, does NOT squeeze the vertical.

Yes, that's what I wrote. :)
The anamorphic lens squeezes the horizontal, and the anamorphic projection lens unsqueezes it back.

With digital material it does not work that way: you don't "unsqueeze" and computer-generate pixels that were never shot; you keep the horizontal resolution. Let's call it "unstretch" for the sake of differentiating the process.

In the case of 4K material and 4K delivery, there is no reason whatsoever to "unsqueeze" to a higher horizontal resolution and then go back to a 4K delivery container, overcomplicating post and resampling the imagery twice.
 
I wish they would introduce an R1C. Cinema RAW Light LT would be great to have on it.
As a suggestion, you can load your own custom LUTs onto the Canon R5mk2 (or any other inexpensive mirrorless or DSLR camera!), which means you can create a burned-in, LUT-based colour palette that MIMICS the RAW Light colour spread, so that any COMPRESSED or RAW file format will let you RECOVER the more cinematic-looking shadows, midtones and highlights.

You can create these custom LUTs in such software as 3D LUT Creator, Blackmagic Resolve, Color Labs Look Designer-2, Lutefy, IWLTBAP LUT Create and even Adobe Photoshop or Creative Cloud.

The key part of any custom camera-recording LUT is that you want to create 14-bit or 12-bit LUT values that use curve-based fitting to pull shadow values no lower than 7.5% of each 16-bit, 14-bit or 12-bit sampled RGB channel range, and highlights no higher than 93.75% of that same range.

That means you NEED to curve-fit your custom-made SQUEEZED colour-space recording LUT's shadow, midtone and highlight values to within a minimum of 1,228 and a maximum of 15,360 in the 14-bit colour space, ...OR... to within the range of 307 to 3,840 in a 12-bit colour space, whichever is used during recording.
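Those bounds are just the stated 7.5% / 93.75% applied to each code-value range, rounded down (a quick check):

```python
# 7.5% and 93.75% of the full code range, floored to match the figures above.
for bits in (14, 12):
    full_scale = 2**bits                 # 16384 or 4096 code values
    lo = int(0.075 * full_scale)         # 1228 / 307
    hi = int(0.9375 * full_scale)        # 15360 / 3840
    print(f"{bits}-bit: keep remapped values within {lo}..{hi}")
```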

Once you do that, you can RE-EXPAND your colour-squeezed RGB values back out to the original 16-bit 0..65,535 (or 14- or 12-bit) value range using a different COLOUR-SPACE UNSQUEEZE LUT that is the full bit-width of your video editor's timeline. This type of colour and luminance range remapping lets video compression algorithms give you higher-quality, less macro-blocky footage during recording when you use a LOSSY inter-frame codec such as H.264 in MP4. If you use a visually lossless compression format (i.e. 2:1, 3:1 or 4:1 BRAW or RED RAW), your recordings won't take as much file space!

Shadows are considered to be any individual RGB value from 0% to 24% of maximum possible pixel luminance, mid-tones 25% to 75%, and highlights 76% to 100%. You use floating-point CURVE fitting to REMAP the full-range luminance level of EACH individual channel value within your LUT to a range falling between 7.5% and 93.75% of the maximum 14- or 12-bit recording value (i.e. keep all final remapped LUT channel values between 1,228 and 15,360 for 14-bit recording, and between 307 and 3,840 for 12-bit recording).

The UNSQUEEZING-oriented LUT, which is installed into your editor or VFX software, takes your LUT-SQUEEZED recorded footage and remaps, using a STRETCHED-OUT CURVE FIT, all pixel values back out to the EXPANDED 16-bits-per-channel colour space, so you can RECOVER all your shadows, midtones and highlights in their full-bandwidth RGB colour glory without needing to do any fancy colour grading! The custom COLOUR-SQUEEZING LUT in the camera and the custom COLOUR-UNSQUEEZING LUT installed on the editor timeline do all the work for you!
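As a concrete sketch of such a squeeze/unsqueeze pair (hypothetical: a smoothstep curve stands in for whatever curve fit you choose, and NumPy interpolation stands in for a real .cube LUT export):

```python
import numpy as np

LO, HI = 0.075, 0.9375  # the shadow/highlight bounds from the posts above

def squeeze(x):
    """Camera-side remap: [0,1] -> [LO,HI] with a soft smoothstep S-curve."""
    s = x * x * (3.0 - 2.0 * x)  # gentle toe and shoulder
    return LO + (HI - LO) * s

# Sample the forward curve as a 1D LUT, then invert it numerically for the
# editor-side unsqueeze by swapping the axes of the sampled curve.
x = np.linspace(0.0, 1.0, 4096)   # 12-bit-resolution table, illustrative
table = squeeze(x)

def unsqueeze(values):
    return np.interp(values, table, x)

# Round trip: normalized pixel values survive squeeze -> unsqueeze.
px = np.array([0.0, 0.18, 0.5, 0.9, 1.0])
print(unsqueeze(squeeze(px)))     # ~[0.0, 0.18, 0.5, 0.9, 1.0]
```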

You can do all this yourself, OR you can buy OTHER PEOPLE'S colour squeeze/unsqueeze LUTs for various DSLR and cinema cameras and video editors, including those from Canon, Sony, Fuji, Panasonic, Blackmagic, Adobe, Apple, etc.

This type of colour-squeeze LUT trickery lets you EMULATE the cinema-production-grade CLOG-2/CLOG-3 recording of the more expensive cameras on your super-inexpensive mirrorless or DSLR camera, no matter the brand!

Now You Know!

V
 
Not in video.
The R5 II has more DR than both R5 and R5 C.
The R5 only has more dynamic range with the mechanical shutter.
Unfortunately not. That is why I have to skip both the R5 m2 and the C80 and stay with the R5 and C70, because they are slightly better on the DR front. I wish Canon would do a better job, but with stacked sensors you have to make some compromises: either better autofocus or better DR; both won't be possible!
 
Normally I trust my eyes more than reading these values! :)
 
The only problem with the C70 was heavy moiré, because of the specific low-pass filter Canon implemented in it. The only camera in this price range that is almost moiré-free is the Lumix S1H, so we have had to re-record many interviews with the latter camera over the last 4 years. :-(
If the C80's sensor has changed in this regard, this camera WILL BE A GAMECHANGER!!! (But actually, I doubt it!)
 
Normally I trust my eyes more than reading these values! :)
Normally, when you create custom lookup tables, you output a signal to a waveform monitor (i.e. a luminance-value parade display) and a vectorscope (for checking chroma-based RGB colour channel and phase values) ... THAT is what you should trust, NOT your eyes. Many human males have red-green or green-blue colour deficiencies in their vision, SOOOOOO trusting their eyes is the LAST THING they should do!

Much like flying as an IFR (Instrument Flight Rules) FAA-governed pilot ... YOU ALWAYS TRUST YOUR INSTRUMENTS and NOT your vision or your "physical feelings" -- they can and WILL FOOL YOU, which can kill you faster and deader than a doorknob! Ergo, with video luminance and RGB/YCbCr colour rendition, ALWAYS trust the waveform monitor and vectorscope display! THEY give a TRUE REPRESENTATION of colour and luminance output!

So if you are making custom COLOUR SQUEEZE LUTs for recording, use the vectorscope/waveform display of your editing software to CONFIRM the colour values and ensure all imagery falls within the stated 7.5% to 93.75% of maximum luminance and chroma bandwidth! Then use the vectorscope/waveform to confirm that the COLOUR UNSQUEEZE LUT on your editing timeline is working properly, widening the luminance values and RGB colour (chroma) palette of your recorded image back out to the full 0% to 100% luma/chroma range using a curve-fitting/spline-point expansion method!
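In software, the recording-side check amounts to verifying levels against the target band (a sketch with NumPy; a real waveform monitor shows this per column and a vectorscope per hue, not just a global min/max):

```python
import numpy as np

LO, HI = 0.075, 0.9375  # the recording band from the LUT discussion above

def levels_in_band(frame):
    """frame: normalized float RGB array, shape (height, width, 3)."""
    lo, hi = float(frame.min()), float(frame.max())
    ok = LO <= lo and hi <= HI
    print(f"levels {lo:.3f}..{hi:.3f}: {'OK' if ok else 'OUT OF BAND'}")
    return ok

levels_in_band(np.full((1080, 1920, 3), 0.40))   # mid-grey frame: OK
levels_in_band(np.random.rand(1080, 1920, 3))    # full-range noise: out of band
```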

Trust the SCIENCE and NOT your eyes! Your eyes fool you into complacency! The colour-science MATH will tell you what is and IS NOT within luminance and chroma specifications. You want the most colour-accurate video possible, SO TRUST YOUR INSTRUMENTS and NOT your eyes when you make custom LUTs!

V
 