It depends on your functional highlights, I guess.
On the one hand, it would be arrogant – not to mention, wrong – to claim that Sony’s (or Tower Jazz or …) research and production lines have been stagnant for the past decade.
On the other, it’s equally hard to deny that many artists – photographers not bound to productivity but to quality – still prefer film, and that many amateurs still get a warm fuzzy at the idea of CCD.
Let’s try to analyse why.
The superiority of CMOS over CCD is a common misconception. A CMOS sensor is a digital chip which converts the charge produced by electrons (created by photons falling on the light-sensitive surface of the chip) into voltage, on the spot, via electronics present in every pixel. That voltage is then shipped to a number of on-chip analog-to-digital converters, via circuitry also present in each pixel.
A CCD sensor is an analog chip in which a pixel merely collects photons and converts them into charges. Those get pulled out, off-chip, to an external A/D converter, then processed as an image in the same way as what comes off a CMOS sensor. A much larger proportion of every pixel is devoted to light collecting in a CCD, and no electronic noise from the other activities contaminates the images. Besides, off-chip converters can be of higher quality. While much of that head start has been compensated by over a decade of R&D, CMOS (in its various BSI and stacked variations) still hasn’t quite caught up to CCD in terms of quality.
However, having processing on each pixel and a greater number of converters and amplifiers on-chip makes CMOS sensors a lot faster, a lot more frugal (CCD chips can use as much as 100x more energy than CMOS sensors do) and also a lot cheaper. So, practical aspects (e.g. video), not quality, are the reason why CMOS has kicked CCD out of the arena. It’s understandable in an efficient market, but it still feels like a great shame that no outlier company is offering the choice of CCD to the nostalgic among us.
Film is another topic altogether. Being a chemical process, it comes with a limited set of advantages and a boatload of hindrances: cost, pollution, limited availability, technical limitations … Its appeal is undeniable for many reasons, though.
For one thing, it still looks immensely better in the highlights than anything digital can offer. And film’s tone curve, even to objective eyes, makes it more natural and lifelike, in spite of all the other technical shortcomings. Then, there’s the fact that it comes in a very limited number of options. The digital market wants us to believe that more is better, but more is in fact harder, a lot harder. When you hone your technique on 3 filmstocks, you eventually get in tune with them and very proficient. Good luck with 200 presets or the absolute blank slate that is a RAW file in Lightroom. Finally, for some reason, chemical grain looks nicer to the eye-brain than digital noise. It feels like an aesthetic choice rather than a defect.
Of course, analog purists will tell you that the process is also a major part of the joy of film. No chimping, the surprise of results and other romantic concepts. And while I never chimp and do whatever I can to maintain flow during a session, the idea of feedback soon after that session appeals to me a lot more than heading to the post office to offload my films. But to each their own. If your life is constantly dictated by the rhythms of a phone, colleagues who never unplug, social media, instant gratification and digital overdose, the process of slowing down to the beat of a faraway chemical lab must indeed be quite soothing.
But what about progress in CMOS sensors? Is a camera from 10 years ago as good as a camera from 10 days ago? Generally speaking, no. The first CMOS sensors were noisy, and not in a good way. And the technology has made big strides, until recently. Until resolution got in the way.
Photosites (the light-sensitive area in pixels) convert photons to electricity with a certain quantum efficiency. CCDs are about 30% efficient in green light. Recent BSI CMOS sensors hike that up to 80%. This means that 80% of photons hitting the sensor get converted into electric charge, instead of 1 in 3 (or 1 in 4) with CCDs. It also means that the best improvement we can realistically hope for – ever – is a few unnoticeable percent, as no affordable sensor is likely to breach 90% anytime soon. And it’s been that way for a number of years. In spite of this, we have seen ISO ratings go through the roof (using sensors that aren’t more sensitive to light than the previous generations).
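To make that ceiling concrete, here’s a back-of-the-envelope sketch in Python. The QE figures are the rough ones quoted above (not manufacturer specs), and the 10,000-photon exposure is an arbitrary number for illustration:

```python
# Rough photon-to-electron accounting at different quantum efficiencies.
# QE values are the approximate figures from the text, not datasheet specs.
def electrons_captured(photons, qe):
    """Mean number of photo-electrons generated from incident photons."""
    return photons * qe

photons = 10_000  # photons hitting one photosite during the exposure

ccd = electrons_captured(photons, 0.30)    # older CCD: ~3,000 electrons
bsi = electrons_captured(photons, 0.80)    # modern BSI CMOS: ~8,000 electrons
ideal = electrons_captured(photons, 0.90)  # hypothetical future sensor

print(f"BSI vs CCD gain: ~{bsi / ccd:.2f}x")    # the big jump already happened
print(f"Headroom left:   ~{ideal / bsi:.3f}x")  # ~12% is all that remains
```

The point of the toy model: the jump from 30% to 80% QE was a real 2.7x gain in captured signal, but even a (currently unrealistic) 90% sensor would only add about 12% on top of today’s chips.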
Enter noise. Several sources of noise can degrade the quality of the signal: dark noise, which accumulates as the exposure goes on, amplification noise, readout noise, conversion noise … If you have 20 000 electrons in your pixel and your cumulative noise is 8 electrons, your signal-to-noise ratio is 20 000 / 8, or 2 500 to one, or just over 11 bits (2^11 = 2048). If you have 8 electrons in your pixel and a noise of 8, your S/N ratio becomes … ghastly. So, to improve on this, you can expose longer to capture more electrons (which was the rationale behind Expose To The Right, a good idea in the bad old days, not so much today) or you can lower noise (by using a CCD, by cooling the chip, or by other means, including better on-chip circuits).
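The arithmetic above can be sketched in a few lines. Like the text, this treats noise as a fixed floor and ignores photon shot noise, so it’s a simplification, not a sensor model:

```python
import math

def snr_bits(signal_electrons, noise_electrons):
    """Signal-to-noise ratio expressed in bits (powers of two)."""
    return math.log2(signal_electrons / noise_electrons)

# A well-exposed pixel: 20,000 electrons against an 8-electron noise floor.
print(20_000 / 8)                      # 2500.0 -- 2,500:1
print(round(snr_bits(20_000, 8), 1))   # 11.3 bits of clean signal

# A deep-shadow pixel: 8 electrons of signal against 8 of noise.
print(snr_bits(8, 8))                  # 0.0 bits -- ghastly indeed
```

Expressing S/N in bits makes the ETTR logic obvious: every doubling of captured electrons buys exactly one more bit (one stop) of clean signal, and every halving of the noise floor does the same.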
The best image quality comes from the chip that produces the highest signal-to-noise ratio. External circuits that handle that signal also matter a lot (paying big money for such very high quality circuits is part of what got Pixii their record DxO rating, for example, as well as their post-Covid supply chain agony). But let’s just focus on sensors for now.
All things being equal, a larger pixel can collect more photons than a smaller one. Double the lateral size of a pixel and you quadruple its surface. That’s why cameras such as the Nikon D700 still have their followers today. Look at that sunset from the same article. Good luck taming those highlights with a high res sensor today.
Higher resolution essentially slices the sensor into smaller pixels. While the noise of each pixel remains roughly the same, the amount of electrons each can hold gets sliced as well, as does the S/N ratio. Add to this the pixels added to control AF and other possible stuff I may not even be aware of, which all contribute to the diminution of captured photons, and you see a trend towards lower image quality and higher functionality. And yet, sensors keep getting measurably better. How?
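The slicing effect is easy to quantify with a toy model. The full-well and read-noise numbers below are invented for illustration, not taken from any real sensor datasheet; the only assumptions are that full-well capacity scales with pixel area and that per-pixel read noise stays constant:

```python
import math

# Toy model: same sensor area, same per-pixel read noise, more pixels.
full_well_24mp = 40_000  # electrons a pixel can hold at 24 MP (invented)
read_noise = 4           # electrons of read noise per pixel, held constant

def per_pixel_snr_db(full_well, noise):
    """Peak per-pixel S/N ratio, in decibels."""
    return 20 * math.log10(full_well / noise)

for megapixels in (24, 48, 96):
    # Doubling pixel count halves each pixel's area and (roughly) its
    # full-well capacity, while the per-pixel noise floor stays put.
    full_well = full_well_24mp * 24 / megapixels
    print(f"{megapixels} MP: {per_pixel_snr_db(full_well, read_noise):.1f} dB")
```

Under these assumptions, each doubling of resolution costs about 6 dB (one stop) of per-pixel dynamic range at the top end, which is exactly the pressure the highlights feel first.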
For one thing, sensor technology is improving. While smaller photosites hitting a quantum efficiency glass ceiling can never capture more photons, noise can and does get improved over generations, keeping the S/N ratio in the same ballpark, or attempting to. Then there’s the optimisation of circuits (Sony is legendary for its packaging and Pixii uses higher-grade components than the rest of the industry, to name just two). Then, there’s image processing.
Sony’s falling-out with the astrophoto community, a few years ago, was due to the company cooking the books on noise in-camera, in order to simulate higher ISO than the electronics could actually muster. And there’s nothing wrong with that. Computational photography is the way ahead for our hobby. Phones are getting brilliant at it, and some camera brands are waking up to it.
But there’s something else at play: the market. You, me, selfie addicts, influencers, journalists, social platforms … Sony’s main persona is the content creator, not the photographer. As Susan Sontag brilliantly described decades ago, capitalism cannot thrive without intense flows of imagery. This skews the manufacturers not just towards functionality (AF, IBIS, frame rates) but towards almighty versatility (ISO, resolution) and measured performance, over pure aesthetics.
But it’s written neither in The Bible, nor in the canon of quantum mechanics that a sensor can’t be designed with photographers in mind. And it’s written in every eye doctor book and in cine history that it would be a great idea to do so.
When’s the last time you were in complete blackness, with areas of your surroundings so dark you could make out no detail in them? Quite recently, if you ever walk at night. It feels entirely natural, though potentially scary. But when’s the last time your own eyes clipped a highlight to pure white? Take your time. Not ever, is the answer.
It’s no accident that 9-figure-budget movies will gladly use $200, 30-year-old photo lenses to shoot important scenes (aberrations do not harm suspension of disbelief) but will go to any – repeat, any – length to avoid clipping (which can instantly bump the viewer out of the story). Lighting is a huge budget item in those productions. And it’s no accident either that the leader in cine cameras, Arri, uses a sensor with a highlight bias, and that RED’s legendary RAW compression treats highlights with the utmost prudence and respect.
Compare that to the white-chalk treatment of even expensive digital cameras, and you wonder why photo camera manufacturers don’t bother. Film does it well (and is enjoying a comeback), CCDs did it well (longing sigh), phones are doing it (somewhat) better, and recent rumour suggests Hassy may be improving things in the X2D (yaaaaay). Pixii, again, favoured highlights in its Monochrome mode tone curve, which probably measures worse for it, but looks beyond gorgeous. It isn’t a matter of fashion, or trend, but of human vision …
My analog-photo man-crush Jason, from the Grainydays channel, has embarked on a daily chugging of some ungodly type of Mountain Dew drink to pester Kodak into bringing back Aerochrome, a special filmstock he much loves. This post is my plea to the digital photo industry to produce more sensors and more cameras for … you know … photographers?
Enough with the resolution madness, enough with the ISO craze, enough with the framerates and quantifiable everything. Yes, sensors have gotten better. Now, could we please, please, please, in 2023, 50+ years after film nailed it, get proper highlight management in digital cameras???