I really don’t get the use of super high resolutions on tiny sensors like that.
Sure, you can have a crazy zoom (aka crop) while still retaining good enough resolution, but at this point?
All the detriments that minuscule, high-res sensors bring about won’t just disappear.

Don’t you enjoy photos of blurry gray splotches with AI-oversharpened edges that are supposed to be birds or squirrels?
Despite all the marketing fluff, phone cameras make small but steady advances. I bet you’d get a somewhat acceptable photo at this 200x zoom level if you shone a pair of 500-watt floodlights at your scene and put your phone on a tripod.
My phone has a x10 zoom option that is barely usable without at least resting it on a surface, I can’t imagine trying to take an even half decent photo at x200…
AI will fix that. The picture might not have anything to do with what’s in front of the lens, but at least it will be pretty.

I mean, if you’re not looking for an end result where you actually have a photo of the thing in front of you to look back at/show others, then yeah, I guess that’ll work lol
The caveat is that the software used to process all that data needs to be good.
Pixel binning is a ‘solution’ to a problem which needn’t even exist in the first place.
Well, I fully agree with this article. There is one other good use of binning/supersampling though, and that is better chroma resolution relative to luma.
But even that won’t do much, with all the other shortcomings already present.
As the article says, it’s marketing. If groups of 4 pixels are binned into 1, it’s really a 50 MP sensor.
Yes, many of these phones won’t give you 200 MP images (unless you use a specific mode like RAW), so you’re always getting something more reasonable.
Pixel binning can help with low light (effectively doubling the light available if binning with the next pixel over), or it can help to extend the telephoto range, or it can pull details that’d be harder to get with fewer MP.
Most would probably argue that it’s better to have this option than not.
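Rough sketch of what 4-to-1 binning does numerically (a toy model with made-up numbers, not any vendor’s actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a flat gray scene read by a high-res sensor with
# per-photosite noise (all numbers are illustrative).
signal = 100.0
noise_sigma = 20.0
raw = signal + rng.normal(0.0, noise_sigma, size=(1000, 1000))

# 2x2 binning: average each group of four photosites into one pixel.
binned = raw.reshape(500, 2, 500, 2).mean(axis=(1, 3))

# Averaging four independent samples halves the noise (sigma / sqrt(4)).
print(raw.std())     # ~20
print(binned.std())  # ~10
```

So you trade 4x the resolution for half the noise, which is exactly the low-light win mentioned above.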
I think diffraction limit effects already show up on 50 MP cameras, so tiny phone sensors would be worse (https://blog.kasson.com/the-last-word/diffraction-and-sensors/).
In this case, adding more pixels only slows down the camera without improving the picture.
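To put rough numbers on the diffraction point (back-of-the-envelope, with illustrative values; actual aperture and pixel pitch vary by phone):

```python
# Assume 550 nm green light and a typical bright f/1.8 phone lens.
wavelength_m = 550e-9
f_number = 1.8

# Airy disk diameter (first minimum to first minimum): 2.44 * lambda * N
airy_diameter_m = 2.44 * wavelength_m * f_number
print(airy_diameter_m * 1e6)  # ≈ 2.42 micrometers

# A 200 MP phone sensor has photosites in the ballpark of 0.6 um,
# so a single diffraction blur spot spans roughly 4 photosites across.
pixel_pitch_m = 0.6e-6
print(airy_diameter_m / pixel_pitch_m)  # ≈ 4.0
```

In other words, the optics smear light over a ~4x4 block of photosites before the sensor even gets a say, so those extra megapixels mostly resolve blur.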
Let me preface this by admitting that I’m not a camera expert. That being said, some of the claims made in this article don’t make sense to me.
A sensor effectively measures the sum of the light that hits each photosite over a period of time. Assuming a correct signal gain (ISO) is applied, this in effect becomes the arithmetic mean of the light that hits each photosite.
When you split each photosite into four, you have more options. If you simply take the average of the four photosites, the result should in theory be equivalent to the original sensor. However, you could also exploit certain known characteristics of the image as well as the noise to produce an arguably better image, such as by discarding outlier samples or by using a weighted average based on some expectation of the pixel value.
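Toy sketch of those two options (plain averaging versus discarding an outlier sample; the helper names and numbers are hypothetical, not any real sensor pipeline):

```python
import numpy as np

def bin_mean(quad):
    # Plain 4-to-1 binning: in expectation, equivalent to one big photosite.
    return float(np.mean(quad))

def bin_trimmed(quad):
    # Robust 4-to-1 binning: drop the lowest and highest of the four
    # samples, then average the middle two. One simple way to exploit
    # known noise behavior such as hot-pixel spikes.
    q = np.sort(np.asarray(quad, dtype=float))
    return float(q[1:3].mean())

# A hot-pixel spike skews the plain mean but not the trimmed one:
quad = [100, 102, 98, 900]
print(bin_mean(quad))     # 300.0
print(bin_trimmed(quad))  # 101.0
```

That’s the sense in which four small photosites give you more options than one big one, even though the plain average is statistically equivalent.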
Yes, that is one use case for pixel binning. Apple uses it to reduce noise in low light photos, but it can also be used to improve telephoto images where more data (from neighboring pixels) can be used to yield cleaner results.
But it’s all about the numbers, like the clock-speed race we used to/still have on PCs.
It is, and I hate it so much. Like, even a full-frame sensor would need some proper ISO magic at 200 MP.
Well, if you have 200 pixels, it means that you can zoom 200 times. It’s just basic physics.
Statistical photography, aka computational photography, aka supersampling: statistically bin together a number of smaller pixels to cut the noise, creating a picture of lower resolution than the sensor’s, but better quality.
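The statistics behind that is just the standard error of the mean: averaging n samples cuts the noise by √n. Quick check with toy numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noise left after binning n small pixels into one output pixel.
# Standard error of the mean: sigma / sqrt(n).
sigma = 16.0
for n in (1, 4, 16):
    samples = rng.normal(0.0, sigma, size=(200_000, n))
    print(n, samples.mean(axis=1).std())  # ≈ 16, 8, 4 respectively
```

So 16-to-1 binning only buys you a 4x noise reduction; the returns diminish fast, which is why the resolution you give up matters.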
Federation had a hiccup there, I’m only seeing your reply now
Supersampling is definitely something interesting, but up to what point? On a sensor this small, even something like 48 sampled to 12 already suffers to a degree where I would stop calling it useful.
Don’t get me wrong here, I can see the use first hand on my own phone. My second lens for night mode does 20MP to 5, and while the image is brighter than the main lens, it’s just as grainy, and a much lower output resolution too.
Now granted, my phone is a few years old now, and modern devices surely have better sensors, but no amount of trickery will make up for those physical limitations.
Haha! Look at those dumb “professional” photographers spending $15k USD on a single 600mm lens that only gives them like 15x zoom. My $1000 phone with 200x zoom will surely beat the crap outta those!
/S
If it were an actual zoom, at least. I was absolutely delighted when I first learned that some phones do in fact have lenses with a variable focal length.
Having that 2x zoom through actual optics instead of it being a cropped image is fantastic, gotta say. I really want my next phone to have that, so that zooming is actually useful.
It’s about the quality of pixels, not the volume.
Yeah, that’s like pairing up 200 earbuds and expecting it to sound like proper studio monitors.
The glass on the lens doesn’t even resolve that much detail. I doubt it’s even physically possible to make a piece of glass that perfect. There’s a reason people still buy medium format cameras over full frame: the glass elements can be larger, so small imperfections are a smaller fraction of the lens. This is also one reason bigger telescopes are always better. Diffraction also kicks in faster with smaller lenses; even if your glass were perfect, diffraction at the iris would still blur your image.