The Limits of Tech and Why Smartphones Don’t Operate Under the Same Rules as ‘Real’ Cameras
While smartphone cameras and standalone cameras (which are sometimes referred to as “real” cameras) are often marketed using the same terms, the way they each function is very different. In a video published to TED-Ed, electrical engineer Rachel Yang simplifies and explains why smartphones don’t operate by the same rules that standalone cameras do.
The explanation may be old news to those already well versed in how cameras work, but for the majority of people who use their phones to take pictures, the video above serves as a quick, simple way to break a complicated topic down into something more digestible. It is a good way to quickly explain why a modern mirrorless camera or DSLR may use the same marketing terms as the latest Apple iPhone, Google Pixel, or Samsung Galaxy device while achieving its end results very differently.
In both cases, the size of a sensor and the optics in front of it play the biggest role in image quality, which Yang explains can be tied to three factors: resolution, dynamic range, and noise.
“The first is resolution, or level of detail. Sensors with higher numbers of photosites offer better resolution, as the camera can collect more granular light data. Second and third are dynamic range and noise. Dynamic range is the span from light to dark within a single photo, and noise is the graininess that can come from poor lighting, long exposure times, or an overheating camera. Both these factors can be improved by using larger photosites, which can capture more light overall,” Yang explains succinctly.
“This wider range of data helps processors better measure the intensity of the incoming light, adding contrast and reducing noise. Simply put, to make better digital cameras, you need image sensors with higher numbers of larger photosites. Engineers know this.”
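Yang's point about photosite size follows directly from the physics of photon shot noise: photon arrivals are random, so a photosite that collects more light averages out more of that randomness. The sketch below is a rough numerical illustration of that idea (it is not from the video, and the photon counts are hypothetical numbers chosen only to make the effect visible):

```python
# Rough sketch (not from the video): why larger photosites produce cleaner
# images. Photon arrival is modeled as a Poisson process, so the
# signal-to-noise ratio scales with the square root of the photons collected.
import numpy as np

rng = np.random.default_rng(42)

def photosite_snr(mean_photons: float, samples: int = 100_000) -> float:
    """Signal-to-noise ratio for a photosite collecting `mean_photons` per exposure."""
    counts = rng.poisson(mean_photons, size=samples)
    return counts.mean() / counts.std()

# Hypothetical numbers: a small photosite catching ~100 photons versus a
# larger one catching ~1,600 photons in the same exposure.
print(f"small photosite SNR: {photosite_snr(100):.1f}")    # ~10
print(f"large photosite SNR: {photosite_snr(1_600):.1f}")  # ~40
```

Quadrupling the linear size of a photosite collects 16 times the light and, in this simple model, quadruples the signal-to-noise ratio, which is why "bigger photosites" keeps coming up in sensor marketing.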
That’s why standalone cameras have moved to larger sensors over the years. A Type 1 sensor (also confusingly marketed as “one-inch”) is typically the smallest that most photographers in the know will accept as viable, while medium format, the largest digital sensor format commercially available, sits at the other end.
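To put those format names in perspective, here is a quick back-of-the-envelope comparison of sensor areas. The dimensions are typical published figures, not exact values for any specific model, and the smartphone entry is an approximation for a current flagship main camera:

```python
# Approximate sensor dimensions in millimeters (typical published figures;
# exact sizes vary by manufacturer and model).
sensors_mm = {
    'flagship smartphone main sensor (~1/1.3", approx.)': (9.8, 7.3),
    'Type 1 / "one-inch"': (13.2, 8.8),
    "APS-C": (23.5, 15.6),
    "full frame": (36.0, 24.0),
    "medium format (e.g. ~44x33)": (44.0, 33.0),
}

for name, (width, height) in sensors_mm.items():
    print(f"{name}: {width * height:.0f} mm^2")
```

Even a generously sized smartphone sensor works with less than a tenth of the light-gathering area of a full-frame chip, which is the gap computational photography has to paper over.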
But smartphones don’t have the space for a sensor nearly that big, and most don’t even have space for a Type 1. While sensor-level image quality is plateauing for both types of cameras, the problems associated with that plateau would be felt far more dramatically were it not for one trump card smartphones carry that standalone cameras do not: extremely powerful processing.
“When you snap a picture on your phone, this pocket-computer starts running complex algorithms, which often begin by secretly taking a string of photos in rapid succession,” Yang says in a simple explanation of what computational photography is.
“The algorithms then manipulate these pictures, using math to perfectly align them and identify their best parts before combining the images into one high-quality photo. The end result is an image with less noise, wider dynamic range, and higher resolution than its sensors should be able to achieve.”
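Purely as an illustration of the principle Yang describes, and not any vendor's actual pipeline, the sketch below merges a simulated burst of noisy frames by averaging them. It assumes the frames are already perfectly aligned (a static scene), which sidesteps the alignment step real phones have to solve, and it uses a simple gradient image and Gaussian noise as stand-ins for a real scene and real sensor noise:

```python
# Minimal burst-merge sketch: average several noisy frames of the same scene.
# Real pipelines use tile-based alignment, robust merging, and tone mapping;
# this only demonstrates why stacking frames reduces noise.
import numpy as np

rng = np.random.default_rng(0)

def capture_frame(scene: np.ndarray, noise_sigma: float = 25.0) -> np.ndarray:
    """Simulate one noisy exposure of a static, already-aligned scene."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

def merge_burst(frames: list[np.ndarray]) -> np.ndarray:
    """Merge aligned frames by averaging; noise drops roughly with sqrt(N)."""
    return np.mean(frames, axis=0)

# Hypothetical scene: a smooth 256x256 gradient standing in for an image.
scene = np.tile(np.linspace(0, 255, 256), (256, 1))

burst = [capture_frame(scene) for _ in range(8)]
merged = merge_burst(burst)

print(f"single-frame noise: {np.std(burst[0] - scene):.1f}")
print(f"merged-frame noise: {np.std(merged - scene):.1f}")  # ~single / sqrt(8)
```

An eight-frame burst cuts the noise by roughly a factor of 2.8 in this toy model, which is the core reason a tiny sensor paired with fast processing can punch above its physical weight.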
What Yang doesn’t go into is any speculation on how these facts will apply to cameras of the future. As mentioned, sensor technology is plateauing. Through the mid-2010s, sensors improved significantly almost every year, each generation delivering markedly better performance than the one before it. But in 2025 that trend has slowed dramatically: new sensors now take several years to come to market and often ship with tradeoffs, such as resolution traded against noise or against readout speed, the latter of which is essential for effective computational photography.
As the base sensor silicon stops improving, smartphone makers have begun to rely more heavily on machine learning and AI. Samsung, for example, has dedicated its last two major smartphone launches to AI improvements, with very few hardware adjustments on the camera side. Google is doing the same. Apple bucked the trend this year by adjusting the physical size of the sensors in its front-facing selfie camera and its telephoto camera, but what about next year or the year after that? Physics remains the biggest hurdle for image improvements, and while some of the AI and machine learning features smartphone makers are adding are neat, they also seem to be plateauing in what they can offer.
The next few years of camera sensor development will be key in understanding just how much better a smartphone camera can get, if at all.