Mobile phone camera pixels have increased 364-fold over 18 years.
Image sensors have grown alongside the development of smartphones.
Samsung Strengthens Image Sensors with ISOCELL 
The first color photograph, taken by Maxwell, the father of electromagnetism

Since James Clerk Maxwell captured the first color photograph in 1861, camera technology has advanced steadily. With the arrival of digital cameras and mobile phones equipped with CMOS image sensors in the 2000s, that progress accelerated further.
High smartphone penetration leads to advancements in image sensors. 
18 Years Apart: Kyocera VP-210 and Huawei P20 Pro

Kyocera's VP-210, released in 1999, was the first mobile phone to feature a camera, with a resolution of 110,000 pixels. Huawei's flagship smartphone, the P20 Pro, released in 2018, boasts a triple rear camera setup with 40-megapixel, 20-megapixel, and 8-megapixel sensors, plus a 24-megapixel front-facing camera. In those 18 years, mobile phones gained three additional cameras, and the maximum pixel count increased 364-fold.
Smartphones, in particular, have contributed significantly to the advancement of camera technology. Two-thirds of the world's population uses smartphones, and every one of them is equipped with a camera. Amid the many competing priorities in smartphone development, including artificial intelligence and deep learning, compliance with the 5G standard, and the introduction of fingerprint recognition and flexible displays, the camera remains a key focus for numerous smartphone manufacturers.
Why Various Image Sensing Technologies Are Being Developed

As smartphones become increasingly standardized, manufacturers are focusing on camera technology to make it their brand's specialty. Samsung Electronics emphasizes high-speed capture and super slow motion, Apple targets the market with large pixels, and Huawei with triple cameras. But why do smartphone manufacturers keep developing other technologies instead of simply adding more camera pixels?

Samsung Electronics' CMOS image sensor 'ISOCELL Fast 2L9'

Due to physical limitations, camera pixel counts cannot be increased indefinitely. To fit more pixels on a 0.25-inch image sensor, each pixel must shrink, but smaller pixels cannot absorb enough light. Image sensor manufacturers have improved light absorption by moving the light-receiving element to the very top of the sensor, but even this approach has recently reached its limits, so they have turned to other technologies to supplement camera performance. Ultimately, the quality of a photo is determined by the quality of the pixels, not their number.
Above all, the biggest reason lies in how smartphone cameras are actually used. Most people photograph cafe scenes or food with their smartphones, often in dimly lit indoor environments. When light is scarce, the camera raises its light sensitivity (ISO) to maximize exposure. But the more sensitive each pixel becomes, the more it interferes with neighboring pixels, producing noise in the photo. Simply increasing the number of pixels cannot solve this problem.

Samsung Electronics' Galaxy S9

There are other reasons as well. Smartphone displays keep growing larger. Samsung Electronics' Galaxy S, released in 2010, had a 4-inch display with a screen-to-body (STB) ratio of 57.88%; the Galaxy S9, released in 2018, has a 5.8-inch display with an STB ratio of 84.36%. From the early days of smartphones until now, most consumers have preferred a bezel-less design, which requires maximizing the display area while keeping the body as small as possible. In addition, the recent rise in selfie-taking has driven improvements in front-facing camera performance. Smartphone manufacturers must therefore enlarge the display within a limited body size while also improving the front camera.

Apple's 'iPhone X' notch design

The iPhone X, which Apple unveiled in 2017, introduced the so-called notch design, a valley-like cutout at the top of the display that enlarges the display area while accommodating a facial recognition camera module. Early ridicule of the notch as an "M-shaped bald spot" was short-lived: nearly every smartphone manufacturer in the world except Samsung Electronics adopted it. The industry predicts that smartphones with narrower notches, punch-hole (active hole) displays, and pop-up cameras will be released en masse starting in 2019.

vivo 'NEX'

Chinese manufacturer vivo's NEX has already achieved an STB ratio of 91.24% with a pop-up camera, and OPPO's Find X reached 93.8% with a slide-out pop-up design.
Samsung Electronics' pixel enhancement strategy

Samsung Electronics and Sony lead the global compact camera module market. Samsung Electronics' CMOS image sensor technology rests on three keywords: ISOCELL, 3-Stack, and enhanced user experience (UX).

Schematic diagram of ISOCELL technology

ISOCELL forms an insulating layer between pixels, isolating adjacent pixels from each other. In other words, it builds a physical wall around each of the millions of pixels that make up the image sensor, preventing light from leaking into neighboring pixels. The result is enhanced color reproducibility even in ultra-small 1.0㎛ pixels, and the thickness of the image sensor module can be significantly reduced from the existing 6.5㎜ (with 1.12㎛ pixels) to 5㎜ (with 1.0㎛ pixels).

Tetracell technology grafted onto ISOCELL

Samsung Electronics has incorporated Tetracell technology into ISOCELL. Tetracell uses each pixel individually in bright light and merges four pixels into one in low light; a 40-megapixel sensor, for example, shoots at 40 megapixels during the day and at 10 megapixels at night. In other words, pixels are temporarily merged to enlarge their effective size. This pixel-merging technology for low-light environments is expected to be refined further, eventually merging 9 and even 16 pixels.
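The 4-in-1 merging described above can be sketched as simple 2×2 binning. This is an illustrative model only (the function name and data are hypothetical; real Tetracell binning happens on-sensor and must respect the color filter pattern):

```python
import numpy as np

def bin_2x2(raw):
    """Merge each 2x2 block of pixels into one larger 'virtual' pixel
    by summing the four values, so each output pixel gathers roughly
    four times the light at a quarter of the resolution."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# A toy 4x4 "sensor" of uniform values becomes a 2x2 output.
raw = np.ones((4, 4), dtype=np.uint16)
binned = bin_2x2(raw)
print(binned.shape)   # (2, 2)
print(binned[0, 0])   # 4 -> four pixels' worth of signal in one
```

The same reshape-and-sum pattern extends naturally to the 3×3 (9-pixel) and 4×4 (16-pixel) merging mentioned above.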
Additionally, there are two photodiodes beneath each ISOCELL pixel, which improves autofocus. The two photodiodes and the subject form a triangle, and the processor uses this triangulation to determine the subject's distance and focus automatically.
The 'ISOCELL Fast 2L3' image sensor structure, which stacks the sensor, logic, and DRAM in three layers

Samsung Electronics also employs 3-Stack technology, which combines its flagship DRAM with the camera module. Mobile APs face various limits in processing and utilizing the image data streaming from a CMOS image sensor. Samsung Electronics therefore stacked the image sensor, image signal processor, and DRAM in three layers using Through-Silicon Via (TSV) technology, so image data is buffered directly in the DRAM without traveling all the way to the mobile AP. The Galaxy S9's Super Slow-mo feature uses this technology, capturing every movement of a subject by storing massive amounts of frame data at high speed.
Because there are two of us, not just one

Dual cameras are a representative technology for overcoming the size limits of the image sensor. The first mobile phone to feature dual cameras was Samsung Electronics' SCH-B710, released in 2009. Now widespread, dual camera technology distributes and recombines the functions of a single camera, enabling capabilities that were previously unavailable. There are three main combinations.

Samsung Electronics' Galaxy Note 9 rear dual camera

One approach pairs a wide-angle camera with a narrow-angle (telephoto) camera, enabling zoom: the wide-angle camera handles close subjects, while the telephoto camera handles distant ones.
Another approach divides the work between a focusing camera and a defocusing camera. One camera produces a focused image, the other a defocused one, and the processor blends the two, creating a bokeh effect in which only the subject is sharp and the rest is blurred.
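The blending step can be sketched as a simple mask-based composite. This is a minimal illustration with hypothetical names and toy data, not the actual image pipeline, and it treats the subject mask as given (in practice the processor estimates it from the two images):

```python
import numpy as np

def bokeh_composite(sharp, blurred, subject_mask):
    """Keep the focused image's pixels where the mask marks the subject,
    and the defocused image's pixels everywhere else
    (simplified single-channel view)."""
    return np.where(subject_mask, sharp, blurred)

# Toy 2x2 example: the top row is the "subject", the bottom row background.
sharp = np.array([[10, 10], [10, 10]])
blurred = np.array([[3, 3], [3, 3]])
mask = np.array([[True, True], [False, False]])
result = bokeh_composite(sharp, blurred, mask)
print(result)  # top row stays sharp (10), bottom row goes soft (3)
```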
A third option pairs an RGB camera with a black-and-white camera: the RGB camera captures color, the black-and-white camera captures light, and the two images are combined to improve the quality of photos taken in low light.
Dual camera technology can evolve into triple camera and even quadruple camera technology.
These technologies allow users to easily create high-quality content fit for the 5G era: appetizing photos of food at a cozy restaurant posted to Instagram, or impressive athletic feats captured in slow motion.
Image sensor utilization beyond smartphones

"I would like to see camera modules placed as far and wide as possible. That's how our sales increase." At the 4th Advanced Sensor 2025 Forum held on the 11th, Lee Je-seok, Managing Director of Samsung Electronics' S.LSI Business Division, drew laughter from the audience with this remark in his keynote speech. The rapid advancement of camera modules, sparked by the smartphone industry, has opened up opportunities for their use in a wide variety of fields.
Multiple camera modules capturing images from various angles, combined with AI and deep learning, enable previously impossible functions. AI synthesizes the image information sent by the camera modules and processes it through deep learning, transforming two-dimensional images into three-dimensional information. With that information, a device can pinpoint the location of objects and track their movements.
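The geometric core of turning two 2D views into depth is stereo triangulation: two cameras a fixed distance apart see the same point shifted by some number of pixels (the disparity), and nearer objects shift more. A minimal sketch, with illustrative focal length, baseline, and disparity values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole-stereo relation: depth = focal * baseline / disparity.
    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal shift of the same point between the views"""
    if disparity_px <= 0:
        raise ValueError("point was not matched in both views")
    return focal_px * baseline_m / disparity_px

# e.g. a 1000 px focal length and a 1 cm baseline: a 20 px disparity
# places the point 0.5 m away; halving the disparity doubles the depth.
print(depth_from_disparity(1000.0, 0.01, 20.0))  # 0.5
print(depth_from_disparity(1000.0, 0.01, 10.0))  # 1.0
```

Deep-learning pipelines replace the hand-matched disparity with learned correspondence, but the underlying geometry is the same.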
The camera module aims to go beyond the human eye. The human eye is a high-performance image sensor unmatched by any existing one: it captures light at an estimated 576 million pixels, and the brain processes the incoming visual information in real time. Furthermore, the two eyes estimate an object's position from the minute differences in timing and position of the light each receives, and in low light the pupils dilate to discern objects.
Nevertheless, camera modules can see, judge, and process things the human eye cannot. They can detect ultraviolet and infrared light, and two or more modules can gather multifaceted information. For example, an image sensor mounted on an IoT device can analyze a user's movements and execute the function the user wants.

Basic use cases of camera modules mounted on automobiles

There is a growing movement to utilize image sensors in autonomous vehicles. In the United States, rear-view cameras, which assist drivers in parking, are mandatory for all new cars released starting in 2018. The Level 3 autonomous vehicles currently in development are equipped with twelve cameras, and as the technology reaches Level 5, even more will be added.