Almost everyone who buys an Android smartphone, regardless of the quality of its camera hardware, chooses to install a port of the Google Camera. This happens because no other OEM has developed image-processing and enhancement algorithms on par with those of the Mountain View giant. But how did Google reach standards that other brands strive so hard to match? In a recent article published on its official blog, the company explains how it trained the AI behind its Portrait Light feature.
Portrait Light is a special mode of the Google Camera: the company explains how this mode reached its current standard thanks to artificial intelligence
Google's Portrait Light feature can greatly improve even mediocre photos by changing the direction and intensity of the light. The tech giant launched the AI-based lighting feature in September for the Pixel 4a (5G) and Pixel 5 before bringing it to older devices. In its blog post, the company explains how it achieved this effect: it took a great deal of experimentation, along with volunteers who posed for the photos.
Google needed millions of portraits with and without lighting from different directions. To capture them, the company used a spherical lighting rig fitted with 64 cameras and 331 individually programmable LED light sources. It photographed 70 people with different skin tones, face shapes, genders and hairstyles; clothing and accessories were also essential to the study, so subjects were illuminated inside the sphere in varying ways. In essence, the tech giant relied on a sort of dome (as we see in the GIF above) inside which LEDs and camera sensors were positioned. Each camera fired in sync with a randomly chosen combination of lights.
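The value of a rig like this comes from a basic physical property: light is additive, so a photo of a subject under any lighting environment can be synthesized as a weighted sum of "one light at a time" captures. The following sketch is purely illustrative (it is not Google's code, and the array shapes and helper name are assumptions) and shows the idea with NumPy:

```python
import numpy as np

# Illustrative sketch, not Google's actual pipeline: because light is
# additive, a portrait under a new lighting environment can be built as a
# linear combination of per-LED captures from the spherical rig.
rng = np.random.default_rng(0)

NUM_LIGHTS = 331       # individually programmable LEDs in the sphere
HEIGHT, WIDTH = 4, 4   # tiny stand-in for a real photo resolution

# olat[i] stands for the photo taken with only LED i switched on
# (random placeholder pixel data here).
olat = rng.random((NUM_LIGHTS, HEIGHT, WIDTH, 3)).astype(np.float32)

def relight(olat_stack: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Synthesize a portrait under a new lighting environment.

    weights[i] is the brightness assigned to LED i; the result is the
    linear combination sum_i weights[i] * olat_stack[i].
    """
    return np.tensordot(weights, olat_stack, axes=1)

# Example: a soft key light formed by a handful of LEDs on one side.
weights = np.zeros(NUM_LIGHTS, dtype=np.float32)
weights[:8] = 0.5
relit = relight(olat, weights)
print(relit.shape)  # (4, 4, 3)
```

This superposition trick is what lets a finite set of captures stand in for "millions of portraits" under arbitrary lighting, which is exactly the kind of paired training data a relighting model needs.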