While smartphone manufacturers rely on dual camera systems to create a bokeh effect in Portrait mode, Google has proved that a secondary camera sensor is optional for recreating the effect. The Pixel 2 stands as a prime example of how the effect can be achieved through software and AI algorithms alone.
The Google Pixel 2 used an exclusive semantic image segmentation model called DeepLab-v3+, and Google is now releasing it to the open-source community under TensorFlow so developers can build newer and even better applications; it will also be useful in academic settings.
The release includes DeepLab-v3+ models built on top of a powerful convolutional neural network (CNN) architecture, the backbone of this technology, for the most accurate results.
This technology helps create a synthetic shallow depth-of-field effect in Portrait mode. In general, it relies on the software's learned segmentation model to assign a label such as dog, person, sky, or road to every pixel in an image. With these labels determined, the software is able to outline the object in focus and create a bokeh effect from a single-sensor camera system.
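To make the idea concrete, here is a minimal sketch of the second step: once a segmentation model has produced a per-pixel subject mask, the background can be blurred and composited back together. This is a hypothetical illustration in NumPy using a simple repeated box blur as a cheap stand-in for a true lens blur; the function name and parameters are assumptions, not Google's actual implementation.

```python
import numpy as np

def synthetic_bokeh(image, mask, blur_passes=3):
    """Composite a sharp subject over a blurred background.

    image: (H, W) float array (grayscale here for simplicity).
    mask:  (H, W) binary array, 1 where the per-pixel segmentation
           labelled the subject (e.g. "person"), 0 for background.
    blur_passes: repeated 3x3 box blurs approximate a Gaussian blur.
    """
    image = image.astype(float)
    blurred = image.copy()
    for _ in range(blur_passes):
        # Pad edges, then average each pixel's 3x3 neighbourhood.
        padded = np.pad(blurred, 1, mode="edge")
        blurred = sum(
            padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
            for dy in range(3) for dx in range(3)
        ) / 9.0
    # Keep subject pixels sharp; replace background with blurred pixels.
    return np.where(mask.astype(bool), image, blurred)
```

In a real pipeline the mask would come from the DeepLab-v3+ model's per-pixel labels, and the blur would vary with estimated depth rather than being uniform.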