At Google’s I/O developer conference in May, CEO Sundar Pichai announced a new product called Google Lens. The central idea behind Lens is to harness Google’s computer vision and AI technology to make your phone’s camera smart. The AI-driven visual search app focuses on augmented-reality features and real-time interaction. In a demo at I/O, Google showed how a user can point their camera at something and Lens will tell them what it is. For example, point your camera at a flower, and Lens can identify the species you are trying to shoot. The feature is first being added to Google Photos and Google Assistant and will be available on many devices. At its core, Lens relies on machine learning: it examines images viewed through your phone’s camera, or photos already saved on your phone, and uses what it recognizes in them to complete tasks.
Among other things, Lens can:
- Identify the species of a flower viewed through your phone’s camera;
- Automatically log you into a network by reading a complicated Wi-Fi password through your phone’s camera;
- Show reviews and other information about a restaurant or retail store when you point your phone’s camera at the storefront.
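To give a rough sense of the recognition idea behind features like the flower example — comparing an image’s extracted features against labeled references and picking the closest match — here is a toy Python sketch. The feature vectors and labels are invented stand-ins; real systems like Lens use deep neural networks over raw pixels, not hand-made three-number vectors.

```python
# Toy illustration of nearest-neighbor image recognition: match an
# image's feature vector to the closest labeled reference vector.
# All vectors and labels below are made up for demonstration.
import math

REFERENCE_FEATURES = {
    "daisy":     [0.9, 0.1, 0.2],
    "sunflower": [0.8, 0.7, 0.1],
    "rose":      [0.2, 0.9, 0.6],
}

def classify(features):
    """Return the label whose reference vector is nearest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE_FEATURES, key=lambda label: dist(features, REFERENCE_FEATURES[label]))

print(classify([0.85, 0.15, 0.25]))  # closest to the "daisy" reference
```

The sketch compresses the hard part (turning pixels into discriminative features) into a given, which is exactly the part Google’s machine-learning models handle.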
The company also showed how Google Lens will be integrated into Google Assistant during a Google Home demonstration. Through a new button in the Assistant app, users will be able to launch Lens and insert a photo into their conversation with the Assistant, which can then process the information the photo contains. Lens’s integration into Assistant will also enable translation of text seen through the camera.
Later, Pichai also showed how Google’s algorithms can clean up and enhance photos. For example, if you take a picture of your friend’s tennis game through a chain-link fence, Google can remove the fence from the photo automatically. Google can also enhance photos taken in low-light conditions to make them less pixelated and blurry.
The company didn’t officially announce release dates, but says Lens will be arriving soon.