Earlier this year, at its annual I/O developer conference, Google introduced a new AI milestone called the Multitask Unified Model (MUM). The technology can process information across a wide range of formats simultaneously, including text, images, and videos.
Google has now announced one of the ways it will put MUM to use: an update to Google Lens visual search.
Google Lens is Google’s image recognition technology, which lets you perform a variety of tasks like identifying plants, translating text in real time, getting help with math problems, and much more.
Soon, Google will upgrade Lens with the ability to add text to visual searches, allowing users to ask questions about what they see. For example, you could pull up an image of a shirt you like in Google Search, tap the Lens icon, and ask Google to find the same pattern on a pair of socks. You could type “socks with this pattern,” and in no time you would have your results.
It’s a more precise way to direct Google toward relevant results than text input alone. In another example, say a part of your bike has broken and you want to search Google for repair tips, but you don’t know what the piece is called. You could point Google Lens at the broken part and type “how to fix.” Instantly, you’ll be connected to the exact moment in a video that could help.
Google says the Lens update will roll out soon, noting that it’s undergoing “rigorous testing and evaluation.”
Alfred Gitonga is a passionate tech news writer with a deep interest in smartphones and related technologies. He is a staff writer at Mobitrends.co.ke.