The face is the first thing we notice when we need to recognize someone. The process happens so quickly that there is virtually no observable delay. When the ability to recognize individuals by their face is implemented in electronic devices, it is called facial recognition or face biometrics. Biometrics is fundamentally a pattern recognition technology that leverages computing, mathematical and statistical techniques to map human physiological or behavioural patterns. These mathematical and statistical algorithms run behind the scenes in computer programs to analyse a biometric pattern.
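At its core, the "pattern matching" a biometric system performs often boils down to comparing feature vectors extracted from a sample against an enrolled template. The following is a minimal sketch of that idea using cosine similarity; the vectors, the `is_match` helper and the threshold are illustrative assumptions, not any vendor's actual algorithm (real systems use vectors with hundreds of dimensions and carefully tuned thresholds).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(enrolled, probe, threshold=0.9):
    """Accept the probe sample if it is close enough to the enrolled template."""
    return cosine_similarity(enrolled, probe) >= threshold

# Toy 4-dimensional "templates" for illustration only.
enrolled = [0.6, 0.1, 0.8, 0.2]
same_person = [0.58, 0.12, 0.79, 0.22]
other_person = [0.1, 0.9, 0.05, 0.7]

print(is_match(enrolled, same_person))   # True
print(is_match(enrolled, other_person))  # False
```

The threshold embodies the trade-off every biometric system faces: lower it and impostors slip through; raise it and legitimate users get rejected.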
Facial recognition has recently grown into a serious business, with tech giants like Google and Facebook pushing the technology. Many banking and financial service providers also use face biometrics for banking and for authenticating transactions.

Why has facial recognition become important lately?
Looks can be deceiving, and that adage is especially true of facial recognition. Facial recognition has long been considered a less secure way to identify people by their face alone: twins can resemble each other, and even completely unrelated people can share similar facial features. Despite these shortcomings, facial recognition is getting attention from top tech organizations and even from the banking and financial services industry, and there are profound reasons behind its rising popularity. The face is the most natural way for human beings to identify someone. It is more exposed than any other biometric modality, and unlike other recognition methods it does not require any special process to acquire a sample. People capture and upload their digital photographs to social networks all the time.
When performing secure transactions such as banking and financial transactions, facial recognition is used alongside behavioural biometrics to make sure the user is who she says she is. Mass surveillance has also become a necessity as terrorist incidents rise, and governments are funding tech firms to develop more efficient facial recognition for public applications. These applications are deployed at locations where large numbers of people pass through and identifying them individually is not feasible. So from image identification on social networks to financial transactions and mass surveillance, applications of facial recognition are on their way to ubiquity.
Natural way of identification
The human ability to recognize faces, voices and other human characteristics is essentially the ability to recognize patterns. Be it a facial pattern, a voice pattern or a gait, once the human brain has associated a pattern with a specific individual, it instantly recalls the identity whenever it encounters that pattern again. That is how we are able to recognize people by their voice or the way they walk, without even looking at their face. Facial recognition is thus a natural way for humans to identify one another; implementing the same ability in computing or electronic devices, however, can be tricky. While human beings can tell the difference between a photograph and a real person, teaching computers that difference takes the technological complexity to the next level.
There have been many instances where facial recognition systems have been fooled with photographs, videos or masks. Newer face recognition methods make use of a 3D map of the human face, which is considered more secure because it cannot be fooled with flat images or videos. Even this approach, however, has been challenged by security researchers using 3D masks. Apple used 3D facial recognition instead of fingerprint authentication in its iPhone X smartphone, which uses infrared light to create a 3D map of a user’s facial structure; researchers at the Vietnamese cybersecurity firm Bkav showed that this approach, too, could be fooled.
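The intuition behind why a 3D map defeats flat photographs can be sketched in a few lines: a printed picture has almost no depth relief, while a real face has tens of millimetres of it (nose tip versus cheeks). This toy check, with made-up depth values and an illustrative threshold, is only a sketch of the principle, not how any real liveness detector works:

```python
def depth_range(depth_map):
    """Spread of depth values across a face scan, in millimetres."""
    values = [d for row in depth_map for d in row]
    return max(values) - min(values)

def looks_three_dimensional(depth_map, min_relief_mm=20):
    """A printed photo is nearly flat; a real face has substantial relief.
    The 20 mm threshold is an illustrative assumption."""
    return depth_range(depth_map) >= min_relief_mm

# Toy 3x3 depth maps: distance from sensor in mm.
flat_photo = [[400, 401, 400], [401, 400, 401], [400, 401, 400]]
real_face = [[420, 405, 420], [410, 380, 410], [425, 415, 425]]

print(looks_three_dimensional(flat_photo))  # False
print(looks_three_dimensional(real_face))   # True
```

A 3D-printed mask defeats exactly this kind of check, which is why researchers like Bkav's were able to challenge depth-based systems as well.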

AI and machine learning meet facial and object identification
Now tech firms are pushing the boundaries of their facial recognition capabilities toward identifying objects, places and animals in images. Google has integrated its image analysis capabilities into several of its products, which leverage the company’s expertise in machine learning and cloud processing to recognize people, places, objects and animals. Launched in May 2015, Google Photos is one such product, equipped with AI-powered recognition and image analysis capabilities. It is a photo sharing and storage service available as a browser and smartphone app. Google Photos may look like just another photo sharing app, but there is a lot going on under the hood. It can analyse images and automatically organize them into categories such as people, places and things, depending on what each image shows. It can group images of the same person and let users search for a particular person by clicking an icon showing her/his face.
The app automatically creates these “face icons” by extracting faces from available images. If your family albums include pets, you can type “dog” or “cat” and it will show the images containing the respective pet. You can also type the names of objects, places, monuments, etc. This level of image analysis is made possible by leveraging facial recognition technology and information gleaned from geo-tagged photos. But what is more impressive is that Google Photos can even recognise things, including buildings, monuments, food, flowers and even your cats.
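Grouping photos of the same person, as described above, is typically done by clustering face embeddings: numeric vectors where similar faces sit close together. Here is a minimal greedy-clustering sketch under that assumption; the `group_faces` helper, the toy 2-dimensional embeddings and the distance threshold are all illustrative, not Google's actual pipeline:

```python
import math

def distance(a, b):
    """Euclidean distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def group_faces(embeddings, threshold=0.5):
    """Greedy clustering: each face joins the first group whose representative
    (first member) is within `threshold`, otherwise it starts a new group."""
    groups = []
    for emb in embeddings:
        for group in groups:
            if distance(group[0], emb) < threshold:
                group.append(emb)
                break
        else:
            groups.append([emb])
    return groups

faces = [
    [0.10, 0.20],  # person A
    [0.12, 0.19],  # person A again, slightly different shot
    [0.90, 0.80],  # person B
]
print(len(group_faces(faces)))  # 2 groups
```

Each resulting group corresponds to one "face icon" in the app's people view; production systems use far more robust clustering, but the close-embeddings-belong-together idea is the same.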
How does it identify objects and animals?
Google’s software analyses each photo through a stack of distinct layers, with each layer extracting different information about the photograph in question.
“If the input is a picture of a cat…, then the first layer spots simple features such as lines or colors. The network then passes them up to the next layer, which might pick out eyes or ears. Each level gets more sophisticated, until the network detects and links enough indicators that it can conclude, “Yes, this is a cat.” The 22 layers are enough to tell the difference between, say, “wrestle” and “hug” — two abstract concepts with minute visual differences that might confuse a network with fewer layers.”
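The "first layer spots simple features such as lines" step quoted above can be made concrete with a single convolution: sliding a small filter over the image and keeping strong responses. This sketch hand-codes a vertical-edge kernel of the kind an early network layer typically learns; the toy image and kernel are illustrative assumptions, and a real network stacks many learned layers of this operation:

```python
def convolve(image, kernel):
    """Slide a 3x3 kernel over a 2D image (valid padding), ReLU the result."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(3) for dj in range(3))
            row.append(max(s, 0))  # ReLU: keep positive responses only
        out.append(row)
    return out

# A vertical-edge kernel: the kind of "line detector" an early layer learns.
vertical_edge = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]

# Toy 4x4 image: dark on the left, bright on the right.
image = [[0, 0, 9, 9]] * 4

features = convolve(image, vertical_edge)
print(features)  # strong responses along the dark/bright boundary
```

Subsequent layers run the same operation over these feature maps rather than raw pixels, which is how responses to lines combine into responses to eyes, ears and, eventually, whole cats.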
Google Lens is another product built on Google’s image analysis capability. Announced at Google’s I/O conference in 2017, it can identify an object simply by pointing the device camera at it. It can scan and recognize famous landmarks and paintings, translate foreign-language text, scan barcodes and QR codes, and more. It can connect to a Wi-Fi network when the camera is pointed at a Wi-Fi label containing the network name and password, and it can search for relevant information about objects such as books, album covers, etc.
For all its promise, AI and machine learning are not free from errors, and since the field is still in its infancy, systems can fail in embarrassing ways. In one such incident, Google Photos labelled black people as ‘gorillas’, and Google had to apologize for the error.
Reverse image search: image as a search query
We have all become addicted to Google search, so addicted that we do not want to strain our brains when Google is within reach. Until 2000, Google search was limited to text queries that returned simple pages of text with links. The search giant introduced image search in 2001 and had indexed 250 million images by the end of that year, a figure that reached a whopping 10 billion by 2010. The Search by Image feature, added in June 2011, allowed users to perform reverse image search.
Reverse image search is a search engine technology that takes an image file as the input query and returns results related to that image. Users can upload an image, enter its URL, or simply drag and drop the image into the image search bar to see related results. Reverse image search uses sophisticated algorithms that create a mathematical model of the submitted picture and compare it with billions of others in Google’s database. Google is the most used, but not the only, search engine offering reverse image search; TinEye, Bing and Yandex, among others, offer it too.
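The "mathematical model of the submitted picture" can be illustrated with a perceptual hash: a tiny fingerprint that stays stable under small edits, so near-duplicates can be found by comparing fingerprints instead of pixels. Below is a sketch of the well-known average-hash technique on toy 2x2 grayscale images; real engines use far richer models, and the sample pixel values are made up:

```python
def average_hash(pixels):
    """Average hash: 1 bit per pixel, set when brighter than the mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means similar images."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [220, 30]]
slightly_edited = [[12, 198], [225, 28]]   # minor brightness tweaks
unrelated = [[200, 10], [30, 220]]          # different picture entirely

h = average_hash(original)
print(hamming_distance(h, average_hash(slightly_edited)))  # 0: near-duplicate
print(hamming_distance(h, average_hash(unrelated)))        # 4: different image
```

Because the fingerprint is tiny, a search engine can index billions of them and answer a reverse image query by finding the stored hashes nearest to the query's hash.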
The competition is heating up
Google’s expertise in AI and machine learning has been evident across its recent products, whatever the genre. However, Google is not the only company pushing facial search, object identification and image analysis. Facebook has already implemented facial recognition, with the capability to tag people wherever their photos show up. Facebook’s DeepFace, a deep learning facial recognition system, can identify human faces in digital photographs and is claimed to be 97% accurate, which is even more accurate than the FBI’s Next Generation Identification system.
Samsung has built a similar implementation to Google Lens in its app Bixby Vision, which can identify objects and landmarks, translate foreign-language text, and fetch more information just by pointing the device camera at an object. It can also analyse images already downloaded or captured by the device camera. Many smaller organizations are also working on facial recognition and visual analysis technology.
Conclusions
The human face plays an important part in social interaction and human-to-human recognition, and it is now set to play an important role in machine-to-human recognition as well. Being an easy-to-acquire modality, face recognition is one of the technologies being pushed hardest by technology firms. Google is integrating AI and machine learning into its image analysis capability, which can recognize not only human faces but also objects, monuments, places and animals.
By giving its applications the ability to recognize “things”, Google has done something yet to be implemented at this scale by others. No wonder these features have made Google’s services popular with millions of users. Google Photos, one of the apps that leverage AI and machine learning to identify objects, places and animals, can impressively tell the difference between dogs, cats and any number of other animals in the images it processes. Facial recognition of humans is one thing; applying such software across varying species is an entirely new challenge, and Google has achieved it to a significant degree.