Facial recognition is a method of identifying a human face by using biometric imaging technology to map facial features and compare them against a database of known faces. The technology is widely used for identity verification as a security measure and in social media, such as Snapchat filters. To build such a system, the algorithms must be fed hundreds of thousands of images, mostly scraped from the internet, categorized by age, gender, skin tone, and other metrics so the AI can better identify faces and improve. Millions of people's pictures are used without consent to power this technology.

In January, IBM released to researchers a collection of photos taken from Flickr, annotated with details including facial geometry and skin tone, as a training set intended to reduce bias in facial recognition algorithms. Yet none of the people pictured had any idea their images were being used. Greg Peverill-Conti, a Boston-based public relations executive, said it seems a little sketchy that IBM can use these pictures without any consent. IBM claims its dataset is designed to help academic researchers improve the technology so they can develop fairer facial recognition systems that accurately identify people of all races, ages, and genders. Yet legal experts and civil rights advocates are concerned about such AI training and facial recognition improvement, particularly for minorities who could be profiled and targeted, such as immigrants or participants in political protests, if the technology is used by government or law enforcement agencies. Thus, is facial recognition in the hands of law enforcement good or bad?
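The matching step described above, mapping facial features to a numeric template and comparing it against a database of known faces, can be sketched as a nearest-neighbor search. This is a minimal illustration only: the vectors, names, and threshold below are hypothetical stand-ins for the high-dimensional embeddings a real system would produce with a trained neural network.

```python
import math

# Hypothetical face "encodings": in a real system these would be
# high-dimensional embeddings extracted from face images by a model.
KNOWN_FACES = {
    "alice": [0.1, 0.8, 0.3],
    "bob": [0.9, 0.2, 0.5],
}

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, known=KNOWN_FACES, threshold=0.6):
    """Return the closest known identity, or None if no face is near enough."""
    name, dist = min(
        ((n, euclidean(probe, v)) for n, v in known.items()),
        key=lambda t: t[1],
    )
    return name if dist <= threshold else None

print(identify([0.12, 0.79, 0.31]))  # a probe very close to "alice"
print(identify([0.5, 0.5, 0.9]))     # no known face within the threshold
```

The threshold is the critical design choice: set too loose, the system misidentifies strangers as known people (the false-match risk the civil rights advocates worry about); set too strict, it fails to recognize anyone.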
Source 1: https://www.nbcnews.com/tech/internet/facial-recognition-s-dirty-little-secret-millions-online-photos-scraped-n981921