FACIAL RECOGNITION SYSTEM PRIf, ITf, ISf 2023 02 14
Questions:
Complete the task in groups.
1. Watch the video and read the text. Prepare a brief summary of all the main
aspects mentioned.
https://www.wired.com/story/hard-ban-facial-recognition-tech-iphone/
The article discusses the implications of Apple's decision to ban facial recognition
technology on its new iPhone lineup, citing privacy concerns as the main driver behind the
move. It highlights the potential for misuse of facial recognition technology by governments,
law enforcement agencies, and tech companies, citing several cases of abuse and bias in the
past. The article also discusses the limitations of facial recognition technology and the
potential for false positives and false negatives, raising concerns about its accuracy and reliability.
Overall, the article emphasizes the need for greater regulation and oversight of facial
recognition technology to ensure that it is used responsibly and ethically.
2. Try to answer the questions and include your answers in the summary part.
What are the pluses and minuses of Facial Recognition System?
The pluses of facial recognition systems are their ability to identify individuals in crowded
public places without their cooperation, but the minuses include their lower reliability and
efficiency compared to other biometric techniques, their high false acceptance and rejection
rates, and their sensitivity to factors such as facial expressions, viewing angles, and image
resolution. Data privacy is also a major concern with the storage of biometric data.
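The false acceptance and false rejection rates mentioned above trade off against each other through the matcher's decision threshold. The sketch below (with entirely hypothetical similarity scores, not taken from any real system) shows how the two rates are computed:

```python
# Illustrative sketch: computing false acceptance rate (FAR) and false
# rejection rate (FRR) for a face matcher at a given similarity threshold.
# All scores below are made up for demonstration.
def far_frr(genuine_scores, impostor_scores, threshold):
    """Return (FAR, FRR) at a similarity threshold.

    FAR: fraction of different-person comparisons wrongly accepted.
    FRR: fraction of same-person comparisons wrongly rejected.
    """
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.91, 0.88, 0.75, 0.95, 0.60]   # same-person comparison scores
impostor = [0.30, 0.55, 0.72, 0.20, 0.45]  # different-person comparison scores

far, frr = far_frr(genuine, impostor, threshold=0.7)
print(far, frr)
```

Raising the threshold lowers FAR but raises FRR, which is why the two rates are always reported together.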
Which idea is closer to you? Present your arguments.
o A constant surveillance state is not that bad if we are more secure with it.
o It is not worth it to sacrifice personal freedom over constant surveillance and
security.
o “If you have nothing to hide, you have nothing to fear” is completely true:
everything should be public, how else could we achieve true security?
Application
Social media platforms have adopted facial recognition capabilities to diversify their
functionalities to attract a wider user base amidst stiff competition from different
applications.
Founded in 2013, Looksery went on to raise money for its face modification app on
Kickstarter. After successful crowdfunding, Looksery launched in October 2014. The
application allows video chat with others through a special filter for faces that modifies the
look of users. While there are image-augmenting applications such as FaceTune and
Perfect365, they are limited to static images, whereas Looksery applied augmented reality to
live video. In late 2015, Snapchat purchased Looksery, which would then become its
landmark lenses function.
Snapchat's animated lenses, which used facial recognition technology, revolutionized and
redefined the selfie by allowing users to add filters that change the way they look. The
selection of filters changes every day; examples include one that makes users look like
an old and wrinkled version of themselves, one that airbrushes their skin, and one that places
a virtual flower crown on top of their head. The dog filter, the most popular, helped propel
the continued success of Snapchat, with celebrities such as Gigi Hadid and Kim Kardashian
regularly posting videos of themselves wearing it.
DeepFace is a deep learning facial recognition system created by a research group
at Facebook. It identifies human faces in digital images. It employs a nine-layer neural
net with over 120 million connection weights, and was trained on four million images
uploaded by Facebook users. The system is said to be 97% accurate, compared to 85% for the
FBI's Next Generation Identification system. One of the creators of the software, Yaniv
Taigman, came to Facebook via their acquisition of Face.com.
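Deep-learning recognizers like DeepFace verify faces by mapping each image to a fixed-length embedding vector with a neural network and then comparing the vectors. The network itself is beyond the scope of this worksheet, but the comparison step can be sketched as follows (the embeddings and the 0.8 threshold are made-up stand-ins, not DeepFace's actual values):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.8):
    """Declare a match when the embeddings are sufficiently similar."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Made-up embeddings standing in for neural-network outputs:
emb_photo_1 = [0.9, 0.1, 0.4]
emb_photo_2 = [0.8, 0.2, 0.5]   # same person, slightly different pose
emb_photo_3 = [0.1, 0.9, -0.3]  # different person
```

Here `same_person(emb_photo_1, emb_photo_2)` returns True while `same_person(emb_photo_1, emb_photo_3)` returns False; accuracy figures like the 97% quoted above come from running exactly this kind of comparison over millions of labeled pairs.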
ID verification
An emerging use of facial recognition is in ID verification services. Many
companies and others are working in the market now to provide these services to banks,
ICOs, and other e-businesses.
Face ID
Apple introduced Face ID on the flagship iPhone X as a biometric authentication successor to
the Touch ID, a fingerprint based system. Face ID has a facial recognition sensor that consists
of two parts: a "Romeo" module that projects more than 30,000 infrared dots onto the user's
face, and a "Juliet" module that reads the pattern. The pattern is sent to a local "Secure
Enclave" in the device's central processing unit (CPU) to confirm a match with the phone
owner's face. The facial pattern is not accessible by Apple. The system will not work with
eyes closed, to prevent unauthorized access.
The technology learns from changes in a user's appearance, and therefore works with hats,
scarves, glasses, many sunglasses, beards and makeup.
It also works in the dark. This is done by using a "Flood Illuminator", which is a dedicated
infrared flash that throws out invisible infrared light onto the user's face to properly read the
30,000 facial points.
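Apple has not published its matching algorithm, but conceptually the 30,000 projected dots form a depth pattern that is compared against the enrolled template inside the Secure Enclave. A deliberately simplified sketch of such a comparison (the numbers, tolerance, and method are all assumptions for illustration, not Apple's actual scheme):

```python
# Hypothetical sketch: treat the captured dot pattern as a list of depth
# values and compare it to the enrolled template by mean absolute
# difference. Real systems use far more sophisticated matching.
def depth_match(template, capture, tolerance=0.05):
    """Return True if the captured depth pattern is close enough to the template."""
    if len(template) != len(capture):
        return False
    mad = sum(abs(t - c) for t, c in zip(template, capture)) / len(template)
    return mad <= tolerance

enrolled = [0.10, 0.20, 0.30, 0.40]        # stored template (toy data)
owner_scan = [0.11, 0.19, 0.31, 0.42]      # small deviations: should match
stranger_scan = [0.30, 0.05, 0.50, 0.10]   # large deviations: should not
```

With these toy values, `depth_match(enrolled, owner_scan)` is True and `depth_match(enrolled, stranger_scan)` is False; the point is that matching tolerates small variations while rejecting different faces.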
Controversies
Privacy violations
Civil rights organizations and privacy campaigners such as the Electronic Frontier
Foundation, Big Brother Watch and the ACLU express concern that privacy is being
compromised by the use of surveillance technologies. Some fear that it could lead to a
“total surveillance society,” with the government and other authorities having the ability to
know the whereabouts and activities of all citizens around the clock. This knowledge has
been, is being, and could continue to be deployed to prevent the lawful exercise of rights of
citizens to criticize those in office, specific government policies or corporate practices. Many
centralized power structures with such surveillance capabilities have abused their privileged
access to maintain control of the political and economic apparatus, and to curtail populist
reforms.
Face recognition can be used not just to identify an individual, but also to unearth other
personal data associated with an individual – such as other photos featuring the individual,
blog posts, social networking profiles, Internet behaviour, travel patterns, etc. – all through
facial features alone. Concerns have been raised over who would have access to the
knowledge of one's whereabouts, and of the people with them, at any given time. Moreover,
individuals have limited ability to avoid or thwart face recognition tracking unless they hide
their faces. This fundamentally changes the dynamic of day-to-day privacy by enabling any
marketer, government agency, or random stranger to secretly collect the identities and
associated personal information of any individual captured by the face recognition system.
Consumers may not understand or be aware of what their data is being used for, which denies
them the ability to consent to how their personal information gets shared.
Face recognition was used in Russia to harass women allegedly involved in online
pornography. In Russia, an app called 'FindFace' can identify faces with about 70%
accuracy using photos from the social network VK. The app would not work in most other
countries, whose social media platforms do not store photos in the way VK does.
In July 2012, a hearing was held before the Subcommittee on Privacy, Technology and the
Law of the Committee on the Judiciary, United States Senate, to address issues surrounding
what face recognition technology means for privacy and civil liberties.
In 2014, the National Telecommunications and Information Administration (NTIA) began a
multi-stakeholder process to engage privacy advocates and industry representatives to
establish guidelines regarding the use of face recognition technology by private companies.
In June 2015, privacy advocates left the bargaining table over what they felt was an impasse
based on the industry representatives being unwilling to agree to consent requirements for the
collection of face recognition data. The NTIA and industry representatives continued without
the privacy representatives, and draft rules are expected to be presented in the spring of 2016.
In July 2015, the United States Government Accountability Office conducted a Report to the
Ranking Member, Subcommittee on Privacy, Technology and the Law, Committee on the
Judiciary, U.S. Senate. The report discussed facial recognition technology's commercial uses,
privacy issues, and the applicable federal law. It states that issues concerning facial
recognition technology had been raised before, and that they underscore the need for updated
federal privacy laws that keep pace with the degree and impact of advanced technologies. Also,
some industry, government, and private organizations are in the process of developing, or
have developed, "voluntary privacy guidelines". These guidelines vary between the groups,
but overall aim to gain consent and inform citizens of the intended use of facial recognition
technology. This helps counteract the privacy issues that arise when citizens are unaware of
how their personal data is being used, which the report identifies as a prevalent issue.
The largest concern with the development of biometric technology, and facial recognition
more specifically, is privacy. The rise in facial recognition technologies has led
people to be concerned that large companies, such as Google or Apple, or even Government
agencies will be using it for mass surveillance of the public. Regardless of whether or not
they have committed a crime, people generally do not wish to have their every action
watched or tracked. People tend to believe that, since we live in a free society, we should be
able to go out in public without the fear of being identified and surveilled. People worry that
with the rising prevalence of facial recognition, they will begin to lose their anonymity.
Facebook DeepFace
Social media web sites such as Facebook have very large numbers of photographs of people,
annotated with names. This represents a database which may be abused by governments for
face recognition purposes. Facebook's DeepFace has become the subject of several class
action lawsuits under the Biometric Information Privacy Act, with claims alleging that
Facebook is collecting and storing face recognition data of its users without obtaining
informed consent, in direct violation of the Biometric Information Privacy Act. The most
recent case was dismissed in January 2016 because the court lacked jurisdiction. Therefore, it
is still unclear if the Biometric Information Privacy Act will be effective in protecting
biometric data privacy rights.
In December 2017, Facebook rolled out a new feature that notifies a user when someone
uploads a photo that includes what Facebook thinks is their face, even if they are not tagged.
Facebook has attempted to frame the new functionality in a positive light, amidst prior
backlashes. Facebook's head of privacy, Rob Sherman, addressed this new feature as one that
gives people more control over their photos online. “We’ve thought about this as a really
empowering feature,” he says. “There may be photos that exist that you don’t know about.”
Imperfect technology in law enforcement
All over the world, law enforcement agencies have begun using facial recognition software to
aid in the identifying of criminals. For example, the Chinese police force were able to
identify twenty-five wanted suspects using facial recognition equipment at the Qingdao
International Beer Festival, one of whom had been on the run for 10 years. The equipment
works by recording a 15-second video clip and taking multiple snapshots of the subject. That
data is compared and analysed with images from the police department's database and within
20 minutes, the subject can be identified with a 98.1% accuracy.
It is still contested as to whether or not facial recognition technology works less accurately on
people of colour. One study by Joy Buolamwini (MIT Media Lab) and Timnit Gebru
(Microsoft Research) found that the error rate for gender recognition for women of colour
within three commercial facial recognition systems ranged from 23.8% to 36%, whereas for
lighter-skinned men it was between 0.0 and 1.6%. Overall accuracy rates for identifying men
(91.9%) were higher than for women (79.4%), and none of the systems accommodated a non-
binary understanding of gender. However, another study showed that several commercial
facial recognition software sold to law enforcement offices around the country had a lower
false non-match rate for black people than for white people.
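The disaggregated error rates reported in audits like Buolamwini and Gebru's come from evaluating a system separately on each demographic group. A minimal sketch of that bookkeeping, using made-up records rather than the study's actual data:

```python
from collections import defaultdict

# Each record is (group, predicted_label, true_label) — all toy data.
def error_rates_by_group(records):
    """Return the classification error rate for each group separately."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, true in records:
        totals[group] += 1
        if predicted != true:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

records = [
    ("group_a", "m", "f"), ("group_a", "f", "f"),
    ("group_b", "m", "m"), ("group_b", "m", "m"),
]
rates = error_rates_by_group(records)
print(rates)
```

A single overall accuracy figure can hide exactly this kind of gap, which is why the study's per-group breakdown was so significant.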
Experts fear that the new technology may actually be hurting the communities the police
claim they are trying to protect. Facial recognition is considered an imperfect biometric;
Georgetown University researcher Clare Garvie concluded in a study that "there's no
consensus in the scientific community that it provides a positive identification of somebody."
Given such large margins of error, both legal advocates and facial recognition software
companies say that the technology should supply only a portion of a case, never evidence
that can on its own lead to an arrest.
The lack of regulations requiring facial recognition companies to test for racial bias is a
significant flaw in the technology's adoption by law enforcement. Cyber Extruder, a company
that markets itself to law enforcement, said that it had not performed testing or research on
bias in its software. Cyber Extruder did note that
some skin colours are more difficult for the software to recognize with current limitations of
the technology. “Just as individuals with very dark skin are hard to identify with high
significance via facial recognition, individuals with very pale skin are the same,” said Blake
Senftner, a senior software engineer at Cyber Extruder.
In 2018, the Scottish government created a code of practice dealing with privacy issues,
which won the praise of the Open Rights Group.
The facial recognition technology market was worth $4.6bn in 2019 and is set to grow by
another 25% over the next nine years.
In May 2019, the San Francisco Board of Supervisors voted to prohibit police and other
government agencies from using facial recognition technology, making San Francisco the
first U.S. city to ban this practice.