China on Sunday put into effect new regulations that require Chinese telecom carriers to scan the faces of users registering new mobile phone services, a move the government says is aimed at cracking down on fraud.
The rules, first announced in September, mean millions more people will come under the purview of facial recognition technology in China.
The Ministry of Industry and Information Technology (MIIT) did not say which companies will provide the telecoms providers with these services but China is home to some of the world’s leaders in facial recognition software, including Megvii and SenseTime.
What are the new rules for Chinese mobile phone users?
China’s telecom operators must now use facial recognition technology and other means to verify the identity of people opening new mobile phone accounts.
China’s three largest carriers are state-owned China Telecom, China Unicom and China Mobile. It was unclear how the law applies to existing mobile accounts.
Where else in China has the technology been used?
Supermarkets, subway systems and airports already use facial recognition technology. Alibaba gives customers the option to pay using their face at its Hema supermarket chain and runs a hotel in its headquarters city of Hangzhou where guests can scan their face with their smartphones for advance check-in.
The metro systems of some major Chinese cities have announced they will use the technology, with government-owned newspaper China Daily saying Beijing will use it to “classify passengers” to allow for “different security check measures”.
In July, the Xinhua news agency said Beijing had, or was in the process of, installing facial recognition systems at the entrances of 59 public rental housing communities.
Reuters reported last year on its wide use in the western Xinjiang region, an area wracked by separatist violence and a crackdown by security forces that has seen Uighur Muslims and members of other ethnic groups detained in camps. China says the camps are re-education and training centers.
Chinese police are also known to have high tech surveillance gadgets such as glasses with built-in facial recognition.
How has its introduction been viewed by the Chinese public?
Surveillance technologies have encountered little public opposition, but there has been some mostly anonymous debate on social media platforms like Weibo.
Some users argue that it is needed to combat fraud, such as scam calls, but others have voiced concerns about its implications for personal data, privacy and ethics.
One rare case of opposition has involved a university lecturer, who sued a wildlife park in Hangzhou after it replaced its fingerprint-based entry system with one that used facial recognition technology.
The Southern Metropolis Daily newspaper, which reported on the case in November, said he was worried that the system might result in identity theft and asked for a refund. He sued after the park denied his request.
Has China exported any of this tech overseas?
Countries from Myanmar to Argentina have purchased surveillance technology from the likes of China’s ZTE and Huawei as part of plans to create “smart cities”.
There has been U.S. blowback over the role Chinese firms like Megvii and SenseTime have played in Beijing's treatment of Muslim minorities. The United States expanded its trade blacklist in October to include these firms, among others, barring them from buying components from U.S. companies without U.S. government approval.
What is next?
The technology is currently being tested in areas such as street crossings to catch jaywalkers, and China has announced that it will eventually expand its use to other areas, such as student registration for the National College Entrance Examination.
There have also been calls for greater regulatory oversight.
The People’s Daily on Saturday called for an investigation, saying one of its reporters had found facial data for sale on the Internet, with a package of 5,000 faces costing just 10 yuan ($1.42).
Last week, China’s Internet regulator announced new rules governing the use of deepfake technology, which uses AI to create hyper-realistic videos where a person appears to say or do something they did not.