In cyberpunk movies, one idea appears again and again:

Society is controlled by giant conglomerates, artificial intelligence serves as a tool of governance, and everyone's behavior must follow the logic of machines and the rules of algorithms. Then a protagonist from the bottom of society finds a loophole and tries to escape a manipulated fate.

In most people's minds, such a dystopian world is probably still far away. But as artificial intelligence moved from the ivory tower into real life, issues of AI ethics have gradually become a focus of academic debate, and even some young people have begun to think about the ethics and risks of algorithms.

For example, in the first episode of a new series by the well-known Bilibili video blogger "Xia Xiaosu", topics such as DeepFake and human-machine romance were discussed. When technologies like DeepFake are used for spoofs or even outright abuse, what attitude should we take toward artificial intelligence? And now that "technology neutrality" is being criticized by more and more people, how should we handle the human-machine relationship in this new era?

The concept of AI ethics may still sound abstract, but it is closely tied to everyone's life.

01 Algorithmic discrimination everywhere

Artificial intelligence is not far away from us.

When you open a news app, an algorithm recommends stories based on your preferences; when you shop on an e-commerce platform, an algorithm recommends products based on your habits; when you apply for a job, the first pass over your resume may well be made by an algorithm; when you go to the hospital, the doctor may use an algorithmic model to estimate the likelihood of disease…

Algorithms are permeating our lives at unprecedented speed. To supporters, algorithms reduce human intervention in certain decisions, improving their efficiency and accuracy. To critics, the questions are: do algorithms encode human biases, and will people's fates be shaped by them?

Unfortunately, algorithmic discrimination is often a by-product of applying algorithms.

In 2014, Amazon developed an algorithmic screening system to help HR filter resumes during recruitment. The development team built 500 models, taught the algorithm to recognize 50,000 terms that had appeared in past resumes, and had it assign weights to candidates' different abilities.

In the end, the team found that the algorithm had a clear preference for male applicants: if it spotted experiences such as a women's football club or a women's college on a resume, it gave that resume a lower score. The model was eventually exposed by Reuters, and Amazon duly stopped using it. The thought-provoking question remains: why did an algorithm with "no values" become biased?

Coincidentally, when the news of IG's championship victory in 2018 set the Chinese Internet abuzz, team owner Wang Sicong immediately ran a lottery on Weibo. The result was unexpected: the winners' list contained 112 women and just 1 man, even though the ratio of male to female participants was 1:1.2.

Many netizens therefore questioned the fairness of the lottery algorithm. Some even tested it by setting the number of winners higher than the number of participants, and a large number of users still failed to win. The reason: the algorithm had judged those users to be "bots", which means they stand no chance of winning in any lottery.
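The behavior the netizens observed can be reproduced with a minimal sketch. This is not Weibo's actual system; the `flagged_bots` set is a hypothetical stand-in for whatever internal credit score the real platform uses. The point is only structural: if flagged accounts are removed from the pool before the draw, they can never win, no matter how large the winner count is set.

```python
import random

def draw_lottery(participants, flagged_bots, n_winners):
    """Draw winners, but only from accounts not flagged as bots.

    Hypothetical sketch: 'flagged_bots' stands in for the platform's
    opaque internal判定; flagged users are excluded before the draw.
    """
    eligible = [u for u in participants if u not in flagged_bots]
    # Even if n_winners exceeds the total participant count, flagged
    # users can never appear in the result.
    return random.sample(eligible, min(n_winners, len(eligible)))

participants = [f"user{i}" for i in range(10)]
flagged = {"user3", "user7"}

# Ask for 50 winners among 10 participants -- more winners than entrants.
winners = draw_lottery(participants, flagged, n_winners=50)
assert "user3" not in winners and "user7" not in winners
assert len(winners) == 8  # only the 8 unflagged accounts can ever win
```

From the outside, an excluded user just "never wins"; the filtering step is invisible, which is exactly the black-box problem described below.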

Facing the black box of algorithms, most of what we see is the result; the decision-making process stays hidden. Similar cases likely abound, yet few people pay attention.

The most direct lesson comes from the Internet itself. Silicon Valley's laissez-faire ethos gave birth to the Internet, and some of its original sins were selectively ignored. More than twenty years later came a wave of backlash: The New York Times, in the op-ed "The Only Answer Is Less Internet", described the Internet as a technology carrying a totalizing ideology and portrayed Internet companies as villains driving it forward.

Criticism of the Internet will not make people use it less, but it does raise a question: how did the Internet go from an industry everyone admired to one everyone reviles? If the application of algorithms, and algorithmic discrimination, are left unchecked, how big a backlash might they trigger someday?

02 The root is human prejudice

Of course, there is no shortage of plausible explanations for the "bias" of algorithms.

One argument attributes the bias of machine-learning results to bias in the data set rather than to the algorithm itself, preserving a kind of "technology neutrality". One of its better-known proponents is Yann LeCun, often called the "father of convolutional neural networks". The common analogy: if someone attacks another person with a kitchen knife, is it the fault of the knife's manufacturer, or of the knife itself?

Another explanation points to the amount of data: the more data an algorithm learns from, the smaller its error and the more accurate its results. But even if a screening system could feed the algorithm only unbiased data, absolute fairness would remain out of reach. The "mainstream" will always generate more data, so the algorithm will tilt toward the majority, producing what looks like discrimination against the "non-mainstream".

The two explanations really make the same point. A well-known acronym in computing is GIGO: Garbage In, Garbage Out. If the input is garbage data, the output will be garbage results. Algorithms are a mirror of the real world, reflecting the prejudices people hold, consciously or not. If an entire society is biased on a certain topic, the algorithm's output will naturally be discriminatory.
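The GIGO mechanism can be shown in a toy sketch. The data below is invented and the frequency-based scorer is a crude stand-in for the weights a real model would fit, but it mirrors the Amazon case structurally: the model is never told anyone's gender, yet terms like "women's" end up penalized simply because they co-occur with rejection in the biased training sample.

```python
from collections import Counter

# Hypothetical toy data: historical "hired" resumes skew male, so
# male-associated phrasing dominates the positive class.
hired = ["chess club captain", "football team", "chess club", "debate team"]
rejected = ["women's chess club", "women's football team"]

def token_weights(positive, negative):
    """Score each token by how often it appears in hired vs. rejected
    resumes -- a crude proxy for what a learned classifier picks up."""
    pos, neg = Counter(), Counter()
    for text in positive:
        pos.update(text.split())
    for text in negative:
        neg.update(text.split())
    return {tok: pos[tok] - neg[tok] for tok in set(pos) | set(neg)}

def score(resume, weights):
    return sum(weights.get(tok, 0) for tok in resume.split())

weights = token_weights(hired, rejected)

# No gender feature exists, yet "women's" carries a negative weight
# purely because of the skewed training sample: garbage in, garbage out.
assert weights["women's"] < 0
assert score("women's chess club", weights) < score("chess club", weights)
```

The "fix" is not in the scoring function, which is perfectly neutral arithmetic, but in the data it was given; that is exactly the sense in which the algorithm mirrors society's prejudice.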

The German philosopher Karl Jaspers wrote in "The Atomic Bomb and the Future of Mankind": "Technology itself is neither good nor evil, but it can be used for both good and evil. It contains no ideas of its own: neither the idea of perfection nor the evil idea of destruction. Both have another source, in human beings."

In other words, the root of algorithmic discrimination lies in human prejudice. Claims of "algorithmic neutrality" are, in essence, a disguise for that prejudice, and that is precisely what makes artificial intelligence frightening.

Any technology has two dimensions, the "instrumental" and the "purposive", and the right to choose between them ultimately rests with human beings. But human nature often fails the test, and it is hard to imagine how much evil can follow when a "tool" is handed to people without restraint.

Take the DeepFake algorithm mentioned in "Xia Xiaosu's" video. It caused a sensation when it surfaced in 2017: a user named "deepfakes" swapped the face of Wonder Woman actress Gal Gadot onto the lead of an adult film, and the convincing result sparked huge controversy.

With DeepFake, a task once possible only for professional film studios could be mastered by ordinary people after a bit of study, like a beast bursting out of its cage. A New Year address by Gabonese President Ali Bongo, suspected of being a DeepFake, unexpectedly helped trigger a military mutiny; someone used DeepFake to fabricate a video of the Malaysian Minister of Economic Affairs with another man, causing the government considerable trouble; and in countless hidden corners of the world, people have used DeepFake for fraud and blackmail…

In this world where "algorithms are everywhere", how should we conduct ourselves?

03 Put the beast in the iron cage

As the kindling of a new era, artificial intelligence is probably not something we can refuse.

Because of artificial intelligence, workers on quality-inspection lines no longer have to stare at products under bright light, hunting for defects with the naked eye; because of artificial intelligence, grassroots doctors can make accurate diagnoses based on a patient's test results; because of artificial intelligence, elderly people who never learned an input method can enter the Internet world by voice…

But the prerequisite for all goodness is to put the beast in the iron cage first.

We might borrow the view the People's Daily expressed when commenting on the Qvod ("Kuaibo") case: technology not only inevitably carries values, but ethically "ought" to carry the value of goodness, upholding the stability of law and custom and staying far from damage and subversion. Once this principle is violated, any technology will be marked with shame.

The implication is that technology should not be a utopia detached from reality. The rise of a technology is inseparable from necessary oversight: the boundaries of its application must be drawn by laws and regulations, and breaking the ethical spell of "technology neutrality" is a precondition for artificial intelligence to develop steadily and go far.

Meanwhile, more and more scholars are examining the relationship between code and law, worrying about whether algorithms will shake the basic legal framework of human society, and proposing the concept of "algorithmic regulation": a system for governing algorithmic decision-making that can be understood as a tool of algorithmic governance.

Beyond these defensive mechanisms, there may be another possibility: giving algorithm developers proper education in "AI ethics" and establishing "basic principles" for algorithms, much like the iron law that "a robot may never harm a human", so that abuse is prevented at the source.

Video bloggers like "Xia Xiaosu" can be one entry point. Even if the AI-ethics program cannot rule out the suspicion of commercial sponsorship, when an artificial intelligence company is willing to enter the interactive spaces where young people gather and convey the concept of AI ethics, along with its own ethical practice, in a context and language familiar to them, that is arguably an effective form of public education.

At present, research institutions and universities including the Chinese Academy of Social Sciences, Tsinghua University, Fudan University, and Shanghai Jiao Tong University have begun to study AI ethics, and top industry summits such as the World Artificial Intelligence Conference and the Beijing Academy of Artificial Intelligence (BAAI) Conference have put AI ethics on the agenda. Companies that have led the popularization of artificial intelligence should likewise shoulder the responsibility of popularizing AI ethics and teach young people their first lesson in AI.

More than a hundred years ago, U.S. Supreme Court Justice Louis Brandeis observed that sunlight is the best disinfectant. The same principle applies to artificial intelligence education: even as people use AI to change the world, they must also understand good and evil, bottom lines and boundaries.

To return to the earlier example: a kitchen knife's purpose is fixed at design time, and the same should be true of artificial intelligence. It should operate under an intelligible ceiling rather than run loose inside an uncontrolled black box, and AI ethics is part of that ceiling.

04 Written at the end

It is undeniable that, with the large-scale industrial application of artificial intelligence, some unprecedented human-machine tensions have surfaced. Finding a governance mechanism that makes artificial intelligence predictable, constrained, and benevolent has become a central proposition of the AI era.

But there is no need to be too discouraged. From the primitive age of drilling wood for fire to the computer age, humanity has walked the road of learning technology, using technology, and mastering technology. We took some detours along the way, but ultimately found the right way to control fire. Learning the "first lesson" of AI ethics is precisely how we steer clear of AI's evil side, and the right beginning for bringing AI under rational control.

When the young people of Generation Z are already discussing AI ethics, building a sound set of rules for artificial intelligence governance is no longer out of reach.
