Chinese papers voicing ethical problems with Artificial Intelligence and robots

Our member, Liang Wang, Associate Professor at Xi’an Jiaotong University, Xi’an, P.R. China, works on the ethics of robots and Artificial Intelligence. He recently published a paper in a Chinese journal in which he argues for virtue ethics over both deontological and utilitarian approaches to designing AI systems. The abstract is in English:

Liang Wang | A Virtuous Ethical Approach to Moral Design of Artificial Intelligence Systems, Studies in Dialectics of Nature, Vol. 38, No. 10 (Oct., 2022), 56-62 | pdf | link

Abstract: The moral theories most often considered in the moral design of artificial intelligence systems are deontology, utilitarianism, and virtue ethics. Both the abstract principles of deontology and the ethical calculations of utilitarianism ignore the complexity of moral situations and ultimately show a lack of “situational sensitivity”. Virtue ethics, in contrast, focuses on the learning of empirical knowledge and adapts to complex moral situations with an open and dynamic theoretical quality. Reinforcement learning likewise centres on a dynamic learning process, so virtue ethics and reinforcement learning are theoretically compatible. Combining the two makes moral reinforcement learning possible for artificial intelligence systems and offers the best moral design solution for such systems in realistic, complex moral situations.

[photo: Liang Wang]

Already earlier he had published two other papers, one on the deception problem with social robots and another on ethical risks regarding so-called artificial emotion:

Liang Wang | Social Robot Ethics Based on Situational Experience: From “Deception” to “Good”, Studies in Dialectics of Nature, Vol. 37, No. 10 (Oct., 2021), 55-60 | pdf | link

Abstract: The ethical problem of “deception” by social robots is a hot topic in robot ethics. Most traditional research, however, starts from an entity-attribute view: by analyzing the “anthropomorphic” attributes of social robots, it concludes that social robots produce the ethical risk of “deception”. Starting instead from the “situational experience” of human–robot interaction, Coeckelbergh suspends the “anthropomorphic” attributes of social robots and thereby shifts the ethics of social-robot “deception” towards an “ethics of human good”.

Liang Wang | Discussion on “Unidirectional Emotional” Ethical Risk Arising from Social Robots, Studies in Dialectics of Nature, Vol. 36, No. 1 (Jan., 2020), 56-61 | pdf | link

Abstract: That artificial emotion is one of the typical features of social robots has become a community consensus. The unbalanced relationship between fake artificial emotion and real human emotion gives rise to a series of ethical risks, mainly reflected in the “manipulation” of empathy and “deception” by social robots. Human beings need to deal with these risks effectively through legal and ethical supervision, the optimized design of social robots, and the adjustment of their own morality and values.

Liang Wang’s publications are part of our joint Information Ethics, Responsibility and Sustainability project.
