
The expansion of artificial intelligence and the problem of values

Publication date: 19.12.2023

Art of Healing, Vol. 38, Issue 2 (2023), pp. 37–46

https://doi.org/10.4467/18982026SZL.23.017.18684

Authors

Krzysztof Mudyń
University of the National Education Commission, Krakow, Poland
https://orcid.org/0000-0001-6177-7241

Abstract

The author analyzes the relation between the expansion of artificial intelligence (AI) and the issue of values. In particular, he points out the difficulties associated with “agreeing on values” in an algorithm versus during human interaction. The article highlights the obstacles to taking human values into account when designing complex algorithms, which result from the fact that preferred values are inconsistent, contextual and therefore variable. Values also depend on cultural conditions and individual differences. In addition, sentimental values are difficult to predict and take into account. All this makes it almost impossible to unambiguously define the values to be respected by an algorithm. Currently, attempts are being made to incorporate “emotional computing” into the design of artificial systems, which, according to many researchers, may turn out to be a breakthrough in the development of AI. Advanced attempts are already being made to model one aspect of emotional intelligence, namely recognising other people’s emotional states based on the analysis of their facial expressions. According to the author, developments in the field of artificial emotional intelligence should worry rather than satisfy internet users. They will contribute to greater control exercised by the institutions that use them, and consequently to a further limitation of the personal freedom of individual users. The author suggests that the expansion of digital technology, contrary to initial hopes, contributes to increased centralization of power and to greater socio-economic inequalities. In the words of Norbert Wiener (1950), the development of digital technology contributes to “the human use of human beings”.

References

Alisawi M., Yalcin N. (2023). Real-time emotion recognition using deep learning methods: systematic review. Intelligent Methods in Engineering Sciences, 2(1), 5–21, https://doi.org/10.58190/imiens.2023.7 (accessed: 30.06.2023).

European Parliament, Artificial Intelligence Act, https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf (accessed: 6.07.2023).

Brockman J. (ed.) (2020). Possible Minds. 25 Ways of Looking at AI. New York: Penguin Books.

Dhope P., Neelagar M.B. (2022). Real-time emotion recognition from facial expressions using artificial intelligence. 2nd International Conference on Artificial Intelligence and Signal Processing (AISP). Vijayawada, 1–6, https://doi.org/10.1109/AISP53593.2022.9760654 (accessed: 9.07.2023).

Dragan A. (2020). Putting the human into the AI equation. In: J. Brockman (ed.), Possible Minds. 25 Ways of Looking at AI. New York: Penguin Books, 134–142.

Ekman P., Friesen W.V., Hager J.C. (2002). Facial Action Coding System: The Manual on CD ROM. Salt Lake City: A Human Face.

Fitch W.T. (2016). Nano-intentionality. In: J. Brockman (ed.), What to Think About Machines That Think. New York: Harper Collins Publisher, 89–92.

Gigerenzer G. (2022). How to Stay Smart in a Smart World. Why Human Intelligence Still Beats Algorithms. London: Allen Lane.

Gopnik A. (2020). AIs Versus Four-Year-Olds. In: J. Brockman (ed.), Possible Minds. 25 Ways of Looking at AI. New York: Penguin Books, 299–230.

Leijen I., van Herk H., Bardi A. (2022). Individual and generational value change in an adult population, a 12-year longitudinal panel study. Scientific Reports, 12: 17844, https://doi.org/10.1038/s41598-022-22862-1 (accessed: 9.07.2023).

Myers D.G., Twenge J.M. (2018). Social Psychology, 13th ed. New York: McGraw Hill.

Mudyń K. (2010). Digitalizacja rzeczywistości a problem dekontekstualizacji istnienia [Digitization of reality and the problem of decontextualization of existence]. In: T. Rowiński, R. Tadeusiewicz (eds.), Psychologia i informatyka. Ich synergia i kontradykcje. Warsaw: UKSW Publishers, 191–204.

Mudyń K. (2012). O różnych aspektach antropomorfizacji, „systemach intencjonalnych” i dyskretnym uroku technologii [About various aspects of anthropomorphization, “intentional systems” and the discreet charm of technology]. In: J. Morbitzer, E. Musiał (eds.), Człowiek – Media – Edukacja. Cracow: KTiME UP Publishers, 307–312.

Mudyń K. (2014). Między antropomorfizacją a dehumanizacją. Powracający problem natury ludzkiej [Between anthropomorphization and dehumanization. A returning problem of human nature]. Czasopismo Psychologiczne, 1(20), 1–9.

Mudyń K. (2022). „Człowiek na rozdrożu. Sztuczna inteligencja – 25 punktów widzenia” – recenzja [A man at a crossroads. Artificial intelligence – 25 points of view – review]. Tygodnik Spraw Obywatelskich, 137(33), https://instytutsprawobywatelskich.pl/krzysztof-mudyn-czlowiek-na-rozdrozu-recenzja/ (accessed: 27.05.2023).

Mudyń K. (2022). W poszukiwaniu biocentrycznej definicji inteligencji. Rośliny są inteligentniejsze od „inteligentnych maszyn” [In search of a biocentric definition of intelligence. Plants are smarter than “intelligent machines”], https://www.researchgate.net/publication/365568791_W_poszukiwaniu_biocentrycznej_definicja_inteligencji_Rosliny_sa_inteligentniejsze_od_inteligentnych_maszyn_In_search_of_a_biocentric_definition_of_intelligence_Plants_are_smarter_than_%27smart_machines%27 (accessed: 1.03.2023).

Nisbett R.E. (2015). Mindware. Tools for Smart Thinking. New York: Farrar, Straus and Giroux.

Picard R. (1997). Affective Computing. Cambridge, MA: MIT Press.

Pietikäinen M., Silvén O. (2021). Challenges of Artificial Intelligence: From Machine Learning and Computer Vision to Emotional Intelligence, http://urn.fi/urn:isbn:9789526231990 (accessed: 3.01.2023).

Rokeach M. (1973). The Nature of Human Values. New York: The Free Press.

Russell S. (2020). The purpose put into the machine. In: J. Brockman (ed.), Possible Minds. 25 Ways of Looking at AI. New York: Penguin Books, 20–32.

Schuller D., Schuller B.W. (2018). The age of artificial emotional intelligence. Computer, 51(9), 38–46, https://doi.org/10.1109/MC.2018.3620963 (accessed: 5.07.2023).

Walsh T. (2017). It’s Alive! Artificial Intelligence from the Logic Piano to Killer Robots. Melbourne: La Trobe University Press.

Wiener N. (1950). The Human Use of Human Beings. Cybernetics and Society, https://monoskop.org/images/6/60/Wiener_Norbert_The_Human_Use_of_Human_Beings_1989.pdf (accessed: 8.2021).

Young K.S. (1998). Caught in the Net: How to Recognize the Signs of Internet Addiction – and a Winning Strategy for Recovery. New York: John Wiley & Sons.

Information

Information: Art of Healing, Vol. 38, Issue 2 (2023), pp. 37–46

Article type: Original article

Titles:

English:

The expansion of artificial intelligence and the problem of values

Authors

Krzysztof Mudyń
University of the National Education Commission, Krakow, Poland
https://orcid.org/0000-0001-6177-7241

Published at: 19.12.2023

Received at: 12.07.2023

Accepted at: 30.07.2023

Article status: Open

Licence: CC BY

Percentage share of authors:

Krzysztof Mudyń (Author) - 100%

Article corrections:

-

Publication languages:

English