Moral Responsibility in the Development of Artificial Intelligence according to Ethical Theology
DOI: https://doi.org/10.71364/sfcj3f93

Keywords: Ethical AI Frameworks, Moral Philosophy in AI, Algorithmic Ethics, Theological Perspectives on AI

Abstract
The development of artificial intelligence (AI) has brought significant changes to many sectors of life, including industry, education, and health. However, advances in AI also pose moral and ethical challenges, particularly concerning transparency, fairness, and accountability in its use. From the perspective of ethical theology, moral responsibility in the development of AI is an essential consideration for ensuring that this technology is developed and applied responsibly, in accordance with human values and moral principles. This research examines how the principles of ethical theology can provide a normative foundation for the development of more ethical and responsible AI. The method used is a literature study that analyzes academic sources, including scientific journals, books, and policy documents on the relationship between AI, morality, and ethical theology. The collected data were then analyzed using qualitative content analysis to identify the main findings. The results show that the development of ethical AI requires the integration of moral principles such as justice, love, accountability, and respect for human dignity. In addition, human regulation and oversight remain necessary to ensure that AI is not used in ways that harm particular individuals or groups. The ethical theology approach can therefore serve as one solution for formulating more equitable and responsible AI policy.
License
Copyright (c) 2025 Kesumawati Kesumawati, Jan Lukas Lambertus Lombok, Otieli Harefa

This work is licensed under a Creative Commons Attribution 4.0 International License.