The Epistemic Crisis of Artificial Intelligence as an Agent Without Responsibility

Authors

DOI:

https://doi.org/10.47613/

Keywords:

Large language models, epistemology, hallucination, confabulation, responsibility

Abstract

This study argues that the increasing use of generative artificial intelligence and large language models (LLMs) not only gives rise to technical, ethical, or governance-related problems but also points to a deeper, more structural epistemic crisis. Although the risks attributed to artificial intelligence have largely been discussed under headings such as data security, bias, and hallucination, this study emphasizes that these issues are not isolated malfunctions but structural consequences of the way AI produces knowledge. While LLMs can generate fluent and persuasive outputs by mimicking human cognitive processes, these outputs emerge independently of core human cognitive dimensions such as meaning, intention, causality, and responsibility. Accordingly, the study underlines that artificial intelligence is built upon an architecture that imitates human cognition without assuming the epistemic and moral burdens intrinsic to those processes. This condition indicates that, although it functions as an agent capable of acting successfully, artificial intelligence should not be regarded as an epistemic subject. From this perspective, the reproduction of biases, the generation of fabricated content, and a high degree of compliance with morally problematic decisions should be understood as natural and inevitable outcomes of this architecture. The core problem therefore lies not in models’ accuracy rates or performance levels but in the processes through which their responses are generated. The study further discusses how generative artificial intelligence is transforming the human–machine relationship and examines the effects of the gradual transfer of cognitive load to machines on critical thinking, memory, patience, and independent problem-solving abilities. Existing findings suggest that, rather than supporting humans, these tools may foster a dependency that leads to passivity in the formation of human judgment.
In conclusion, the article maintains that the crisis surrounding artificial intelligence is rooted not primarily in technical inadequacy but in an epistemic rupture caused by substituting humans with systems that bear no responsibility, and it argues that the solution lies not in more advanced models but in rethinking the boundaries that safeguard human judgment, responsibility, and decision-making processes.

Published

2026-03-18

Issue

Section

Opinion Papers

How to Cite

The Epistemic Crisis of Artificial Intelligence as an Agent Without Responsibility. (2026). REFLEKTIF Journal of Social Sciences, 7(1), 109-121. https://doi.org/10.47613/
