The Epistemic Crisis of Artificial Intelligence as an Agent Without Responsibility
DOI: https://doi.org/10.47613/
Keywords: Large language models, epistemology, hallucination, confabulation, responsibility
Abstract
This study argues that the growing use of generative artificial intelligence and large language models (LLMs) raises not only technical, ethical, and governance-related problems but also points to a deeper, more structural epistemic crisis. Although the risks attributed to artificial intelligence have largely been discussed under headings such as data security, bias, and hallucination, this study emphasizes that these issues are not isolated malfunctions but structural consequences of the way AI produces knowledge. While LLMs can generate fluent and persuasive outputs by mimicking human cognitive processes, these outputs emerge independently of core human cognitive dimensions such as meaning, intention, causality, and responsibility. Accordingly, the study underlines that artificial intelligence is built upon an architecture that imitates human cognition without assuming the epistemic and moral burdens intrinsic to those processes. Despite functioning as an agent capable of acting successfully, artificial intelligence should therefore not be regarded as an epistemic subject. From this perspective, the reproduction of biases, the generation of fabricated content, and a high degree of compliance with morally problematic decisions should be understood as natural and inevitable outcomes of this architecture. The core problem thus lies not in models’ accuracy rates or performance levels, but in the processes through which their responses are generated. The study further discusses how generative artificial intelligence is transforming the human–machine relationship and examines the effects of the gradual transfer of cognitive load to machines on critical thinking, memory, patience, and independent problem-solving. Existing findings suggest that rather than supporting humans, these tools may foster a dependency that renders the formation of human judgment passive.
In conclusion, the article maintains that the crisis surrounding artificial intelligence is not primarily rooted in technical inadequacy, but in an epistemic rupture caused by substituting humans with systems that bear no responsibility, and it argues that the solution lies not in more advanced models, but in rethinking the boundaries that safeguard human judgment, responsibility, and decision-making processes.
License
Copyright (c) 2026 Mahmut Özer

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Manuscripts submitted to the REFLEKTIF Journal of Social Sciences must not have been previously published, accepted for publication, or submitted for publication elsewhere.
Once an article is accepted for publication, it may be combined with other research, used as the basis for new research, or adapted in other ways, including for commercial purposes, provided the same license is applied.
As the author of an article published in the REFLEKTIF Journal of Social Sciences, you retain the copyright of your article and are free to reproduce and disseminate your work.


