It is generally accepted that physicians fulfil their obligations arising from the treatment contract within a special relationship of trust, which renders them personally obliged to perform. This does not mean, however, that the physician must personally carry out every phase of the medical intervention; a physician may delegate certain medical tasks to another person. The legal positioning of artificial intelligence (AI) systems in this context remains controversial. The present article examines whether the concepts of sub-agent and auxiliary person can be applied by analogy to AI systems in the context of the transfer of the obligation to perform arising from medical treatment contracts. AI systems cannot be characterised as sub-agents because they lack economic independence. It is, however, considered possible to apply the concept of the auxiliary person to AI systems by analogy; the fact that AI systems lack the capacity to have rights does not preclude such an application. When physicians use AI systems in medical practice, they must comply with the requirements for the appropriate delegation of medical tasks. While the use of decision-support AI systems is unrestricted, the use of substitutive AI systems, particularly in core medical activities, requires a comprehensive evaluation with regard to patient safety and professional accountability.
Keywords: Artificial intelligence, sub-agent, auxiliary person, treatment contract
| Primary Language | English |
|---|---|
| Subjects | Information and Technology Law, Medical and Health Law |
| Section | Research Article |
| Authors | |
| Submission Date | 12 February 2025 |
| Acceptance Date | 14 May 2025 |
| Publication Date | 21 January 2026 |
| DOI | https://doi.org/10.26650/annales.2025.1638227 |
| IZ | https://izlik.org/JA42PK75BP |
| Published in Issue | Year 2025, Issue 77 |