Artificial Intelligence (AI) is increasingly pervasive, significantly altering social structures, cultural dynamics, and labor markets. The rapid growth of this ecosystem has sparked worldwide debate about AI's challenges, including its role in reinforcing biases and social inequalities, disregarding societal values, and affecting diverse sectors such as genetics, drug production, defense, and democratic processes. This study examines AI ethics through the social consensus framework and proposes participatory management as a crucial approach to addressing these challenges. The methodology spans the entire AI lifecycle, advocating inclusive practices from the design stage through implementation, monitoring, and control. The participatory management model is structured in three phases: Stakeholder Engagement, which involves active participation by diverse stakeholders in developing AI systems, ensuring a range of perspectives in design, modeling, and implementation; Monitoring and Alignment, which focuses on continuous observation of AI systems' interaction with their environments; and Macro-level Impact Analysis, which assesses the broader societal effects of the AI ecosystem on sectors such as education, culture, health, and safety. The study underscores the importance of a collaborative, inclusive approach to AI development and management, emphasizing the need to align AI advancements with ethical principles and societal well-being.
Keywords: Artificial intelligence ethics, bias, fairness, participatory algorithmic management, algorithmic accountability
Primary Language | English |
---|---|
Subjects | Sociology (Other) |
Section | RESEARCH ARTICLES |
Authors | |
Publication Date | August 26, 2024 |
Submission Date | January 5, 2024 |
Acceptance Date | March 25, 2024 |
Published Issue | Year 2024 |