Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability
Year: 2023, Volume: 3, Issue: 2, 166-180, 31.12.2023
Md. Tanzib Hosain, Mehedi Hasan Anik, Sadman Rafi, Rana Tabassum, Khaleque Insia, Md. Mehrab Siddiky
Abstract
Artificial Intelligence (AI) is rapidly being integrated into many aspects of daily life, influencing decision-making processes in areas such as targeted advertising and matchmaking algorithms. As AI systems become increasingly sophisticated, ensuring their transparency and explainability becomes crucial. Functional transparency is a fundamental property of algorithmic decision-making systems: it allows stakeholders to comprehend the inner workings of these systems and to evaluate their fairness and accuracy. Achieving functional transparency, however, poses significant challenges. In this paper, we propose a user-centered, compliant-by-design approach to transparency in AI systems. We emphasize that developing transparent and explainable AI systems is a complex, multidisciplinary endeavor, necessitating collaboration among researchers from fields such as computer science, artificial intelligence, ethics, law, and social science. By providing a comprehensive account of the challenges associated with transparency in AI systems and proposing a user-centered design framework, we aim to facilitate the development of AI systems that are accountable, trustworthy, and aligned with societal values.
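To make the abstract's notion of functional transparency concrete, below is a minimal, hypothetical Python sketch, not the paper's actual system: a decision function that returns, alongside its outcome, a per-feature rationale that stakeholders can inspect. The linear model, feature names, and weights are illustrative assumptions; interpretable-by-construction models of this kind are one route to exact rather than approximate explanations (cf. Rudin, 2019, in the references below).

```python
# Hypothetical sketch of "functional transparency": a decision function
# that discloses, for each input feature, how much it contributed to the
# outcome. All names and weights here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class TransparentDecision:
    score: float      # raw decision score
    decision: bool    # thresholded outcome
    rationale: dict   # per-feature contribution, in score units


def decide(features: dict, weights: dict, bias: float,
           threshold: float = 0.0) -> TransparentDecision:
    """Linear scoring model: interpretable by construction, so each
    feature's contribution (weight * value) can be disclosed exactly."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return TransparentDecision(
        score=score,
        decision=score >= threshold,
        # Sort contributions by magnitude so the most influential
        # features appear first in the disclosed rationale.
        rationale=dict(sorted(contributions.items(),
                              key=lambda kv: -abs(kv[1]))),
    )


# Example: a hypothetical ad-targeting decision with three features.
result = decide(
    features={"age_match": 1.0, "interest_overlap": 0.4, "recent_activity": 0.0},
    weights={"age_match": 0.8, "interest_overlap": 1.5, "recent_activity": 0.3},
    bias=-1.0,
)
print(result.decision)   # outcome the user is shown
print(result.rationale)  # per-feature contributions, largest first
```

The design choice in this sketch is the key point: because the rationale is computed from the model itself rather than by a post-hoc approximation, stakeholders can audit exactly why a decision was made, which is what the paper's user-centered framework aims to support.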
References
- Ehsan, U., Liao, Q. V., Muller, M., Riedl, M. O., & Weisz, J. D. (2021, May). Expanding explainability: Towards social transparency in ai systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-19).
- Bhatt, U., Antorán, J., Zhang, Y., Liao, Q. V., Sattigeri, P., Fogliato, R., ... & Xiang, A. (2021, July). Uncertainty as a form of transparency: Measuring, communicating, and using uncertainty. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 401-413).
- Rader, E., Cotter, K., & Cho, J. (2018, April). Explanations as mechanisms for supporting algorithmic transparency. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1-13).
- Bhatt, U., Xiang, A., Sharma, S., Weller, A., Taly, A., Jia, Y., ... & Eckersley, P. (2020, January). Explainable machine learning in deployment. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 648-657).
- Springer, A., & Whittaker, S. (2019, March). Progressive disclosure: empirically motivated approaches to designing effective transparency. In Proceedings of the 24th international conference on intelligent user interfaces (pp. 107-120).
- Urquhart, C., & Spence, J. (2007). Document Engineering: Analyzing and Designing Documents for Business Informatics and Web Services. Journal of Documentation, 63(2), 288-290.
- Norval, C., Cornelius, K., Cobbe, J., & Singh, J. (2022). Disclosure by Design: Designing information disclosures to support meaningful transparency and accountability. In 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 679-690). ACM.
- Marsh, C. H. (1999). The engineer as technical writer and document designer: The new paradigm. ACM SIGDOC Asterisk Journal of Computer Documentation, 23(2), 57-61.
- Biasin, E. (2022). ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT): Doctoral Consortium Session.
- Antunes, N., Balby, L., Figueiredo, F., Lourenco, N., Meira, W., & Santos, W. (2018). Fairness and transparency of machine learning for trustworthy cloud services. In 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W) (pp. 188-193). IEEE.
- Barclay, I., Taylor, H., Preece, A., Taylor, I., Verma, D., & de Mel, G. (2021). A framework for fostering transparency in shared artificial intelligence models by increasing visibility of contributions. Concurrency and Computation: Practice and Experience, 33(19), e6129.
- Hutchinson, B., Smart, A., Hanna, A., Denton, E., Greer, C., Kjartansson, O., Barnes, P., & Mitchell, M. (2021). Towards accountability for machine learning datasets: Practices from software engineering and infrastructure. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 560-575). ACM.
- Felzmann, H., Fosch-Villaronga, E., Lutz, C., & Tamò-Larrieux, A. (2020). Towards transparency by design for artificial intelligence. Science and Engineering Ethics, 26(6), 3333-3361.
- Pushkarna, M., Zaldivar, A., & Kjartansson, O. (2022). Data cards: Purposeful and transparent dataset documentation for responsible AI. In 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1776-1826). ACM.
- MacKay, D. J. C. (2003). Information theory, inference and learning algorithms. Cambridge University Press.
- Bland, J. M., & Altman, D. G. (1998). Bayesians and frequentists. BMJ, 317(7166), 1151-1160.
- Pek, J., & Van Zandt, T. (2020). Frequentist and Bayesian approaches to data analysis: Evaluation and estimation. Psychology Learning & Teaching, 19(1), 21-35.
- Xie, M., & Singh, K. (2013). Confidence distribution, the frequentist distribution estimator of a parameter: A review. International Statistical Review, 81(1), 3-39.
- MacKay, D. J. C. (1992). Bayesian interpolation. Neural Computation, 4(3), 415-447.
- Palakkadavath, R., & Srijith, P. K. (2021). Bayesian generative adversarial nets with dropout inference. In Proceedings of the 3rd ACM India Joint International Conference on Data Science & Management of Data (8th ACM IKDD CODS & 26th COMAD) (pp. 92-100).
- FAT. (2018). Fairness, accountability, and transparency in machine learning. Retrieved December 24, 2018.
- Voigt, P., & Von dem Bussche, A. (2017). The EU general data protection regulation (GDPR): A practical guide (1st Ed.). Springer International Publishing.
- Burt, A. (2019). The AI transparency paradox. Harvard Business Review. Retrieved from https://bit.ly/369LKvq
- Garfinkel, S., Matthews, J., Shapiro, S. S., & Smith, J. M. (2017). Toward algorithmic transparency and accountability. Communications of the ACM, 60(9), 5.
- Speith, T. (2022). A review of taxonomies of explainable artificial intelligence (XAI) methods. In 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 2239-2250). ACM.
- Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G.-Z. (2019). XAI—Explainable artificial intelligence. Science Robotics, 4(37), eaay7120.
- Das, A., & Rad, P. (2020). Opportunities and challenges in explainable artificial intelligence (XAI): A survey. arXiv preprint arXiv:2006.11371.
- von Eschenbach, W. J. (2021). Transparency and the black box problem: Why we do not trust AI. Philosophy & Technology, 34(4), 1607-1622.
- Gade, K., Geyik, S. C., Kenthapadi, K., Mithal, V., & Taly, A. (2019). Explainable AI in industry. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 3203-3204).
- Nielsen, M. A. (2015). Neural networks and deep learning (Vol. 25). Determination Press.
- Wang, Y., Xiong, M., & Olya, H. (2020). Toward an understanding of responsible artificial intelligence practices. In Proceedings of the 53rd Hawaii International Conference on System Sciences (HICSS) (pp. 4962-4971). Hawaii International Conference on System Sciences.
- Rai, A. (2020). Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science, 48, 137-141.
- Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 494.
- Ball, C. (2009). What is transparency? Public Integrity, 11(4), 293-308.
- Bostrom, N. (2017). Strategic implications of openness in AI development. Global Policy, 8(2), 135-148.
- Rosenberg, N. (1982). Inside the black box: Technology and economics. Cambridge University Press.
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215.
- Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512.
- Yampolskiy, R. V. (2020). Unexplainability and incomprehensibility of AI. Journal of Artificial Intelligence and Consciousness, 7(2), 277-291.
- Winograd, T., & Flores, F. (1986). Understanding computers and cognition: A new foundation for design. Intellect Books.
- Chromá, M. (2008). Two approaches to legal translation. In Language, Culture and the Law: The Formulation of Legal Concepts across Systems and Cultures (Vol. 64, p. 303).
- Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K., & Sandvig, C. (2015). "I always assumed that I wasn't really that close to [her]": Reasoning about invisible algorithms in news feeds. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 153-162).
- Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a "right to explanation". AI Magazine, 38(3), 50-57.
- Dupret, G. E., & Piwowarski, B. (2008). A user browsing model to predict search engine click data from past observations. In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 331-338).
- Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
- Bhushan, B., Khamparia, A., Sagayam, K. M., Sharma, S. K., Ahad, M. A., & Debnath, N. C. (2020). Blockchain for smart cities: A review of architectures, integration trends and future research directions. Sustainable Cities and Society, 61, 102360.
- Fawcett, S. E., Wallin, C., Allred, C., Fawcett, A. M., & Magnan, G. M. (2011). Information technology as an enabler of supply chain collaboration: A dynamic‐capabilities perspective. Journal of Supply Chain Management, 47(1), 38-59.
- Boonstra, A., & Broekhuis, M. (2010). Barriers to the acceptance of electronic medical records by physicians: From systematic review to taxonomy and interventions. BMC Health Services Research, 10(1), 1-17.
- Yarbrough, A. K., & Smith, T. B. (2007). Technology acceptance among physicians: A new take on TAM. Medical Care Research and Review, 64(6), 650-672.
- Ulman, Y. I., Cakar, T., & Yildiz, G. (2015). Ethical issues in neuromarketing: "I consume, therefore I am!". Science and Engineering Ethics, 21, 1271-1284.
- Watson, L. C. (1976). Understanding a life history as a subjective document: Hermeneutical and phenomenological perspectives. Ethos, 4(1), 95-131.
- Schwartz, P. M. (1994). European data protection law and restrictions on international data flows. Iowa L. Rev., 80, 471.
- Diaz, O., Kushibar, K., Osuala, R., Linardos, A., Garrucho, L., Igual, L., Radeva, P., Prior, F., Gkontra, P., & Lekadir, K. (2021). Data preparation for artificial intelligence in medical imaging: A comprehensive guide to open-access platforms and tools. Physica Medica, 83, 25-37.
- Walsham, G. (2006). Doing interpretive research. European Journal of Information Systems, 15(3), 320-330.
- Grus, J. (2019). Data science from scratch: First principles with python. O'Reilly Media.
- Wang, D., Weisz, J. D., Muller, M., Ram, P., Geyer, W., Dugan, C., Tausczik, Y., Samulowitz, H., & Gray, A. (2019). Human-AI collaboration in data science: Exploring data scientists' perceptions of automated AI. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-24.
- Kasabov, N. K. (2014). NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data. Neural Networks, 52, 62-76.
- Abraham, M. J., Murtola, T., Schulz, R., Páll, S., Smith, J. C., Hess, B., & Lindahl, E. (2015). GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers. SoftwareX, 1, 19-25.
- Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes: The premise, the proposed solutions, and the open challenges. Philosophy & Technology, 31, 611-627.
- Chen, I. J., & Popovich, K. (2003). Understanding customer relationship management (CRM): People, process and technology. Business Process Management Journal, 9(5), 672-688.
- Yang, J., & Battocchio, A. F. (2021). Effects of transparent brand communication on perceived brand authenticity and consumer responses. Journal of Product & Brand Management, 30(8), 1176-1193.
- Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551.
- Lim, B. Y., Dey, A. K., & Avrahami, D. (2009). Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2119-2128).
- Landwehr, C. E., Bull, A. R., McDermott, J. P., & Choi, W. S. (1994). A taxonomy of computer program security flaws. ACM Computing Surveys (CSUR), 26(3), 211-254.
- Wu, L., & Chen, J. L. (2005). An extension of trust and TAM model with TPB in the initial adoption of on-line tax: An empirical study. International Journal of Human-Computer Studies, 62(6), 784-808.
- Hess, D. (2007). Social reporting and new governance regulation: The prospects of achieving corporate accountability through transparency. Business Ethics Quarterly, 17(3), 453-476.
- Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., et al. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.