AI bias: exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry

  • Original Research
  • Published in: AI and Ethics

Abstract

A new and unorthodox approach to dealing with discriminatory bias in Artificial Intelligence is needed. As explored in detail, the current literature is a dichotomy, with studies originating from the contrasting fields of either philosophy and sociology or data science and programming. This article suggests that what is needed instead is an integration of both academic approaches: one that is machine-centric rather than human-centric, applied with a deep understanding of societal and individual prejudices. The article develops this novel approach into a framework of action: a bias impact assessment to raise awareness of bias and its causes, a clear set of methodologies mapped in a table onto the four stages of pharmaceutical trials, and a summary flowchart. Finally, the study concludes that a transnational independent body with enough power to guarantee the implementation of those solutions is needed.
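To make the idea of a bias impact assessment concrete, the sketch below shows one check such an assessment might run before a model advances to the next trial-style stage: a demographic parity ratio over a model's decisions, compared against the "four-fifths" threshold used in US employment-discrimination practice. This is a minimal illustration, not the paper's own methodology; the function name, group labels, and 0.8 threshold are assumptions for the example.

```python
from collections import defaultdict

def demographic_parity_ratio(outcomes):
    """outcomes: iterable of (group, favourable) pairs, favourable a bool.
    Returns (min rate / max rate across groups, per-group rates)."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, fav in outcomes:
        totals[group] += 1
        favourable[group] += int(fav)
    rates = {g: favourable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical hiring-model decisions split by a protected attribute.
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 35 + [("B", False)] * 65)
ratio, rates = demographic_parity_ratio(decisions)
print(rates)                          # {'A': 0.6, 'B': 0.35}
print(f"parity ratio = {ratio:.2f}")  # 0.58
if ratio < 0.8:  # illustrative gate, mirroring a trial-phase checkpoint
    print("flag: potential disparate impact; do not progress to deployment")
```

In the pharmaceutical analogy the paper draws, a model failing such a check would be held back at its current phase for further investigation rather than released, much as an adverse finding halts a drug trial.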


Notes

  1. For a complementary perspective, please see the Sandler and Basl [73] report.

  2. The pharmaceutical industry is far from perfect, but it is in a better position now than when eugenics experiments were openly conducted on underprivileged sectors of society with no consequences. Today there are mechanisms to take a pharmaceutical company to court if harm to society is proven, such as the over-promotion of opioid derivatives in the US. Such legal mechanisms are underdeveloped or non-existent in the AI industry.

  3. Prejudices and abuse of power occur in all directions and among members of the same social class. However, I am more interested in elite discrimination from the top to the bottom of the social scale, as it affects larger sectors of the population and carries a monopoly over the implementation of discriminatory ML models on a larger scale.

  4. The ethical issues of web data mining are well explored by Van Wel and Royakkers [88].

  5. It is not that simple, nor the only reason; however, it is an important factor.

  6. Dr Spiekermann is a co-chair of IEEE’s first standardisation effort on ethical engineering (IEEE P7000). She has published in leading IS and CS journals, including the Journal of Information Technology, IEEE Transactions on Software Engineering, Communications of the ACM, and the European Journal of IS, where she served as Editor until 2013 (obtained from the IEEE, Institute of Electrical and Electronics Engineers, website).

  7. As this article focuses on bias in AI, I will prioritise the values that affect bias.

  8. To simplify, and because more data are available, I have not mentioned the Latinx community and other communities that also endure discrimination based on race.

  9. Many other groups might have been treated unfairly, such as Latino or black males, but I will concentrate on gender discrimination in this case study.

  10. Whitehouse and Diamond [97] draw on survey data to examine horizontal and vertical gender segregation within IT employment in Australia. Not all the data can be extrapolated to other countries and cultures, and they may be outdated. However, tech culture is global, and the study is an example of women being blocked from IT jobs by the masculinity of technology [92].

  11. Pharmaceutical companies’ business model is based on profit, but there are regulatory procedures to minimise harm, remove products proven harmful, and compensate victims; such procedures do not exist in the AI industry.

  12. There are many other factors that need to be checked, such as data privacy, but in this article I concentrate on bias. The main reason is to be able to introduce possible applicable solutions in greater depth.

  13. Some may say that they need to have a more prominent role, rather than a merely equal one.

  14. There are cases like the Boeing 737 MAX, which reached the market with faulty software and caused two fatal accidents. But that was caused by the FAA’s lack of adequate monitoring of Boeing, not by ineffective or non-existent regulation [44]. Commercial scheduled air travel remains among the safest modes of transportation (US National Safety Council 2019). Not perfect, but much better than unregulated.

  15. This is why I have been advocating for the benefits of Citizens’ Assemblies on AI: they keep members of society informed and engaged, and they could give politicians the public mandate to act. Tech companies control the flow of information in the digital sphere with sophisticated algorithms, and it is reasonable to suspect that they might interfere with access to information that questions the technological status quo.

References

  1. Anderson, J., Rainie, L., Luchsinger, A.: Artificial intelligence and the future of humans. Pew Res. Center 10, 12 (2018)

  2. Angwin, J., et al.: Machine bias: there’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (2016). Accessed 28 Mar 2021

  3. Alpaydin, E.: Introduction to Machine Learning. MIT Press, Cambridge (2020)

  4. Bageri, V., Katsoulacos, Y., Spagnolo, G.: The distortive effects of antitrust fines based on revenue. Econ. J. 123(572), F545–F557 (2013)

  5. Bagilhole, B.: Being different is a very difficult row to hoe: survival strategies of women academics. In: Davies, S., Lubelska, C., Quinn, J. (eds.) Changing the Subject, pp. 15–28. Taylor & Francis, London (2017)

  6. Barocas, S., Selbst, A.D.: Big data’s disparate impact. Calif. L. Rev. 104, 671 (2016)

  7. Bartlett, R., Morse, A., Stanton, R., Wallace, N.: Consumer-lending discrimination in the FinTech era. J. Financ. Econ. 143(1), 30–56 (2022)

  8. Bell, D.: Faces at the Bottom of the Well: The Permanence of Racism. Hachette, UK (2018)

  9. Bellamy, R.K., Dey, K., Hind, M., Hoffman, S.C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., Mojsilovic, A., Nagar, S.: AI Fairness 360: an extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv:1810.01943 (2018)

  10. Bhattacharya, S.: Up to 140,000 heart attacks linked to Vioxx. New Scientist, 25 (2005)

  11. Bhuiyan, H., Ashiquzzaman, A., Juthi, T.I., Biswas, S., Ara, J.: A survey of existing e-mail spam filtering methods considering machine learning techniques. Glob. J. Comput. Sci. Technol. 18(2-c) (2018)

  12. Bi, W.L., Hosny, A., Schabath, M.B., Giger, M.L., Birkbak, N.J., Mehrtash, A., Allison, T., Arnaout, O., Abbosh, C., Dunn, I.F., Mak, R.H.: Artificial intelligence in cancer imaging: clinical challenges and applications. CA Cancer J Clin 69(2), 127–157 (2019)

  13. Binns, R.: Fairness in machine learning: lessons from political philosophy. In: Conference on Fairness, Accountability and Transparency, pp. 149–159. PMLR (2018)

  14. Blyth, C.R.: On Simpson’s paradox and the sure-thing principle. J. Am. Stat. Assoc. 67(338), 364–366 (1972)

  15. Boddington, P.: Towards a Code of Ethics for Artificial Intelligence, pp. 27–37. Springer, Cham (2017)

  16. Boden, M.A.: Creativity and artificial intelligence: a contradiction in terms. In: Paul, E., Kaufman, S. (eds.) The Philosophy of Creativity: New Essays, pp. 224–46. Oxford University Press, Oxford (2014)

  17. Bonilla-Silva, E.: White Supremacy and Racism in the Post-Civil Rights Era. Lynne Rienner Publishers, Boulder (2001)

  18. Bose, D., Segui-Gomez, S.C.D.M., Crandall, J.R.: Vulnerability of female drivers involved in motor vehicle crashes: an analysis of US population at risk. Am. J. Public Health 101(12), 2368–2373 (2011)

  19. Bostrom, N., Yudkowsky, E.: The ethics of artificial intelligence. Camb. Handb. Artif. Intell. 1, 316–334 (2014)

  20. Bostrom, N.: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford (2014)

  21. Bronson, J., Carson, E.A.: Prisoners in 2017. Bureau of Justice Statistics, US Department of Justice (2019)

  22. Brewer, R.M., Heitzeg, N.A.: The racialization of crime and punishment: criminal justice, color-blind racism, and the political economy of the prison industrial complex. Am. Behav. Sci. 51(5), 625–644 (2008)

  23. Buolamwini, J., Gebru, T.: Gender shades: intersectional accuracy disparities in commercial gender classification. In: Conference on fairness, accountability and transparency, pp. 77–91. PMLR (2018)

  24. Burkhardt, B.C.: Who is in private prisons? Demographic profiles of prisoners and workers in American private prisons. Int. J. Law Crime Just. 51, 24–33 (2017)

  25. Calvo, R.A., Peters, D., Cave, S.: Advancing impact assessment for intelligent systems. Nat. Mach. Intell. 2(2), 89–91 (2020)

  26. Campolo, A., Sanfilippo, M., Whittaker, M., Crawford, K.: AI now 2017 report. https://assets.ctfassets.net/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf (2017). Accessed 7 May 2021

  27. Carrie, J.: More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru. The Guardian. https://amp.theguardian.com/technology/2020/dec/04/timnit-gebru-google-ai-fired-diversity-ethics (2020). Accessed 4 May 2021

  28. Castelvecchi, D.: Can we open the black box of AI? Nat. News 538(7623), 20 (2016)

  29. Chalmers, D.: The singularity: a philosophical analysis. In: Schneider, S. (ed.) Science Fiction and Philosophy: From Time Travel to Superintelligence, pp. 171–224. Wiley, UK (2009)

  30. Collingridge, D.: The Social Control of Technology. Frances Pinter (Publishers), London (1982)

  31. Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., Huq, A.: Algorithmic decision making and the cost of fairness. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 797–806 (2017)

  32. Crawford, K.: The Atlas of AI. Yale University Press (2021)

  33. Dastin, J.: Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G (2018). Accessed 24 Apr 2021

  34. Dwivedi, Y.K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P.V., Janssen, M., Jones, P., Kar, A.K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., Medaglia, R., Meunier-FitzHugh, K.L., Meunier-FitzHugh, L.C.L., Misra, S., Mogaji, E., Sharma, S.K., Singh, J.B., Raghavan, V., Raman, R., Rana, N.P., Samothrakis, S., Spencer, J., Tamilmani, K., Tubadji, A., Walton, P., Williams, M.D.: Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 57, 101994 (2019)

  35. Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 214–226 (2012)

  36. Erdélyi, O.J., Goldsmith, J.: Regulating Artificial Intelligence: Proposal for a Global Solution. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (2018)

  37. Erdélyi, O.J., Goldsmith, J.: Regulating artificial intelligence: proposal for a global solution. Preprint at arXiv:2005.11072 (2020)

  38. Ferrer, X., van Nuenen, T., Such, J.M., Coté, M., Criado, N.: Bias and discrimination in AI: a cross-disciplinary perspective. IEEE Technol. Soc. Mag. 40(2), 72–80 (2021)

  39. Fleming, J.G.: Drug injury compensation plans. Am. J. Comp. Law. 1, 297–323 (1982)

  40. Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press, Cambridge (2016)

  41. Guynn, J.: Google photos labelled black people 'gorillas'. USA today. http://www.usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-black-people-as-gorillas/29567465/ (2015). Accessed 15 Mar 2021

  42. Hagendorff, T.: The ethics of AI ethics: an evaluation of guidelines. Mind. Mach. 30(1), 99–120 (2020)

  43. Hauben, M., Bate, A.: Decision support methods for the detection of adverse events in post-marketing data. Drug Discov. Today 14(7–8), 343–357 (2009)

  44. Herkert, J., Borenstein, J., Miller, K.: The Boeing 737 MAX: lessons for engineering ethics. Sci. Eng. Ethics 26(6), 2957–2974 (2020)

  45. High-Level Expert Group on AI of the EU.: Ethics guidelines for trustworthy AI | Shaping Europe’s digital future”. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (2019). Accessed 15 Mar 2021

  46. Hoffmann, A.L.: Terms of inclusion: data, discourse, violence. New Media Soc. 23(12), 3539–3556 (2020)

  47. Hoofnagle, C.J., van der Sloot, B., Borgesius, F.Z.: The European Union general data protection regulation: what it is and what it means. Inf. Commun. Technol. Law 28(1), 65–98 (2019)

  48. Janiesch, C., Zschech, P., Heinrich, K.: Machine learning and deep learning. Electron. Markets 31, 685–695 (2021)

  49. Kearns, M., Roth, A.: The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Oxford University Press, Oxford (2019)

  50. Kim, Y.C., Dema, B., Reyes-Sandoval, A.: COVID-19 vaccines: breaking record times to first-in-human trials. NPJ Vacc. 5(1), 1–3 (2020)

  51. Lee, N.T., Resnick, P., Barton, G.: Algorithmic bias detection and mitigation: best practices and policies to reduce consumer harms. Brookings Institute, Washington, DC (2019)

  52. Linden, G., Smith, B., York, J.: Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Comput. 7(1), 76–80 (2003)

  53. McDuff, D., Cheng, R., Kapoor, A.: Identifying bias in AI using simulation. arXiv:1810.00471 (2018)

  54. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. 54(6), 1–35 (2021)

  55. Mills, C.W.: The Racial Contract. Cornell University Press, Ithaca (2014)

  56. Müller, V.C.: Ethics of artificial intelligence and robotics. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy (Summer 2021 Edition). https://plato.stanford.edu/archives/sum2021/entries/ethics-ai/. Accessed 18 Mar 2021

  57. Murphy, K.P.: Machine Learning: A Probabilistic Perspective. MIT Press, Cambridge (2012)

  58. Nabirahni, D.M., Evans, B.R., Persaud, A.: Al-Khwarizmi (algorithm) and the development of algebra. Math. Teach. Res. J. 11(1–2), 13–17 (2019)

  59. Nielsen, M.W., Alegria, S., Börjeson, L., Etzkowitz, H., Falk-Krzesinski, H.J., Joshi, A., Leahey, E., Smith-Doerr, L., Woolley, A.W., Schiebinger, L.: Opinion: gender diversity leads to better science. Proc. Natl. Acad. Sci. 114(8), 1740–1742 (2017)

  60. Noble, S.U.: Algorithms of Oppression. New York University Press, New York (2018)

  61. Northpointe Inc.: Measurement & treatment implications of COMPAS core scales. Technical report, Northpointe Inc. https://www.michigan.gov/documents/corrections/Timothy Brenne Ph.D. Meaning and treatment implications of COMPA core scales 297495 7.pdf. Accessed 2 Feb 2020 (2009)

  62. Obermeyer, Z., Powers, B., Vogeli, C., Mullainathan, S.: Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464), 447–453 (2019)

  63. Olteanu, A., Castillo, C., Diaz, F., Kıcıman, E.: Social data: biases, methodological pitfalls, and ethical boundaries. Front. Big Data 2, 13 (2019)

  64. O’Neil, C.: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Penguin Books Limited, New York (2016)

  65. Onuoha, M.: Notes on Algorithmic Violence. https://github.com/MimiOnuoha/On-Algorithmic-Violence (2018). Accessed 20 Aug 2021

  66. Opeyemi, B.: Deployment of Machine Learning Models Demystified (Part 1). Towards Data Science (2019)

  67. Pateman, C.: The Sexual Contract. Wiley, Weinheim (2018)

  68. Podesta Report. Exec.: Office of the President, big data: seizing opportunities, preserving values. https://obamawhitehouse.archives.gov/sites/default/files/docs/20150204_Big_Data_Seizing_Opportunities_Preserving_Values_Memo.pdf (2014). Accessed 15 Aug 2021

  69. Reed, C.: How should we regulate artificial intelligence? Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 376(2128), 20170360 (2018)

  70. Reisman, D., Schultz, J., Crawford, K., Whittaker, M.: Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, pp. 1–22. AI Now Institute (2018)

  71. Baeza-Yates, R.: Bias on the web. Commun. ACM 61(6), 54–61 (2018)

  72. Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach. Pearson, New York (2016)

  73. Sandler, R., Basl, J.: Building Data and AI Ethics Committees. North Eastern University Ethics Institute and Accenture. https://cssh.northeastern.edu/informationethics/wp-content/uploads/sites/44/2020/08/811330-AI-Data-Ethics-Committee-Report_V10.0.pdf (2019). Accessed 7 May 2021

  74. Santoro, M.A., Gorrie, T.M.: Ethics and the Pharmaceutical Industry. Cambridge University Press, Cambridge (2005)

  75. Sax, L.J., Lehman, K.J., Jacobs, J.A., Kanny, M.A., Lim, G., Monje-Paulson, L., Zimmerman, H.B.: Anatomy of an enduring gender gap: the evolution of women’s participation in computer science. J. Higher Educ. 88(2), 258–293 (2017)

  76. Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., Guez, A., Lockhart, E., Hassabis, D., Graepel, T., Lillicrap, T.: Mastering atari, go, chess and shogi by planning with a learned model. Nature 588(7839), 604–609 (2020)

  77. Sedgwick, P.: Phases of clinical trials. BMJ 343, d6068 (2011)

  78. Shapira, R., Zingales, L.: Is Pollution Value-Maximizing? The DuPont case (No. w23866). National Bureau of Economic Research (2017)

  79. Shields, M.: Women's participation in Seattle's high-tech economy. https://smartech.gatech.edu/bitstream/handle/1853/53790/madelyn_shields_womens_participation_in_seattles_hightech_economy.pdf (2015). Accessed 15 Aug 2021

  80. Spiekermann, S.: Ethical IT innovation: a value-based system design approach. CRC Press, Boca Raton (2015)

  81. Suresh, H., Guttag, J.V.: A framework for understanding unintended consequences of machine learning. arXiv:1901.10002 (2019)

  82. Swift, S.: Gender Disparities in the Tech Industry: The Effects of Gender and Stereotypicability on Perceived Environmental Fit. In: 2015 NCUR (2015)

  83. The National Archives.: Equality Act 2010. [online] https://www.legislation.gov.uk/ukpga/2010/15/contents. Accessed 15 June 2021

  84. Thelisson, E., Padh, K., Celis, L.E.: Regulatory mechanisms and algorithms towards trust in AI/ML. In: Proceedings of the IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI), Melbourne, Australia (2017)

  85. Tolan, S.: Fair and unbiased algorithmic decision making: current state and future challenges. arXiv:1901.04730 (2019)

  86. Tramer, F., Atlidakis, V., Geambasu, R., Hsu, D., Hubaux, J.P., Humbert, M., Juels, A., Lin, H.: FairTest: discovering unwarranted associations in data-driven applications. In: 2017 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 401–416. IEEE (2017)

  87. US Census Bureau, Bureau of Justice Statistics.: https://data.census.gov/cedsci/table?q=S0201&t=400%20-%20Hispanic%20or%20Latino%20%28of%20any%20race%29%20%28200-299%29%3A451%20-%20White%20alone,%20not%20Hispanic%20or%20Latino%3A453%20-%20Black%20or%20African%20American%20alone,%20not%20Hispanic%20or%20Latino&tid=ACSSPP1Y2019.S0201 (2019). Accessed 22 Apr 2021

  88. Van Wel, L., Royakkers, L.: Ethical issues in web data mining. Ethics Inf. Technol. 6(2), 129–140 (2004)

  89. Van Wynsberghe, A., Robbins, S.: Critiquing the reasons for making artificial moral agents. Sci. Eng. Ethics 25(3), 719–735 (2019)

  90. Verdin, J., Funk, C., Senay, G., Choularton, R.: Climate science and famine early warning. Philos. Trans. R. Soc. B Biol. Sci. 360(1463), 2155–2168 (2005)

  91. Vincent, J.: Amazon reportedly scraps internal AI recruiting tool that was biased against women. The Verge. https://www.theverge.com/2018/10/10/17958784/ai-recruiting-tool-bias-amazon-report (2018). Accessed 28 Mar 2021

  92. Wajcman, J.: Feminism Confronts Technology. Penn State Press, Pennsylvania (1991)

  93. Wallach, W., Allen, C.: Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, Oxford (2009)

  94. Wang, S., Guo, W., Narasimhan, H., Cotter, A., Gupta, M., Jordan, M.I.: Robust optimization for fairness with noisy protected groups. arXiv:2002.09343 (2020)

  95. Washington, A.L.: How to argue with an algorithm: lessons from the COMPAS-ProPublica debate. Colo. Tech. LJ 17, 131 (2018)

  96. Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., Cave, S.: Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. Nuffield Foundation, London (2019)

  97. Whitehouse, G., Diamond, C.: Reproducing gender inequality: segregation and career paths in information technology jobs in Australia. Reworking 1, 555–564 (2005)

  98. Winfield, A.F., Jirotka, M.: The case for an ethical black box. In: Annual Conference Towards Autonomous Robotic Systems, pp. 262–273. Springer, Cham (2017)

  99. Woolley, S.C., Howard, P.N. (eds.): Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford University Press, Oxford (2018)

  100. World Prison Brief.: https://prisonstudies.org/country/united-states-america (2018). Accessed 22 Apr 2021

  101. Yasser, Q.R., Al Mamun, A., Ahmed, I.: Corporate social responsibility and gender diversity: insights from Asia Pacific. Corp. Soc. Responsib. Environ. Manag. 24(3), 210–221 (2017)

  102. Zeng, Z.: Jail Inmates in 2018. US Census Bureau, Bureau of Justice Statistics. https://bjs.ojp.gov/library/publications/jail-inmates-2018 (2020). Accessed 22 June 2021

  103. Zhou, N., Zhang, Z., Nair, V.N., Singhal, H., Chen, J., Sudjianto, A.: Bias, Fairness, and Accountability with AI and ML Algorithms. arXiv:2105.06558 (2021)

  104. Zuboff, S.: The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, New York (2019)

Acknowledgements

With immense gratitude to the editorial team for their great assistance, and to the anonymous reviewers for their input. Secondly, to Ioannis Votsis, my MA dissertation supervisor, a truly vocational professor who provided me with superb insights and feedback. Thirdly, to Justine Seager for her great assistance in the initial editing. Finally, this paper is dedicated to the inspirational women and nonbinary people of colour, especially Timnit Gebru and Joy Buolamwini, for pioneering a more diverse and inclusive approach to AI and Ethics.

Author information


Corresponding author

Correspondence to Lorenzo Belenguer.

Rights and permissions

Reprints and permissions

About this article

Cite this article

Belenguer, L. AI bias: exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI Ethics 2, 771–787 (2022). https://doi.org/10.1007/s43681-022-00138-8
