GENDER BIAS IN AI
By Gunjan Khanted
Artificial intelligence: is it the future of the world, or the cause of humanity's end?
It is seeping into our lives in ways we do not even notice, promising a vastly more productive and efficient economy, mainly by eliminating human error and upholding ethical standards through the removal of bias. But are we truly achieving that?
As revolutionary AI technology has recently been applied across numerous sectors, the results have been astonishing: favoritism in hiring, racism in healthcare, sexism in university screening, and discrimination in the courts and the banking sector, among many other prejudices. Algorithmic bias can not only undermine equality but also lead to unfair decisions and misrepresentation of certain groups.
The root cause of this unfairness lies in how the programs are designed and in the incomplete data fed to these machines. Even when a considerable amount of data, meant to represent the real world in all its aspects, is given to an AI system, that data reflects years of human subjectivity, and this subjectivity is only amplified when the technology is used in practice. AI systems cannot tell whether their training inputs are objective or subjective.
Translation services are a case in point: when translating from non-gendered languages into English, they defaulted to referring to doctors as male and nurses as female. Societal bias and outdated conventions lead AI to assume all nurses are women. Likewise, because current industry trends show a majority of men in leadership positions, these systems wrongly infer gender as a qualification and deem women unfit for such roles.
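The failure mode can be sketched in a few lines of Python. The mini-corpus, counts, and function below are all invented for illustration; real translation systems are vastly more complex, but the underlying mechanism, defaulting to the majority pattern in the training data, is the same.

```python
from collections import Counter

# Hypothetical mini-corpus of (occupation, pronoun) pairs reflecting
# societal bias. A frequency-driven model has only these counts to go on.
corpus = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
]

def most_likely_pronoun(word, corpus):
    """Return the pronoun most often paired with `word` in the training data."""
    counts = Counter(p for w, p in corpus if w == word)
    return counts.most_common(1)[0][0]

print(most_likely_pronoun("nurse", corpus))   # majority vote picks "she"
print(most_likely_pronoun("doctor", corpus))  # majority vote picks "he"
```

The model is not malicious; it simply reproduces whichever association dominates its inputs, which is exactly why skewed data yields skewed output.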
Furthermore, and more alarmingly, AI systems have concluded that Black people are statistically more prone to committing crime than white people, and the US court system has issued charges and sentences influenced by such factors.
Only after a considerable amount of time have companies and corporations begun to recognize this gigantic flaw in the system, and they are now working tirelessly to eliminate it. Since AI is created by humans, we might hope to simply change the code, but unfortunately it is not as easy as it sounds. Machine learning happens at scale and at speed, without clarity on the data gaps that give rise to bias and assumptions.
To overcome this gender inequality, we can follow a three-step rule that asks: who codes, how we code, and why we code, followed by thorough testing of the algorithm before its release. Detailed strategic plans under each step can lead to a huge change in the system.
Who Codes?
AI is biased because the humans who build it are, so to ensure AI receives a varied dataset covering all bases, the coders themselves must come from diverse backgrounds. There is a huge gender gap: only 22% of professionals in AI and data science are women, and most of them occupy lower, associate-level positions. Moreover, if development is restricted to one region or nationality, the technology's application worldwide may prove ineffective.
A more exhaustive approach would be to source data from all corners of the world and have it reviewed by people of all colors, genders, and sexualities.
A related impact can be seen in the fact that 300 million fewer women than men have access to the internet and mobile phones, which inherently skews datasets.
How do we Code?
Presently, the data in datasets is collected and labeled by humans, who decide which data to include and which to discard. This gives incomplete information to the software, leaving blind spots that invite assumptions. Labeling data for a large demographic can be subjective, reducing people to a few parameters like gender or nationality, which further embeds bias in the technology.
Policy changes can also leave the data on weaker footing. For example, a woman's creditworthiness was once assessed using her marital status. Progressive times demanded a neutral approach, but the harm had already been done: women were left with thinner formal financial histories. AI, therefore, is not keeping pace with modern times the way we want it to.
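The creditworthiness example shows why simply deleting a sensitive field is not enough. The sketch below uses entirely hypothetical records and field names; the point is that a correlated feature (here, years of formal credit history) carries the old bias forward even after the explicit field is removed.

```python
# Hypothetical applicant records (illustrative values, not real data).
applicants = [
    {"gender": "female", "marital_status": "married", "credit_history_years": 2},
    {"gender": "male",   "marital_status": "married", "credit_history_years": 9},
]

def scrub(record):
    """Drop the explicitly sensitive field. Necessary, but not sufficient."""
    return {k: v for k, v in record.items() if k != "marital_status"}

scrubbed = [scrub(r) for r in applicants]

def naive_score(record):
    """A score built on history length still penalizes the applicant
    who was historically denied credit in her own name."""
    return record["credit_history_years"]

# The proxy survives the scrub: the female applicant still scores lower.
print([naive_score(r) for r in scrubbed])
```

This is why fairness audits look at outcomes across groups rather than merely checking which input fields a model receives.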
Why do we Code?
The purpose of this technology is to eliminate human error and propagate equal opportunity, so it is important to hold to that doctrine when customizing the software for specific uses. Research and analysis should be done to offer options such as blurring an individual's gender by age, blurring nationality behind categories like citizen and immigrant, and blurring historical biases through fact-based reasoning rather than the habit of labeling "most" as "all". For example, most pilots are men, but not all.
In addition to the above principles, we need to establish a gender-sensitive governance approach, amplifying the voices of marginalized communities and advancing equity both for people and for AI technology.
Audit programs must be initiated by a board or committee solely responsible for the international implications of the technology; it could have rights and proceedings similar to an organ of the UN. Additionally, countries can impose their own screening tests for AI affecting the public sector, and can publish a list of standards that private companies must meet before deploying the technology.
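One concrete screening test such a committee could mandate is a demographic parity check: compare a model's positive-outcome rates across groups and flag any gap above a threshold. The function, data, and threshold below are a minimal sketch of the idea, not a prescribed standard.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between any two groups.

    predictions: list of 0/1 model outputs.
    groups: parallel list of group labels for each prediction.
    """
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: group "a" is approved 75% of the time, group "b" 25%.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(gap)  # 0.75 - 0.25 = 0.5

# A regulator might require the gap to fall below a chosen threshold.
PASSES_AUDIT = gap < 0.1
```

Demographic parity is only one of several fairness criteria (equalized odds and calibration are others), and a real audit would weigh which criterion fits the application.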
Also, for thorough market testing, developers may launch campaigns to recruit users or volunteers who examine the technology and share their experience and feedback. Conducted globally and at scale, this can yield a much better understanding. For example, software working on facial recognition could invite users to join a selfie-inclusion campaign in which their faces are used as training data for the AI. This would bring in more diverse input, and the output could then be examined for improvement.
New technology is a chance for humanity to start fresh. For AI to lead to a better tomorrow, it is up to humans, not the machines, to remove the bias. If this software is not given immediate attention, it will only reflect human bias on a greater scale.
It is for all nations, sexes, castes, colors, and religions to come together as one and play a crucial role in shaping a bias-free AI future.