Science & Technology
www.thehindu.com


May 25, 2023 12:40 am | Updated 12:40 am IST

‘There is no good quality data available at present to train algorithms with an Indian context and background’ | Photo Credit: Getty Images/iStockphoto

There has been a rise in warnings against Artificial Intelligence (AI) from technology leaders, researchers and experts, including Geoffrey Hinton (artificial intelligence pioneer), Elon Musk (co-founder and CEO of Tesla), Emad Mostaque (British AI expert), Cathy O’Neil (American mathematician, data scientist, and author), Stuart J. Russell (British computer scientist), and many others. Recently, the World Health Organization called for caution in using AI for public health care. In India, however, fewer voices have echoed NITI Aayog’s initiative for Responsible AI, beyond some journal papers and articles warning of impending dangers. Instead of waiting for more industry leaders in India to add their names to this list, we need to realise that the time to pay attention and prepare is now.

The possibility of a dystopian world where machines surpass humans is a remote threat, the likelihood of which needs more discourse and deliberation. The more immediate threat that algorithms and artificial intelligence pose is that of reinforcing biases, de-emphasising representation and trivialising diversity. Against the background of India’s many States, languages, colours, cultures, and traditions, this threat can propagate more inequity and further exclusion for marginalised groups and minorities, which can translate into high costs in terms of livelihoods, opportunities, well-being, and life. A recent report by the United Nations Development Programme and Aapti Institute highlighted that ‘algorithmic bias has [the] greatest impact on financial services, healthcare, retail, and gig worker industries’. The report also flagged that AI-generated credit scoring tended to assign lower scores to women than to men, even when their financial backgrounds were similar. Research on AI applications in health-care diagnostics has identified significant biases against people of colour, since most of the data used to train the models came from the United States and China and lacked representation of all communities even within those countries.

With this background, it is imperative that we pause and think before jumping onto the AI roller coaster. Both commercial and government players in India are already evolving from Big Data towards AI for better targeting, efficiency, and profits, as well as ‘perceived and (mostly) intended’ social welfare. The sensational popularity of ChatGPT continues to grow as social impact organisations and government agencies use it to customise their own public information systems and communication mechanisms. The Government of India continues to increase its budget allocation to emerging technologies, including AI, each year, and has set up teams and an agenda for AI deployment and use. However, as the world’s most populous country, with the second-largest Internet user base, leans towards this innovation and modernisation, there needs to be more discussion around the data being used to train and build these algorithms. It is essential to redirect our focus towards the underlying foundations, governance, and ethical considerations of these systems and their operations before their adoption affects our daily lives.

AI depends on Big Data or programmatic rules to learn about and simulate human intelligence, communication, and (potentially) actions. Machines learn from massive volumes of data created and provided by humans, and are programmed to identify patterns and replicate them in order to make decisions. This translates into a replication of human languages, communication styles, competencies, capabilities and logic, alongside stereotypes, biases, belief systems and preferences. While these algorithms are designed to improve through feedback loops and programmatic corrections, they lack a moral compass; unlike humans, they do not question stereotypes, norms, culture or traditions. Machines have no sense of the fairness or empathy that a society, especially one with minorities and disempowered communities, leans on. Furthermore, machines generalise learnt patterns to the whole population without knowing whether the data used to train them are diverse and complete, with adequate representation from all communities and groups affected by their application.
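
To make that generalisation problem concrete, here is a minimal sketch in Python, using entirely synthetic data and a hypothetical ‘diagnostic’ feature (nothing here comes from any real system): a model trained overwhelmingly on one group learns that group’s pattern and then systematically misclassifies the under-represented group.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, baseline):
    # Hypothetical feature: the true signal sits at a different
    # threshold (baseline) for each group.
    x = rng.normal(loc=baseline, scale=1.0, size=(n, 1))
    y = (x[:, 0] > baseline).astype(int)  # ground truth, relative to the group's own baseline
    return x, y

# Training data: 95% group A, 5% group B (under-represented)
xa, ya = make_group(950, baseline=0.0)
xb, yb = make_group(50, baseline=2.0)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Fresh samples from each group expose the gap the model has learnt
xa_test, ya_test = make_group(1000, baseline=0.0)
xb_test, yb_test = make_group(1000, baseline=2.0)
print("accuracy on majority group A:", model.score(xa_test, ya_test))
print("accuracy on minority group B:", model.score(xb_test, yb_test))
# The single learnt threshold fits group A; group B's cases are
# misclassified at a far higher rate because the model barely saw them.

The model is not malicious; it has simply generalised the majority’s pattern to everyone, which is exactly the failure mode that unrepresentative training data produces.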

For example, algorithms can potentially sift through volumes of resumes, career trajectories, and the performance records of previous employees in an organisation and ‘learn’ that men are more productive than women in general, take long leave far less often, and are better suited to leadership roles. Why? Because, historically, organisations have had a disproportionately high number of men in leadership, have not offered paternity leave (leading women to take career breaks to raise children), and male supervisors have often ranked men higher on performance than their women colleagues. This problem can have deeper layers of race, caste, geography, educational background, and so on. Every bias we have held in the past can be learnt as a rule and an insight, and applied as a generalisation.
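
A minimal sketch of this dynamic, again in Python with entirely synthetic data (the variables and the biased promotion rule are hypothetical, constructed only to illustrate label bias): when historical promotions applied a lower bar to men, a model trained on those records learns gender itself as a predictor.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000

performance = rng.normal(0.0, 1.0, n)   # drawn identically for everyone
is_male = rng.integers(0, 2, n)         # 1 = male, 0 = female

# Historical outcome: men were promoted at a lower performance bar.
# The bias lives in the labels, not in anyone's ability.
bar = np.where(is_male == 1, 0.0, 1.0)
promoted = (performance > bar).astype(int)

model = LogisticRegression().fit(np.column_stack([performance, is_male]), promoted)

# Two candidates with identical performance, different genders
candidates = np.array([[0.5, 1], [0.5, 0]])
probs = model.predict_proba(candidates)[:, 1]
print("P(promote | male,   performance=0.5):", probs[0])
print("P(promote | female, performance=0.5):", probs[1])
# The model has turned a historical bias into a 'rule': being male
# raises the predicted probability of promotion on its own.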

Other examples of this can include assigning potential drug abuse to a specific geography, potential criminal activity to a specific community, a (misleading) intellectual supremacy to a fraternity, or a stereotypical gender attribution to a profession based upon past data, patterns and historical records. In a way, algorithms learn from and project the past into the future.

What compounds this problem is the poor state of administrative data in India across States. There is no good quality data available at present to train algorithms with an Indian context and background. Representation of marginalised groups and communities in privately collected and held data is even worse.

This challenge is exacerbated by the low awareness and understanding of algorithms and their functioning among government personnel and the general population affected by their application. Algorithms and systems usually come across as a black box to their users, with little to no transparency into their input variables or the data used for training. This can translate into excessive dependence on, and exaggerated faith in, algorithmic suggestions and outputs, furthering the discrimination and exclusion of minorities and underprivileged groups.

AI deployment is not a leap India can afford to make without developing it on our own data and in our own context. We need clean, organised, digitised, and well-governed public data to build algorithms and models that benefit our people. Both industry and governments must exercise caution and invest sufficiently in research, development and scrutiny before embracing this innovation. While the challenge of data volumes can be addressed, given the scale at which our administration and services operate, it is imperative that we prioritise the development of AI in a responsible and informed manner.

The private sector and practitioners should work collaboratively with governments on this journey. It is crucial to emphasise that the goal must be to build intelligent machines that are inclusive and reflect the country’s diversity and heterogeneity. Our adoption of this innovation should not hinder our progress towards equality and equity for all; it should support our reforms and endeavours for positive change.

Tulika Avni Sinha is a public policy professional who works with state governments and multilateral institutions in India on the design, adoption, and implementation of data policies and systems. She is a former Mason Fellow at the Harvard Kennedy School
