In this edition, we opened up the CSR conversation to more participants, resulting in a lively Roundtable discussion of how AI can be used to improve CSR initiatives.
At the table were Gazala Shaikh, Head – Group ESG, L&T Finance; Balakumar Thangavelu, India Head – Outreach, Employee Volunteering (CSR) and Global Lead – ESG, Cognizant; and Dr Lopamudra Priyadarshini, Assistant VP, Head of Community Relations & Sustainability, Hindalco Industries Ltd. Moderating this insightful conversation was Joel Fernandez, Head – Projects & Strategy, TeamLease Education Foundation.
Across the Indian CSR landscape, funds are not in short supply. What is in short supply is trust. Companies spend considerable time and manpower finding and shortlisting partners to implement their CSR plans. These partners are vetted, and their track records analysed at multiple levels, before projects are given the go-ahead.
Yet, after just one project or financial year, companies are often on the lookout for new NGO/CSR partners, triggering another long, resource-consuming search.
This is where AI could play a great role in building trust between corporate donors and NGOs. According to Thangavelu from Cognizant, by integrating inputs from drones that monitor built infrastructure, projects can be tracked in real time and funds disbursed accordingly, preventing waste or mismanagement. Generative AI could also be employed to ensure that government-mandated reports are in the right format, cover the required metrics and generally serve the purpose of accurate impact assessment.
In this way, AI-enabled close monitoring can build trust between donor and recipient organisations, leading to longer-term relationships.
Dr Priyadarshini of Hindalco Industries offered another perspective. Beyond the benefits mentioned earlier, she pointed out that AI-driven predictive analytics can also forecast potential deviations from an organisation's CSR goals. By analysing historical data and external factors each year before allocating CSR funds, companies learn where money has been spent, why certain funds went unspent and, in the true sense of predictive analysis, where project implementers could go off-track or over-budget. With AI, an early-warning trip-wire gets built into the tracking systems, avoiding wastage of funds and manpower.
Gazala Shaikh of L&T Finance struck a cautionary note against relying too heavily on the powers of AI. She warned that since machines learn from humans and algorithms are written by us, our biases do get embedded in AI, so great care must be taken to counter this.
Across India, diversity is a core aspect of most CSR programs. Algorithmic bias could therefore undermine whatever inclusivity initiatives companies roll out. How, then, can organisations overcome this roadblock?
Ms. Shaikh of L&T Finance spoke of the Safecity app, developed by the Red Dot Foundation to ensure women's safety in urban centres. Algorithmic bias against women was nullified in a unique way: women themselves fed in data points on levels of safety across different parts of a city, at different times of the day and night.
The interesting points raised by Ms. Shaikh and Dr Priyadarshini highlight the importance of humans playing the key decision-making role even when AI is deployed for data analysis or other functions.
Take the case of hiring: women, persons with disabilities and those from traditionally marginalised communities have long faced insurmountable biases when interviewed by human hirers. In this context, anonymous AI-based hiring could be a solution, keeping the deeply entrenched prejudices that people tend to bring to their jobs out of the picture and making truly inclusive hiring achievable.
Yet it would still require humans to reassess AI models for biases (which reflect the prejudices of the humans the machines learnt from!) before these AI-powered hiring systems could be called truly fair and unbiased. This dual nature of AI highlights the utter importance of humans remaining in the driver's seat when deploying AI solutions.
Another example from Thangavelu highlighted how AI and humans are natural partners. Describing Cognizant's volunteering program, he explained how employees signed up online for a number of important life-saving programs, feeding in the data that AI solutions need to learn from before operating independently. The Open Street program required employees to log in for as little as 15 minutes to build out sections of maps, allowing the Red Cross and other relief organisations to create accurate, Google-Maps-like navigation aids for use in crisis zones. Another project involved an hour of training before viewing X-ray or ultrasound images of women's breasts and identifying lymph nodes. Such data sets would be used by AI to eventually learn to spot signs of breast cancer early enough to improve recovery chances.
A question posed by moderator Joel Fernandez was whether large organisations, often operating at the cutting edge of technology, owed it to NGOs and CSR partners to train them in the latest tech. All three participants at the Roundtable believed this ought to be the case. Rather than find fault with where NGOs fall short, whether in project assessment or implementation, donor companies with deep tech knowledge could help train NGO partners to adopt AI and streamline their own operations. This would be a win-win for both parties.
After covering considerable ground on all the ways adopting AI could improve CSR outcomes, the discussion ended on a cautionary note. Ms. Shaikh drew the participants' attention to the critical importance of data privacy. With large volumes of data being collected, beneficiaries in particular are left vulnerable, not just to hackers, but also to having their personal details exploited for purposes never intended by the donor organisation.
And with AI and machine learning, where different data sets interact with each other, the gap between what a person's data was collected for and what it is eventually used for can lead to damaging outcomes.
The discussion wound up with the new data privacy legislation passed by the Indian Parliament, and the important role individuals themselves must play in protecting their data, starting with not posting personal details in public forums.
After all, in the days to come, data is only going to become a more valuable currency.