Revisiting the AI diversity crisis – and how to solve it

In recent months and years, the prominence of movements like Black Lives Matter and #MeToo has made one thing demonstrably clear: there is still much work to be done across the board when it comes to fostering diversity and inclusion. This is particularly true in the tech industry, where the far-reaching scope of artificial intelligence (AI) means that any gender or racial bias at the source can be multiplied to the nth degree in practice, with a profound impact on society.

And these biases should not be underestimated. Put simply, when the leaders and decision-makers in AI companies don’t reflect the diversity of society at large, the consequences for the development and implementation of AI/ML products can be dire. From chatbots like Microsoft’s Tay, which can easily learn to adopt misogynistic and racist language, to image recognition systems that miscategorize black faces, the warning signs are clear. If companies don’t start making big changes, it will become increasingly difficult to correct these errors and prevent them from becoming entrenched in these systems.

We all know that AI has the potential to solve some of the world’s thorniest issues, from tackling climate change to curing cancer. But if we are to build truly pioneering tech that works for all and serves all, companies must begin taking practical steps in the right direction.

Revisiting the conversation

Considering the dialogue around diversity and inclusion to date would be a good place for AI businesses to start.

Back in 2014, big names in the AI space came under fire from activists for their lack of transparency around diversity standards, with industry professionals such as software engineer Tracy Chou and investor Ellen Pao, to name but a few, urging these companies to face the issue head-on.

As a result, these companies began publishing diversity reports, with many large corporations such as Google, Apple and Facebook following suit. As anticipated, the number of staff from underrepresented backgrounds – particularly those in technical roles – was a cause for concern. Alongside this increased transparency, these companies began appointing diversity and inclusion leads, launching diversity initiatives, and overhauling their hiring practices.

The publication of these statistics also gave rise to non-profit organizations like Project Include, Diversity.AI, and Women in Machine Learning + Data Science, as well as research initiatives delving into the ethical ramifications of diversity issues.

In just over half a decade, it might seem that great progress has been made – especially in increasing transparency. But despite these apparent gains, a recent study from the AI Now Institute at New York University found that more than 80 percent of academics holding AI professorships are men, and that women account for only 15 percent of AI researchers at Facebook and 10 percent at Google.

Where race is concerned, the numbers are even more troubling. As of 2019, just 4 percent of the workforce at both Facebook and Microsoft is black; at Google, the figure is worse still, at just 2.5 percent.

Now is the time for practical steps

Although on the surface progress has been made, and many companies are now more transparent about putting an inclusive ethos at the heart of their operations, meaningful change must be built from the ground up.

Ultimately, addressing the complexities of inclusive AI will take much more than just publishing white papers and reports, although this process is no doubt important for shedding some much-needed light on the issues at play.

We must not forget that AI is not inherently biased. So, from a technical standpoint, the industry must continue to create and use detection tools that identify and mitigate bias in a given dataset. This will help prevent AI toolsets from propagating existing inequalities or building on unrepresentative data.
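To make this concrete, the sketch below shows one simple check of the kind such detection tools build on: measuring whether a model’s positive predictions are spread evenly across demographic groups (a demographic parity check). The field names and the threshold are hypothetical placeholders, and this is only an illustrative starting point under those assumptions, not a substitute for dedicated auditing toolkits.

```python
# Minimal, illustrative sketch of a demographic parity check.
# Field names ("group", "predicted_positive") and the 0.2 threshold
# are hypothetical, chosen only for this example.
from collections import defaultdict

def demographic_parity_gap(records):
    """Return (largest gap in positive-prediction rates between groups, per-group rates)."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(r["predicted_positive"])

    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: flag the model for review if the gap exceeds a chosen threshold.
sample = [
    {"group": "A", "predicted_positive": True},
    {"group": "A", "predicted_positive": True},
    {"group": "B", "predicted_positive": False},
    {"group": "B", "predicted_positive": True},
]
gap, rates = demographic_parity_gap(sample)
print(rates)       # {'A': 1.0, 'B': 0.5}
print(gap > 0.2)   # True -> worth investigating before deployment
```

In practice, a check like this would sit alongside other fairness metrics in a model’s evaluation pipeline, run on representative data before any system is deployed.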

But more than that, if society is to benefit from AI democratically, change must begin with governments and state-led enterprises, alongside increased funding for STEM education. More must be done to shift the status quo and address the nuances of how identity shapes students’ relationship to computer science. Only then will we see labs and research departments filled with people from all walks of life, including those from underrepresented backgrounds who are encouraged and given the means to pursue careers in AI.

Likewise, companies that have already made important changes to their employment practices should redouble their efforts by promoting hard-working employees from diverse backgrounds into more senior positions. The AI researchers and pioneers of the future need strong role models and mentors to look up to, and will be more likely to take their first steps into a technical role if they see themselves reflected in the leaders of today.

Within organizations, it is also vital to create an atmosphere of inclusivity and trust in which all voices are heard. This should be a matter of priority, and business leaders should hold themselves to account by fostering an environment where employees can speak up if they believe that a particular product or practice does not reflect appropriate diversity standards. This is the right thing to do, and it is better for business more generally: it has long been argued that a more diverse workforce performs better financially, too.

All in all, it is high time the AI industry looked beyond statistics and audits: real change can only be made if we interrogate these issues on a societal level as well as a technical one. As such, researchers and industry professionals would do well to examine their technology in the context of wider issues. Only then will we be able to enjoy a future in which all members of society benefit from the many exciting opportunities that AI has to offer.

Nikolas Kairinos, chief executive officer and founder, Soffos
