Fighting Evolution's Mistakes: Ensuring Equitable Hiring with Machine Learning

We understand how to reduce bias in AI through rigorous empirical data, and we can take these lessons and apply them to society by reintroducing diversity to our corporate and academic population.


Machine learning is incredibly difficult. Many people assume the technology is nearly omnipotent — able to attack any problem with ease — but frequently forget to factor in the effort required to produce useful, robust and unbiased results.

Often, the time spent implementing machine learning isn't spent building the basics; most of the fundamental structure is well solved and available off the shelf. The difficulty lies in preparing data and optimizing the model so it's useful on actual, real-world examples.

Implementing truly effective machine learning models may take months and millions of dollars. The process usually involves incredibly long and complex calculations, run either on supercomputers or on massive grids of distributed computers in the cloud. To be truly ready, the system must churn through endless variations of the same problem, trying to produce better and better results as measured by key metrics.

I've observed these difficulties firsthand, twice: first at a crypto hedge fund building automated trading strategies with forecasting tools, and then at Google, helping with pipelining for machine learning data and annotations. These experiences taught me that it's impossible to produce good machine learning without putting in the requisite work and effort, the same way it's difficult to hire equitably without a focus on fairness.

Most of the time, simple solutions are more useful than investing in machine learning.

The perfectly optimal answer might be only marginally better than a trivial one, which will often suffice. Once you factor in the cost of training a model, you could end up much worse off than a human making an educated guess.

However, for very difficult problems where marginally better answers can make a substantial impact, machine learning is an invaluable tool. Machine learning is an umbrella term for a large variety of strategies that work with different levels of success on different problems.

For example, genetic algorithms are a popular strategy that works well in a variety of circumstances. The approach mimics Darwinian evolution: during each iteration, every candidate in a large pool is measured for success (defined per problem), and the most successful candidates are combined to create new ones.

In technical terms, in each iteration, or epoch, each individual candidate is tested for its heuristic strength or fitness. For example, imagine a baker trying to find a recipe for the perfect brownie. In their first batches, they randomly tweak the ingredients. After baking these slightly different batches of brownies, they taste each batch. Then, they take several of the best-tasting brownie recipes and mix those recipes together to make more recipes, repeating until the baker ends up with only great-tasting brownie recipes.
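To make that loop concrete, here is a minimal sketch of a genetic algorithm in Python, using the brownie analogy. The population size, ingredient encoding and fitness function are hypothetical stand-ins chosen for illustration, not any particular production system.

```python
import random

POPULATION_SIZE = 50
GENOME_LENGTH = 8  # e.g., amounts of eight brownie ingredients

def random_candidate():
    # A candidate "recipe" is just a list of ingredient amounts between 0 and 1.
    return [random.random() for _ in range(GENOME_LENGTH)]

def fitness(candidate):
    # Hypothetical "taste test": reward recipes close to an arbitrary target mix.
    target = [0.5] * GENOME_LENGTH
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def crossover(parent_a, parent_b):
    # New candidates are combinations of two successful parents.
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def evolve(generations=100):
    population = [random_candidate() for _ in range(POPULATION_SIZE)]
    for _ in range(generations):
        # Re-measure every candidate's fitness in each iteration.
        ranked = sorted(population, key=fitness, reverse=True)
        # Keep the best-tasting recipes and breed the rest of the pool from them.
        survivors = ranked[: POPULATION_SIZE // 5]
        population = survivors + [
            crossover(random.choice(survivors), random.choice(survivors))
            for _ in range(POPULATION_SIZE - len(survivors))
        ]
    return max(population, key=fitness)

best = evolve()
print("Best recipe found:", [round(x, 2) for x in best])
```

Note that new candidates here come only from recombining the current winners, which is exactly what sets up the diversity problem discussed next.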

Machine learning can be incredibly biased if not controlled.

This evolutionary strategy comes with a major flaw: diversity trends toward zero. In each generation, the most successful candidates win more often, making them more likely to be chosen as parents for the next generation. This compounds over time, pushing the pool toward more and more similar candidates.

A lack of biodiversity in ecosystems is well understood as problematic, and AI is vulnerable to similar issues. For example, in a bid to automate the labor-intensive hiring process, Amazon accidentally developed a failed resume screener that heavily preferred men, penalizing graduates of women's colleges and members of clubs with the word "women." How did this happen? Algorithms, especially AI-based solutions, are entirely dependent on the data provided to train them. When the algorithm is trained on successful hires largely consisting of men, these biases are reproduced in the seemingly "objective" model.

These diversity issues are so prevalent in genetic algorithms that reducing bias is one of the most valuable uses of a practitioner's time. We can ameliorate this genetic overfitting by introducing randomness and diversity into the system: giving random candidates a moment of prominence to see whether they produce quality results, or injecting a completely new, diverse candidate and giving it a chance to dominate, as sketched below.
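As a rough illustration of those countermeasures, this sketch extends the earlier example with two standard techniques: mutation (randomly tweaking offspring) and "random immigrants" (injecting brand-new candidates each generation). The rates and helper names are assumptions made for illustration.

```python
import random

MUTATION_RATE = 0.10       # chance that any single ingredient amount is perturbed
IMMIGRANT_FRACTION = 0.10  # share of each generation replaced with new candidates

def mutate(candidate):
    # Give each gene a small chance to change, keeping variety in the pool.
    return [
        random.random() if random.random() < MUTATION_RATE else gene
        for gene in candidate
    ]

def inject_immigrants(population, make_random_candidate):
    # Replace the tail of the (fitness-sorted) population with completely new
    # candidates, giving diverse entrants a chance to compete and even dominate.
    n_new = max(1, int(len(population) * IMMIGRANT_FRACTION))
    return population[:-n_new] + [make_random_candidate() for _ in range(n_new)]
```

In the earlier loop, offspring would pass through mutate and the sorted population through inject_immigrants before the next generation is scored, so no single lineage can quietly take over the pool.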

We understand how to reduce bias in AI through rigorous empirical data, and we can take these lessons and apply them to society by reintroducing diversity to our corporate and academic population. Women make up less than 25% of technical roles at America's largest tech companies, and the share is even lower in AI, where women hold roughly 22% of jobs. When this happens in machine learning, we introduce more diversity and make sure no group is underrepresented, mirroring the effort needed for the people behind the screen.

Encouraging unique opinions prevents losing creativity.

For this reason, it's mathematically crucial that we fix these imbalances and restore balance to the job market by guaranteeing representation of people from all backgrounds in every type of role. Gender and racial disparities need to be addressed; they are inherently dangerous to the overall success of the system.

By taking the lessons we learn from machine learning and genetic diversity, we can show empirically that getting more diverse candidates into high-profile openings is incredibly important, both to encourage diversity of opinion and to make sure the next generation of leaders isn't biased toward certain groups. These problems aren't going to be fixed easily and will require a large effort to shift the overall model toward one that's entirely meritocratic.


About the writer

Noah Mitsuhashi


