Given the rapid adoption and uncertainty surrounding AI, it’s become increasingly necessary for employers to take control of their tech stack and monitor their use of automated employment decision tools (AEDTs). For many, this will take the form of a recruitment technology audit. But undergoing an audit can be a daunting experience for any organization, especially for an HR team. Don’t fear; when approached correctly, it can be an invaluable learning experience that helps identify areas of improvement and ensures fairness in your organization’s practices. In this blog post, we’ll discuss our experience with our most recent AI audit and share some best practices for those considering where to start.
Our AI Audit Experience
PandoLogic has proactively made efforts to audit our AI technology and review our AI models’ current and future state applications to ensure we are helping to mitigate bias at the top of the recruitment funnel. Programmatic technology like pandoIQ is designed to enable employers to make data-driven decisions and decrease the risk of human bias in sourcing candidates. However, we wanted to ensure that the AI and algorithms our technology uses to make those decisions were not learning bias from the discriminatory practices that happen in the real world where our data originates. We enlisted the AI experts at Vera to help us navigate the process and identify potential risks of bias within our AI models. Overall, the experience was eye-opening. The Vera team was thorough and patient, and provided valuable insights from their expertise in human-centered system design, machine learning, information retrieval, and AI.
After this initial evaluation, Vera put our technology to the test. They considered the individual purpose, design, and position of each model within our AI ecosystem and dug deep into the areas with greater bias risk factors. Under the Vera team’s guidance, we developed a way to evaluate our AI technology and reach the most accurate results possible with the data we had available.
The Results
pandoIQ: The pandoIQ platform is a programmatic job advertising tool powered by proprietary algorithms to help target and reach talent. Vera analyzed most of the algorithms in the PandoLogic AI ecosystem, weighing the individual purpose, design, and position of each, and its deeper evaluations found that these algorithms presented little risk of introducing bias at the top of the recruitment funnel.
Of the code evaluated, our Job Expansion algorithm required a closer look because of its zip code expansion capabilities. Zip codes can serve as a proxy for demographics due, in part, to historical redlining practices across the United States. The Vera AI audit data found that, when the appropriate data sets are used, the Job Expansion algorithm showed no signs of increasing bias and actually decreased it by expanding job results to additional zip codes. And with our human-led processes, we have an expert team monitoring these decisions to ensure additive algorithmic bias is reduced at the start of the hiring process.
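To make the zip-code discussion concrete, here is a minimal sketch of what radius-based zip expansion could look like. The centroid data, function names, and radius are all hypothetical illustrations; PandoLogic’s actual Job Expansion algorithm is proprietary and considerably more sophisticated.

```python
from math import radians, sin, cos, asin, sqrt

# Toy zip-code centroids (zip -> (lat, lon)); a real system would use a
# full geographic dataset. Values here are illustrative only.
ZIP_CENTROIDS = {
    "10001": (40.7506, -73.9972),
    "10002": (40.7157, -73.9860),
    "10451": (40.8201, -73.9251),
    "07030": (40.7453, -74.0279),
}

def haversine_miles(a, b):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(h))

def expand_zip(origin_zip, radius_miles):
    """Return all known zip codes within radius_miles of the origin zip."""
    origin = ZIP_CENTROIDS[origin_zip]
    return sorted(
        z for z, coords in ZIP_CENTROIDS.items()
        if haversine_miles(origin, coords) <= radius_miles
    )
```

Because expansion is purely distance-based, it broadens the candidate pool mechanically rather than by any demographic inference, which is consistent with the audit finding that wider geographic reach can reduce rather than amplify bias.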
pandoSELECT: pandoSELECT is a combination of the pandoIQ platform and our Conversational AI chatbot. The Conversational AI tool enables applicants to learn more about a role and helps qualify them via a series of predetermined questions generated from our repository of chat templates and guidance from our clients.
Like most chatbot systems, the Conversational AI used in pandoSELECT is a blend of several independent models that meet through a central algorithm to provide a seamless and conversational experience for applicants.
Two models required deeper study: a skills extraction algorithm and a job matching algorithm. The skills extraction algorithm references a large database of keywords to extract words from the conversation that indicate various skills requirements from the job description. This model identifies references to these words within the conversation to enrich a job applicant’s profile in the dashboard. The job matching algorithm helps determine whether a job candidate is a reasonable match for an open role. The intention here is not necessarily to suggest which candidate is the most qualified but rather to help our clients focus their efforts on candidates who meet certain basic and necessary qualifications.
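As an illustration of the two models described above, a toy version of keyword-based skills extraction and minimum-qualification matching might look like the following. The skills vocabulary and function names are invented for this sketch and do not reflect the production implementation.

```python
import re

# Hypothetical skills vocabulary; the real system references a large
# keyword database derived from job descriptions.
SKILLS_DB = {"forklift", "python", "customer service", "cdl", "excel"}

def extract_skills(conversation_text, skills_db=SKILLS_DB):
    """Extract known skill keywords mentioned anywhere in the conversation."""
    text = conversation_text.lower()
    return {skill for skill in skills_db
            if re.search(r"\b" + re.escape(skill) + r"\b", text)}

def is_reasonable_match(candidate_skills, required_skills):
    """Flag candidates who meet the basic required qualifications.

    Note: this screens for minimum qualifications rather than ranking
    candidates against one another.
    """
    return required_skills <= candidate_skills

profile = extract_skills("I have a CDL and five years of forklift experience.")
# profile == {"cdl", "forklift"}
```

The design choice worth noting is that `is_reasonable_match` is a threshold test, not a ranking: it answers “does this candidate meet the stated basics?” rather than “who is best?”, which mirrors the stated intent of the job matching model.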
We believe the current state of our Conversational AI offering runs very little risk of adding bias to the top of the recruitment funnel. Many forms of algorithmic bias are introduced by inferring or estimating attributes of an individual that may be subjective in nature and may serve as proxies for categories protected under the law. Our Conversational AI model today asks simple yes-or-no questions, leaving very little room for misinterpretation by the algorithms. Ultimately, these models operate on information that is objectively true and volunteered by the applicant. Our product roadmap includes plans to introduce open-ended questions to our Conversational AI as a standalone product, a feature that carries a higher risk of bias. We have consulted with the Vera audit team to continue following best practices in developing this feature and will audit this technology prior to release.
Best Practices
Legislation such as NYC’s Local Law 144 mandates that employers audit their AEDT recruitment technology at least once a year to ensure they are not imposing bias via these tools. This is a new legal requirement for many employers, with little clear guidance on how to begin or which best practices to consider when attempting to comply. While the PandoLogic AI audit summarized above does not replace the need for employers to conduct their own AEDT audits, we want to empower employers with the knowledge they need to ask the right questions of their recruitment technology vendors.
We have identified best practices to help guide employers as they navigate this uncharted territory:
- First, map your talent acquisition journey and workflows: Create a visual representation of your talent acquisition process to identify where automated employment decision-making tools may be present. This will help pinpoint areas where unwanted bias could emerge, allow employers to see the data flow at each step, and enable employers to take proactive measures to address issues that may arise. Additionally, employers should identify the vendors and systems used at each stage to better assess the technologies involved.
- Next, reach out to your vendors: Once your processes are mapped, it’s time to evaluate your existing or potential recruitment vendors. Questions to consider when evaluating your tech partners:
- Have you conducted an audit to assess and mitigate biases in your technology? If so, can you provide documentation of the audit findings?
- How has your organization taken steps to ensure bias reduction in your technology?
- Are there any transparency statements or public announcements available to show your commitment to mitigating bias in technology?
If a vendor cannot provide answers to these questions, it could be a warning sign that their technology may not align with your organization’s commitment to reducing bias and maintaining compliance.
- Then, give candidates a voice: Start building trust with your applicants by offering them a way to voice concerns during the recruitment process. This can be done through the application itself or an automated survey, ensuring you stay informed about potential issues as soon as they are flagged.
- Always keep an expert in the loop: Maintain a human presence at all times to address concerns quickly and efficiently. AI is meant to enhance the human work experience, not replace it. Collaborate with your vendor to ensure a productive and compliant process.
- Lastly, conduct a demographics survey: Gathering data on your organization’s demographics and the demographics of those who apply to your open jobs can help you accurately assess the impact of technology on your workforce through self-reported information.
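Once demographic data is collected, a common way to assess impact is the selection-rate comparison used in adverse-impact analysis (the “four-fifths rule,” which also underlies the impact ratios in Local Law 144’s published rules). A minimal sketch with made-up numbers:

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate.

    A ratio below 0.8 is the traditional four-fifths-rule red flag
    for potential adverse impact.
    """
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Illustrative, made-up self-reported survey data: (selected, applicants).
survey = {"group_a": (40, 100), "group_b": (30, 100), "group_c": (12, 50)}
ratios = impact_ratios(survey)
flagged = [g for g, r in ratios.items() if r < 0.8]
# ratios == {"group_a": 1.0, "group_b": 0.75, "group_c": 0.6}
```

A flagged ratio is a signal to investigate, not proof of discrimination; consult counsel on how your jurisdiction defines the required calculations and reporting.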
Summary
Ultimately, fairness and transparency should always be the top priority. Efficiency is essential in the workplace, but it should not come at the expense of fairness. An AI audit can be an excellent opportunity for organizations to ensure that their practices are fair, transparent, and compliant.
Disclaimer: Information contained on PandoLogic’s website and set forth in this blog post is for general guidance only and does not constitute legal advice or a guarantee of compliance with federal, state, or local law. The data and information contained above in “The Results” section is for illustrative purposes only and is not intended to replace or serve as an employer’s audit of its own use of PandoLogic’s automated employment-decision tools. Please consult your organization’s legal counsel to ensure compliance with relevant laws and regulations.