At PandoLogic, we’re growing at incredible speeds. So much so that our own internal recruiters need some assistance to keep up with our hiring demand. As the Lead Data Algorithm Analyst at PandoLogic, I saw this as a great opportunity for our AI chatbot, Wendy, to help us find the best candidates for a Natural Language Processing Data Scientist position. In the tech space, this is known as “dogfooding”: tech slang for using your own product, and it’s always a good idea.
For the last two weeks, I’ve been utilizing Wendy to help me find great talent for the team. Here’s what I learned.
Who is Wendy?
Wendy is PandoLogic’s dedicated AI recruitment chatbot. Wendy chats with our talent pool and with candidates who applied for a specific job, and she evaluates fit based on their responses. She typically handles high-volume sectors (e.g., warehousing and fast-food chains) and high-touch positions (e.g., recruiters and nurses) and can consistently achieve key performance targets, but this NLP Data Scientist position is new to Wendy. So, let’s see how she’s handling it:
How is Wendy Doing?
Over the last two weeks, Wendy has been working incredibly hard to find our next great NLP Data Scientist. So far she has:
- Chatted with 73 applicants
- Evaluated 53 (~73%) applicants who completed the chat
- Identified ~8% of applicants as “Qualified”
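The screening funnel above boils down to a simple completion rate, which can be sketched as:

```python
# Recap of Wendy's screening funnel, using the numbers from this post.
chatted = 73    # applicants Wendy chatted with
completed = 53  # applicants who finished the chat and were evaluated

completion_rate = completed / chatted
print(f"Chat completion rate: {completion_rate:.0%}")  # ~73%
```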
I noticed that most applicants responded to invitations within a day, so I started sending reminders in Week 2. Using older applications as a canary test, I found that initial response rates were low.
Within such a short timeframe, Wendy had already become an amazing organizational asset. She seamlessly compiled candidate responses and relevant data into our graphical database for analysis. The graphs below represent sample profile data:
Thanks to Wendy, I was able to quickly determine performance across a range of evaluation metrics and job descriptions. She compiled the following data after asking three logistical questions and five skills-based questions:
The three logistical questions were:
- If contacted for a phone and/or in-person interview, do you commit to attending?
- Are you authorized to work in the U.S.?
- Do you consent to a background check?
After reviewing the responses, I discovered that one candidate said no to the first question because it mentioned an in-person interview, which they made clear they would not be able to attend (I should point out that all of our roles are fully remote). This was valuable feedback, and I’m revising the question accordingly.
The five skills-based questions were:
- Do you have 5+ years of experience as an NLP Data Scientist?
- Do you have 5+ years of experience with building NLP/ML models (e.g., Entity Extraction, Topic Modeling, Text Classification)?
- Do you have experience with RDBMS (e.g., Oracle, PostgreSQL, SQLServer)?
- Do you have experience with one of the Deep Learning Frameworks such as TensorFlow, Torch or MXNet?
- Do you have experience with NLP/ML libraries like transformers / spacy / scikit-learn / rasa / spark-nlp?
After reviewing the answers to these questions, I realized that candidates with 5+ years of NLP/ML experience are hard to come by. My initial expectations did not fully reflect what was in the market, which points to another needed revision of the job description.
These simple evaluation metrics provide a clearer understanding of candidates’ skill sets and allow for more effective prioritization during the review process. And as our analytics continue to evolve, so will our ability to refine and improve the entire hiring process.
Some key findings:
- Wendy can quickly engage with many candidates, which is very helpful when you’re overwhelmed with applicants.
- Since most responses are submitted within a day, sending a reminder after a few days may boost conversions.
- Qualified candidates can be very rare; consider revising metrics based on responses.
- More sophisticated evaluation models (such as one that incorporates nice-to-haves) may be useful.
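As a rough illustration of that last point, a screener that gates on must-haves and ranks on nice-to-haves could be sketched like this. The question keys and weights here are hypothetical, not Wendy’s actual logic:

```python
# Hypothetical screening sketch: must-haves gate the candidate,
# nice-to-haves contribute to a ranking score. Not Wendy's actual model.
MUST_HAVES = ["work_authorization", "background_check", "nlp_experience"]
NICE_TO_HAVES = {"deep_learning_framework": 2, "rdbms": 1, "nlp_libraries": 2}

def score_candidate(answers):
    """Return None if any must-have is missing, else a nice-to-have score."""
    if not all(answers.get(q, False) for q in MUST_HAVES):
        return None  # disqualified on a must-have
    return sum(w for q, w in NICE_TO_HAVES.items() if answers.get(q, False))

candidate = {
    "work_authorization": True,
    "background_check": True,
    "nlp_experience": True,
    "deep_learning_framework": True,
    "rdbms": False,
    "nlp_libraries": True,
}
print(score_candidate(candidate))  # 4
```

Separating the two tiers means a strong candidate missing one nice-to-have (like RDBMS experience above) still surfaces for review instead of being filtered out.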