LLMs reproducing stereotypes is a well-researched topic. They do it because of what they are: stereotypes and bias in (in the training data), stereotypes and bias out. That's what they're built to do. And every AI company has entire teams for this: measure the biases, then fine-tune the model toward whatever they deem acceptable.
I mean, the issue isn't women or anything; it's using AI for hiring in the first place. You do that only if you're fine with whatever stereotypes Anthropic and OpenAI baked in.
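For what it's worth, the basic measurement those teams do is cheap to replicate yourself: score two otherwise-identical résumés that differ only in the name. Here's a minimal sketch of that counterfactual probe; `score_resume`, the template, and the name pairs are hypothetical placeholders for whatever model, prompt, and parser you'd actually use, not anyone's real evaluation harness.

```python
# Minimal sketch of a counterfactual "name-swap" bias probe.
# score_resume() is a hypothetical stand-in for an LLM call that
# rates a candidate; everything else is plain Python.

RESUME_TEMPLATE = """
{name}
Software Engineer, 6 years experience.
Led migration of a monolith to microservices; mentored 4 juniors.
"""

# Name pairs chosen to differ only in the demographic signal they carry.
NAME_PAIRS = [
    ("Gregory Walsh", "Emily Walsh"),
    ("Brad Miller", "Lakisha Miller"),
]

def score_resume(text: str) -> float:
    """Placeholder: call your LLM here and parse out a 0-10 hiring score."""
    raise NotImplementedError

def name_swap_gap(pairs, template):
    """Average score gap across pairs of otherwise-identical resumes."""
    gaps = []
    for name_a, name_b in pairs:
        a = score_resume(template.format(name=name_a))
        b = score_resume(template.format(name=name_b))
        gaps.append(a - b)
    return sum(gaps) / len(gaps)

# A nonzero gap on literally identical qualifications is the
# "stereotypes in, stereotypes out" effect, measured directly.
```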
It's just pattern recognition in the end, extrapolating from whatever sample it was trained on.
The issue is they presumably want to pattern-match on something like merit / ability / competence here and ignore everything else, which is just hard to do; see the sketch below.
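Here's one reason it's hard, as a toy illustration on synthetic data (feature names and numbers are made up for the example): even if you drop the protected attribute from the input entirely, a model can often recover it from correlated proxy features, so "ignore other factors" isn't as simple as deleting a column.

```python
# Toy demo: the protected attribute is withheld from the model, yet a
# classifier recovers it from a correlated proxy (think: gendered
# extracurriculars, single-sex colleges, employment gaps).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, n)          # attribute we want to ignore
proxy = protected + rng.normal(0, 0.5, n)  # feature correlated with it
skill = rng.normal(0, 1, n)                # the thing we'd like to measure

# The protected column is deliberately NOT in the feature matrix.
X = np.column_stack([proxy, skill])
X_tr, X_te, y_tr, y_te = train_test_split(X, protected, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
print(f"recovered protected attribute with {clf.score(X_te, y_te):.0%} accuracy")
# Well above the 50% chance level: dropping the column
# doesn't drop the information.
```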