Your AI application has no bias

As someone with a technical background, I am annoyed by statements like "the AI is racist", "the AI is misogynistic", "the AI favours old white men", and so on, because these attributions distract from the actual perpetrators: the humans.

Usually, the issue is that someone has used a generative AI and the result is simply not suitable. One nice example I came across was an experiment by an HR colleague who used generative AI to generate a job advert. The result read like a job description tailored to a 50-year-old man with a lot of industrial experience. I found that quite plausible, since this description roughly reflects the average of how these positions are currently filled. So either the wrong question was asked, or the wrong AI was used.

AI applications are tools: highly complex tools whose decision-making principles are rarely traceable. But tools have no prejudices; people do. Blaming the tool would be like saying: "I don't think my hammer likes my screws. If I use nails, everything is fine, but if I hammer screws into the wall, there are always these ugly chipped edges."

By the way, we completely redid the job advertisement without artificial intelligence. We involved the current job holders and asked them what they found appealing: what makes their job and their employer special, and what are the real benefits that competitors don't offer. The response to this job advert was many times higher than before, and the position was filled quickly.

