

Many people are being forced to use it, and this is where much of the ire comes from. Those people are still likely a minority, though. What's far more concerning is AI that affects us without our getting a say: doctors being made to use generative AI transcription tools, for instance, which perform worse than established transcription software that doesn't use generative AI. The people pushing doctors to adopt these tools are doing so to wring more productivity out of them: more patients in less time. So even if a patient's medical records escape AI hallucinations, their experience of seeing their doctor will likely be worse.
Cases like this are becoming less niche as time goes on, despite mounting research showing the harms of these technologies when they're applied this way. Increasingly we are put into situations where AI tools aren't something used by us (which we can often opt out of), but something used on us. We don't find out until something goes wrong, and when it does, regular people can struggle to challenge the outcome. The example that comes to mind is false positives from facial recognition systems used by the police, which are leading to more innocent people being wrongfully arrested.
I saw a paper a while back arguing that AI systems are being used as "moral crumple zones". For example, an AI used by a health insurer allows the company to reject medically necessary procedures without its employees incurring as much moral injury in the process (even low-level customer service reps are likely to find comfort in being able to defer to the system). It's an interesting concept that I've thought about a lot since I came across it.