Experimenting with unproven technology to determine whether a child should be granted protections they desperately need and are legally entitled to is cruel and unconscionable.
Companies that tested their technology in a handful of supermarkets, pubs, and on websites set it to predict whether a person looks under 25, not 18, allowing a wide error margin for algorithms that struggle to distinguish a 17-year-old from a 19-year-old.
AI face scans were never designed for children seeking asylum, and risk producing disastrous, life-changing errors. Algorithms identify patterns in the distance between nostrils and the texture of skin; they cannot account for children who have aged prematurely from trauma and violence. They cannot grasp how malnutrition, dehydration, sleep deprivation, and exposure to salt water during a dangerous sea crossing might profoundly alter a child’s face.
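To make that buffer logic concrete, here is a minimal sketch of the decision rule described in the excerpt, in Python. The threshold names, the error margin, and the sample estimates are illustrative assumptions for the sake of the example, not any vendor's published configuration.

```python
# Sketch of an "under-25 buffer" rule: the system is not asked "is this person
# under 18?" but "might this person look under 25?", so that a model error of
# a few years does not silently classify a 17-year-old as an adult.
# All numbers below are illustrative assumptions, not a vendor's figures.

LEGAL_AGE = 18    # the threshold that actually carries legal weight
BUFFER_AGE = 25   # retail-style "Challenge 25" buffer
HEADROOM = BUFFER_AGE - LEGAL_AGE  # 7 years of slack to absorb model error

def needs_human_review(estimated_age: float) -> bool:
    """True if a face-scan estimate alone cannot safely treat the person as an adult."""
    # An estimate of 19 with an error of a few years could easily be a
    # 17-year-old, which is exactly the ambiguity the article describes.
    return estimated_age < BUFFER_AGE

if __name__ == "__main__":
    print(f"buffer gives {HEADROOM} years of headroom over the legal threshold")
    for estimate in (17, 19, 24, 26):
        verdict = "refer to a person" if needs_human_review(estimate) else "treated as over 25"
        print(f"estimated {estimate}: {verdict}")
```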
Goddamn, this is horrible. Imagine leaving shitty AI to determine the fate of this girl:
I won’t blame those kids one bit when their superpowers kick in and they start telekinetically shaking our cities to dust.
Don't buy this bullshit. I guarantee there's no experiment, and probably no "AI" in the common sense of the word being used today. This is 100% going to be a deny-o-matic, because they'd rather say "the almighty AI determined it" than "we hate children". This is the same thing UnitedHealthcare did, which led to the famous (and very popular) deposition of its CEO, and also what Israel claims to be a targeting system while they're committing war crimes on top of a genocide.
There's a very high chance of racial bias issues here.
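That concern is at least checkable: the usual audit is to disaggregate the error that actually matters here (under-18s scored as adults) by demographic group on a labelled evaluation set. A rough sketch follows; the group labels and records are made up purely to show the shape of such an audit, not real data.

```python
# Sketch of a disaggregated error check: for each demographic group, what
# fraction of known under-18s does the model score at or above the adult
# threshold? Records and groups below are invented for illustration only.

from collections import defaultdict

def misclassified_minor_rate(records, threshold=18):
    """Per-group rate of under-18s whose estimated age is at or above the threshold."""
    minors = defaultdict(int)   # count of actual minors per group
    missed = defaultdict(int)   # minors the model would treat as adults
    for true_age, estimated_age, group in records:
        if true_age < 18:
            minors[group] += 1
            if estimated_age >= threshold:
                missed[group] += 1
    return {group: missed[group] / minors[group] for group in minors}

# (true_age, estimated_age, group) -- made-up numbers to show the output shape
sample = [(17, 19, "A"), (17, 16, "A"), (16, 20, "B"), (17, 21, "B"), (17, 17, "B")]
print(misclassified_minor_rate(sample))  # e.g. {'A': 0.5, 'B': 0.67}: unequal error rates
```

If those rates differ materially between groups, the system is not just inaccurate, it is inaccurate in a way that falls harder on some children than others.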