FEED Autumn 2022 Newsletter

ROBOT PROBLEMS ARE HUMAN PROBLEMS

We asked AI image generator Craiyon to take a crack at four different prompts: ‘a man,’ ‘a woman,’ ‘a baby’ and ‘a child.’ Below are some of the images it returned. On the right is a response from Midjourney to the prompt: ‘a family.’ These were typical of the responses from both AIs, without exception. Can you spot what’s wrong? (Hint: it’s not that they look like they stepped out of a Cronenberg film.)

The danger posed by AI is not runaway robots taking over the world. It’s (as usual) runaway humans taking over the world. AI, like any tool, amplifies our wants and needs.

AI is trained on massive data sets built from pre-existing material. If that material comes from, for example, a community with preconceptions, expectations or prejudices, the AI will inherit them. Developers are familiar with these bias problems, and Craiyon addresses them in its FAQ: “While the capabilities of image generation models are impressive, they may reinforce or exacerbate societal biases. Because the model was trained on unfiltered internet data, it may generate images with harmful stereotypes. The extent and nature of the Dall-E mini model’s biases have yet to be fully documented. Work to analyse the nature and extent of these limitations is ongoing.”
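To make the inheritance mechanism concrete, here is a minimal sketch. It is not how Craiyon or Midjourney actually work (those are large image-generation models); it is a toy “model” that simply memorises the frequency of captions in an invented, deliberately skewed training set, and so reproduces that skew in its outputs. The caption strings and the 90/10 split are assumptions made up for illustration.

```python
import random
from collections import Counter

# Toy "training set": caption fragments from a hypothetical scraped corpus.
# The skew (far more "a man" than "a woman" next to "a doctor") is the point.
training_captions = (
    ["a doctor, a man"] * 90 +
    ["a doctor, a woman"] * 10
)

def train(captions):
    """'Training' here is just counting: the model memorises the empirical
    distribution of its data, biases included."""
    return Counter(captions)

def generate(model, n=10, seed=0):
    """Sample outputs in proportion to how often they appeared in training."""
    rng = random.Random(seed)
    captions, weights = zip(*model.items())
    return rng.choices(captions, weights=weights, k=n)

model = train(training_captions)
print(generate(model))
# Roughly 9 of every 10 samples read "a doctor, a man" -- not because the
# model 'decided' anything, but because the data did.
```

Real generators are vastly more complex, but the basic dynamic is the same: whatever distribution the training data carries, the model hands back.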

