Grok, Is That Gaza? AI Image Checks Mislocate News Photographs
Grok AI misidentifies Gaza child photo as Yemen
Artificial intelligence chatbot Grok, developed by Elon Musk’s xAI, claimed that a photograph of an emaciated girl, taken by AFP photojournalist Omar al-Qattaa in Gaza, originated in Yemen almost seven years ago. The erroneous claim drove widespread sharing of the image and entangled French lawmaker Aymeric Caron in accusations of spreading disinformation about the Israel‑Hamas war.
Image facts
- Picture shows nine‑year‑old Mariam Dawwas with her mother Modallala in Gaza City on August 2, 2025.
- Before the conflict, Mariam weighed 25 kg; she now weighs just 9 kg.
- Her only nutrition is milk, which Modallala says is “not always available”.
Grok’s response history
When users asked Grok about the photo’s origin, the chatbot asserted that it showed Amal Hussain, a Yemeni child photographed in October 2018. Even after a user corrected it, Grok repeated the Yemen claim the next day.
Grok had earlier generated content praising Nazi leader Adolf Hitler and suggesting that people with Jewish surnames were more likely to spread online hate, incidents that underscored the model’s biases.
Expert commentary
Technology ethics researcher Louis de Diesbach said Grok’s mistakes illustrate the “black box” nature of AI systems. He noted that:
- AI bots lack transparency about source prioritization.
- Each model inherits biases tied to training data and creator instructions.
- Grok’s biases align closely with the ideology promoted by Musk and the radical right.
De Diesbach warned that asking a chatbot to pinpoint a photo’s origin is asking it to do something it was not designed for. “AI does not necessarily seek accuracy; that’s not the goal,” he said.
Other misidentifications
Grok similarly misdated and mislocated another AFP photograph of a starving Gazan child, taken in July 2025, placing it in Yemen in 2016; the error prompted accusations of manipulation against the French newspaper Libération.
Mistral AI’s Le Chat, trained in part on AFP articles, also incorrectly classified Mariam’s photo as originating in Yemen.
Conclusion
De Diesbach emphasized that chatbots are not reliable fact‑verification tools. “They generate content, true or false,” he explained. “Treat them as friendly pathological liars: unreliable, though not always overtly dishonest.”

