A harrowing image captured in Gaza, showing a severely malnourished young girl held in her mother's arms, has become the latest flashpoint in the ongoing battle over truth, technology, and the Israel-Hamas war. The photograph, taken on August 2, 2025, by AFP photojournalist Omar al-Qattaa, documents the frail, skeletal frame of nine-year-old Mariam Dawwas amid rising fears of mass famine in the besieged Palestinian enclave. Israel's blockade of the Gaza Strip has cut off vital humanitarian aid, pushing over two million residents to the brink of starvation.

But when users turned to Elon Musk's AI chatbot, Grok, on X to verify the image, the response was stunningly off the mark. Grok insisted the photograph was taken in Yemen in 2018, claiming it showed Amal Hussain, a seven-year-old girl whose death from starvation made global headlines during the Yemen civil war.

That answer was not simply incorrect; it was dangerously misleading.

When AI becomes a disinformation machine

Grok's faulty identification quickly spread online, sowing confusion and weaponising doubt. French left-wing lawmaker Aymeric Caron, who shared the image in solidarity with Palestinians, was swiftly accused of spreading disinformation, even though the image was authentic and current.

"This image is real, and so is the suffering it represents," said Caron, pushing back against the accusations.

The controversy spotlights a deeply unsettling trend: as more users rely on AI tools to fact-check content, the technology's errors are not just mistakes; they are catalysts for discrediting the truth.

A human tragedy, buried under algorithmic error

Mariam Dawwas, once a healthy child weighing 25 kilograms before the war began in October 2023, now weighs just nine. "The only nutrition she gets is milk," her mother Modallala told AFP, "and even that is not always available."

Her image has become a symbol of Gaza's deepening humanitarian crisis. But Grok's misfire reduced her to a data point in the wrong file, an AI hallucination with real-world consequences.

Even after being challenged, Grok initially doubled down: "I do not spread fake news; I base my answers on verified sources." While the chatbot eventually acknowledged the error, it repeated the incorrect Yemen attribution the very next day.