When Barbie Doesn’t Look Like Me: AI, Body Dysmorphia, and the Inheritance of Shame

Recently, I decided to create an action figure version of myself—a Barbie, to be exact—using ChatGPT and AI image generation tools. It started as something playful, even empowering. I thought, What would my Barbie look like? Strong? Stylish? Weird in all the best ways? I fed in some photos of myself, gave prompts like “realistic action figure Barbie based on this person, casual outfit, confident expression,” and waited for the image generator to do its thing.

What came back felt jarringly unfamiliar. In the first render, Barbie-Me had exaggerated hips and a midsection noticeably larger than mine. In the second, my proportions had shifted again—bigger arms, bulkier legs, and a softness that didn’t match the actual photos I'd uploaded. I stared at the images, confused and a little queasy. Is this what I look like? Have I been mis-seeing myself this whole time?

It triggered a flood of dysmorphia. The kind that doesn’t care about logic or evidence. But when I returned to my original photos—taken just days earlier—I saw that I hadn’t imagined myself incorrectly. The images had, in a very real way, made me larger. Not in a way that felt celebratory or inclusive, but in a way that felt like a distortion. A punishment for existing in a body that doesn’t slot neatly into the thin ideal, but isn’t fat either. Somewhere in between—where representation often breaks down.

This experience made something painfully clear: AI tools, even the creative ones, are not neutral. They reflect and reproduce the biases of the culture that trained them. And when it comes to bodies, especially bodies assigned female at birth, that culture remains deeply fatphobic and steeped in binary thinking. You're either "thin and ideal" or you're "fat and failed." There's no in-between, no spectrum, no nuance. And because AI learns from mountains of image data—photos, media, filtered selfies, stock models—it replicates the assumptions embedded in those visuals.

The result? A Barbie that didn’t look like me, but did reflect how our culture imagines someone like me should look.

There’s academic backing for this. A 2023 study in Computers in Human Behavior found that AI-generated body images disproportionately overrepresent certain body types (thin, white, cisgender) and distort others, often making ambiguous or average-sized bodies appear larger than they are. The researchers note that AI lacks the ability to interpret context—it can’t see how posture, clothing, lighting, or angle affect appearance. It simply guesses, and it guesses based on a world that often sees anything outside the “ideal” as flawed.

And when you're someone like me—someone who has worked hard to recover from an eating disorder and a lifetime of body shame—those guesses can cut deep.

Growing up, I received very different messages about bodies depending on whose body it was. My stepsister, thin and lithe, was allowed to wear what she wanted—shorts, tank tops, fitted dresses. I, on the other hand, was told to cover up. My dad and stepmother framed it as “appropriate,” but the message was unmistakable: my body was too big, too visible, too much. Baggy shirts. Loose pants. No cleavage, ever. I internalized it all. I learned that my body was something to manage, to hide, to control.

Even now, as someone who actively studies and resists body policing, the residue of those early messages lingers. I am conscious of body politics, aware of how capitalism and patriarchy intersect to commodify and shame bodies, and still—I felt that shame flare back up when AI gave me a version of myself that wasn’t quite me, but was believable enough to make me doubt.

Sabrina Strings, in Fearing the Black Body, outlines how fatphobia isn’t just a personal bias—it’s an ideology rooted in colonialism, racism, and sexism. These systems taught us to equate thinness with virtue, control, and beauty, while coding fatness as lazy, indulgent, and morally wrong. The AI didn’t invent this binary—it simply performed it. But for those of us who’ve been harmed by these systems, the impact is very real.

What scares me most is how subtle the harm can be. How someone without the tools I’ve built—therapy, community, research—might see a distorted AI image of themselves and accept it as fact. How many people will internalize these machine-learned distortions, believing their body is inherently too much? How many will start to shrink themselves again?

My Barbie didn’t look like me. And that’s okay. What’s not okay is that I was made to feel like she did. That the AI’s guess was somehow truer than my own reflection. That the machine, echoing the culture that built it, could still make me question my worth.

But I also left that experience with a clearer sense of where I stand. I won’t apologize for my body. I won’t accept digital distortions as definitions. And I certainly won’t let a Barbie—AI-generated or not—decide how I get to see myself.
