ChatGPT Was Tasked With Designing a Website. The Result Was as Manipulative as You’d Expect—or Was It?

A recent Fast Company article critiques ChatGPT for generating website designs that use “dark patterns”—strategies like urgency indicators, pre-checked opt-ins, or fake discounts.

These techniques, while controversial, have been staples of big-brand marketing for years, helping companies maximize conversions and drive sales.

But does ChatGPT’s use of them make the tool inherently bad?

For small businesses, access to tools like ChatGPT is a game-changer: strategies once reserved for companies with massive budgets can now be leveraged by smaller players looking to compete.

That’s not manipulation—it’s leveling the playing field.

That said, ethical responsibility matters. Manipulative practices that erode trust have no place in marketing, whether executed by AI or humans.

And it’s worth noting that ChatGPT isn’t inventing these tactics—it’s reflecting the marketing practices it was trained on. If their use is a concern, the bigger question is how these practices became so widespread in the first place.

What’s missing from this discussion is a balanced perspective. Singling out AI risks oversimplifying a much larger issue.

Instead, we need a broader conversation about ethical standards for marketing, no matter who—or what—is behind them.

At DIGITAL IVAN, I believe in empowering businesses to grow responsibly. When tools like ChatGPT are used thoughtfully, they can create marketing that works for everyone—not just the biggest players.

By prioritizing transparency and trust, we can unlock AI’s potential without compromising on ethics.