At WeAreDevelopers 2025, we asked real developers how AI is actually being used in their day-to-day work. From skepticism to practical shortcuts, their answers reveal what’s working, what isn’t—and why trust still matters more than speed.
What developers really think about Copilot, Cursor, and the rise of “vibe coding.”
At WeAreDevelopers, we asked attendees how AI is actually showing up in their day-to-day development work.
Not in demos. Not in pitch decks. But in the messy, high-stakes reality of product teams, refactors, deadlines, and production bugs.
What we heard was refreshingly honest: AI coding tools are helping…but they’re also hallucinating, oversimplifying, and slowing things down when they matter most.
For every success story about a one-day prototype, there was a cautionary tale about broken assumptions, unreadable logic, or code that “looked fine until it failed.”
Here’s what it sounds like when AI coding meets the real world.
Let’s start with Kayla Moral, a PhD candidate in particle physics and a working software developer. Her team doesn’t use AI tools much. Not because they’re anti-innovation, but because their domain demands rigor.
“If you already have the foundational knowledge, it’s a great assistant,” she said. “But if you skip that step, you build on gaps.”
“I see junior devs using AI as a shortcut and losing out on understanding. That’s risky.”
She’s not alone. Marcel Rieder, a Java developer at Wien IT, told us he avoids AI entirely for anything non-trivial:
“For small things, it’s great. But as soon as the problem gets big, it slows me down. The output looks good, but someone still has to go deeper. And that’s where the mess begins.”
Across interviews, this pattern kept emerging: developers want to be optimistic. But they’ve been burned enough to be skeptical.
There’s a term floating around in dev culture now: vibe coding. It’s when someone pastes a vague prompt into an AI tool and runs with whatever comes out. No review. No validation. Just…vibes.
It makes for great memes. But real teams aren’t laughing.
David Fijuck, a machine learning engineer working on healthcare software, put it plainly:
“We work with patient data. I can’t afford to vibe code.”
He’ll use Claude to generate a quick skeleton, but he rewrites everything from scratch. Not because the code is bad, but because he didn’t write it, so he can’t trust it.
Trust doesn’t come from speed. It comes from understanding.
Alaa Awad, a staff engineer at JOIN Solutions, gave us one of the most grounded takes.
He recently used Cursor to prototype an interview scheduling integration. The goal? Just test assumptions and experiment with API flows.
“I had something working in a day. That’s huge,” he said. “But I wouldn’t ship it. For production, AI still doesn’t understand the full system or context. I had to review, refactor, and fill in the gaps.”
This is what we’re hearing from many experienced developers: AI is a prototyping assistant, not a production engineer.
It can help you sketch ideas, explore new syntax, or avoid typing boilerplate. But it doesn’t own the architecture. And it definitely doesn’t own the outcome.
We asked everyone whether they were worried about AI replacing developers.
Not one person said yes.
Instead, we heard versions of this:
“Jobs will change. They always do. But someone will still need to understand the system.”—Omar El-Khatib, Staff Frontend Engineer at JOIN
“We’ll adapt. We’ve done it before.”—Ashraf Atef, Senior Backend Engineer
The consensus? Coding itself might shift. But ownership, oversight, and accountability won’t. You can’t outsource that to an autocomplete engine.
So what do developers actually want from these tools? Not more code. Better context.
They want tools that help them move fast without creating black boxes. They want systems they can understand, debug, and trust…especially when things go wrong.
“You still have to review the output,” said Marcel. “And if you’re going to review it anyway, you might as well write it.”
This is the tension AI coding tools face today. Most still prioritize completion over comprehension. That’s backwards.
At Leapter, we’ve heard the same story again and again: AI can help—but only when you understand what it’s doing.
That’s why we built something different.
Not raw code, not black-box suggestions, but visual, auditable blueprints that your whole team can understand, shape, and ship with confidence.
No vibe coding. No guesswork.
Just collaborative systems design grounded in human logic, not AI hallucination.
Because speed doesn’t matter if you don’t trust what you’re shipping.
And real productivity starts with clarity.
Read more and join our newsletter
Get on the waitlist
Let’s build software you don’t have to double-check.