
Dokkaebi Labs · April 5, 2026 · 6 min read

AI for Developers: Foundations First, Prompts Second

You can use Claude to write code. But if you don't understand what it's writing, you're just a prompt typist. Here's why foundations matter more than ever—and how to use AI the right way.

ai · programming · learning · productivity · singapore

The Prompt Engineering Trap

A junior dev you know just dropped 200 on a "ChatGPT for Developers" course. Three weeks later: they can copy-paste a prompt and get code. But they don't understand the code. Can't debug it. Can't modify it.

They're now a prompt interface, not a developer. This is a problem we see constantly in Singapore and globally—developers reaching for AI before their foundations are solid.

This is the biggest mistake people make with AI: treating it as a shortcut instead of a tool.

Why This Matters Now

AI can write code. Really well. It can suggest optimizations, generate tests, explain complex systems. But it can also generate confidently wrong code. It can miss security issues. It can suggest architecturally questionable patterns—and you won't catch them if you don't understand the foundation.

A developer without strong fundamentals using AI is like a driver with no understanding of brakes using a Ferrari. Speed without control ends badly.

Here's the paradox: AI makes foundations more critical, not less.

What's Actually Happening With AI + Code

AI is best at:

  • Reducing boilerplate (handling the mechanical parts)
  • Accelerating known patterns (you know what you want, AI writes it faster)
  • Explaining systems (asking "why does this work?" and getting a real answer)
  • Catching edge cases (rubber-duck debugging at scale)

AI is not good at:

  • Architectural decisions (should this be a component? A hook? Context? Only you know your constraints)
  • Understanding tradeoffs (AI suggests generically optimal code, not code optimal for your situation)
  • System-level thinking (the code works, but does it scale? Does it integrate well?)
  • Security decisions (the code is syntactically correct, but is it safe?)

Notice the pattern: The boundary between AI strength and weakness is exactly where foundations matter.

The Three Levels of AI Use

Level 1: Prompt Typing (Illusion of Productivity)

You: "Write a React form for login"
AI: Generates a component
You: Paste it into your codebase
Result: Code you don't understand; maintaining it later is hell

This feels productive. It's actually cargo-culting.

Level 2: Informed Collaboration (Real Productivity)

You understand React hooks, form state, and validation patterns.
You: "Generate a login form with email validation and proper error handling in React 18."
AI: Generates code that aligns with your mental model
You: Review it, understand it, modify it confidently
Result: 30 minutes of work instead of 2 hours, and you still own the code

This is real productivity. The AI accelerates; you direct.
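To make Level 2 concrete, here's a sketch of the kind of output worth reviewing: the validation logic pulled out of the component so it can be read and tested without React. The names (`validateLogin`, `EMAIL_RE`) and the deliberately simplified email pattern are our illustrations, not output from any specific AI session.

```typescript
// Validation logic a Level 2 developer would review and own,
// kept separate from the UI so it can be unit-tested without React.
interface LoginInput {
  email: string;
  password: string;
}

interface ValidationResult {
  ok: boolean;
  errors: { email?: string; password?: string };
}

// Simple pattern check: something@something.tld. A deliberate simplification;
// full RFC 5322 email validation is rarely worth the complexity.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function validateLogin(input: LoginInput): ValidationResult {
  const errors: ValidationResult["errors"] = {};
  if (!EMAIL_RE.test(input.email.trim())) {
    errors.email = "Enter a valid email address";
  }
  if (input.password.length < 8) {
    errors.password = "Password must be at least 8 characters";
  }
  return { ok: Object.keys(errors).length === 0, errors };
}
```

The point isn't the regex. It's that a Level 2 developer separates testable logic from UI wiring and can defend every line.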

Level 3: Mastery-Level Thinking (The Ceiling)

You architect systems. You know when to use AI (boilerplate, explanations) and when to think (design decisions, tradeoffs). You prompt like you're rubber-ducking with a very smart colleague.
Result: You're 3x faster without losing control

Most people are stuck at Level 1 while believing they're at Level 2. They skip the real Level 2 (where foundations actually matter).

How Strong Foundations Change Everything

Example: Database Queries

Junior dev at Level 1:
"Write a query to get all users and their posts"
AI: SELECT * FROM users u JOIN posts p ON u.id = p.user_id
They paste it. It works. They move on.
Six months later: the app is slow. SELECT * is pulling every column, the join columns were never indexed, and the "fetch posts for each user" pattern elsewhere in the codebase has become an N+1 problem. They don't see any of it.

Developer with foundations (Level 2):
Knows: joins, indexing, query plans, N+1 problems
Uses AI to: generate the syntax quickly while thinking about the architecture
Reviews: "Is this query efficient? Are the right columns indexed?"
Catches: performance issues before production

The difference isn't the code. It's the thinking that comes before asking the AI.
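The N+1 problem is easiest to see by counting queries. Here's an illustrative sketch, with in-memory arrays standing in for tables and a counter standing in for the database; none of this is a real ORM API:

```typescript
// Illustrative only: counting "queries" to show why N+1 access patterns hurt.
interface User { id: number; name: string }
interface Post { id: number; userId: number; title: string }

const users: User[] = [
  { id: 1, name: "Ana" },
  { id: 2, name: "Ben" },
  { id: 3, name: "Cy" },
];
const posts: Post[] = [
  { id: 10, userId: 1, title: "Hello" },
  { id: 11, userId: 1, title: "Again" },
  { id: 12, userId: 3, title: "Hi" },
];

let queryCount = 0;

// N+1 pattern: one query for users, then one more query per user for posts.
function postsByUserNPlusOne(): Map<number, Post[]> {
  queryCount++; // SELECT * FROM users
  const result = new Map<number, Post[]>();
  for (const u of users) {
    queryCount++; // SELECT * FROM posts WHERE user_id = ?
    result.set(u.id, posts.filter((p) => p.userId === u.id));
  }
  return result;
}

// Join-style pattern: one query, then grouping in application code.
function postsByUserJoined(): Map<number, Post[]> {
  queryCount++; // SELECT ... FROM users u JOIN posts p ON u.id = p.user_id
  const result = new Map<number, Post[]>(
    users.map((u): [number, Post[]] => [u.id, []]),
  );
  for (const p of posts) {
    result.get(p.userId)?.push(p);
  }
  return result;
}
```

With 3 users, the N+1 version issues 4 queries against 1 for the join-style version. At 10,000 users that becomes 10,001 versus 1, which is exactly the slowdown the Level 1 developer never anticipates.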

Example: State Management in React

Level 1:
"I have this data, make a component"
AI generates something with useState everywhere or useContext everywhere.
They don't understand when each is appropriate.
Their app gets slow. They blame React.

Level 2:
Understands why state lives where it lives.
Knows: local component state vs. lifted state vs. context vs. external store
Uses AI for: syntax and boilerplate once they've decided where state belongs
Their app scales because the architecture is sound.

Again: Same AI. Different outcome based on foundation.
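Of those options, "external store" is usually the least familiar. Here's a minimal, framework-agnostic sketch of what such a store does under the hood (hold state, notify subscribers); the API is our illustration, not any real library:

```typescript
// Minimal external-store sketch: holds state and notifies subscribers.
// Illustrative API, not a real state-management library.
type Listener<S> = (state: S) => void;

function createStore<S extends object>(initial: S) {
  let state = initial;
  const listeners = new Set<Listener<S>>();
  return {
    getState: () => state,
    setState(patch: Partial<S>) {
      state = { ...state, ...patch };
      listeners.forEach((l) => l(state)); // notify every subscriber
    },
    subscribe(l: Listener<S>) {
      listeners.add(l);
      return () => listeners.delete(l); // call to unsubscribe
    },
  };
}
```

The Level 2 judgment isn't writing this; it's knowing that app-wide data (the logged-in user, say) belongs in something like this, while a text input's draft value stays in local component state.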

The Real Skill in the AI Era

The skill isn't "prompt engineering." That's tactical and changes as AI improves.

The skill is judgment:

  • Knowing when to use AI and when to think
  • Recognizing when AI output is right vs. plausible-but-wrong
  • Understanding tradeoffs AI doesn't see
  • Making architectural decisions AI can't make

All of those require understanding why code is written a certain way, not just how.

What To Do About It

If you're learning to code:

  • Learn fundamentals first (algorithms, data structures, how HTTP works, what a database does)
  • Then use AI to accelerate. Don't skip to step 2.
  • Ask AI to explain why the code it wrote is structured that way. Use it as a teaching tool.
  • When AI generates code, understand it before using it. This takes an extra 10 minutes but saves you 10 hours later.

If you're experienced but new to AI:

  • Don't treat it as a code generator. Treat it as a thinking partner.
  • Use it for boilerplate and explanations, not architecture.
  • Prompt like you're explaining your problem to a colleague: "I have X problem, here's my constraint, what's the best approach?"
  • Review with the eye of someone who understands why code is structured certain ways.

If you're tempted by "prompt engineering" courses:

  • Skip them. The specific prompts they teach will be outdated in 12 months.
  • Learn programming fundamentals instead. Those are timeless.
  • Learn how to use AI within those fundamentals.

The Uncomfortable Truth

Right now, there's a huge market of people selling "ChatGPT courses for developers" that teach prompt syntax. This is peak hype.

In 12 months, half of them won't exist. The prompts will be outdated. The people who took them will be stuck.

Meanwhile, developers with strong foundations using AI as a tool are getting exponentially more productive. They don't need the course because they understand the principles.

A Different Kind of AI Tutoring

If you're learning, what you need isn't "how to prompt." You need:

  • Strong fundamentals (algorithms, systems thinking, your chosen language)
  • How to use AI to accelerate, not replace, that learning
  • When to trust AI output and when to verify it
  • Real projects where you use AI alongside your judgment, not instead of it

That's not a course. That's tutoring from someone who understands both programming and where AI actually helps and where it doesn't.

We're Building This

We're adding an "AI + Foundations" track to our programming tutoring. Not "learn prompts," but "learn to code at the speed of AI without losing control."

This is for:

  • Developers who want to use AI but feel like they're just copying
  • People learning to code who want to skip the 6-month slog but not skip the thinking
  • Teams that have AI but aren't getting the productivity gains because the fundamentals are weak

It's the opposite of every "ChatGPT for devs" course you've seen. It's harder. It's deeper. It actually sticks.

Get in touch if this resonates. We're validating interest before we officially launch the service.

Have questions or want to discuss this further? Reach out on WhatsApp or email.

Get in touch →