
Dokkaebi Labs · April 5, 2026 · 7 min read

How to Use AI in Code Without Breaking Things (A Practical Guide)

You can prompt Claude to write entire features. But should you? Here's exactly when to use AI, when not to, and the specific practices that keep you in control.

Tags: ai · programming · productivity · workflow · singapore

The AI Code Workflow Nobody Talks About

You get a feature request. Instead of thinking for 20 minutes, you copy the spec into Claude.

30 seconds later: 50 lines of code.

You paste it. It works. You ship it. This is the AI workflow problem we see across Singapore dev teams and remote learners alike.

But you didn't think. You didn't review why it's structured that way. You didn't consider edge cases. You didn't check if it aligns with the rest of your codebase.

That's where bugs hide. That's where security issues live. That's where "it worked in my test" becomes "production is down."

Real AI productivity isn't about moving faster. It's about moving faster without losing control.

When AI Saves You Time (And When It Wastes It)

AI excels at:

| Task | Time Saved | Confidence Level |
| --- | --- | --- |
| Boilerplate (forms, CRUD endpoints, config) | 80% | High: review the output, it's usually right |
| Explanations ("why does this work?") | 90% | High: ask for clarification if confused |
| Refactoring suggestions | 60% | Medium: understand the tradeoff before applying |
| Writing tests | 70% | Medium: test coverage matters, so verify all cases are covered |
| Documentation | 85% | High: AI writes clearer docs than most devs |

AI wastes time on:

| Task | Why | Example |
| --- | --- | --- |
| Architecture decisions | Doesn't know your constraints | "Should this be a service or a hook?" needs your judgment |
| Security-sensitive code | Can be confidently wrong | AI doesn't check against the OWASP Top 10 |
| Performance-critical code | Misses optimization opportunities | Database queries, caching logic, rendering |
| Business logic | Doesn't understand intent | Only you know why the rules exist |

The pattern: AI is great at mechanics, weak at judgment.

The Review Framework: Don't Ship What You Don't Understand

Before you paste AI code into production, ask three questions:

1. Do I understand what this code does?

Not "does it look right?" but "can I trace the execution?"

Bad: "Looks fine." → Ship.
Good: "This function fetches users, filters by status, sorts by date, returns the first 10. I understand each step."

Takes 3 minutes. Prevents 90% of issues.
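To make "can I trace the execution?" concrete, here's a minimal sketch of the function described above, with hypothetical `User` fields (`status`, `created_at` are assumptions), commented so each step is traceable:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class User:
    name: str
    status: str
    created_at: datetime


def recent_active_users(users: list[User]) -> list[User]:
    """Assumes the fetch already happened; traces the remaining steps."""
    # Step 1: filter by status
    active = [u for u in users if u.status == "active"]
    # Step 2: sort by date, newest first
    active.sort(key=lambda u: u.created_at, reverse=True)
    # Step 3: return the first 10
    return active[:10]
```

If you can annotate AI output like this, line by line, you understand it well enough to ship it.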

2. Does this fit our architecture?

Does it match your codebase's patterns?

Bad: You use React hooks everywhere; AI generated class components.
Good: AI generated hooks that match your patterns.

Ask AI: "I use [framework/pattern]. Regenerate this using that approach."

3. Does this miss edge cases?

AI generates happy-path code. Production has unhappy paths.

Bad: AI generates a form without error handling. You ship it.
Good: You ask: "What happens if the API times out? What if the response is malformed? What if the user has no permissions?"

Then update the code.
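Those three unhappy paths can be sketched in one place. This is a hedged example, not a prescription: `fetch` is a hypothetical stand-in for whatever API client you actually use, and the error shapes are assumptions.

```python
def load_users(fetch) -> list[str]:
    """fetch is any zero-arg callable returning the decoded JSON body."""
    # Unhappy path 1: the API times out
    try:
        payload = fetch()
    except TimeoutError:
        return []  # degrade gracefully; real code might retry or surface an error

    # Unhappy path 2: the response is malformed
    if not isinstance(payload, dict):
        raise ValueError("malformed response")

    # Unhappy path 3: the user has no permissions
    if payload.get("error") == "forbidden":
        raise PermissionError("user lacks permission to list users")

    if "users" not in payload:
        raise ValueError("malformed response")
    return [u["name"] for u in payload["users"]]
```

The point isn't this exact structure; it's that every question you asked in review becomes a visible branch in the code.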

The Prompt That Works: Be Specific

Bad prompt: "Write a function to get users"

AI output: Basic SELECT query. No error handling. No logging. No validation.

Good prompt: "Write a function in Python that fetches users from PostgreSQL, handles connection errors gracefully, validates input IDs, logs failures, and returns a list of User objects. Handle the case where no users are found."

Better output: Actual production-ready code.

Formula: [What] + [Stack] + [Constraints] + [Edge cases]

The more specific you are, the better the output. AI isn't magic; it's a reflection of what you ask for.
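Here's roughly the shape that good prompt should produce. A sketch only, under stated assumptions: `connect` stands in for your real connection factory (e.g. psycopg2's), and the table and column names are invented for illustration.

```python
import logging

logger = logging.getLogger("users")


def fetch_users(user_ids, connect):
    """connect is any zero-arg callable returning a DB-API connection."""
    # Validate input IDs before touching the database
    if not user_ids or not all(isinstance(i, int) and i > 0 for i in user_ids):
        raise ValueError("user_ids must be a non-empty list of positive integers")

    try:
        conn = connect()
        cur = conn.cursor()
        # Parameterized query: never interpolate IDs into the SQL string
        cur.execute(
            "SELECT id, name, email FROM users WHERE id = ANY(%s)",
            (list(user_ids),),
        )
        rows = cur.fetchall()
        conn.close()
    except Exception as exc:  # e.g. a connection error; log, then re-raise
        logger.error("user fetch failed: %s", exc)
        raise

    # "No users found" is a valid result: return an empty list, don't raise
    return [{"id": r[0], "name": r[1], "email": r[2]} for r in rows]
```

Notice how every clause of the prompt (validation, error handling, logging, the empty case) maps to a visible piece of the output. That's the test of a specific prompt.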

The Dangerous Practices (Don't Do These)

1. Paste AI Code Without Reading It

You set yourself up for:

  • Security vulnerabilities
  • Performance issues
  • Bugs that only surface under load
  • Technical debt you'll inherit

One security issue costs more than hours of review. Always read.

2. Use AI for Architecture Decisions

AI will generate working code. It won't generate code that's optimal for your constraints.

You have a legacy system with specific patterns. AI doesn't know them. Result: Code that works but doesn't fit.

3. Trust AI on Performance-Critical Paths

AI doesn't optimize for your specific bottleneck. Database queries, rendering loops, caching—these need your thinking.

Use AI to generate the baseline. Benchmark. Optimize with your judgment.
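"Benchmark" doesn't have to mean heavy tooling. A tiny timing harness like this sketch is enough to compare the AI baseline against your optimized version (the name `benchmark` is ours, not from any library):

```python
import time


def benchmark(fn, *args, runs=1000):
    """Return the best-of-N wall-clock time for one call, in seconds."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best
```

Run it on both versions with realistic inputs; keep whichever wins, and only then hand-optimize further.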

4. Skip Testing Because AI Code "Looks Good"

AI code still has bugs. Still misses edge cases. Test it like any code.

5. Never Pair It With a Co-Reviewer

A second set of eyes catches what both you and the AI miss, especially on security-sensitive code.

The Workflow That Actually Works

1. Think first (10 minutes)

  • What am I building?
  • What constraints do I have?
  • What could go wrong?
  • What's my architecture?

Write this down. It becomes your prompt.

2. Generate with AI (2 minutes) Paste that thinking + your spec into Claude. Get code back.

3. Review intelligently (10 minutes)

  • Trace the logic. Do I understand each step?
  • Does it match our patterns?
  • What edge cases does it miss?
  • Is there a security risk?

Make notes.

4. Iterate (5 minutes) Paste your feedback to Claude. Refine.

5. Test (10 minutes) Write tests covering happy path + edge cases.

Total: ~40 minutes for a feature.

Without AI: 2-3 hours. With lazy AI use: Code ships, breaks in production, fixes take 4 hours.

40 minutes of thoughtful AI use is the win.

Real Example: Building an API Endpoint

The thinking step: "I need a /users endpoint that:

  • Fetches users with pagination
  • Filters by status (active/inactive)
  • Only works if authenticated
  • Returns 400 for invalid filters
  • Returns 404 if no results
  • Logs the request
  • Handles database errors gracefully"

The prompt: "Write a Node/Express endpoint for the above. Handle the edge cases. Include error handling and logging."

The review:

  • ✅ Auth check is first (security win)
  • ✅ Pagination is correct
  • ✅ Logging is there
  • ❌ Status filter allows invalid values → Ask for validation
  • ❌ No rate limiting → Add it or document why it's not needed
  • ✅ Database errors handled

The iteration: Paste the issues back. Claude fixes them in 1 minute.

The test: Write tests for valid request, invalid filter, no auth, no results, database error.
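Those five cases read naturally as assertions. Since the real handler is Node/Express (and would be exercised through an HTTP test client), the `get_users` below is a minimal hypothetical Python stand-in for the endpoint's core logic, just to show the shape of the test matrix:

```python
VALID_STATUSES = {"active", "inactive"}


def get_users(params, user, db):
    """Stand-in for the endpoint core. Returns (status_code, body)."""
    if user is None:
        return 401, {"error": "unauthenticated"}          # no auth
    status = params.get("status", "active")
    if status not in VALID_STATUSES:
        return 400, {"error": "invalid status filter"}    # invalid filter
    try:
        rows = db(status)
    except ConnectionError:
        return 500, {"error": "database unavailable"}     # database error
    if not rows:
        return 404, {"error": "no users found"}           # no results
    return 200, {"users": rows}                           # valid request
```

One test per branch, one branch per edge case you listed in the thinking step. When the matrix is complete, you're done.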

Result: Production-ready endpoint in 45 minutes. Every case covered. You understand all of it.

When Not to Use AI

Security-sensitive code: Auth, payments, data deletion

  • Generate baseline, review thoroughly, test extensively
  • Don't ship AI code here without manual audit

Your core differentiation: If this is your secret sauce, think it through. Don't outsource to AI.

One-time learning: If this teaches you something important, do it manually. You need the experience.

Ambiguous requirements: If you don't know what you want, AI won't either. Think first.

The Skill That Matters Most

The most valuable skill in the AI era is judgment. Knowing when to trust AI output and when to think.

This is what separates:

  • Fast + broken (prompt, paste, ship)
  • Fast + solid (prompt, review, refine, ship)

The second takes 5x longer than the first, but prevents 95% of issues.

We Train This Skill

Our "AI + Foundations" tutoring track teaches exactly this:

  • How to use AI as a thinking partner, not a replacement
  • When to trust AI and when to verify
  • Workflows that keep you fast and safe
  • Real projects where you practice this judgment

Not "ChatGPT for developers," but "a developer using AI like a pro."

Learn more about our programming tutoring or get in touch to discuss your team's AI adoption.

Have questions or want to discuss this further? Reach out on WhatsApp or email.
