
Dokkaebi Labs · April 20, 2026 · 7 min read

AI Agents Are Everywhere. Most of Them Are Useless.

Everyone's building AI agents right now. Most of them are glorified autocomplete with a for-loop. Here's what agents actually are, what they're good for, and how to tell the difference before you waste six months building one.

Tags: ai-agents · automation · programming · langchain · business · singapore



The problem with the word "agent"

Open LinkedIn right now. I'll wait.

At least three posts about someone's "AI agent that automates your entire workflow." A startup that raised $4M for an agent that "thinks like a human." A 22-year-old selling a course on "building autonomous AI agents in a weekend."

The word has been stretched so thin it means almost nothing.

So let's start with a definition that actually holds up.

An AI agent is a system that:

  1. Receives a goal (not just a prompt)
  2. Decides on its own what steps to take
  3. Uses tools (search, APIs, code execution, databases)
  4. Loops — checking its own work, adjusting course, trying again
  5. Stops when the goal is met (or when it should give up)

A chatbot that answers questions is not an agent. A pipeline that runs GPT on a CSV and emails the output is not an agent. A thing that "chats with your documents" is definitely not an agent.

An agent has autonomy. It figures out the plan. You give it the objective.
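Stripped to its skeleton, that five-point definition is a loop. Here's a toy sketch in plain Python, not any framework's API; pick_tool and goal_met are stand-ins for the model's decision-making, and the toy tools exist only so the loop runs end-to-end:

```python
# Minimal agent loop: goal in, tool choice each step, self-check, hard stop.
def run_agent(goal, tools, pick_tool, goal_met, max_steps=10):
    """Pursue `goal` by repeatedly choosing a tool, acting, and checking."""
    history = []
    for _ in range(max_steps):
        name, args = pick_tool(goal, history)   # decide: which tool, what input
        result = tools[name](*args)             # act: search, API call, code exec
        history.append((name, result))
        if goal_met(goal, history):             # self-check: is the goal met?
            return history
    return history                              # budget exhausted: stop, don't spin

# Toy example: "find a number whose square exceeds the goal".
tools = {"square": lambda x: x * x}
pick = lambda goal, hist: ("square", (len(hist) + 1,))   # try 1, 2, 3, ...
done = lambda goal, hist: hist[-1][1] > goal

trace = run_agent(25, tools, pick, done)
# stops at the sixth step, since 6 * 6 = 36 > 25
```

Everything interesting in a real agent lives in the two placeholder functions, which is exactly why the loop alone isn't the hard part.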


Why agents are actually a big deal

When you get this right — actually right, not demo-right — the shift is jarring.

Here's a real example. We built an agent for a client who runs a small security consultancy. Their workflow before: manually check client servers every morning, pull logs, write a summary, email it. About 2 hours a day.

The agent now: wakes up at 6am, SSHes into client environments, runs checks, queries threat intel feeds, cross-references against known CVEs, writes a plain-English briefing in their tone, and drops it in Slack. If it finds something suspicious, it escalates to the engineer. If it's routine, it handles it.

That's not autocomplete. That's something doing work.

The reason this matters beyond the obvious (time savings): agents can do things at a scale and consistency that humans physically cannot. Monitoring 50 endpoints simultaneously. Scanning 10,000 leads and qualifying them against 6 criteria. Running your own internal red team every night while you sleep.


Why most agents are garbage

Because people build them wrong. The most common failure modes:

1. They chain prompts and call it an agent

This is the most widespread issue. Someone strings together 5 LLM calls in sequence: "Summarise this → rewrite this → check this → format this." That's a pipeline. It's deterministic. There's no decision-making. If step 3 fails, the agent doesn't figure out an alternative — it just breaks.

Real agents make branching decisions. They choose which tool to use based on what they find. They retry with a different approach. The loop matters.
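The difference is concrete. Here's a toy contrast with invented function names: the pipeline runs fixed steps and dies when one fails; the agent-style loop checks each result and retries with a different approach.

```python
# A pipeline: fixed order, no decisions. A failing step breaks the chain.
def pipeline(doc, steps):
    for step in steps:
        doc = step(doc)
    return doc

# An agent-style loop: inspect the result, branch, retry differently.
def agentic(doc, primary, fallback, valid, max_tries=3):
    for attempt in range(max_tries):
        tool = primary if attempt == 0 else fallback   # branch: change approach
        result = tool(doc)
        if valid(result):                              # check its own work
            return result
    return None                                        # escalate / give up

# Toy tools: the "summariser" fails on long input; the fallback truncates first.
primary = lambda d: d if len(d) <= 10 else None
fallback = lambda d: d[:10]
valid = lambda r: r is not None

out = agentic("a very long document", primary, fallback, valid)
```

The pipeline version would have returned None and called it a day. The loop noticed and changed tactics. That's the whole distinction.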

2. They give the agent too much autonomy too soon

The other extreme: "just let the model decide everything." This sounds good until your agent goes down a rabbit hole, makes 47 API calls trying to solve the wrong problem, bills you $80 in tokens, and confidently returns the wrong answer.

Agents need guardrails. Sensible defaults. Clear criteria for when to ask a human. Autonomy within a defined scope — not unlimited autonomy.
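What guardrails look like in practice: a step budget, a cost cap, and an explicit rule for when a human gets pulled in. The numbers and the escalation rule below are illustrative defaults, not a standard:

```python
# Guardrails sketch: autonomy inside a budget, with explicit escalation.
class Budget:
    def __init__(self, max_steps=20, max_cost_usd=2.00):
        self.steps, self.cost = 0, 0.0
        self.max_steps, self.max_cost = max_steps, max_cost_usd

    def charge(self, cost):
        """Record one step; return False once any limit is exceeded."""
        self.steps += 1
        self.cost += cost
        # Hard stop long before "$80 in tokens" territory.
        return self.steps <= self.max_steps and self.cost <= self.max_cost

def guarded_run(actions, budget, needs_human):
    for act in actions:
        if needs_human(act):                     # clear criterion for escalating
            return ("escalate", act["name"])
        if not budget.charge(act["cost"]):
            return ("stopped", "budget exhausted")
        # ... perform the action here ...
    return ("done", budget.cost)

acts = [{"name": "search", "cost": 0.01}, {"name": "summarise", "cost": 0.02}]
status = guarded_run(acts, Budget(), lambda a: a["name"] == "delete_prod_db")
```

The point isn't these particular limits. It's that the limits exist, are enforced outside the model, and fail closed.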

3. They use the wrong framework for the job

LangChain. LangGraph. CrewAI. AutoGen. Flowise. n8n. There are approximately a thousand ways to build agents now, and most people pick one because they saw a YouTube video about it.

Each has tradeoffs. LangGraph is great for complex state machines. CrewAI shines for multi-agent collaboration. n8n is perfect if you need quick integrations without writing much code. Picking the wrong one for the job is like using a sledgehammer to hang a picture frame — it'll work, technically, but you'll hate yourself.


What agents are actually good for right now

Tested, not hypothetical:

Research and synthesis — Give an agent a question. It searches, reads, cross-references, discards junk, and writes you a briefing. Better than anything you'd do manually at 9pm.

Monitoring and alerting — Continuous checks on things that would bore a human to tears. Uptime, price changes, competitor activity, regulatory updates, security anomalies. The agent pings you when something matters.

Lead qualification and outreach — Pull a list of prospects, have the agent research each one, score them against your criteria, draft personalised first messages. What would take a BDR two weeks takes minutes.

Code review and documentation — Not replacing engineers. Augmenting them. Agent reads your PR, checks for common issues, generates draft docs, flags anything that looks off. Your senior engineer spends 10 minutes reviewing the agent's notes instead of 45 minutes doing it from scratch.

Internal ops workflows — Anything that involves: check this thing → decide what to do → do it → update the record. Booking confirmations, invoice chasing, support ticket triage. Tedious, rule-based, high-volume. Exactly what agents are built for.
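That check → decide → act → update shape is simple enough to sketch. Here's invoice chasing as a toy example; the thresholds and actions are made up for illustration, not a recommendation:

```python
# Check -> decide -> act -> update, sketched for invoice chasing.
def chase_invoices(invoices, today, send_reminder, flag_for_human):
    for inv in invoices:
        overdue = today - inv["due_day"]        # check: how late is it?
        if overdue <= 0:
            action = "skip"                     # decide: not due yet
        elif overdue <= 30:
            send_reminder(inv)                  # act: routine case, handle it
            action = "reminded"
        else:
            flag_for_human(inv)                 # act: stale debt, escalate
            action = "flagged"
        inv["status"] = action                  # update the record
    return invoices

invoices = [
    {"id": 1, "due_day": 100},   # not due yet
    {"id": 2, "due_day": 80},    # 15 days overdue
    {"id": 3, "due_day": 40},    # 55 days overdue
]
done = chase_invoices(invoices, today=95, send_reminder=print, flag_for_human=print)
```

Notice the agent handles the routine branch itself and escalates the ugly one. That's the division of labour you want.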


What agents are NOT good for right now

Also worth saying plainly:

Anything requiring genuine creativity — Agents iterate and combine. They don't originate. If you need a truly novel idea, a human is still better.

High-stakes decisions without human review — Medical diagnoses, legal strategy, financial advice, anything with serious consequences if it's wrong. Agents can assist, prepare, analyse. They should not be the final word.

Replacing relationships — An agent can draft the email. The relationship is still yours. Anyone who tells you otherwise is selling something.

Things that require common sense — LLMs are still surprisingly bad at basic real-world reasoning sometimes. "There are 12 apples. I eat 3. How many are left?" Usually fine. "Given that the client is angry and it's a Friday afternoon, what should I do?" Surprisingly unreliable.


If you're thinking about building an agent

Three questions to ask first:

1. Is there an actual decision loop, or is this really just a pipeline? If every step is predetermined and nothing branches, you probably don't need an agent. A well-engineered function will do it faster, cheaper, and more reliably.

2. What's the failure mode? When the agent gets it wrong (and it will), what happens? If the answer is "catastrophic" or "invisible," design for human review checkpoints before you give it autonomy.

3. What's the 80% case? Agents shine when there's a clear common case and a manageable set of edge cases. If every instance is completely unique and requires judgment, agents will fight you the whole way.
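On question 2, the fix is usually a review checkpoint: the agent drafts, a person approves anything risky. A minimal sketch, where the risk scoring is an invented placeholder for your own criteria:

```python
# Human-review checkpoint sketch: the agent drafts, a person gates risky actions.
def with_checkpoint(draft_action, risk, approve):
    """Run low-risk actions unattended; require approval above a threshold."""
    if risk(draft_action) >= 0.5:               # the "catastrophic or invisible" cases
        if not approve(draft_action):           # ask before acting
            return ("rejected", draft_action)
        return ("approved_by_human", draft_action)
    return ("auto", draft_action)               # routine cases run unattended

result = with_checkpoint(
    {"op": "refund", "amount": 900},
    risk=lambda a: 0.9 if a["amount"] > 500 else 0.1,
    approve=lambda a: False,                    # the human says no in this run
)
```

Start with the threshold set low (almost everything reviewed), then raise it as the agent earns trust. Going the other direction is how you end up explaining a bad refund to a client.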


What we're building with agents at Dokkaebi Labs

We build these for clients now — and use them internally. Current stack is mostly LangGraph for complex stateful agents, n8n for integration-heavy workflows, and custom Python for anything that needs tight control.

Recent projects: automated security monitoring for an SME client, a research assistant for a trading firm, and a qualification agent for a tutoring marketplace that was drowning in enquiries.

In every case, the agent didn't replace the human — it removed the work that was keeping the human from doing anything useful.

That's the actual goal. Not science fiction. Not "AGI is here." Just: the boring, high-volume, low-judgment work handled automatically so the people can do the things only people can do.

If you're trying to figure out whether an agent is the right solution for something you're working on — that's exactly what our free scoping calls are for. We'll be direct with you if it's not the right fit. (/consultancy)


About the author: Dokkaebi Labs builds AI agents and automation systems for businesses in Singapore. We work in LangChain, LangGraph, CrewAI, and custom Python — and we're selective about the projects we take on.

Read next: How to Use AI in Code Without Breaking Things

Have questions or want to discuss this further? Reach out on WhatsApp or email.

Get in touch →