AI inside the work, not next to it.
Why ChatGPT in a browser tab does not move the needle on your business. What “AI inside the work” actually looks like — three patterns we ship most often, and where each one earns its keep.
Most businesses have tried AI. Someone on the team opened ChatGPT. Maybe two people did. Maybe the company paid for a few seats. The hours saved are real but private — they live in individuals who learnt the tool — and they do not show up anywhere on the business. The competitor’s hours saved show up on the business. That is the distinction.
AI in a browser tab is a help. AI inside the work is leverage. The difference is whether the model is used only when someone chooses to reach for it, or whether it sits in the path of work the business already runs every day.
Three patterns describe most of the useful integrations we ship.
The first is a model embedded in a workflow. The customer service inbox is a good example. Every incoming email is read by a model. The model writes a draft reply, tags the email with a category, surfaces the relevant order or account, and queues the draft in the human’s view. The human edits and sends. Time saved per email: about ninety per cent. Time saved per day across a small team: a person.
The second is structured-output extraction. Documents come in — contracts, invoices, applications, transcripts. A model reads them and emits a typed object that drops into the system of record. The model is not reasoning, it is reading. This is the most reliable AI work you can do today and the easiest to evaluate. If you have piles of unstructured documents and a team typing them into a form, this is your highest-value AI project.
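The "typed object" is the whole trick. A sketch of the shape, under assumed names: the model is constrained to emit JSON matching a schema, and the pipeline validates it into a real type before it touches the system of record. The invoice fields here are illustrative.

```python
import json
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Invoice:
    supplier: str
    invoice_number: str
    issue_date: date
    total_cents: int    # store money as integer cents, never floats

def parse_invoice(raw_json: str) -> Invoice:
    # Validation boundary: if the model emits something malformed,
    # this raises here, not downstream in the ledger.
    data = json.loads(raw_json)
    return Invoice(
        supplier=str(data["supplier"]),
        invoice_number=str(data["invoice_number"]),
        issue_date=date.fromisoformat(data["issue_date"]),
        total_cents=int(data["total_cents"]),
    )

# Simulated model output for one document:
model_output = (
    '{"supplier": "Acme", "invoice_number": "INV-77",'
    ' "issue_date": "2024-03-01", "total_cents": 129900}'
)
invoice = parse_invoice(model_output)
```

Because the output is typed, evaluation is just field-by-field comparison against a labelled sample, which is why this pattern is the easiest to test.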
The third is conversation in front of operational data. Your team’s question is “how is cohort 03 trending on retention this week?” The old answer is a report scheduled by someone who left two years ago. The new answer is a chat interface, grounded in your data, that returns the chart and the number. This is harder than the first two because grounding is a real engineering problem. But for the right business it is transformative.
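Grounding means the answer comes from the business's own data, not from the model's memory. In a real system a model would translate the question into a query, or retrieve the rows as context; in this sketch the query is hard-coded to show the shape, and the schema is invented for illustration.

```python
import sqlite3

# Toy operational store with a hypothetical retention table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE retention (cohort TEXT, week INTEGER, retained REAL)")
conn.executemany(
    "INSERT INTO retention VALUES (?, ?, ?)",
    [("03", 1, 0.81), ("03", 2, 0.74), ("03", 3, 0.70)],
)

def answer(question: str) -> str:
    # "How is cohort 03 trending on retention?" -> a grounded query,
    # not a guess. The engineering problem is getting from the free-text
    # question to this query reliably.
    rows = conn.execute(
        "SELECT week, retained FROM retention WHERE cohort = ? ORDER BY week",
        ("03",),
    ).fetchall()
    trend = ", ".join(f"week {w}: {r:.0%}" for w, r in rows)
    return f"Cohort 03 retention: {trend}"

print(answer("how is cohort 03 trending on retention this week?"))
```

The chat interface is the easy half; the hard half is the translation layer between the question and the query, and that is where the engineering budget goes.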
The cost question. AI in the work is no longer expensive in the cost-per-token sense. A well-built customer-service workflow runs at single-digit cents per email. A document-extraction pipeline runs at less than a rand per document. What is expensive is the eval harness — the test suite that catches the model when it gets things wrong. Without an eval harness the implementation is a demo, not a system.
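An eval harness can start very small. A sketch of the idea, with a keyword stand-in where the model call would go so it runs offline, and a hypothetical handful of labelled emails as the golden set:

```python
def classify(email_body: str) -> str:
    # Stand-in for the model call under test. In production this is
    # the same call the live workflow makes.
    body = email_body.lower()
    if "refund" in body:
        return "refund"
    if "password" in body:
        return "account"
    return "other"

# Golden set: real cases from the workflow, labelled once by a human.
GOLDEN = [
    ("I want a refund for order 1042", "refund"),
    ("I forgot my password", "account"),
    ("Do you ship to Durban?", "other"),
]

def run_eval() -> float:
    passed = sum(1 for body, want in GOLDEN if classify(body) == want)
    return passed / len(GOLDEN)

# Run on every model or prompt change; fail the build if the pass
# rate drops below an agreed threshold.
assert run_eval() >= 0.9
```

The golden set grows every time the model gets something wrong in production, which is how the harness keeps proving the system still works months later.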
The wrong reason to install AI is that someone on the board read an article. The right reason is that there is a category of work in your business that consumes hours and does not require judgement, and you have a tolerable way to catch the model when it errs.
Start with one workflow. Measure the time it saves. Build the eval that proves it is still working in three months. Then do the next one. The compounding effect on margin is the whole story.
→ Book a Discovery. We will find the leak AI can close fastest in your business.
When off-the-shelf software stops fitting.
The signs your business has outgrown a SaaS tool, the workarounds that compound, and how to tell whether custom software is the answer — or whether you just have not configured what you already pay for.
Inside a Flowuity Discovery.
A look at the two-week paid engagement that begins every Flowuity build. What we read, who we interview, what the memo contains, and why a clear no is the most useful outcome.
The work that does not need a person.
Lead qualification, support triage, reconciliation, scheduling. When an AI agent is the right answer, how the economics work, and where agents fail in ways that humans do not.