For the past two years, every software vendor, consulting firm, and LinkedIn post has been talking about AI in business. For an SME leader, this noise has become exhausting. Announcements keep coming, use cases keep piling up, and when the time comes to decide what to do within your own organization, you still don't know where to start.

Something has changed, however, and it's worth looking at closely: capabilities that three years ago required a six-figure budget, eighteen months, and a dedicated team are now within reach for targeted scopes, at a fraction of the scale.

The leaders I meet have two reflexes when facing this change. The first is to do nothing and wait for the market to calm down. The second is to launch an AI project because "we have to move," without knowing what problem they're trying to solve.

The right entry point is less spectacular: start with an honest read of your processes, then choose the right AI building block.

The work begins well before buying any tool. It consists of identifying the processes that cost hidden time or quality, then injecting the right level of automation: simple automation, an internal RAG-based search engine, or a supervised agent that orchestrates multiple steps.

Three criteria for filtering

Before choosing a tool, you need to choose a process. Not all of them are worth the investment. Three criteria consistently hold up in practice.

Volume and repetition. A process that runs ten times a year will never pay off the cost of its tooling. A process that runs fifteen times a day justifies almost any investment. Frequency, not irritation, is the first useful question to ask.

Tolerance for approximation. A summary of a customer interaction reviewed by a human can afford an imperfect first version. A credit decision cannot. This tolerance determines the level of autonomy you delegate to the machine.

Data accessibility. Many AI projects fail before they start, due to a lack of usable material: scattered information, unstructured data, or knowledge that exists only in the heads of a few key people.
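As a rough sketch, the three criteria can be turned into a simple screening filter. Everything here is illustrative: the threshold, the process names, and the boolean flags are hypothetical stand-ins for a real assessment, not a recommended scoring model.

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    runs_per_month: int      # volume and repetition
    error_tolerant: bool     # can a human review an imperfect first draft?
    data_accessible: bool    # is the raw material structured and reachable?

def worth_automating(p: Process, min_runs_per_month: int = 50) -> bool:
    """Screen a process against the three criteria.
    The frequency threshold is illustrative, not a recommendation."""
    return p.runs_per_month >= min_runs_per_month and p.error_tolerant and p.data_accessible

candidates = [
    Process("annual budget consolidation", runs_per_month=1,
            error_tolerant=True, data_accessible=True),
    Process("field service report entry", runs_per_month=300,
            error_tolerant=True, data_accessible=True),
    Process("credit decision", runs_per_month=200,
            error_tolerant=False, data_accessible=True),
]

shortlist = [p.name for p in candidates if worth_automating(p)]
print(shortlist)  # only the frequent, error-tolerant, well-fed process survives
```

The point of the exercise is the elimination, not the score: a rare process, an intolerant one, or one starved of usable data drops out before any tool is discussed.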

[Illustration: AI process evaluation matrix for SMEs]

An industrial aftersales team moving beyond Excel

First case: an industrial SME of around two hundred people, with a field aftersales network. Intervention tracking has relied for fifteen years on an Excel file built on VBA macros, now critical but fragile. Each branch has its own variant, reports come in by email, and technical documentation remains hard to access on the go.

The classic reflex would be to immediately launch an "AI project." That would be a mistake. The first step is to replace the Excel file with a structured tool that handles data cleanly and is designed for the technicians who use it every day.

AI then comes in at two levels. First, to accelerate the migration of legacy content: an LLM helps read the VBA, clarify the business rules, and secure the data migration. Then, to improve operations: voice input for field service reports, and assisted search in the technical documentation via a RAG with verifiable sources.
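A minimal sketch of what "RAG with verifiable sources" means in practice. Naive keyword overlap stands in for a real embedding index, and the document snippets are invented; the essential part is that source identifiers travel with the retrieved text, so the generated answer can cite pages a technician can check.

```python
def retrieve(query: str, chunks: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
    """Rank documentation chunks by naive keyword overlap with the query.
    A real system would use embeddings; the shape of the output is the same."""
    terms = set(query.lower().split())
    scored = sorted(
        chunks.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

# Hypothetical chunks keyed by their source location
docs = {
    "manual_p12": "Replace the hydraulic filter every 500 operating hours.",
    "manual_p47": "The pump requires a torque of 40 Nm on mounting bolts.",
    "bulletin_07": "Filter part number changed to HF-204 in 2023.",
}

hits = retrieve("When should the hydraulic filter be replaced?", docs)

# The prompt sent to the LLM carries the source ids, so the answer can cite them
context = "\n".join(f"[{src}] {text}" for src, text in hits)
print(context)
```

Because every passage reaches the model tagged with its origin, the technician sees "manual, page 12" next to the answer rather than an unverifiable claim.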

[Illustration: Field service technician modernizing a VBA Excel system with AI assistance]

A property management firm getting some breathing room

Second case: a property management firm with around fifteen staff, managing hundreds of buildings. The business is document-intensive, under pressure for responsiveness and legal precision.

Two use cases quickly create value. First, document search: regulations, AGM minutes, contracts, histories, and correspondence become queryable in natural language, with verifiable references. Second, assisted drafting: recurring emails and letters start from a contextualized first version, reviewed and validated by the manager.
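The guardrail on assisted drafting can be sketched as follows. In a real deployment the body would come from an LLM prompted with the file's context; here a static template stands in, and the template name, fields, and wording are all hypothetical. What the sketch shows is the workflow: the system produces a contextualized first version, and nothing leaves without the manager's validation.

```python
def draft_reply(template_kind: str, context: dict[str, str]) -> str:
    """Build a contextualized first draft for human review.
    The manager edits and validates before anything is sent."""
    templates = {
        "arrears_reminder": (
            "Dear {owner},\n\n"
            "Our records for building {building} show an outstanding balance "
            "of {amount} as of {date}. Please disregard this notice if "
            "payment has already been made.\n"
        ),
    }
    draft = templates[template_kind].format(**context)
    return draft + "\n[DRAFT - requires manager validation before sending]"

print(draft_reply("arrears_reminder", {
    "owner": "Mr. Dupont",
    "building": "Residence A",
    "amount": "EUR 320",
    "date": "2024-06-01",
}))
```

The explicit draft marker is the design choice that matters: the tool's output is an input to the professional's judgment, never a finished act.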

Business judgment remains human. AI absorbs the mechanical part; the professional retains responsibility for what goes out.

[Illustration: Document management and assisted drafting for a property management firm]

A claims process finally orchestrated

Third case: an insurance company's claims management. A file can sometimes take several months and involve many stakeholders. A significant portion of the workload comes from follow-up reminders, progress summaries, and status communications.

An AI orchestration agent can automatically follow up with stakeholders, produce readable progress updates, detect discrepancies between expected and received documents, and maintain a reliable timeline of the case. But it doesn't decide: sensitive judgments remain in the hands of the manager.
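The mechanical core of such an agent can be sketched without any AI at all: compare expected against received documents and flag stakeholders who have gone quiet. All names, dates, and the silence threshold are hypothetical; in production an LLM would turn this structured output into readable progress updates and reminder drafts, while the adjuster decides what actually goes out.

```python
from datetime import date

def claim_dashboard(expected: set[str], received: set[str],
                    last_contact: dict[str, date], today: date,
                    max_silence_days: int = 14) -> dict:
    """Detect document discrepancies and overdue follow-ups for one claim file.
    The output feeds drafted reminders; the agent never decides on the claim."""
    missing = sorted(expected - received)
    unexpected = sorted(received - expected)   # received but never requested
    to_follow_up = sorted(
        party for party, when in last_contact.items()
        if (today - when).days > max_silence_days
    )
    return {"missing": missing, "unexpected": unexpected, "follow_up": to_follow_up}

status = claim_dashboard(
    expected={"police_report", "repair_estimate", "photos"},
    received={"photos", "invoice"},
    last_contact={"garage": date(2024, 5, 2), "expert": date(2024, 5, 28)},
    today=date(2024, 6, 1),
)
print(status)
```

Keeping this comparison explicit and auditable is what makes the timeline reliable: the agent reports discrepancies, and the human interprets them.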

The expected result is twofold: fewer "anxiety calls" from policyholders thanks to proactive communication, and more time for managers to handle genuinely complex and human cases.

[Illustration: AI orchestration of a claims process with human supervision]

What ties the three cases together

These three cases don't differ by some level of technological "modernity," but by the degree of autonomy delegated to the machine. And this degree depends on two things: the risk attached to errors, and the actual capacity for human supervision.

This is precisely where much commercial discourse goes wrong: it sells the autonomous agent as a universal goal. For most SMEs, this target is poorly calibrated. The right objective is, first, the right level of autonomy — process by process.

Where to look first

The starting point is not necessarily the process most visible in the leadership committee. That's often the most political, the most risky, and the hardest to transform quickly.

The best entry point is found instead in the everyday processes that generate hidden time costs, repeated errors, or chronic customer dissatisfaction. The ones that run silently, without making noise, but consume enormous organizational energy.

Start small, succeed, learn, then scale up: this order makes all the difference. It builds internal capability, installs the right guardrails, and builds credibility for the approach with your teams.

AI building blocks have become accessible. What remains rare is the discernment to decide what to automate, what to assist, and what to leave entirely to humans.