AI projects in ecommerce fail at a rate that nobody likes to write about. The model is usually fine. The readiness of the business to support the model is where the work actually breaks down. Product catalogues with missing attributes. Customer records split across three systems. Order history nobody can query without asking a developer. Brilliant models asked to infer signals from data that was never captured.

Before you hire an AI agency, or before an AI agency accepts your project, a readiness audit answers a specific question: does this business have the foundations in place for AI to produce value? This piece walks through the fifteen questions we ask during that audit, why each matters, and what a good answer looks like.

Why AI Readiness Matters More Than Model Selection

The model is a commodity. In 2026, a good implementation team can choose between OpenAI, Anthropic, Google, and open-weight models in the Llama or Qwen families and hit similar quality on most ecommerce tasks. The differentiator is almost always what surrounds the model: the data that feeds it, the infrastructure that serves it, the evaluation that validates it, and the operational processes that act on its output.

Ecommerce businesses that leap into AI work without auditing readiness tend to burn twelve months and half a million pounds, and take a reputational hit, before reaching the uncomfortable conclusion that the problem was upstream of the AI. The audit is cheaper than the rework.

The Fifteen Questions

The checklist is grouped into four categories: data, infrastructure, use case, and organisation. A business that cannot answer most of these confidently is not yet ready for production AI.

Data readiness (Q1–5)

Q1: Is your product catalogue complete and structured?

For every SKU, you need consistent core metadata: title, description, category path, price, availability, brand, key attributes (size, colour, material), and high-quality imagery. For any AI feature that touches products — search, recommendations, attribute extraction, personalised email — missing metadata translates directly into worse output. A catalogue with 40% of products missing the material field cannot power a “shop by material” AI feature regardless of how good the model is.
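The completeness check is easy to automate. A minimal sketch, assuming the catalogue is a list of dicts and that the field names and the 95% threshold are illustrative choices rather than a standard:

```python
# Illustrative required fields — adjust to your own schema and the
# AI feature you are planning.
REQUIRED_FIELDS = ["title", "description", "category_path", "price",
                   "availability", "brand", "size", "colour", "material"]

def fill_rates(catalogue: list[dict]) -> dict[str, float]:
    """Fraction of SKUs with a non-empty value for each required field."""
    total = len(catalogue)
    return {
        field: sum(1 for sku in catalogue
                   if sku.get(field) not in (None, "")) / total
        for field in REQUIRED_FIELDS
    }

def gaps(catalogue: list[dict], threshold: float = 0.95) -> dict[str, float]:
    """Fields whose fill rate falls below the readiness threshold."""
    return {f: r for f, r in fill_rates(catalogue).items() if r < threshold}
```

Run against the live catalogue, the `gaps` output is effectively the data-remediation backlog for any product-facing AI feature.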

Q2: Can you query order history programmatically?

Order history feeds recommendation systems, churn prediction, cohort analysis, and many personalisation features. If your analytics team uses the Shopify admin UI or downloads CSVs, the data layer is not AI-ready. You need a queryable data warehouse — BigQuery, Snowflake, Redshift, or an equivalent — with orders, line items, customers, and products joined consistently.
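"Queryable" has a concrete shape: orders, line items, customers, and products joined in one statement, no UI export involved. A minimal sketch using an in-memory SQLite database as a stand-in for the warehouse; table and column names are illustrative, and real warehouses such as BigQuery or Snowflake will differ, but the join is the readiness test:

```python
import sqlite3

# Stand-in warehouse schema (illustrative names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers  (customer_id INTEGER PRIMARY KEY, email TEXT);
CREATE TABLE products   (product_id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE orders     (order_id INTEGER PRIMARY KEY, customer_id INTEGER,
                         ordered_at TEXT);
CREATE TABLE line_items (order_id INTEGER, product_id INTEGER,
                         quantity INTEGER, unit_price REAL);
""")

def revenue_per_customer(conn) -> list[tuple]:
    """Total spend per customer — the kind of feature a churn model consumes."""
    return conn.execute("""
        SELECT c.email, SUM(li.quantity * li.unit_price) AS revenue
        FROM customers c
        JOIN orders o      ON o.customer_id = c.customer_id
        JOIN line_items li ON li.order_id   = o.order_id
        GROUP BY c.email
        ORDER BY revenue DESC
    """).fetchall()
```

If the analytics team cannot write and run a query of this shape against fresh data, Q2 is a "no".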

Q3: Are customer records reconciled across systems?

A customer buys on the website, subscribes on email, contacts support, and browses the mobile app. Four systems, often four IDs. Without a stable customer identity layer that reconciles across these, AI features that personalise across channels will produce inconsistent or contradictory behaviour. A customer data platform or a lightweight identity resolution layer fixes this.
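The lightweight version of identity resolution is a merge on shared identifiers. A sketch, assuming records carry a `record_id` plus optional `email` and `phone` fields; a real CDP does fuzzier probabilistic matching, but this illustrates the stable-identity requirement:

```python
from collections import defaultdict

def resolve_identities(records: list[dict]) -> list[set[str]]:
    """Group record IDs that share any identifier, via union-find."""
    parent = {r["record_id"]: r["record_id"] for r in records}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    seen = {}  # (field, value) -> first record_id that carried it
    for r in records:
        for key in ("email", "phone"):
            val = r.get(key)
            if not val:
                continue
            if (key, val) in seen:
                union(r["record_id"], seen[(key, val)])
            else:
                seen[(key, val)] = r["record_id"]

    groups = defaultdict(set)
    for r in records:
        groups[find(r["record_id"])].add(r["record_id"])
    return list(groups.values())
```

Each returned set is one customer as far as downstream personalisation is concerned; the point of the audit question is whether such a mapping exists and is maintained, not which tool produces it.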

Q4: Do you have clean return and refund data?

Returns reveal genuine mismatches between customer expectation and product reality. For recommendations, search, and product insight features, return data is often more valuable than order data. It must be captured with reason codes, linked to the original order and product, and accessible alongside the sales data.
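Once returns carry reason codes and link back to products, the basic insight is a one-pass aggregation. A sketch with illustrative field names (`product_id`, `reason_code`):

```python
from collections import Counter, defaultdict

def return_insight(orders: list[dict], returns: list[dict]) -> dict:
    """Return rate and dominant reason per product."""
    sold = Counter(o["product_id"] for o in orders)
    reasons = defaultdict(Counter)
    for r in returns:
        reasons[r["product_id"]][r["reason_code"]] += 1
    return {
        pid: {
            "return_rate": sum(reasons[pid].values()) / sold[pid],
            "top_reason": reasons[pid].most_common(1)[0][0],
        }
        for pid in reasons if sold[pid]
    }
```

A product with a high return rate dominated by "too small" is a sizing-data problem a recommendation model should know about; without linked reason codes, that signal never reaches it.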

Q5: Is your customer service history captured in a queryable format?

Support tickets, live chat transcripts, and post-purchase emails contain the richest signal about what customers actually think. For any AI feature touching customer communication — draft responses, FAQ generation, issue triage — this data needs to be accessible, time-stamped, and linked to the customer and product. Zendesk, Intercom, Gorgias, and similar platforms all provide exports; the question is whether anyone has set up the pipeline.

Infrastructure readiness (Q6–9)

Q6: Do you have a data warehouse with recent, reliable data?

AI infrastructure assumes a warehouse. If your analytics still runs off production database replicas or CSV extracts, adding AI use cases will force the warehouse build anyway. Do it first. BigQuery and Snowflake remain the defaults for mid-market ecommerce in 2026; the build cost is £30,000 to £100,000 depending on source system complexity.

Q7: Are your production systems integrated enough to act on AI output?

AI produces decisions. If those decisions cannot reach the systems that need to act on them, the AI is a demo, not a feature. If your recommendation engine needs to update the storefront, does your frontend have a production path to consume it? If your personalisation engine produces email segments, can your email platform ingest them without a human export? Integration readiness is often the hidden blocker.

Q8: Do you have a staging environment that mirrors production for AI testing?

AI features need to be tested against realistic data before they touch customers. A staging environment with a recent production data snapshot lets the team evaluate model behaviour on real catalogue and real order patterns without risk. Businesses that test AI features only in production ship bugs that customers discover first.

Q9: Is security and data handling mature enough to pass a vendor review?

AI pipelines move customer data between systems. That means data processing agreements (DPAs), encryption at rest and in transit, access controls, and audit logs. If your current vendor security review is informal, an AI project will force it to mature. Better to do that work before the project starts than during the vendor due diligence for an agentic AI feature six months in.

Use case readiness (Q10–12)

Q10: Is the first use case narrow and measurable?

“Build an AI shopping assistant” is not a use case; it is an aspiration. “Improve on-site search relevance for fashion queries by 15% in six months” is a use case. Narrow, measurable, and linked to a business metric. The first AI project sets the standard for every one that follows; a vague first project calibrates the team to produce vague outcomes.

Q11: Is there a business owner who will act on the results?

Every AI feature sits in a business workflow. A personalisation feature needs a marketing owner; a search feature needs a merchandising owner; a customer service AI needs a support operations owner. If no one owns the feature after launch, it will not be maintained, evaluated, or improved. The business owner must be named before the build starts.

Q12: Have you defined what “good” looks like?

Evaluation criteria need to be explicit and pre-registered before the build. For a recommendation engine: click-through rate, add-to-cart rate, conversion rate, revenue per session, all segmented by customer type. For a search feature: zero-results rate, refinement rate, session conversion. Without pre-registered metrics, any result will look plausible in hindsight — and that is how AI projects get renewed for a second year despite producing nothing of value.
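Pre-registration works best when the metrics are written down as code before launch. A sketch for the search example, assuming each session is a dict with a list of queries (each carrying a `result_count`) and a `converted` flag; the target values are illustrative placeholders, not benchmarks:

```python
def search_metrics(sessions: list[dict]) -> dict[str, float]:
    """Zero-results rate, refinement rate, and session conversion."""
    n = len(sessions)
    queries = [q for s in sessions for q in s["queries"]]
    return {
        "zero_results_rate":
            sum(1 for q in queries if q["result_count"] == 0) / len(queries),
        "refinement_rate":
            sum(1 for s in sessions if len(s["queries"]) > 1) / n,
        "session_conversion":
            sum(1 for s in sessions if s["converted"]) / n,
    }

# Pre-registered targets, fixed before the build (illustrative values).
TARGETS = {"zero_results_rate": 0.05, "session_conversion": 0.04}

def passes(metrics: dict, targets: dict = TARGETS) -> bool:
    return (metrics["zero_results_rate"] <= targets["zero_results_rate"]
            and metrics["session_conversion"] >= targets["session_conversion"])
```

The discipline is in `TARGETS` existing before launch; the computation itself is trivial.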

Organisational readiness (Q13–15)

Q13: Is there a technical leader accountable for the AI work?

AI projects need an owner who can make architecture decisions, evaluate vendor work, and hold the quality line. This is rarely the CEO, the CMO, or the head of ecommerce. In companies without a CTO, a fractional CTO often fills this gap for the duration of the AI build. Agencies that accept AI projects without a counterpart on the client side produce worse outcomes because nobody internal can evaluate what they are shipping.

Q14: Does the team have the capacity to absorb new workflows?

A launched AI feature creates new operational work: monitoring, evaluation, iteration, handling edge cases, managing customer complaints when the AI produces surprising output. If the team is already stretched thin, the new workload will either erode quality elsewhere or cause the feature to be quietly abandoned. Capacity planning should cover both the build phase and the steady-state operational load.

Q15: Does leadership understand what AI can and cannot reliably do in 2026?

Leadership expectations calibrate the project from day one. Executives who expect AI to “increase conversion by 40%” off a narrow chat feature are setting up the team for failure. Executives who expect “a useful first feature that informs a broader roadmap” calibrate realistically. Thirty minutes of honest leadership alignment at project start prevents six months of disappointed review meetings later.

What the Checklist Is For

The honest purpose of this checklist is to give ecommerce operators the language to refuse their own bad projects before an agency has to. Too much AI work in 2026 starts from a vague directive rather than a readiness-validated use case, and the cost of that vagueness is typically six months of wasted engineering.

A scoring rule of thumb: a “yes” on twelve or more of the fifteen means you are ready to run a first AI project. Between eight and eleven, you are three to six months away after closing specific gaps. Below eight, the honest recommendation is to build data and infrastructure foundations first and revisit AI work afterwards.
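The rule of thumb above, expressed as a sketch (the thresholds are the ones from this article, nothing more scientific than that):

```python
def readiness_verdict(yes_count: int) -> str:
    """Map a count of 'yes' answers (out of 15) to a readiness band."""
    if yes_count >= 12:
        return "ready for a first AI project"
    if yes_count >= 8:
        return "three to six months away; close specific gaps first"
    return "build data and infrastructure foundations first"
```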

When to Bring in External Help

If the gaps in your readiness are large, an AI agency with a foundations-first practice is a better first engagement than an AI agency that only builds models. The early work looks more like data engineering and infrastructure than machine learning, which reads as unglamorous but is what actually unblocks the downstream AI value.

We run AI readiness audits for UK ecommerce businesses through our AI readiness audit service and deliver the subsequent engineering through our AI engineering services. For wider AI strategy context, our AI strategy for mid-market businesses piece covers the strategic framing; our RAG vs fine-tuning framework covers the architectural choice that comes up on most production AI projects.

If you want a second opinion before committing to an AI programme, get in touch — the readiness audit is the cheapest insurance against an eighteen-month detour.