Most technical due diligence is theatre. An investor hires a consultant who spends two days asking the CTO about their stack, writes a report confirming that the company uses modern technologies, and everyone moves on. The investment closes. Eighteen months later, the platform cannot scale, the original engineering team has left, and the new CTO is proposing a rewrite that will take a year and cost millions.
We have seen this pattern repeatedly. Not because the due diligence consultants are incompetent, but because they are asking the wrong questions. They focus on what technology a company uses when they should be examining how the company uses technology to build, ship, and maintain its product. The stack is almost irrelevant. The architecture, the engineering culture, and the operational maturity are what determine whether the technology will support the business through the growth that investment is meant to fuel.
This guide covers what technical due diligence should actually examine, how to identify the signals that matter, and how to structure a process that surfaces real risk rather than confirming existing biases.
Why Technical Due Diligence Matters
For software-enabled businesses — which, in 2026, includes most businesses worth investing in — the technology is not a supporting function. It is the product. Or it is the delivery mechanism for the product. Or it is the operational backbone that makes the business model work at all. In any of these cases, the technology’s ability to scale, evolve, and remain reliable directly determines the value of the investment.
Technical risk in a portfolio company manifests in specific, measurable ways. Platform instability drives customer churn. Inability to ship features quickly means losing market windows. Security vulnerabilities create liability. An architecture that cannot scale forces a rebuild at exactly the moment the business should be growing. Each of these outcomes destroys value.
The purpose of technical due diligence is not to confirm that the company has made good technology choices. It is to understand the technology’s capacity to support the business plan. Can this platform handle 10x the current load? Can the team ship features at the pace the growth plan requires? Is there hidden technical debt that will demand significant investment before new features are possible?
These are the questions that protect capital.
What Most Due Diligence Gets Wrong
The most common failure mode in technical DD is focusing on surface-level indicators that feel technical but reveal very little about actual risk.
Stack choice is not a risk indicator. A company built on Rails is not inherently riskier than one built on Go. What matters is whether the technology choices are appropriate for the workload, whether the team understands the tools they have chosen, and whether the architecture can evolve as requirements change.
Code quality metrics are misleading in isolation. Test coverage percentages and linting scores are easy to produce and easy to game. A codebase with 90% test coverage might have tests that verify nothing meaningful. A codebase with 40% coverage might have comprehensive integration tests covering every critical user journey. The number alone tells you nothing.
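A hypothetical example makes the distinction concrete: both tests below execute the same function, so both count toward a coverage percentage, but only one would fail if the function regressed.

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

def test_counts_toward_coverage_but_verifies_nothing():
    # Executes the function, so coverage tools mark these lines as
    # covered, but asserts nothing about the result.
    apply_discount(100.0, 20)

def test_actually_catches_regressions():
    # Pins the behaviour: a sign error or rounding bug would fail here.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99
```

A coverage report scores both tests identically, which is precisely why the number alone is uninformative.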
Recent technology adoption is not a positive signal by default. A company that just migrated to Kubernetes might be demonstrating engineering maturity, or it might be over-engineering their infrastructure. Context determines which interpretation is correct.
Effective technical DD requires examining the substance behind these surface indicators. That requires evaluators who have built and scaled real systems, not just reviewed them.
The Seven Areas That Actually Matter
1. Architecture Scalability
The central question: can this system handle the growth the investment thesis assumes, without a fundamental rewrite?
Examine the architecture at three levels. First, the macro architecture: is this a monolith, a set of services, or something in between? A three-person team running 40 microservices is a red flag regardless of how elegant the service boundaries are. Second, the data layer: are there single databases that will become bottlenecks? Is there a clear path to horizontal scaling? Third, the integration points: what happens when external services are slow or unavailable? Resilience patterns like circuit breakers and graceful degradation indicate maturity. Their absence indicates a system that has not been tested under stress.
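As a concrete illustration of the resilience patterns mentioned above, a circuit breaker can be sketched in a few lines of Python; the thresholds and names here are illustrative, not taken from any particular system.

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch: after `max_failures` consecutive
    errors, calls fail fast for `reset_after` seconds instead of
    repeatedly hitting a struggling dependency."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, fallback=None):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback  # fail fast: degrade gracefully
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback
        self.failures = 0
        return result
```

In DD conversations, the useful question is not whether the team uses this exact pattern, but whether they can explain what their system returns to users when a dependency is down. A fallback value is a design decision; an unhandled timeout is not.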
Ask the team to walk through what happens when traffic increases by a factor of ten. The specificity of their answers tells you more than any architecture diagram.
2. Technical Debt Assessment
Every codebase has technical debt. The question is whether it is managed or compounding.
Managed technical debt is documented, understood, and addressed systematically. The team knows where the shortcuts are, why they were taken, and what it would cost to resolve them. They may have a backlog of tech debt items that are prioritised alongside feature work. This is healthy engineering practice.
Compounding technical debt is the dangerous kind. It manifests as areas of the codebase that nobody wants to touch, features that take progressively longer to build, deployments that frequently break unrelated functionality, and a general sense among engineers that the system is fragile. When you hear “we need to rewrite that” from multiple engineers about different parts of the system, you are looking at compounding debt.
During DD, ask engineers to identify the three areas of the codebase they find most difficult to work with. Ask them how long a typical feature takes from specification to production, and whether that timeline has been increasing. Ask about the deployment failure rate. These conversations reveal the real state of the codebase far more accurately than any code review.
3. Team Capability and Bus Factor
Technology is built and maintained by people. The team’s capability, stability, and organisational resilience directly determine the technology’s trajectory.
Bus factor — how many people would need to leave before critical knowledge is lost — is one of the most important and most overlooked risk factors. If the CTO is the only person who understands the core architecture, that is a single point of failure that investment cannot mitigate.
Evaluate knowledge-sharing practices, onboarding timelines, and attrition patterns. Can multiple engineers work on any part of the system, or are there siloed knowledge domains?
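One way to make bus factor concrete during DD is to mine the version-control history for single-author files. A rough Python sketch, assuming the history has already been parsed into (author, touched files) pairs — for example from `git log --format='%an' --name-only`:

```python
from collections import defaultdict

def knowledge_silos(commits):
    """Given (author, touched_files) pairs, return the files that only
    a single author has ever modified -- a crude but useful proxy for
    knowledge silos."""
    authors_per_file = defaultdict(set)
    for author, files in commits:
        for f in files:
            authors_per_file[f].add(author)
    return sorted(f for f, authors in authors_per_file.items()
                  if len(authors) == 1)

# Hypothetical history: billing has a bus factor of one.
history = [
    ("alice", ["api/auth.py", "billing/invoices.py"]),
    ("bob",   ["api/auth.py"]),
    ("alice", ["billing/invoices.py"]),
]
print(knowledge_silos(history))  # ['billing/invoices.py']
```

The output is a conversation starter, not a verdict: a single-author file may be trivial, or it may be the core of the product. The interviews determine which.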
A strong engineering team with a mediocre codebase is a better investment than a beautiful codebase maintained by a single genius who might leave.
4. Security Posture
Security due diligence needs to be proportionate to the business context. A fintech handling payment data requires a different security bar than a content platform. But baseline expectations apply universally.
Examine authentication and authorisation patterns. Are credentials stored securely? Is authorisation enforced consistently, or are there endpoints that rely on obscurity? Has the application been tested against OWASP Top Ten vulnerabilities?
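As a baseline illustration of what “credentials stored securely” means in practice, here is a salted, iterated key-derivation scheme using only the Python standard library. This is a sketch of the principle, not a recommendation of a specific scheme; production systems more often use a dedicated library such as bcrypt or Argon2.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; tune to current guidance

def hash_password(password: str) -> bytes:
    """Derive a salted hash; store salt + digest, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    salt, expected = stored[:16], stored[16:]
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest, expected)
```

During DD, finding plain hashes without salts, reversible encryption of passwords, or string equality used for comparison tells you the team has not engaged with this problem seriously.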
Review data handling practices. Where is sensitive data stored? Is it encrypted at rest and in transit? Who has access to production data, and is that access audited?
Assess compliance readiness. If the business handles EU personal data, is there a credible GDPR posture? If the growth plan involves enterprise customers, can the platform support SOC 2 without a rebuild? Compliance gaps are not deal-breakers if understood and budgeted for, but they are deal-breakers as surprises.
5. Infrastructure and Deployment
How a team deploys and operates its software reveals its engineering maturity more clearly than almost any other indicator.
The baseline expectation in 2026 is CI/CD: automated testing on every commit, automated deployment to staging, and a reliable process for promoting to production. Teams that deploy manually or infrequently are operating below the standard growth-stage businesses require.
Key questions: how long from merged PR to production? What percentage of deployments require manual intervention? How quickly can the team roll back? When was disaster recovery last actually tested? Can the team detect incidents before customers report them?
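Several of these questions can be answered quantitatively if the company can export its deployment records. A hypothetical sketch, assuming each record carries a merge timestamp, a deploy timestamp, and a manual-intervention flag:

```python
from datetime import datetime
from statistics import median

def deployment_metrics(deploys):
    """deploys: dicts with 'merged' and 'deployed' ISO timestamps and a
    'manual' flag -- the kind of data most CI systems can export.
    Returns (median lead time in hours, manual-intervention rate)."""
    lead_times = [
        (datetime.fromisoformat(d["deployed"])
         - datetime.fromisoformat(d["merged"])).total_seconds() / 3600
        for d in deploys
    ]
    manual_rate = sum(d["manual"] for d in deploys) / len(deploys)
    return median(lead_times), manual_rate

# Hypothetical export of three deployments.
records = [
    {"merged": "2026-01-05T10:00", "deployed": "2026-01-05T11:00", "manual": False},
    {"merged": "2026-01-06T09:00", "deployed": "2026-01-06T13:00", "manual": True},
    {"merged": "2026-01-07T09:00", "deployed": "2026-01-07T10:30", "manual": False},
]
lead, manual = deployment_metrics(records)
print(f"{lead:.1f}h median lead time, {manual:.0%} manual")
```

If the company cannot produce this data at all, that absence is itself a finding: the team is not measuring its own delivery performance.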
6. Data Architecture
Data is often the most valuable and most fragile asset in a technology business. The data architecture determines not just current functionality but future capability.
Evaluate the data model’s soundness. Is the schema well-structured? Can the system handle the data volumes the growth plan implies? If the company claims ML capabilities, is the data actually structured and clean enough to support them?
Data portability is a critical investor concern. Can data be extracted in standard formats? Are there vendor lock-in risks — proprietary database features, undocumented schemas, or formats that only the current application can read?
Examine analytics infrastructure. Can the business answer basic questions about its own performance from its data? Or is reporting manual, incomplete, or dependent on a single person’s knowledge of the database? The ability to make data-informed decisions is a competitive advantage; the inability to do so is a growth constraint.
7. IP and Licensing
This area often receives insufficient attention during technical DD, but it carries material risk.
Open source compliance is the most common issue. GPL-licensed code incorporated into a proprietary product creates legal exposure. Dependencies with unclear or changing licence terms create uncertainty. A systematic inventory of dependencies and their licences should be part of any DD process.
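For a Python codebase, a first-pass inventory can be generated from installed package metadata. The fields used below are standard packaging metadata, but declared licences are frequently missing or inaccurate, so this is a starting point for manual review against actual LICENSE files, not a compliance tool.

```python
from collections import defaultdict
from importlib.metadata import distributions

def licence_inventory():
    """Group installed Python distributions by their declared licence.
    Entries under 'UNKNOWN' (and anything copyleft) need manual review."""
    inventory = defaultdict(list)
    for dist in distributions():
        licence = (dist.metadata.get("License") or "UNKNOWN").strip()
        inventory[licence[:60] or "UNKNOWN"].append(dist.metadata["Name"])
    return dict(inventory)

if __name__ == "__main__":
    for licence, packages in sorted(licence_inventory().items()):
        print(f"{licence}: {len(packages)} package(s)")
```

Equivalent inventories exist for other ecosystems; what matters in DD is whether the company has ever produced one, and whether anyone owns the answer when a dependency changes its terms.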
Vendor dependencies and lock-in deserve scrutiny. If the product depends on a specific cloud provider’s proprietary services, switching providers would require significant engineering work. That dependency is not inherently negative, but it should be understood and priced into the risk model.
Assess the genuineness of proprietary IP claims. Is the “AI-powered” feature actually a rules engine with a marketing label? Is the “proprietary platform” actually a WordPress installation with custom plugins? These are not hypothetical examples.
Red Flags That Should Kill a Deal
Some findings during technical DD indicate risk levels that no amount of additional investment can easily mitigate.
No version control, or version control with no meaningful history. If the codebase is not in Git (or equivalent) with a clear commit history, the engineering practice is below any acceptable standard. Similarly, if the history shows that one person makes all commits, or that large, unreviewed changes are pushed directly to production, the development process has fundamental problems.
Production access without audit trails. If engineers can modify production data or systems without any logging of who did what and when, the operational risk is severe. This is not about trust; it is about the ability to diagnose problems and ensure accountability.
No automated testing of any kind. Some companies have limited test coverage, which is addressable. No testing at all — no CI pipeline, no automated checks, no test suite — indicates an engineering culture that does not value reliability. This is extremely expensive to retrofit.
Key-person dependency with no mitigation. If a single individual holds all critical knowledge and there is no documentation, no code review process, and no second person who understands the core systems, the technology investment is effectively an investment in that individual’s continued employment and goodwill.
Evidence of data handling violations. Unencrypted sensitive data, credentials stored in code repositories, or logging of personally identifiable information without consent are not just technical issues. They are potential legal liabilities that can destroy the value of an investment.
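Checking for credentials committed to repositories is straightforward to automate at a first-pass level. A rough sketch with illustrative patterns — a real assessment would use a dedicated secret scanner and scan the full git history, since secrets removed from the working tree persist in old commits:

```python
import re

# Rough, illustrative patterns for a first pass only.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def scan_for_secrets(text, filename="<input>"):
    """Return (filename, line_number, match) for each suspicious line."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            match = pattern.search(line)
            if match:
                hits.append((filename, lineno, match.group(0)))
    return hits
```

A single hit in a config example may be harmless; live credentials in the history of a repository that dozens of people have cloned are a remediation project with a cost that belongs in the deal model.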
Green Flags That Indicate Strong Engineering Culture
Conversely, certain signals indicate engineering organisations that will scale well with investment.
Pull request culture with meaningful code review. Teams where every change is reviewed by at least one other engineer before merging consistently produce higher-quality software and distribute knowledge effectively.
Incident post-mortems without blame. When a team responds to outages with blameless post-mortems that produce concrete action items — and those items actually get implemented — you are looking at a learning organisation.
Engineers can articulate trade-offs. When individual engineers can explain why they chose one approach over another, what the downsides of their choice are, and what they would do differently with more time, you are looking at a team that thinks critically about their work.
Reasonable test coverage with meaningful tests. Not 100% coverage (which often indicates testing for its own sake), but targeted coverage of critical paths with tests that would actually catch regressions.
Documentation that exists and is maintained. Architecture decision records, runbooks for common operations, and onboarding documentation that new hires actually use indicate a team that thinks beyond the immediate moment.
How to Run a Technical DD Process
Timeline
Allocate two to four weeks for a thorough technical DD. Less than two weeks forces superficial assessment. More than four weeks usually means the scope has expanded beyond what is useful for an investment decision.
Who to Involve
The DD team should include at least one senior engineer who has built and scaled production systems — not a consultant who only reviews them. They need to be able to evaluate architectural decisions in context, not just against a checklist. Supplement with specialists for security assessment and infrastructure review if those areas are relevant to the investment thesis.
Process Structure
Week one: review the codebase, documentation, and infrastructure; conduct initial interviews with the CTO and senior engineers.

Week two: evaluate the seven areas above in depth, run technical interviews with individual contributors, and review deployment pipelines and incident history.

Week three: verify claims against evidence, stress-test optimistic assumptions, and draft findings.

Week four: deliver findings that map technical risk to business impact, with clear deal-structure recommendations.
Deliverables
The DD report should answer three questions. Can this technology support the business plan? What are the material risks, and what would mitigation cost? Are there findings that fundamentally change the investment thesis? Map every finding to one of three categories: deal-breaker, material risk, or acceptable trade-off.
Turning DD Into Value Creation
The best technical due diligence does not end when the deal closes. The findings should become the foundation of a technology improvement plan that the portfolio company executes post-investment. Prioritise by business impact: security vulnerabilities and operational risks first, then architectural constraints that would limit growth, then engineering culture improvements that compound over time.
If you are evaluating a technology investment and need technical due diligence that surfaces real risk rather than confirming assumptions, we can help. We work with venture capital firms and growth equity investors to assess technology assets with the depth and honesty that investment decisions require. Our digital strategy practice extends beyond assessment into remediation planning and execution support.