Build vs. buy is not the most important AI question in insurance. The real question is how much business logic lives in code versus configuration.
Every carrier CTO we talk to frames their AI strategy around the same question: do we build it ourselves or buy a platform?
The board wants to move fast, which pulls toward buying. The engineering team wants control, which pulls toward building. The debate goes back and forth until someone picks a lane, and then the real problems start.
The issue isn't which side you choose. The issue is that neither option, as typically framed, addresses how insurance AI actually needs to work.
The vendor pitch is compelling: a turnkey AI underwriting platform, pre-trained on insurance data, integrated with your core systems in weeks. The demo looks great. The ROI model is clean. Six months later, you're in a different conversation.
The platform extracts data reliably on the submission formats it was trained on — but your West Coast brokers use a different ACORD variant and your excess lines come in as unstructured email threads. The rules engine handles standard appetite checks but doesn't accommodate the program-specific eligibility logic your underwriting team actually runs. The enrichment layer pulls from two data providers, but your team needs five, and the vendor's roadmap won't get there until next year.
The core problem with buying a platform is configurability at the business logic layer. Insurance underwriting isn't generic. Every carrier has specific appetite definitions, risk tolerances, workflow preferences, and regulatory obligations that make their operation distinct. A platform that handles the common 70% leaves the carrier doing manual workarounds on the 30% that defines their competitive position.
The result: the platform becomes an expensive extraction tool that sits alongside — not inside — the actual underwriting workflow.
The alternative — building a proprietary AI underwriting stack — sounds like it solves the configurability problem. You own the code, you control the logic, you can tailor everything to your operation.
In practice, most mid-market carriers don't have the team to pull this off sustainably. Building an AI submission intake pipeline means standing up document parsing, entity extraction, rules engines, enrichment integrations, confidence scoring, exception handling, and a UX layer for underwriters — then maintaining all of it as models evolve, data providers change their APIs, and your own underwriting guidelines shift quarterly.
We've seen carriers spend 12+ months and significant budget building a custom solution that works for one line of business and can't be extended to the next without another major engineering effort. The problem isn't capability. It's that custom-built systems tend to encode business logic directly into the code, making every change a development project rather than a configuration change.
The build approach gives you control, but at an operational cost that most mid-market teams can't sustain — especially when the underlying AI technology is changing faster than your team can rebuild around it.
The carriers we've seen get the most traction aren't building from scratch or buying a sealed platform. They're working with an architecture that separates the technology layer from the business logic layer — so the AI capabilities are reusable, but the rules, thresholds, workflows, and risk appetite are configured per program.
What this looks like in practice:
The technology layer is standardized. Document parsing, entity extraction, enrichment orchestration, confidence scoring — these capabilities don't change fundamentally between a contractor GL program and a commercial property program. They should be built (or sourced) once and reused.
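To make the reuse concrete, here is a minimal sketch of what a standardized intake pipeline can look like. Everything here is hypothetical and illustrative — the stage names, the shared-context pattern, and the stubbed outputs are assumptions, not a real implementation — but the point it shows is real: the stages are generic, and the same stage list serves any program.

```python
from typing import Any, Callable

# Hypothetical sketch: each stage is a reusable function that takes and
# returns a shared context dict. Nothing in the stages is specific to one
# line of business, so the pipeline is built once and reused.
Stage = Callable[[dict[str, Any]], dict[str, Any]]

def parse_documents(ctx: dict[str, Any]) -> dict[str, Any]:
    # Stand-in: real parsing would handle ACORD forms, PDFs, email threads.
    ctx["parsed"] = {"insured_name": "Acme Roofing LLC", "state": "CA"}
    return ctx

def extract_entities(ctx: dict[str, Any]) -> dict[str, Any]:
    # Stand-in for model-driven entity extraction over the parsed text.
    ctx["entities"] = dict(ctx["parsed"])
    return ctx

def score_confidence(ctx: dict[str, Any]) -> dict[str, Any]:
    # Stand-in: score each extracted field so low-confidence values can be
    # routed to a human underwriter instead of flowing straight through.
    ctx["confidence"] = {field: 0.95 for field in ctx["entities"]}
    return ctx

def run_pipeline(submission: dict[str, Any], stages: list[Stage]) -> dict[str, Any]:
    ctx: dict[str, Any] = {"submission": submission}
    for stage in stages:
        ctx = stage(ctx)
    return ctx

# The same stage list is reused for a contractor GL program and a
# commercial property program; only downstream business rules differ.
STANDARD_INTAKE = [parse_documents, extract_entities, score_confidence]
result = run_pipeline({"source": "broker_email"}, STANDARD_INTAKE)
```

The design choice worth noting: because stages share one interface, adding a capability (say, a classification step) means adding one function to the list, not restructuring the pipeline.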
The business logic layer is configured, not coded. Appetite rules, eligibility gates, risk scoring criteria, required enrichment sources, exception routing logic — these vary by line of business, by program, and sometimes by state. They need to be changeable by someone who understands the underwriting operation, not just someone who can write code. When a new program launches or guidelines change, the adjustment should take days, not quarters.
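The configured-not-coded distinction is easiest to see in code. Below is a hedged sketch of appetite rules expressed as data and evaluated by one generic engine — the field names, operators, and thresholds are invented for illustration. The key property: launching a new program or tightening a guideline is an edit to the config structure, which a governed business user could own, rather than a change to the engine.

```python
# Hypothetical sketch: appetite rules as configuration data. All program
# names, fields, and thresholds below are illustrative assumptions.
PROGRAM_RULES = {
    "contractor_gl": [
        {"field": "state", "op": "in", "value": ["CA", "OR", "WA"]},
        {"field": "annual_revenue", "op": "<=", "value": 25_000_000},
        {"field": "roofing_pct", "op": "<=", "value": 20},
    ],
    "commercial_property": [
        {"field": "tiv", "op": "<=", "value": 50_000_000},
        {"field": "construction_class", "op": "in", "value": ["A", "B"]},
    ],
}

# The generic engine: a small, stable set of operators that never needs to
# change when underwriting guidelines do.
OPS = {
    "in": lambda actual, allowed: actual in allowed,
    "<=": lambda actual, limit: actual <= limit,
    ">=": lambda actual, limit: actual >= limit,
}

def evaluate(program: str, risk: dict) -> list[str]:
    """Return the rules the risk fails; an empty list means in appetite."""
    failures = []
    for rule in PROGRAM_RULES[program]:
        check = OPS[rule["op"]]
        if not check(risk.get(rule["field"]), rule["value"]):
            failures.append(f'{rule["field"]} {rule["op"]} {rule["value"]}')
    return failures

risk = {"state": "CA", "annual_revenue": 12_000_000, "roofing_pct": 35}
print(evaluate("contractor_gl", risk))  # → ['roofing_pct <= 20']
```

In a production system the config would live outside the codebase (database, versioned YAML) with audit history, but the separation is the same: the engine is technology, the rules are business logic.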
The integration layer is modular. Your enrichment providers, core policy system, rating engine, and communication tools are connected through defined interfaces — so swapping a data provider or adding a new one doesn't require re-architecting the pipeline.
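A defined interface for enrichment providers might look like the following sketch — the provider names, fields, and stubbed responses are assumptions for illustration. Because every provider satisfies the same contract, swapping one out or adding a fifth is a registration change, not a pipeline rewrite.

```python
from typing import Protocol

# Hypothetical sketch: all enrichment sources implement one interface.
# Provider names and returned fields are illustrative, not real APIs.
class EnrichmentProvider(Protocol):
    name: str
    def enrich(self, insured: dict) -> dict: ...

class PropertyDataProvider:
    name = "property_data"
    def enrich(self, insured: dict) -> dict:
        # Stubbed response; a real provider would call an external API here.
        return {"year_built": 1998, "roof_type": "composite"}

class LossHistoryProvider:
    name = "loss_history"
    def enrich(self, insured: dict) -> dict:
        return {"claims_5yr": 2}  # stubbed response

def enrich_submission(insured: dict, providers: list[EnrichmentProvider]) -> dict:
    """Merge each provider's output under its own key, leaving inputs intact."""
    enriched = dict(insured)
    for provider in providers:
        enriched[provider.name] = provider.enrich(insured)
    return enriched

# Adding a new data source later means appending to this list — the
# pipeline and the other providers are untouched.
ACTIVE_PROVIDERS = [PropertyDataProvider(), LossHistoryProvider()]
result = enrich_submission({"name": "Acme Roofing LLC"}, ACTIVE_PROVIDERS)
```

The same pattern applies to the core policy system, rating engine, and communication tools: a narrow interface per integration point keeps each swap local.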
This isn't a philosophical distinction. It's an architectural one. And it has direct consequences for how fast you can move. A carrier using a configured approach can onboard a new line of business by defining its rules, mapping its data sources, and adjusting its workflow — without rebuilding the underlying system. A carrier locked into a build or a rigid platform has to start a project.
Whether you've already committed to a build, a buy, or something in between, one question will tell you how well-positioned your architecture is: how much of your underwriting intelligence lives in configuration versus code?
The more that's in configuration — governed, auditable, changeable by the business — the faster you move and the less dependent you are on any single vendor or engineering cycle. The more that's in code, the more every business change becomes a technology project.
The carriers pulling ahead aren't the ones with the best AI models or the biggest engineering teams. They're the ones who architected for adaptability — so when the market shifts, the underwriting guidelines change, or a better model comes along, they can respond in days instead of months.
NextAmp helps mid-market carriers and MGAs architect AI-enabled underwriting operations built for adaptability — separating reusable technology from configurable business logic so you can move fast without rebuilding.