From Code to Configuration
Business users knew the underwriting rules cold but couldn't touch them without engineering. I audited the codebase, mapped three incompatible mental models, found three structural patterns hiding inside hundreds of rules, and designed the constrained interfaces that cut product launch time from a year to 90 days.
1 yr → 90 days
Product launch time
3 patterns
Found inside hundreds of rules
Sole designer
Embedded in engineering team
Before: The business user rule specification
The document business users handed to engineering. Every underwriting requirement written out in plain language: conditions, responses, edge cases, notes. Accurate, detailed, and completely inaccessible to the system that needed to run it.
The Situation
Launching a new insurance product took a year.
Business users at Haven Technology were the domain experts. Product owners and analysts with 20+ years in the industry, they had shaped many products and knew the underwriting logic cold: if the applicant is in the armed forces, decline. If BMI is above a certain threshold, adjust the rate class. If they've had a specific diagnosis within a lookback period, flag for review.
But none of that knowledge could reach production without going through engineering. Business users wrote requirements in spreadsheets and documents. Engineers translated those into nested if-then trees in code. Every translation was an opportunity for error. In insurance, an underwriting error is a serious risk.
Product launches moved at the speed of the ticket queue, not the speed of business decisions.
Haven Technology's leadership wanted to sell more insurance products faster. The knowledge existed. The bottleneck was the distance between knowing and shipping.
The Diagnosis
The same problem, described in three incompatible languages.
The obvious framing was "business users can't self-serve, so build them a form builder." But before designing anything, I needed to understand how different people in the system were actually thinking about the problem.
I audited three artifacts: the codebase, the business user spreadsheets, and the actual insurance application forms. The same underwriting logic appeared in all three, described completely differently each time. That wasn't just an access problem. It was a translation problem, and it was happening at every handoff.
Discovery: six stakeholders, six mental models
Each person's map of how the Rules Manager should work. Adam (dev), Katherine, Thomas, Norm (PMs), Kristen and Laura (business users) — same tool, fundamentally different mental models of what it was for and how it would be used.
The business users were the source of truth. They had the deepest domain knowledge, but they were used to working without guardrails: docs, spreadsheets, no structure enforced. The PMs would be the first real users of the tool, translating between business intent and production. The developers were the current gatekeepers and would remain necessary for edge cases the tool couldn't cover.
Before
Why can't business users just configure the rules directly?
After
What shared structure can three incompatible mental models all map onto?
A workshop brought all three groups together to surface where their models diverged. What emerged: the codebase wasn't as custom as it looked. Beneath hundreds of seemingly unique rules were three structural patterns. That finding became the foundation of the entire product.
Core insight
Hundreds of rules → 3 patterns
Standard rules
Simple condition → outcome. "If armed forces = yes, decline."
Lookback rules
Condition + time window. "If diagnosed with X within the last Y years, flag."
Aggregation rules
Rules that operate on the outputs of other rules. "If 3+ risk factors are flagged, escalate."
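The three patterns are concrete enough to sketch as data types. A minimal illustration of what each pattern captures (class names, fields, and the event format are hypothetical, not the actual Rules Manager schema):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class StandardRule:
    """Simple condition -> outcome: 'if armed forces = yes, decline.'"""
    field_name: str
    expected: object
    outcome: str

    def evaluate(self, applicant: dict) -> Optional[str]:
        return self.outcome if applicant.get(self.field_name) == self.expected else None

@dataclass
class LookbackRule:
    """Condition + time window: 'if diagnosed with X within the last Y years, flag.'"""
    event_type: str
    lookback_years: int
    outcome: str

    def evaluate(self, applicant: dict, today: date) -> Optional[str]:
        # Assumes applicant["events"] is a list of (event_type, date) pairs.
        cutoff = today.replace(year=today.year - self.lookback_years)
        for etype, when in applicant.get("events", []):
            if etype == self.event_type and when >= cutoff:
                return self.outcome
        return None

@dataclass
class AggregationRule:
    """Operates on the outputs of other rules: 'if 3+ risk factors are flagged, escalate.'"""
    counted_outcome: str
    threshold: int
    outcome: str

    def evaluate(self, prior_outcomes: list) -> Optional[str]:
        count = prior_outcomes.count(self.counted_outcome)
        return self.outcome if count >= self.threshold else None
```

The point of the sketch is the shape, not the fields: everything a rule needs fits in a small, fixed set of slots, which is what made constrained forms possible.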
This was the structural layer all three mental models could map onto. The business user's plain-language checklist, the PM's configuration workflow, the developer's code logic (each was just a different representation of the same three patterns).
The Intervention
Build for the translator first. Abstract toward the expert.
Jumping straight to a business-user-facing tool wasn't the right move. Business users were used to working without guardrails, in docs and spreadsheets, with no enforced structure. Designing a tool that fit their existing mental model would take significant research and iteration (time the product didn't have).
Instead, I designed the first version for product managers: people who understood both the business logic and the technical structure, and who would act as the translation layer. Getting the PM-facing tool right first meant validating the structural patterns against real underwriting rules before investing in the more abstracted business-user interface.
Design philosophy
The instinct
Build the most abstracted, business-user-friendly tool first
The decision
Validate the structural patterns with PMs first, then abstract toward business users
For each of the three patterns, I designed a constrained interface that matched how that type of rule actually worked. Each interface limited what users could input in ways that prevented the translation errors that had been causing underwriting mistakes.
Standard rule (constrained form)
A Standard rule: condition → decision → output. The form constrains inputs to valid options. A PM can configure this in minutes without filing a ticket.
Lookback rule (constrained form)
A Lookback rule adds a time window and property checks. The interface enforces valid lookback periods automatically — a class of error that previously required careful manual QA.
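One way to picture what "enforces valid lookback periods" means in practice: the form rejects an out-of-range or non-integer window before the rule can be saved, so the error never reaches production. A minimal sketch (the allowed range and function name are assumptions for illustration, not the real product's limits):

```python
# Illustrative input constraint of the kind the Lookback form enforces.
# The 1-10 year range is an assumption for this sketch, not the real limit.
VALID_LOOKBACK_YEARS = range(1, 11)

def validate_lookback_years(value) -> list:
    """Return a list of validation errors; an empty list means the input is valid."""
    errors = []
    if not isinstance(value, int) or isinstance(value, bool):
        errors.append("lookback must be a whole number of years")
    elif value not in VALID_LOOKBACK_YEARS:
        errors.append("lookback must be between 1 and 10 years")
    return errors
```

The same check done by hand in a spreadsheet-to-code translation is exactly the kind of step that previously required careful manual QA.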
I also designed Freestyle mode for developers: a structured frontend for the 15-20% of rules too complex for the constrained interfaces. This wasn't a concession (it was a deliberate architectural decision): developers stayed in the system for genuinely complex edge cases, and Freestyle gave them a faster, more structured way to do that work without returning to raw code.
Freestyle mode: for engineers, for edge cases
The developer view. Not a workaround — a deliberate part of the system. Engineers handle the 15-20% of rules too complex for the constrained interfaces, with a structured frontend rather than raw code.
The design went through 5-6 major iterations, from early explorations of basic if-then toggles through dependency flow diagrams to the final constrained forms. Each iteration was tested against real underwriting rules from Haven Simple.
Rule dependencies made visible
The Workflow view made rule execution order visible for the first time. Business users could see how rules connected and sequenced, rather than treating each one as isolated.
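Because aggregation rules consume the outputs of other rules, execution order is a dependency graph, and that graph is what the Workflow view surfaces. A sketch of how such an order could be derived (rule names are hypothetical; the real tool's internals aren't documented here), using Python's standard-library topological sorter:

```python
from graphlib import TopologicalSorter

# Each rule maps to the set of rules whose outputs it depends on.
# Rule names are hypothetical examples, not real Haven rules.
dependencies = {
    "armed_forces_check": set(),
    "bmi_rate_class": set(),
    "diagnosis_lookback": set(),
    # The aggregation rule counts flags produced by the three rules
    # above, so it must run after all of them.
    "escalate_on_3_flags": {"armed_forces_check", "bmi_rate_class", "diagnosis_lookback"},
}

execution_order = list(TopologicalSorter(dependencies).static_order())
# Base rules appear before the aggregation rule that consumes their outputs.
```

Making this ordering visible is what let the team reason about rule interactions instead of treating each rule as isolated.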
What Changed
The translation layer became the tool.
Before
Write requirement → file ticket → engineer translates → QA → deploy
After
Configure directly in a single session, with the structure enforced by the form
Product managers could configure the majority of underwriting rules directly, without filing tickets or waiting for engineering cycles. Rule dependencies became visible through the workflow view, so the team could understand how rules interacted rather than treating each one as isolated. Developers remained in the loop for genuinely complex edge cases (by design, not by default).
The Outcome
Haven Simple: 90 days instead of a year.
1 year → 90 days
Product configuration time for Haven Simple
Haven Simple was configured through the Rules Manager in approximately 90 days, compared to the roughly year-long timeline previous products had required. Product managers adopted the tool for rule configuration. Engineering tickets for routine rule changes dropped significantly.
The company dissolved before the tool reached full commercial rollout. But the system worked: validated against a real product, used by actual stakeholders, and structured to progressively abstract toward business users as the next phase. The 90-day timeline was the proof of concept for that approach.
My Role
Sole designer. Codebase auditor. Sequencing strategist.
I was the sole designer embedded in an engineering team. I personally audited the codebase, the business user spreadsheets, and the insurance application forms (the diagnostic work that revealed the three-mental-models problem before a single screen was designed). I facilitated the workshop that surfaced where those models diverged, designed every iteration of the rule configuration UI, and tested each against real underwriting rules from Haven Simple.
The call I owned: sequencing toward business users rather than jumping to them. The instinct was to build the most accessible tool first. I argued that validating the structural patterns with PMs first was lower risk and would produce a better business-user tool in the long run. The 90-day Haven Simple configuration validated that bet.
The Pattern
When I encounter complex domain knowledge trapped in the wrong format, I look for the structural layer underneath before designing any interface. The business users had 20+ years of expertise encoded in spreadsheets. The developers had that same logic encoded in bespoke code. Neither representation was wrong (they just couldn't talk to each other). Finding the three patterns that both representations shared made it possible to design a system that honored all three mental models without trying to collapse them into one.