Professional · 2022 — present · Anonymized

P&V — Insurance Data Work

Four years inside P&V Insurance — first as a data steward resolving cases the company's AI systems handed back as uncertain, now as an analyst turning that improved data into business decisions. Specifics anonymized; methodology and outcomes described in detail.

2022 — 2024 · Data Steward

The data steward years were spent on the cases that didn't fit. Records that wouldn't merge across systems. Fields that disagreed with each other. Edge cases the model handed back with low confidence. The work was unglamorous and slow, and it taught me more about how insurance data actually behaves than any course could have.

What I learned: data quality is not a problem you fix once. It's a continuous translation between what the business says it wants, what the systems actually capture, and what humans actually do. AI handles the patterns. What stays on a human desk is the texture — and the texture is where the real decisions live.

2024 — present · Data Analyst

The analyst years are about turning the better-quality data the stewardship years produced into business decisions, working across SQL, Python, and BI tooling on production claims, risk, and customer data.

Selected case studies

The following case studies are described at the methodology level, with company-specific details and identifying information anonymized in line with confidentiality obligations.

Case 1 · Improving merge accuracy on legacy customer records

A category of customer records had been routinely flagged as uncertain merges by the deduplication pipeline. Manual review revealed a pattern the model had not been trained on. By documenting the pattern and adjusting the human-review queue, merge accuracy on this category improved meaningfully and the review burden dropped.

Methodology — pattern identification through manual review, documentation, queue restructuring.
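The routing logic can be sketched roughly as follows. This is an illustrative reconstruction, not the production system: the field names, the 0.85 threshold, and the specific documented pattern (one legacy record missing a birth date while every identity field matches) are hypothetical stand-ins for the anonymized details.

```python
def matches_documented_pattern(pair):
    """Hypothetical documented pattern: identity fields agree exactly,
    but exactly one of the two records is missing a birth date."""
    a, b = pair
    identity_fields = ("name", "address", "policy_id")
    same_identity = all(a.get(f) == b.get(f) for f in identity_fields)
    one_missing_dob = (a.get("dob") is None) != (b.get("dob") is None)
    return same_identity and one_missing_dob

def route_merge_candidate(pair, confidence, threshold=0.85):
    """Return 'auto_merge', 'auto_resolve', or 'human_review'.

    High-confidence pairs merge automatically; low-confidence pairs
    that fit a documented pattern are resolved by rule instead of
    landing in the human-review queue.
    """
    if confidence >= threshold:
        return "auto_merge"
    if matches_documented_pattern(pair):
        return "auto_resolve"
    return "human_review"
```

The design point is that documenting the pattern turns a recurring manual judgment into a rule, which is what shrank the review queue.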

Case 2 · Reducing false-positive escalations

A specific class of claims was being escalated to human review at a higher-than-expected rate. Analysis showed the model was treating routine variations in input as anomalies. Refined input pre-processing reduced false-positive escalations substantially without affecting true-positive detection.

Methodology — escalation pattern analysis, input normalization, controlled rollout.
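A minimal sketch of the kind of pre-processing involved, assuming two illustrative sources of routine variation: whitespace and casing in free-text fields, and thousands separators in amount strings. The field names and rules are hypothetical; the anonymized specifics differed.

```python
import re

def normalize_claim_fields(claim):
    """Canonicalize routine input variations so the downstream model
    does not score them as anomalies. Illustrative field names."""
    out = dict(claim)
    # Collapse repeated whitespace and casing differences in free text.
    if isinstance(out.get("description"), str):
        out["description"] = re.sub(r"\s+", " ", out["description"]).strip().lower()
    # Parse amount strings like "1,250.00" into a plain float.
    if isinstance(out.get("amount"), str):
        out["amount"] = float(out["amount"].replace(",", "").strip())
    return out
```

Because normalization only removes formatting noise, genuine anomalies in the underlying values still reach the model, which is why true-positive detection was unaffected.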

Case 3 · Cross-system reconciliation for regulatory reporting

A reporting requirement involved reconciling fields across three internal systems whose definitions had drifted over time. The reconciliation pipeline was rebuilt with explicit mapping documentation, reducing manual reconciliation work and producing a clearer audit trail for the regulator.

Methodology — system-by-system field audit, definition reconciliation, documented mapping.
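The mapping-first approach can be illustrated like this. The system names, field names, and reconciliation key are all hypothetical; the point is that the mapping is explicit data, so it doubles as documentation and as the audit trail the regulator can follow.

```python
# Hypothetical source-to-canonical field mapping for three systems.
FIELD_MAP = {
    "policy_admin":   {"prem_amt": "premium", "pol_no": "policy_id"},
    "claims_core":    {"premium_eur": "premium", "policy_ref": "policy_id"},
    "finance_ledger": {"gross_premium": "premium", "contract_id": "policy_id"},
}

def to_canonical(system, record):
    """Translate one system's record into canonical field names,
    tagging the source system for the audit trail."""
    mapping = FIELD_MAP[system]
    out = {canon: record[src] for src, canon in mapping.items() if src in record}
    out["_source_system"] = system
    return out

def reconcile(records_by_system, key="policy_id", field="premium"):
    """Group canonical records by key and flag keys where the
    systems disagree on the reconciled field."""
    canonical = [to_canonical(s, r)
                 for s, rows in records_by_system.items() for r in rows]
    by_key = {}
    for rec in canonical:
        by_key.setdefault(rec[key], []).append(rec)
    return {k: recs for k, recs in by_key.items()
            if len({rec[field] for rec in recs}) > 1}
```

Drifted definitions then show up as explicit mismatch records rather than silent discrepancies buried in ad-hoc queries.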

What this work taught me

What stays on a human desk is the texture: the cases that don't fit. That texture is where most real decisions live.

For role-specific questions about methodology, tools, or anonymized examples, I'm happy to discuss them in an interview.