We're working with a leading global specialty insurer to recruit a data engineer into a high-profile analytics function supporting underwriting, portfolio management, and risk strategy across international markets.
This is a genuinely embedded role within the underwriting business - not a back-office data function. You'll sit at the intersection of data engineering, advanced analytics, and insurance risk, helping to modernise platforms, automate workflows, and deliver best-in-class exposure and catastrophe insights.
If you enjoy owning technical solutions end-to-end, influencing senior stakeholders, and working with complex insurance data, this role offers real scope and autonomy.
The Role
As part of a multi-disciplinary analytics team, you'll take technical ownership of data engineering and analytical workflows that underpin exposure management and catastrophe analytics.
Key Responsibilities
Designing and enhancing data pipelines and analytical workflows to improve efficiency, insight quality, and scalability
Leading development initiatives to modernise legacy processes, improve methodologies, and enhance documentation
Supporting platform upgrades and model changes, and troubleshooting complex technical issues
Building automated tools using a modern data stack, with a focus on clean data, performance, and reusability
Exploring opportunities to integrate AI and advanced automation into analytical workflows
Partnering closely with underwriting, exposure management, and senior analytics leadership to shape the technical roadmap
Acting as a technical mentor and raising data engineering capability across the team
What We're Looking For
This role will suit someone from the insurance or reinsurance market with strong technical depth and a pragmatic, delivery-focused mindset.
Essential skills & experience:
Strong programming capability in SQL (T-SQL / Dynamic SQL) and Python
Proven data engineering experience, including ETL design, data cleansing, and transformation of complex exposure data
Experience with text mining, regex, and unstructured data
Hands-on database design, optimisation, and ongoing maintenance
Experience integrating data via REST and SOAP APIs (XML, JSON, HTML)
Comfortable managing multiple workstreams and engaging confidently with stakeholders
Desirable (but not essential) skills & experience:
Experience working with catastrophe or exposure data
Familiarity with industry-standard catastrophe models (e.g. Verisk)