This role focuses on designing and delivering a cloud-based data platform using Databricks on AWS, as part of a wider transformation programme spanning systems, processes, and ways of working.
You'll be responsible for shaping the platform architecture, migrating legacy data into the new environment, and ensuring the platform is scalable, performant, and trusted. The role sits within a fast-paced, programme-led environment and combines hands-on engineering with architectural ownership.
Key responsibilities
Design and architect a secure, scalable AWS-based data platform using Databricks
Build and operate pipelines using Databricks, PySpark, Delta Lake, and CI/CD
Assess the existing data landscape and define migration approaches
Lead early data migrations from legacy platforms into the new environment
Define ingestion, processing, storage, validation, and access patterns
Implement robust data validation and verification processes
Identify opportunities to simplify, standardise, and consolidate data during migration
Work closely with technical and non-technical stakeholders, clearly documenting decisions
This is not a pure build role; strong communication and documentation are essential.
What good looks like
Significant hands-on experience with Databricks (engineering and architecture)
Strong practical skills in:
Databricks
PySpark
Delta Lake
Data modelling and performance optimisation
CI/CD pipelines
Comfortable operating in a complex, programme-driven environment
Able to clearly explain technical decisions to mixed audiences
Please apply with your latest CV and we'll be in touch!