Description
We are seeking an experienced Technical Architect to support the design and evolution of large-scale, cloud-based data platforms across our portfolio of clients.
The Technical Architect will play a key role in shaping solution design patterns, ensuring alignment with established standards, and supporting strategic transitions and migrations between AWS and Azure.
Key Responsibilities
- Define and evolve technical architecture patterns for data ingestion, processing, and access.
- Design scalable, resilient, and cost-efficient data solutions within a Hub and Spoke model.
- Support the design of new data ingestion pipelines (batch and real-time).
- Ensure alignment with organisational architectural standards and governance frameworks.
- Contribute to target architecture roadmaps.
- Provide architectural guidance across:
  - Data ingestion (Kafka, APIs, SFTP)
  - Data processing (PySpark, EMR, Glue)
  - Storage (S3 and data lake patterns)
- Collaborate with DevOps, Data Engineers, and Testers to ensure cohesive delivery.
- Promote engineering best practices, including CI/CD, infrastructure as code, and observability.
- Ensure robust handling of schema evolution and upstream data changes.
- Support onboarding of new data sources and services into the platform.
- Ensure solutions meet requirements for:
  - Data quality and consistency
  - Performance and scalability
  - Security and compliance
- Work within defined data modelling ownership boundaries where applicable.
- Support cloud strategy evolution.
- Avoid platform lock-in and ensure portable, future-proof designs.
- Contribute to technical decision-making for future platform direction.
- Work in blended, cross-functional teams.
- Provide technical leadership and mentoring to delivery teams.
- Ensure effective knowledge transfer and capability uplift.
Required Skills & Experience
- Strong experience designing modern cloud-based data platforms.
- Hands-on architectural experience with:
  - AWS (essential): S3, EMR, Glue
  - Kafka/event streaming architectures
  - Python and PySpark-based data processing
- Experience designing data ingestion pipelines (batch and real-time).
- Proficiency in Infrastructure as Code (Terraform).
- Experience with GitHub-based workflows and CI/CD pipelines.
- Experience with data lake and lakehouse architectures.
- Strong understanding of:
  - Data ingestion patterns
  - Data transformation and curation layers
  - Data access and productisation
- Ability to design for large-scale datasets.
- Experience supporting cloud migrations.
- Knowledge of and experience with Azure, Microsoft Fabric, and Databricks would be beneficial.
- Familiarity with event-driven and streaming-first architectures at scale.
- Strong stakeholder engagement and cross-team collaboration skills.
- Ability to operate effectively within existing governance and standards.
- Pragmatic decision-making balancing delivery pace and technical quality.
- Clear communicator able to translate complex architecture into actionable guidance.
- Experience working in large, complex enterprise environments.
This role requires the ability to obtain and hold UK SC Clearance.