May 16, 2026
Full time
Senior Data Engineer - Microsoft Fabric

We're looking for a passionate and driven Senior Data Engineer to join a growing team building a modern Microsoft Fabric data platform. This is a hands-on role designing and delivering scalable data pipelines, Lakehouse solutions, and analytics models within the Azure ecosystem.

What's on Offer:
- Salary: £65,000-£70,000
- Private health scheme
- 25 days' annual leave + bank holidays
- EV car and Cycle to Work schemes
- World-class training platform + certifications
- Remote-first: in the office once every two months
- Strong progression and development opportunities
- Opportunity to work on a modern, AI-enabled data platform
- Real ownership and influence in a growing, forward-thinking data team

What You'll Do:
- Build and maintain ETL/ELT pipelines and data models in Fabric (Data Factory, Notebooks, Spark) - see the sketch after this advert
- Write high-performance Spark SQL, T-SQL, and Python/PySpark
- Manage ingestion, transformation, and loading from multiple sources
- Translate stakeholder requirements into scalable technical solutions
- Mentor team members and establish engineering standards, security, and governance
- Leverage AI-assisted development tools such as GitHub Copilot, ChatGPT, and Fabric Copilot

Essential Experience:
- Microsoft Fabric (essential)
- Azure data ecosystem
- Lakehouse architectures and Data Factory
- Python, PySpark, Spark SQL
- Proven hands-on delivery in this stack

If you're an experienced Data Engineer with strong Microsoft Fabric and Azure experience, this could be the perfect next step for you. Hit APPLY to be considered!
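For context on the day-to-day work this spec describes, here is a minimal sketch of a Fabric-style PySpark notebook step: land raw CSVs, clean them, and publish a Lakehouse Delta table. All paths, table names, and columns are invented for illustration; in a Fabric notebook the `spark` session is pre-created.

```python
# Illustrative only: a Fabric-notebook-style raw-to-table step.
# Paths, table, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # Fabric notebooks provide `spark`

raw = (
    spark.read.option("header", True)
    .csv("Files/landing/orders/*.csv")  # Lakehouse "Files" landing area
)

cleaned = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["order_id"])
)

# Publish as a managed Delta table for downstream semantic models.
cleaned.write.mode("overwrite").format("delta").saveAsTable("sales_orders")
```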
May 16, 2026
Contractor
Overview

We are looking for a skilled Data Engineer with strong experience in Snowflake to join our growing data team. You will be responsible for designing, building, and maintaining scalable data pipelines and architectures that support analytics, reporting, and data-driven decision-making across the organization.

- 6-month initial contract (Outside IR35)
- Remote with occasional travel into London (1 day per month)

Key Responsibilities
- Design, develop, and maintain robust data pipelines and ETL/ELT processes
- Build and optimize data models within Snowflake for performance and scalability (illustrated below)
- Ingest data from various sources (APIs, databases, streaming platforms, etc.)
- Ensure data quality, integrity, and governance across systems
- Collaborate with data analysts, scientists, and business stakeholders to deliver data solutions
- Monitor and troubleshoot data workflows and pipeline performance
- Implement best practices for data security, privacy, and compliance
- Document data architecture, processes, and workflows

Required Skills & Experience
- Proven experience as a Data Engineer or in a similar role
- Strong hands-on experience with the Snowflake data platform
- Proficiency in SQL and data modeling techniques
- Experience with ETL/ELT tools (e.g., dbt, Apache Airflow, Talend, Informatica)
- Experience with cloud platforms (AWS, Azure, or GCP)
- Familiarity with data warehousing concepts and best practices
- Understanding of data governance and data quality principles

Preferred Qualifications
- Experience with modern data stack tools (e.g., dbt, Fivetran, Kafka)
- Knowledge of CI/CD pipelines and DevOps practices
- Experience working with large-scale or real-time data processing systems
- Familiarity with BI tools (e.g., Power BI, Tableau, Looker)
- Snowflake certification is a plus
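As a flavour of the Snowflake ELT work described above, a minimal sketch using the official `snowflake-connector-python` package to run a warehouse-side MERGE. The connection parameters and table names are placeholders, not anything from this advert.

```python
# Illustrative Snowflake ELT step: run a MERGE inside the warehouse.
# All connection parameters and object names are placeholders.
import snowflake.connector

MERGE_SQL = """
MERGE INTO analytics.dim_customer AS tgt
USING staging.customers AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
  tgt.email = src.email,
  tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
  VALUES (src.customer_id, src.email, src.updated_at)
"""

conn = snowflake.connector.connect(
    account="my_account",   # placeholder
    user="etl_user",        # placeholder
    password="***",         # pull from a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="PROD",
)
try:
    conn.cursor().execute(MERGE_SQL)
finally:
    conn.close()
```

In a dbt-based stack the same logic would usually live in a model or snapshot rather than hand-written MERGE statements.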
May 15, 2026
Contractor
ETL Data Engineer/Data Architect - Azure Stack

We are looking for an experienced ETL Data Engineer / Data Architect to deliver projects in collaboration with our IT partners and internal stakeholders that will transform our product landscape. This role is hands-on, so you would be expected to contribute to ETL development as well as data architecture. You will take business requirements from structured data sets from the research teams and transform them into technical documents.

Essential Skills
- Educated to degree level in a relevant IT subject
- Expertise in designing and implementing data pipelines using Azure services (Azure Data Factory, data storage), Spark, and Databricks (a sketch follows this advert)
- Expertise in data modelling, database design, and designing enterprise data architecture
- Azure data stack: SQL Server, NoSQL, Nanobricks, ADT, Spark, Data Tables, API, PostgreSQL
- Knowledge of ETL/ELT frameworks and data integration patterns, with programming experience in Python or PySpark
- Data modelling
- Used to working with product teams
- Ability to collaborate and communicate with stakeholders and product managers

Desirable Skills
- Experience of the life sciences sector
- Experience working with structured and semi-structured data, preferably having worked previously with a variety of life science data (e.g. omics, health records)

Key Duties
- Deliver data pipelines or products that support our business operations and research, providing clear documentation on specifications and having a deep understanding of each solution at the data, technical, and business process levels
- Implement data engineering best practices to standardise the development process
- Design and maintain data integration frameworks for multiple data sources (e.g. practice management systems, lab systems, registries)
- Be responsible for ensuring that high-quality data is presented from the products to our researchers, and that data dictionaries are maintained in the data catalogue

The role offers hybrid working, with only one day a week required on site near Nottingham. This is a great opportunity to secure a long-term contract working for a global brand, so don't delay and apply ASAP as I have interview slots ready to be filled.

Randstad Technologies is acting as an Employment Business in relation to this vacancy.
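To make the structured and semi-structured life science data point concrete, a small PySpark sketch that flattens nested JSON lab records into a tabular Delta output. The storage path and every field name are invented for illustration.

```python
# Illustrative: flatten semi-structured JSON lab results with PySpark.
# Storage path, schema, and field names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

records = spark.read.json(
    "abfss://raw@examplestorage.dfs.core.windows.net/lab_results/"
)

flat = (
    records.select(
        F.col("patient.id").alias("patient_id"),
        F.col("sample.collected_at").cast("timestamp").alias("collected_at"),
        F.explode("results").alias("result"),  # one row per analyte
    )
    .select(
        "patient_id",
        "collected_at",
        F.col("result.analyte").alias("analyte"),
        F.col("result.value").cast("double").alias("value"),
    )
)

flat.write.mode("append").format("delta").save("/mnt/curated/lab_results")
```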
Mar 30, 2026
Contractor
Pricing & Market Data (Migration Specialist)

The Opportunity

We are seeking a high-calibre Technical Lead to own the evolution of our pricing ecosystem. You will work closely with the Architect on the migration from Gresham Asset Control to Prime EDM (Cloud), including implementing features that meet business needs and are aligned with strategic objectives. You will sit at the intersection of Trading, Supply, and Technology, ensuring our desks have the high-fidelity data needed to move global energy markets.

Core Responsibilities
- Legacy-to-Cloud Migration: Lead the technical execution of the move to Prime Cloud Service. You will be responsible for mapping legacy Gresham schemas to the new Prime data model while maintaining data lineage and auditability.
- Customization Engineering: Directly manage the implementation of critical STS business logic, pricing mechanisms, and complex curve remarking.
- Project Sizing & Strategy: Provide expert estimates on project duration, technical debt, and resource requirements. You will define the "Definition of Done" for the migration.
- Stakeholder Face-off: Drive technical discussions with front office traders, quants, and risk managers to translate complex business needs into scalable technical specs.
- Technical Governance: Own the ETL pipelines and data parsers. Ensure the system can handle increasing volumetric data demands without compromising latency.
- Mentorship & Hiring: Define the technical bar for the team. You will suggest the ideal skill mix for new hires and mentor existing engineers in EDM best practices.
- Team Management: Proven team leadership/supervisory experience.

Required Experience & Technical Skillset
- Proven Track Record: At least 8-10 years in technical roles, with 2+ years in a lead capacity within commodities trading (oil, gas, or power).
- Technology Stack:
  - Deep mastery: Gresham Asset Control (legacy) and Prime EDM
  - Architecture: cloud-native migrations (AWS/Azure), API design, and SQL/NoSQL performance tuning
  - Scripting: Python (preferred) or Java for custom business logic and automation
- Pricing Logic: Strong understanding of EOD (end-of-day) vs. real-time pricing, forward curve construction (sketched below), and basis risk calculations.
- Agile Leadership: Experience driving delivery within Azure DevOps environments using Scrum or Kanban.

Preferred Qualifications
- Previous experience in a greenfield migration or a major vendor-to-vendor system swap
- Familiarity with other market data providers (e.g. Bloomberg, Refinitiv, Platts, Argus)
- A "build-first" mindset: the ability to decide when to use out-of-the-box Prime functionality vs. building custom modules
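Of the pricing topics listed, forward curve construction is the easiest to illustrate. A toy Python sketch: interpolate sparse broker quotes onto monthly pillars. The quotes are invented, and real curve remarking would also handle seasonality, basis relationships, and arbitrage checks.

```python
# Toy forward-curve construction: interpolate sparse quotes onto a
# monthly grid. All numbers are invented for illustration.
import numpy as np
import pandas as pd

quotes = pd.DataFrame(
    {"tenor_months": [1, 3, 6, 12], "price": [78.20, 79.10, 80.50, 82.00]}
)

pillars = np.arange(1, 13)  # monthly pillars out to one year
curve = pd.DataFrame({
    "tenor_months": pillars,
    "price": np.interp(pillars, quotes["tenor_months"], quotes["price"]),
})
print(curve)
```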
Oct 07, 2025
Full time
Data Engineer - £45,000 - Remote

About the Role

As a Data Engineer, you will be a key member of our agile delivery team, working closely with clients to unlock the full value of their data. This is a hands-on role where you'll be designing and implementing data solutions using cutting-edge tools in the Azure ecosystem. You'll have the opportunity to develop your technical expertise while contributing to high-impact projects across a variety of industries.

We operate with a Winning from Anywhere® philosophy, offering flexibility in where you work while maintaining a strong team culture through regular client site visits, company events, and collaboration opportunities.

Key Responsibilities
- Deliver end-to-end data solutions, including acquisition, engineering, modelling, analysis, and visualisation.
- Lead and participate in workshops to gather requirements and engage with clients on both technical and business levels.
- Design and implement scalable, robust ETL/ELT pipelines using Microsoft/Azure technologies such as Azure Data Factory, Synapse, Databricks, or Fabric.
- Build and optimise data lake solutions using medallion architecture (see the sketch after this advert).
- Support cloud migration of on-premises SQL Server-based data platforms (SQL, SSIS, SSAS, SSRS).
- Develop reports, dashboards, and analytics solutions using Power BI.
- Provide ongoing support and enhancements to solutions post-deployment.

Skills and Experience

Essential:
- Proven experience in a Data Engineering or Data Warehouse Development role.
- Strong hands-on expertise with Azure/Microsoft and/or SQL Server technology stacks.
- Proficiency in ETL/ELT development using tools like Azure Synapse, Data Factory, Databricks, or Fabric.
- Advanced SQL and Python skills (DDL, DML, stored procedures, notebooks).
- Understanding of lakehouse architecture and medallion design principles.
- Ability to work with large, complex datasets from multiple sources.
- Strong knowledge of BI and data warehousing concepts.
- Experience with Power BI for reporting and data visualisation.
- Excellent communication and client engagement skills.

What We Offer
- Remote-first working model (Winning from Anywhere®)
- 25 days' annual holiday
- Monthly home-working allowance
- Set-up allowance for home office
- 24/7 virtual GP access
- Employee Assistance Programme (available 24/7)
- Company sick pay scheme
- Life assurance (4x base salary)
- Private health insurance after 1 year of service
- Enhanced parental leave and pay
- Cyclescheme and electric car scheme
- Opportunity to work with a 3* "World Class" Best Company

To apply for this role please submit your CV or contact Dillon Blackburn (see below).

Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
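A minimal sketch of the medallion bronze-to-silver promotion mentioned in the responsibilities: validate raw records, quarantine the failures, and conform the rest. Lake paths and column names are invented.

```python
# Illustrative medallion step: bronze -> silver with a simple quality gate.
# Lake paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load("/lake/bronze/invoices")

valid = bronze.filter(F.col("invoice_id").isNotNull())
rejects = bronze.filter(F.col("invoice_id").isNull())

# Keep failed rows inspectable rather than silently dropping them.
rejects.write.mode("append").format("delta").save("/lake/quarantine/invoices")

silver = (
    valid.withColumn("invoice_date", F.to_date("invoice_date"))
         .withColumn("net_amount", F.col("net_amount").cast("decimal(18,2)"))
         .dropDuplicates(["invoice_id"])
)
silver.write.mode("overwrite").format("delta").save("/lake/silver/invoices")
```

Routing rejects to a quarantine path keeps the quality gate auditable, which is what "robust" tends to mean in practice in specs like this one.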
Oct 06, 2025
Full time
Type: Full-time, Permanent

The Opportunity

We're recruiting on behalf of a leading organisation undergoing a major digital transformation. This is a hands-on, senior engineering role for someone who thrives on solving complex data challenges, building scalable platforms, and integrating operational systems across a diverse business landscape. You'll work closely with stakeholders in Logistics, Operations, Finance, and Compliance to modernise data infrastructure, automate workflows, and embed AI into BI and operational processes. If you're ready to take ownership of high-impact projects and shape the future of data in logistics, this is the role for you.

What You'll Be Doing

Data Platform & BI Engineering
- Architect and implement cloud-native data platforms (AWS S3, Glue, Athena, Redshift, QuickSight).
- Build reliable, governed data pipelines with CI/CD and infrastructure as code.
- Design dimensional models and deliver robust SQL/Python transformations.

Systems Integration & Application Support
- Provide expert-level support for transport, warehouse, and fleet systems (TMS/WMS/FMS).
- Develop and maintain integrations using REST/SOAP APIs, EDI (XML/JSON), and flat-file interfaces.
- Implement observability, error-handling, and retry logic for mission-critical interfaces (sketched after this advert).

Automation & Process Improvement
- Replace manual, spreadsheet-driven processes with governed datasets and internal tools.
- Build lightweight portals, scripts, and APIs to streamline business workflows.

AI & Advanced Analytics
- Integrate AI services into BI dashboards and operational workflows (e.g. anomaly detection, natural-language Q&A).
- Implement semantic search and intelligent alerting using AWS Bedrock or Azure equivalents.

Security, Governance & Resilience
- Enforce least-privilege access, RBAC, and secrets management.
- Apply data governance across AWS/Microsoft estates and contribute to DR strategies.

What You'll Bring

Essential Experience
- 5+ years in SQL (T-SQL), Python, and BI/data platform engineering.
- Strong hands-on experience with the AWS analytics stack and Power BI.
- Proven track record in designing and deploying production-grade ETL/ELT pipelines.
- Experience supporting and integrating operational systems (TMS/WMS/FMS).
- Solid understanding of data modelling, performance tuning, and infrastructure as code.

Desirable Skills & Certifications
- AWS or Microsoft certifications (e.g. Data Analytics Specialty, DP-203, PL-300).
- Experience with Azure Data Factory, Kafka/Kinesis, or message brokers.
- Familiarity with LLMs (e.g. Claude, Azure OpenAI) and vector databases.

Why You Should Apply
- Be part of a company driving innovation and sustainability in logistics.
- Lead and deliver high-impact digital transformation initiatives.
- Work in a collaborative, forward-thinking environment.
- Competitive salary and benefits, with professional development opportunities.

If you would like more information or some career advice, please do not hesitate to reach out directly.

Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers which can be found on our website.
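The observability, error-handling, and retry logic bullet boils down to patterns like the one below: a small exponential-backoff wrapper around a REST call to a hypothetical TMS endpoint. The URL and response shape are invented for illustration.

```python
# Illustrative retry/backoff wrapper for a flaky operational-system API.
# The endpoint is hypothetical.
import time
import requests

def fetch_with_retry(url: str, attempts: int = 5, base_delay: float = 1.0) -> dict:
    """GET a JSON resource, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=10)
            if resp.status_code in (429, 500, 502, 503, 504):
                raise requests.HTTPError(f"transient status {resp.status_code}")
            resp.raise_for_status()
            return resp.json()
        except (requests.ConnectionError, requests.Timeout, requests.HTTPError):
            if attempt == attempts - 1:
                raise  # surface the failure after the final attempt
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

shipments = fetch_with_retry("https://tms.example.com/api/v1/shipments")
```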
Oct 03, 2025
Contractor
Data Architect - Data Vault, Snowflake, ADF
£Market rate (Inside IR35)
London/Hybrid
6 months

My client is an instantly recognisable insurance brand who urgently require a Data Architect with expertise in Data Vault, data modelling, data engineering design, the Azure stack (including Data Factory (ADF), Data Lake Storage, SQL, Azure ML, and Power BI), Snowflake, and enterprise-scale data sources to join a business-critical programme ASAP.

Key Requirements:
- Proven expertise in data architecture working on large, complex data programmes
- Strong data modelling experience (especially conceptual data models)
- Demonstrable background in data engineering design processes (ETL, ELT), with good knowledge of Data Vault, Inmon, and Kimball (a hub-load sketch follows this advert)
- Strong knowledge of the MS Azure stack (including Data Factory (ADF), Data Lake Storage, SQL, Azure ML, and Power BI)
- Proven experience with Snowflake architecture, including DevOps integration and CI/CD practices
- Previous experience with large, enterprise-scale data sets and sources
- Strong understanding of data governance principles (data stewardship, data quality and compliance guardrails, as well as data quality, lineage, life-cycle management, and security)
- Excellent communication and stakeholder management skills
- Flexible approach to hybrid working (attending workshops, possibly across different locations if needed)

Nice to have:
- Insurance industry experience
- Previous work on data strategies
- Exposure to collaborative modelling tools such as Ellie.ai
- Immediate availability

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now.

Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers which can be found on our website.
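For readers new to Data Vault, a minimal sketch of the hub-load pattern it is built on: hash the business key, stamp load metadata, and insert only unseen keys. The entity and source names are invented, and the MD5 convention shown is the common Data Vault 2.0 choice rather than anything specific to this client.

```python
# Illustrative Data Vault 2.0 hub load: hashed business keys plus load
# metadata. Entity, column, and source names are hypothetical.
import hashlib
from datetime import datetime, timezone

def hub_hash_key(business_key: str) -> str:
    """MD5 of the normalised business key, a common DV 2.0 convention."""
    return hashlib.md5(business_key.strip().upper().encode("utf-8")).hexdigest()

staged_policy_numbers = ["POL-001", "POL-002"]  # stand-in for a staging feed
load_dts = datetime.now(timezone.utc)

hub_rows = [
    {
        "hub_policy_hk": hub_hash_key(pk),
        "policy_number": pk,
        "load_dts": load_dts,
        "record_source": "staging.policies",
    }
    for pk in staged_policy_numbers
]
# In practice these rows feed an INSERT ... WHERE NOT EXISTS against HUB_POLICY.
print(hub_rows)
```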
Oct 01, 2025
Role Overview

We are seeking an accomplished Lead Data Engineer (SFIA Level 5) to provide technical leadership, direction, and strategy across complex data engineering initiatives. This role involves leading teams to deliver resilient, scalable, and standards-driven data solutions, while engaging with senior stakeholders to translate requirements into robust data products and services. The Lead Data Engineer will be responsible for setting engineering standards, embedding best practices, and ensuring delivery aligns with strategic principles. The role also carries line management responsibilities, helping to build capability within the engineering community.

Key Responsibilities
- Provide technical leadership and set the direction for engineering teams and communities, ensuring delivery adheres to data standards and strategic principles.
- Engage with senior stakeholders to define requirements for complex and sensitive data products, pipelines, and platforms.
- Oversee delivery and user testing of requirements across teams.
- Lead the redevelopment of existing data journeys, enhancing performance, resilience, and scalability, while adapting to evolving systems, tools, and platforms.
- Translate user requirements and data designs into effective, reusable, and standards-compliant data solutions.
- Define, deliver, and embed engineering standards across multiple platforms, keeping them updated and ensuring adherence across teams.
- Champion data validation methods and standards to ensure data quality from source to consumption.
- Standardise code, product, and service quality across engineering teams by implementing best practices and governance.
- Design and promote reusable metadata libraries and cross-business standards in collaboration with other technical disciplines.
- Partner with Lead Data Engineers, Data Architects, and Technologists to design and develop innovative solutions.
- Continually improve data engineering, automation, and scaling capabilities.
- Provide leadership and mentoring, helping to develop the skills and capabilities of the Data Engineer community.
- Carry out line management responsibilities, supporting the growth and development of Data Engineers.
- Ensure alignment and collaboration with teams across Digital Group, Data & Analytics, and Data Practice for consistency and scalability.

Skills & Experience Required

Practitioner Level:
- Proven ability to lead development and delivery of complex data pipelines using big data technologies (Apache Kafka, Spark, or similar) and cloud platforms (AWS, Azure); see the streaming sketch after this advert.
- Technical leadership experience, setting team direction and ensuring adherence to data standards and strategic principles.
- Strong programming/coding expertise across Python, SQL, Proc SQL, and cloud technologies (Azure stack, AWS).
- Experience in designing and executing data/code quality assurance processes, troubleshooting, and resolving processing issues.
- Ability to apply standards and tools to design, code, test, correct, and document programs/scripts from agreed specifications.
- Knowledge of data modelling, with experience leading strategies for delivery to wider communities.

Expert Level:
- Proven track record leading the development of ETL pipelines for large, complex, or high-volume datasets, with effective delegation and oversight.
- Advanced knowledge of data engineering standards, with the ability to establish, maintain, and enforce them across teams.
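As a concrete instance of the Kafka/Spark pipeline work named at practitioner level, a sketch of a Spark Structured Streaming job that consumes a Kafka topic, applies a validation gate, and lands a Delta table. The broker address, topic, schema, and paths are all invented, and the Kafka connector package must be on the Spark classpath.

```python
# Illustrative Kafka -> Spark Structured Streaming -> Delta pipeline.
# Broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "payments")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
    .filter(F.col("event_id").isNotNull())  # validate from source to sink
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/chk/payments")
    .start("/lake/bronze/payments")
)
query.awaitTermination()  # blocks; in production this runs under a scheduler
```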
Sep 22, 2025
Full time
Data Solution Architect - Azure, ADF, Snowflake, Insurance
£Market rate (Inside IR35)
London/Hybrid
6 months

My client is an instantly recognisable insurance brand who urgently require a Data Solution Architect with expertise in data modelling, data engineering design, the Azure stack (including Data Factory (ADF), Data Lake Storage, SQL, Azure ML, and Power BI), Snowflake, and enterprise-scale data sources to join a business-critical programme ASAP.

Key Requirements:
- Proven expertise in data solution architecture working on large, complex data programmes
- Strong data modelling experience (especially conceptual data models)
- Demonstrable background in data engineering design processes (ETL, ELT), with good knowledge of Data Vault, Inmon, and Kimball (an SCD Type 2 sketch follows this advert)
- Strong knowledge of the MS Azure stack (including Data Factory (ADF), Data Lake Storage, SQL, Azure ML, and Power BI)
- Proven experience with Snowflake architecture, including DevOps integration and CI/CD practices
- Previous experience with large, enterprise-scale data sets and sources
- Strong understanding of data governance principles (data stewardship, data quality and compliance guardrails, as well as data quality, lineage, lifecycle management, and security)
- Excellent communication and stakeholder management skills
- Flexible approach to hybrid working (attending workshops, possibly across different locations if needed)

Nice to have:
- Insurance industry experience
- Previous work on data strategies
- Exposure to collaborative modelling tools such as Ellie.ai
- Immediate availability

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. If this job isn't quite right for you, but you are looking for a new position, please contact us for a confidential discussion about your career.

Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers which can be found at hays.co.uk.
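Since this spec names Kimball alongside Data Vault, here is the Kimball-side counterpart to the hub load shown earlier: a slowly changing dimension Type 2 expiry-and-insert, written as the kind of warehouse SQL a pipeline would submit (via the Snowflake connector, dbt, or an ADF script activity). Every table and column name is a placeholder.

```python
# Illustrative SCD Type 2 maintenance, expressed as warehouse SQL held in
# Python. Table and column names are placeholders.
SCD2_SQL = """
-- Close out current rows whose attributes changed in staging...
UPDATE dim_policy
   SET valid_to = CURRENT_TIMESTAMP(), is_current = FALSE
 WHERE is_current
   AND EXISTS (SELECT 1
                 FROM staging_policies s
                WHERE s.policy_number = dim_policy.policy_number
                  AND s.status <> dim_policy.status);

-- ...then insert new versions for changed or brand-new keys.
INSERT INTO dim_policy (policy_number, status, valid_from, valid_to, is_current)
SELECT s.policy_number, s.status, CURRENT_TIMESTAMP(), NULL, TRUE
  FROM staging_policies s
  LEFT JOIN dim_policy d
    ON d.policy_number = s.policy_number AND d.is_current
 WHERE d.policy_number IS NULL OR d.status <> s.status;
"""
# Submit each statement separately if the driver expects one statement per call.
print(SCD2_SQL)
```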