Initialize IT

2 job(s) at Initialize IT

Initialize IT
Jan 30, 2026
Contractor
Data Solution Architect - SC Cleared - mostly remote - £477 per day

Role: Responsible for designing data-related solutions quickly and securely, while ensuring alignment to Digital Blueprint principles, standards and patterns.

1) Design and represent architecture solutions
- Lead, deliver and support the technical and architecture design for a product.
- Work collaboratively with engineers to create VFM solution designs that meet user needs, typically in multi-disciplinary agile teams.
- Create solution options and recommendations during project discovery and inception phases, to drive decision-making on the best solution design.
- Contribute to updates of Digital Blueprint technology choices.
- Create high-level solution architecture documentation and architecture design artefacts for governance, including all relevant SME input.

2) Ensure compliance of design with the approved design in the Digital Blueprint
- Ensure technical quality and adherence of product designs to the Digital Blueprint, Enterprise Architecture and product roadmaps.
- Engage with Enterprise and Lead Technical Architects and engineers to implement solutions according to agreed and approved designs.
- Advocate and support delivery of solutions that reduce architecture complexity and technical debt.

3) Maintain product roadmaps and help maintain the Digital Blueprint
- Be accountable for product architecture and contribute to product roadmaps, in alignment with engineering.
- Identify, capture, iterate and implement architecture patterns.

4) Contribute to, and build capability in, the Architecture Practice
- Contribute to building an inclusive digital culture.
- Support development of the Architecture Practice.
- Coach and mentor other architects.

5) Data Integration
- Knowledge of integrating data from disparate sources (ERP systems, CRM systems, IoT devices, etc.), real-time data pipelines, and batch processing.

6) Analytics & Reporting
- Data warehousing, data lakes, Business Intelligence (BI) and analytics technologies, design patterns, tools and best practice.
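The data-integration skills above (joining disparate sources such as ERP and CRM systems in batch) can be illustrated with a minimal sketch using only the Python standard library. The source extracts, field names, and join key below are invented for illustration; they are not part of the role or any system it names.

```python
import csv
import io

# Hypothetical extracts from two disparate sources (e.g. an ERP and a CRM).
erp_csv = "customer_id,balance\nC1,100\nC2,250\n"
crm_csv = "customer_id,segment\nC1,enterprise\nC2,smb\n"

def load(text: str) -> dict:
    """Index rows from a CSV extract by the shared customer_id key."""
    return {row["customer_id"]: row for row in csv.DictReader(io.StringIO(text))}

def integrate(erp: dict, crm: dict) -> list:
    """Batch-join the two sources on the shared key into one unified record set."""
    return [{**erp[k], **crm.get(k, {})} for k in sorted(erp)]

records = integrate(load(erp_csv), load(crm_csv))
```

In a production pipeline the same join would typically run inside a warehouse or an ETL service rather than in application code, but the shape of the problem (index each source by a common key, then merge) is the same.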
Initialize IT
Oct 01, 2025
Contractor
ML Ops Engineer with Data Science background (AWS services) - London/remote - £536 per day

An ML Ops engineer with experience in data science, DevOps, and AWS SageMaker, along with a solid understanding of Agile software development principles. In this role, you will act as a bridge between Data Scientists and IT DevOps Engineers, helping translate experimental ML models into scalable, production-ready applications. You'll play a critical role in building practical solutions to real-world data science challenges, including automating workflows, packaging models, and deploying them as microservices using AWS services. The ideal candidate will be adept at developing end-to-end applications to serve AI/ML models, including those from platforms like Hugging Face, and will work with a modern AWS-based toolchain (SageMaker, Fargate, Bedrock).

Your core responsibilities include:
- Serve as the day-to-day liaison between Data Science and DevOps, ensuring effective deployment and integration of AI/ML solutions using AWS services.
- Assist DevOps engineers with packaging and deploying ML models, helping them understand AI-specific requirements and performance nuances.
- Design, develop, and deploy standalone and micro-applications to serve AI/ML models, including Hugging Face Transformers and other pre-trained architectures.
- Build, train, and evaluate ML models using services such as AWS SageMaker, Bedrock, Glue, Athena, Redshift, and RDS.
- Help create knowledge artefacts for Data Scientists around DevOps and ML Ops.
- Where required, hand-hold Data Scientists and assist them with DevOps engineering issues, package installation issues, creating Docker containers, and ML Ops tooling issues.
- Develop and expose secure APIs using Apigee, enabling easy access to AI functionality across the organization.
- Manage the entire ML life cycle, from training and validation to versioning, deployment, monitoring, and governance.
- Build automation pipelines and CI/CD integrations for ML projects using tools like Jenkins and Maven.
- Solve common challenges faced by Data Scientists, such as model reproducibility, deployment portability, and environment standardization.
- Assist the product owner to define and implement the ML Ops roadmap.
- Support knowledge sharing and mentorship across Data Science teams, promoting a best-practice-first culture.

What skills are required?

Minimum skills:
- Degree in computer science, economics, data science or another technical field (eg maths, physics, statistics), or equivalent relevant experience.
- Strong programming proficiency in Python (or R), with practical experience in machine learning and statistical modelling.
- Proven experience delivering end-to-end data science products, including both experimentation and deployment.
- Solid understanding of data cleaning, feature engineering, and model performance evaluation.

Essential skills:
- Demonstrated experience deploying and maintaining AI/ML models in production environments.
- Hands-on experience with AWS Machine Learning and Data services: SageMaker, Bedrock, Glue, Kendra, Lambda, ECS Fargate, and Redshift.
- Familiarity with deploying Hugging Face models (eg NLP, vision, and generative models) within AWS environments.
- Ability to develop and host microservices and REST APIs using Flask, FastAPI, or equivalent frameworks.
- Proficiency with SQL, version control (Git), and working with Jupyter or RStudio environments.
- Experience integrating with CI/CD pipelines and infrastructure tools like Jenkins, Maven, and Chef.
- Strong cross-functional collaboration skills and the ability to explain technical concepts to non-technical stakeholders.
- Ability to work across cloud-based architectures.
Tools & Technologies:
- AWS services: SageMaker, Bedrock, Glue, ECS Fargate, Athena, Kendra, RDS, Redshift, Lambda, CloudWatch
- Other tooling: Apigee, Hugging Face, RStudio, Jupyter, Git, Jenkins, Linux
- Languages & frameworks: Python, R, Flask, FastAPI, SQL