Data Engineer – FTC 3–4 months

Work for a well-funded, award-winning start-up in the climate sector

Hybrid

Job Description

One of our clients is hiring a Data Engineer to join their existing data products and tooling team on a fixed-term contract of 3–4 months.

This team is part of the client's broader data organisation and is focused on developing carbon offset-related data products for their clients, as well as building internal data tools to increase the efficiency of their Ratings teams. This is a cross-functional role: you will work with colleagues from the product, ratings, and software engineering teams every day.

They have invested heavily in the development of internal tools to increase the operational efficiency of their ratings production process. As part of this investment, the team has been developing an in-house central data portal that enables rating analysts to access prepared and curated data essential for evaluating carbon offset projects. Aside from helping the team with daily maintenance of the existing data pipelines, there are two projects this contractor role would be responsible for delivering:

  1. Rearchitecting the dbt project: it's currently a single dbt project that manages the analytical data models. It will need to be restructured so that the project can achieve the following (see the sketch after this list):
    1. Allow smaller slices of the project to run independently and on different frequencies.
    2. Better protect against bad data making its way to our presentation layers.
    3. Unify and define patterns to build a common toolset that can be applied to different use cases.
  2. Extending the analytical stack to power in-house analyses: the stack currently exposes datasets and analyses to the ratings and analytics teams on Metabase to feed into the ratings process. Ratings processes differ substantially between projects, resulting in varying requirements for datasets and analyses.
    1. Re-assess and define patterns for ingesting third-party data and making it available to analytics users.
    2. Extend dbt pipelines to ingest, surface, and process the data.
    3. Work with analytics stakeholders to fulfil their business requirements.
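
To make the first project concrete, here is a minimal sketch, assuming Prefect 2.x and the dbt CLI on the PATH, of how independent slices of a single dbt project can run on their own schedules: each slice is addressed by a dbt tag (the tag names below are hypothetical), and a thin Prefect flow wraps the CLI call. Because "dbt build" runs tests alongside models and --fail-fast stops at the first failure, a bad slice halts before its data reaches the presentation layer.

    import subprocess

    from prefect import flow, task

    @task(retries=2)
    def run_dbt_slice(selector: str) -> None:
        """Build only the models matching the given dbt selector."""
        # `dbt build` runs models and their tests; --fail-fast aborts the
        # slice at the first failing model or test.
        subprocess.run(
            ["dbt", "build", "--select", selector, "--fail-fast"],
            check=True,
        )

    @flow(name="dbt-daily-slice")
    def daily_slice() -> None:
        # Targets models tagged `daily` (hypothetical tag); a sibling flow
        # on a different schedule would target e.g. `tag:hourly`.
        run_dbt_slice("tag:daily")

    if __name__ == "__main__":
        daily_slice()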

They have ideas on how to achieve the projects above, but they are looking for dbt expertise to assess what is already in place, work with the team to find the best approach, define best practices, and ultimately execute the required changes.

Ideal candidate

You’d be a great fit if you’ve worked on similar projects before, perhaps as a Data Engineer or Analytics Engineer.

Tech stack

Their data stack includes the following technologies:

  • AWS serves as our cloud infrastructure provider.
  • Snowflake acts as the central data warehouse for tabular data. AWS S3 is used for geospatial raster data, and AWS RDS instances with PostGIS for storing and querying geospatial vector data.
  • Heavy use of dbt for building SQL data models and Python jobs for any non-SQL data ingestion and transformations (typically for API integrations).
  • Computational jobs are executed in Docker containers on AWS ECS, with Prefect as the workflow orchestration engine.
  • GitHub Actions for CI/CD.
  • Metabase serves as a dashboarding solution.
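
As a rough illustration of how these pieces fit together, here is a minimal sketch of the kind of non-SQL Python ingestion job the stack describes: pulling records from a third-party API and landing them in S3 as JSON lines for downstream dbt models to pick up. The endpoint and bucket names are placeholders, not the client's real resources.

    import json

    import boto3
    import requests

    def ingest_to_s3(api_url: str, bucket: str, key: str) -> int:
        """Fetch records from an API and write them to S3 as JSON lines."""
        response = requests.get(api_url, timeout=30)
        response.raise_for_status()  # fail loudly on HTTP errors
        records = response.json()

        body = "\n".join(json.dumps(record) for record in records)
        boto3.client("s3").put_object(
            Bucket=bucket, Key=key, Body=body.encode("utf-8")
        )
        return len(records)

    if __name__ == "__main__":
        n = ingest_to_s3(
            api_url="https://api.example.com/projects",  # placeholder endpoint
            bucket="example-raw-data",                    # placeholder bucket
            key="third_party/projects.jsonl",
        )
        print(f"Ingested {n} records")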

This is a remote-friendly company and many colleagues work fully remote; however, for this position, we will only consider applications from candidates based in the UK.

You’ll be our ideal candidate if:

You have at least 4 years of experience building ELT/ETL pipelines in production, using Python and SQL.

You are deeply familiar with dbt and have experience scaling dbt repositories beyond a couple of hundred models.

You’ve designed back-end services and deployed APIs yourself, ideally using a framework like FastAPI.
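
For context, a back-end service in this sense might start as small as the following FastAPI sketch; the service name, route, and response shape are illustrative only.

    from fastapi import FastAPI

    app = FastAPI(title="ratings-api")  # hypothetical service name

    @app.get("/health")
    def health() -> dict:
        """Liveness endpoint of the sort a deployed service would expose."""
        return {"status": "ok"}

    # Run locally with: uvicorn main:app --reload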

You have experience deploying and maintaining cloud resources in production using tools such as AWS CloudFormation, Terraform, or others.

You have hands-on experience with workflow orchestration tools (e.g., Airflow, Prefect, Dagster), containerization using Docker, and a cloud platform like AWS.

You can write clean, maintainable, scalable, and robust code in Python and SQL, and are familiar with collaborative coding best practices and continuous integration tooling.

You are well-versed in code version control and have experience working in team setups on production code repositories.

Interview process:

  • Initial screening interview with recruiter (15 mins)
  • Introduction call with Chief Data Officer (30 mins)
  • Technical interview with members from the data engineering team (90 mins)
  • Reference checks + offer

They value diversity: they need a team that brings different perspectives and backgrounds together to build the tools needed to make the voluntary carbon market transparent. They are therefore committed to not discriminating based on race, religion, colour, national origin, sex, sexual orientation, gender identity, marital status, veteran status, age, or disability.

If you possess the required experience and skills to carry out the role and its tasks to a high standard, please get in touch with Ian at the earliest opportunity!

Ian@theBDPN.com 07944-841968 – www.TheBDPN.com

Job Overview

  • Category: Data Science
  • Offered Salary: £500 - £600 per day
  • Job Type: Contract
  • Consultant: Ian Benjamin
