
If you are someone who enjoys designing scalable data pipelines and building modern, high-performance data platforms, this is a great opportunity!

We are partnering with a global tech consultancy to find a Data Engineer for an initial 3-month contract.

Responsibilities:

  • Build and improve data pipelines using Python and PySpark for both batch and streaming workloads
  • Design and manage Delta Lake layers (Bronze, Silver, Gold) in Databricks (see the first sketch after this list)
  • Set up and maintain Databricks workflows, jobs, notebooks, and DLT pipelines
  • Optimise performance using Databricks tools and follow best practices for clean, modular code
  • Work with AWS S3 to manage raw and processed data securely 
  • Use Terraform and Databricks Asset Bundles to manage infrastructure and deployments as code
  • Automate AWS workflows using Boto3 (S3, Glue, Lambda); see the second sketch after this list
  • Work with data scientists and engineers to keep data flowing across teams
  • Put checks in place for data quality, validation, and monitoring
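
As a rough illustration of the pipeline work described above, here is a minimal PySpark sketch of a Bronze-to-Silver Delta Lake step. The bucket paths, table names, column names, and cleaning rules are hypothetical, not taken from the client's environment:

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical example: promote a raw Bronze table to a cleaned Silver table.
# All paths and column names are illustrative only.
spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

bronze_df = spark.read.format("delta").load("s3://example-bucket/bronze/orders")

silver_df = (
    bronze_df
    .dropDuplicates(["order_id"])                      # basic de-duplication
    .filter(F.col("order_id").isNotNull())             # simple validation rule
    .withColumn("ingested_at", F.current_timestamp())  # audit column
)

(
    silver_df.write
    .format("delta")
    .mode("overwrite")
    .save("s3://example-bucket/silver/orders")
)
```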
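Likewise, the Boto3 automation mentioned above might resemble the following sketch; the bucket, Glue job, and Lambda function names are invented for illustration:

```python
import boto3

# Hypothetical automation sketch: bucket, job, and function names are illustrative.
s3 = boto3.client("s3")
glue = boto3.client("glue")
lambda_client = boto3.client("lambda")

# List newly landed raw files in S3
response = s3.list_objects_v2(Bucket="example-raw-bucket", Prefix="incoming/")
keys = [obj["Key"] for obj in response.get("Contents", [])]

if keys:
    # Kick off a Glue job to process the new files
    glue.start_job_run(JobName="example-transform-job")

    # Notify downstream consumers via a Lambda function
    lambda_client.invoke(
        FunctionName="example-notify-fn",
        InvocationType="Event",  # asynchronous invocation
    )
```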

Required Skills & Experience:

  • Strong coding in Python and PySpark
  • Experience with Databricks and Delta Lake
  • Experience with AWS services, especially S3, Glue, Lambda, and Redshift
  • Experience using Terraform or CloudFormation
  • Comfortable automating AWS tasks with Boto3
  • Prior experience building efficient, secure, and scalable pipelines
  • Works well in cross-functional and remote teams

Desirable:

  • Databricks certification
  • Experience working in consultancies or fast-paced delivery environments
  • Exposure to data governance frameworks