Proximity Recruitment is seeking a talented Data Engineer (Salary £40,000 – £80,000 + Bonus + Benefits) to join a forward-thinking, award-winning team in either Manchester or London. If you're searching for a collaborative environment that supports professional growth, this could be the perfect opportunity for you!
In this role, you'll be working on a diverse range of data engineering projects, managing key client accounts, and helping to build scalable data pipelines that drive business performance. You'll collaborate closely with internal teams, optimise data processes, and apply tools such as PySpark to process and transform large datasets. Strong technical expertise in Python, end-to-end pipeline development, and cloud platforms such as AWS, GCP, or Azure is essential for success in this position.
Key Responsibilities:
- Design, develop, and maintain end-to-end data pipelines, using Python and cloud platforms such as AWS, GCP, or Azure to support business operations.
- Collaborate with stakeholders to understand business requirements and translate them into scalable and efficient data engineering solutions.
- Optimise data workflows to improve performance, scalability, and reliability across projects.
- Work with data scientists and analysts to ensure data is available and optimised for analysis.
- Develop and maintain cloud infrastructure and services, ensuring secure and efficient data storage and processing.
- Implement best practices for data engineering and continuously improve existing systems.
- Perform testing and troubleshooting of data pipelines to ensure data accuracy and integrity.
- Stay updated on emerging technologies and trends in cloud platforms and data engineering practices.
- Provide guidance and mentorship to junior data engineers and other team members.
The ideal candidate will have strong experience in Python, end-to-end data pipeline development, and cloud platforms like AWS, GCP, or Azure; experience with PySpark is highly desirable (see the illustrative sketch below). The ability to manage multiple projects concurrently, along with a proactive, problem-solving approach, is key.
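For candidates curious about the day-to-day work, here is a minimal, illustrative sketch of the kind of end-to-end batch pipeline this role involves, written in PySpark. It is not taken from a real client project: the bucket paths, the daily_sales dataset, and the column names are all hypothetical.

```python
# Illustrative sketch only: a minimal extract-transform-load pipeline in PySpark.
# All paths, dataset names, and columns below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-sales-pipeline").getOrCreate()

# Extract: read raw CSV files from cloud storage (hypothetical S3 path).
raw = spark.read.csv(
    "s3://example-bucket/raw/daily_sales/",
    header=True,
    inferSchema=True,
)

# Transform: drop incomplete records and derive a date column for partitioning.
cleaned = (
    raw.dropna(subset=["order_id", "amount"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Aggregate revenue and order counts per day.
daily_totals = cleaned.groupBy("order_date").agg(
    F.sum("amount").alias("total_revenue"),
    F.countDistinct("order_id").alias("order_count"),
)

# Load: write partitioned Parquet back to cloud storage (hypothetical path).
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_sales_totals/"
)

spark.stop()
```

In practice, a pipeline like this would typically be scheduled by an orchestrator and backed by data-quality checks, which is the kind of optimisation and testing work described in the responsibilities above.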
Requirements:
- Proven experience in Python and developing end-to-end data pipelines.
- Strong experience with cloud platforms such as AWS, GCP, or Azure.
- Familiarity with PySpark is desirable.
- Ability to manage large datasets and ensure the performance and reliability of data pipelines.
- Excellent communication skills to collaborate with both technical and non-technical teams.
- A desire to continuously develop skills and grow within a data engineering team.
Additional Details:
- This role is hybrid (3 days a week in the office).
- Includes a bonus scheme, health benefits, and additional time off for your birthday!
- Candidates must have a driver’s licence and access to a vehicle for some travel to client sites.