Data Engineer

  • Data
  • London, United Kingdom

Job description

The Company

The Plum Guide is on a mission to build the definitive collection of the world’s best homes. Through expert human curation and innovative tech, we’re taking a scientific and systematic approach to vetting every home in every city we open, and accepting only the top 1%. Like a modern Michelin Guide - but for homes.

We launched 3 years ago in London. Since then, we have grown 10-20% month on month; expanded to 6 cities; tested over 100,000 homes; developed a customer experience that’s returning the highest NPS scores in the hospitality sector; and are on track to reach over £25M in annualised sales. 

We are backed by some of the world’s top VCs and angel investors who have built many of the world’s most exciting companies of today. These include TransferWise, Citymapper, BuzzFeed, Appear Here, Graze, Depop, SoFarSounds, Marvel, GymBox, Threads, Second Home, Zoopla, LoveFilm, Secret Escapes and many more. 

We’ve just closed a £14m Series B round of funding. Our focus for the next 12 months is on building an exceptional brand & customer experience; and maintaining hyper growth through an accelerated global rollout. This is where you come in.

The Role

We believe in democratising access to data – we want everyone in the company to be able to access data analytics and insight to make our guest and host experiences as great as they can be. We want to do this by creating self-service platforms for the team to use and drive. We want to maintain high standards of data quality but balance that against what is practical to build, maintain and operate within a small, lean team. We take a similar view towards technology – our platform is continually evolving to support the needs of our guests and hosts, and we have the freedom and flexibility to change any aspect of it, but only when it makes sense.

In response to growing demands for data and our passion to improve our stack, we are looking for a talented and passionate Data Engineer to join our team.


The main responsibility of Plum’s first Data Engineer is to develop and maintain our data infrastructure to support our scalability requirements and the company’s growing demand for data.

This includes:

  • Owning our data warehouse, which is hosted in Snowflake.

  • Managing and improving our existing ETL jobs, which are a mixture of custom code (Python, Spark), Azure Data Factory and Stitch.

  • Building and operating data pipelines.

  • Working closely with other team members to improve the quality of our data, building rich data sets and enabling users to self-serve.

  • Creating elegant, simple, tested and reusable code.

  • Integrating new data sources into our data warehouse.

  • Maintaining good relationships with third-party vendors.

  • Taking ownership of data projects and seeing them through to completion.


Who we are looking for

You don’t need prior experience in hospitality or tech start-ups. The important thing is that you are:

  • Ambitious, with high standards for what counts as good enough.

  • Focused on getting stuff done. When obstacles inevitably get in the way, you know how to hustle and think creatively to find a solution.

  • Organised - able to project manage complex processes with multiple stakeholders.

  • A self-starting learner, confident teaching yourself to do things you have never done before. 

  • Someone who’s a team player and a positive, motivated person to be around. 


We understand that everyone has different experiences and skill sets, so even if you don’t fulfil all our requirements, we’d still like to hear from you. Feel free to reach out and tell us about your data engineering experience and what you can bring to Plum.

  • Excellent programming skills: experience with object-oriented/functional scripting languages: Python.

  • Proficient in Big Data and distributed computing, including optimisation and performance tuning, with related technologies: Hadoop, Spark.

  • Experience with cloud providers (Azure/AWS/GCP) and handling an array of data sources including SQL, NoSQL stores, Tool Data.

  • Willingness to research, learn and use new tools and technologies to build ETL pipelines: Segment, Talend, Fivetran, Stitch.

  • Experience with ETL, data warehouse design, data modelling, user permissions and complex workflow management.

  • Any data warehouse experience would be useful: Redshift, BigQuery, Snowflake.

  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow etc.

  • Experience with deploying machine learning / deep learning algorithms into production.

  • Experience with data monitoring, observability and threshold alerting tools.

  • Knowledge of good practices in version control: Git, Bitbucket.