Senior Platform Engineer (Databricks)

Posted Jul 21

About us

GetInData | Part of Xebia is a leading data company working for international Clients, delivering innovative projects related to Data, AI, Cloud, Analytics, ML/LLM, and GenAI. The company was founded in 2014 by data engineers and today brings together 120 Data & AI experts. Our Clients are both fast-growing scaleups and large corporations that are industry leaders. In 2022, we joined forces with Xebia Group to broaden our horizons and bring new international opportunities.

What about the projects we work with?

We run a variety of projects in which our sweepmasters can excel: Advanced Analytics, Data Platforms, Streaming Analytics Platforms, Machine Learning Models, Generative AI, and more. We like working with top technologies and open-source solutions for Data & AI. In our portfolio, you can find Clients from many industries, e.g., media, e-commerce, retail, fintech, banking, and telcos, such as Truecaller, Spotify, ING, Acast, Volt, Play, and Allegro. You can read some customer stories here.

What else do we do besides working on projects?

We run many knowledge-sharing initiatives, such as Guilds and Labs. We build a community around Data & AI through our conference Big Data Technology Warsaw Summit, the Warsaw Data Tech Talks meetup, the Radio Data podcast, and the DATA Pill newsletter.

The Data & AI projects we run, together with the company's philosophy of sharing knowledge and ideas in this field, make GetInData | Part of Xebia not only a great place to work but also a place that gives you a real opportunity to boost your career.

If you want to stay up to date with the latest news from us, please follow our LinkedIn profile.

About the role

For our Data Product Creation team, we are looking for Data Platform Engineers. The team focuses on developing our data product creation frameworks and works mostly with Databricks, Azure DevOps, Terraform, Spark, and Python.

As a Data Platform Engineer, you will be responsible for managing, optimizing, and automating our cloud-based infrastructure, data pipelines, applications, and services.
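
To give a flavour of the day-to-day work, here is a minimal, illustrative PySpark job of the kind such frameworks automate. It is only a sketch: the paths, table layout, and column names are hypothetical, and it assumes a Databricks (or other Delta-enabled) Spark environment.

# Minimal, illustrative PySpark job: read a raw Delta table, apply a simple
# quality filter, and write the result back as a curated Delta table.
# Paths and column names are hypothetical examples, not a real project layout.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-data-product").getOrCreate()

# Read raw events (assumes Delta Lake is available, as it is on Databricks).
raw = spark.read.format("delta").load("/mnt/raw/events")

# Basic cleaning step: drop rows without an ID and keep recent records only.
curated = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_date", F.to_date("event_timestamp"))
       .filter(F.col("event_date") >= F.date_sub(F.current_date(), 30))
)

# Write the curated table, overwriting the previous snapshot.
curated.write.format("delta").mode("overwrite").save("/mnt/curated/events")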

Responsibilities

  • Fostering a culture of eliminating toil and implementing optimal observability and monitoring
  • Owning observability, incident response, and postmortems
  • Ensuring that data and production systems are highly reliable, available, efficient, secure, and scalable
  • Ensuring our assets are safeguarded against unforeseen events
  • Identifying areas for improvement (e.g., cost optimization, deployment processes) and providing technical hands-on assistance
  • Collaborating with a variety of people and ARTs (Agile Release Trains) to ensure everyone is aligned on data-related topics
  • Maintaining open communication lines with team members, aligning on project expectations and timelines

Job requirements

  • Advanced proficiency in Python
  • Solid background in Spark
  • Extensive experience with Databricks
  • Familiarity with Azure DevOps
  • Proficiency in working with Terraform
  • Understanding of the complexity and the relationships between businesses, suppliers, partners, competitors, and clients
  • Ability to actively participate in and lead discussions with clients to identify and assess concrete, ambitious avenues for improvement

We offer

  • Salary: 150-185 PLN net + VAT per hour (B2B contract), depending on knowledge and experience
  • 100% remote work
  • Flexible working hours
  • Possibility to work from the office located in the heart of Warsaw
  • Opportunity to learn and develop with the best Big Data experts
  • International projects
  • Possibility of conducting workshops and training
  • Certifications
  • Co-financed sports card
  • Co-financed health care
  • All equipment needed for work