Senior Data Engineer

Posted Jun 18

Upgrade is a fintech unicorn backed by a top 10 global bank and other leading fintech investors. Founded in 2017, Upgrade has already delivered $5 billion in consumer credit and achieved $125 million in annual revenue run rate and cash profitability.

Upgrade is building a neobank offering exceptional value to mainstream consumers, including affordable and responsible credit through cards and loans. In 4 short years, 12 million people have already applied for an Upgrade Card or loan.

Upgrade has been named a “Best Place to Work in the Bay Area” by the San Francisco Business Times and Silicon Valley Business Journal 3 years in a row, and received “Best Company for Women” and “Best Company for Diversity” awards from Comparably.

We are looking for new team members who get excited about designing and implementing new and better products to join a team of over 700 talented and passionate professionals. Come join us if you like to tackle big problems and make a meaningful difference in people's lives.

This is a remote position based in the United States.


Responsibilities

  • Design, architect, and maintain a data warehouse that supports the analytical needs of a rapidly evolving product, in partnership with the data, backend, techops, product, marketing, finance, and risk teams.
  • Quickly develop a deep understanding of the business, and of how data flows through the organization and the data engineering codebase, while playing a key role in building an efficient and scalable data and reporting layer for the organization.
  • Build and scale data pipelines that enrich our enterprise data warehouse.
  • Set up tools and processes for effective data management, and drive data quality across the data warehouse.
  • Own the design, development, maintenance, and distribution of key data, business metrics, and reports. Take a proactive approach to defining new metrics, reports, and dashboards that improve visibility into the data and into business operations.

Required Skillset

  • At least 4 years of recent hands-on experience writing complex SQL queries on large data sets in a data warehousing environment.
  • At least 2 years of experience working with datasets/dataframes in Python (pandas/NumPy/Dask).
  • Experience with columnar database table design and with building data pipelines on Redshift and/or Snowflake.
  • Experience building data pipelines with task orchestrators such as Airflow, Google Dataflow, or Luigi.
  • Excellent verbal and written communication skills: the ability to synthesize complex ideas and communicate them simply.
  • Highly analytical and detail-oriented.
  • Ability to troubleshoot and fix issues quickly in a fast-paced environment.
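As a rough illustration of the kind of pandas work described above, the sketch below aggregates a small set of loan-application events into per-status business metrics. All table, column, and status names here are hypothetical; in practice the data would come from the warehouse (e.g. Redshift or Snowflake), not be built inline.

```python
import pandas as pd

# Hypothetical loan-application events (illustrative only).
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 3],
    "status":  ["applied", "approved", "applied", "applied", "approved", "funded"],
    "amount":  [5000, 5000, 3000, 8000, 8000, 8000],
})

# Per-status counts and total amounts -- the kind of aggregate that
# might feed a business-metrics report or dashboard.
summary = (
    events.groupby("status", as_index=False)
          .agg(applications=("user_id", "count"),
               total_amount=("amount", "sum"))
          .sort_values("applications", ascending=False)
)
print(summary)
```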

Strong Plus

  • Experience with real-time big data pipelines built on distributed data processing systems such as Apache Spark, AWS Glue, or Apache Beam.
  • Experience with containerization using Docker and Kubernetes.
  • Financial services experience.
  • Reporting and data visualization skills.


Benefits

  • Competitive salary and stock option plan.
  • 100% paid coverage of medical, dental, and vision insurance.
  • Unlimited vacation.
  • Learning stipend for personal growth and development.
  • Paid parental leave.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.