Senior Python Software Engineer

Bloomreach · Slovakia


No Relocation

Posted: January 23, 2026

Job Description

We are seeking a dedicated Senior Python Backend Software Engineer to join our Analytics team and take ownership of some of our most complex and business‑critical backend systems.

You will work on features like multi‑cloud data exports, data expiration, and analytics CRUD for reports, segmentations, and funnels. These systems sit at the heart of our analytics platform, handle large volumes of customer data, and require solid computer science fundamentals, strong Python engineering skills, and real‑world experience with distributed systems.

The team operates remotely within the Central European Time (CET) zone. We are also happy to meet you in person in Brno or Prague (Czechia) or Bratislava (Slovakia), where our headquarters are located. The salary starts at 4,000 EUR gross/month, depending on your seniority.

Responsibilities

  1. Design, develop, maintain, and test Python backend services and modules that power exports, data expiration, and analytics CRUD.
  2. Own and evolve multi‑cloud export pipelines that integrate with AWS S3, Google Cloud Storage, Azure Blob Storage, and SFTP, with a strong focus on resilience, observability, and performance.
  3. Design and optimize MongoDB data models and queries for high‑throughput workloads.
  4. Work with Redis, Kafka, and Celery to build robust distributed systems: task queues, concurrency control, state machines, and error recovery mechanisms.
  5. Design and implement REST APIs using Flask (and optionally gRPC services) that expose analytical capabilities to other internal components.
  6. Ensure data privacy and compliance in analytics pipelines (e.g., PII masking, schema‑driven privacy controls, permission checks, GDPR‑aligned behavior).
  7. Diagnose and resolve production incidents in distributed environments in cooperation with SREs: debug failed exports, stuck workflows, cloud storage issues, and performance bottlenecks.
  8. Contribute to testing and quality: extend and maintain a comprehensive pytest suite, improve test coverage, and participate in code reviews.
  9. Collaborate closely with other backend engineers, SRE/DevOps, and product managers to break down complex problems, propose pragmatic solutions, and deliver incremental value.
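The error‑recovery and retry work described above can be pictured with a minimal Python sketch (an illustration only, not our actual implementation; `retry_with_backoff`, `flaky_upload`, and the retry parameters are invented for this example):

```python
import time

def retry_with_backoff(func, *, attempts=4, base_delay=0.01):
    """Call func, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

# Hypothetical transient failure: the upload succeeds on the third call.
calls = {"n": 0}
def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("temporary network error")
    return "uploaded"

print(retry_with_backoff(flaky_upload))  # → uploaded
```

In production this pattern typically lives inside a task-queue framework such as Celery rather than a hand-rolled loop, but the idea is the same: distinguish transient from permanent failures and back off between attempts.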

Qualifications

Must‑have

  1. 3+ years of professional experience in Python backend development.
  2. Strong Python 3 skills, including:
    • Advanced OOP (mixins, abstract base classes, inheritance)
    • Decorators and context managers
    • Solid grasp of iterators/generators for streaming large datasets
    • Confident use of type hints
  3. MongoDB expertise:
    • Complex aggregation pipelines and projections
    • Indexing and performance tuning for large collections
    • Experience with BSON/ObjectId handling
  4. Redis experience for caching and distributed patterns:
    • TTL and expiration strategies
    • Pipelines and atomic operations
    • Using Redis to support complex expiration and configuration patterns
  5. Strong background in distributed systems:
    • Task queues and asynchronous job processing (e.g., Celery)
    • State machines and job lifecycle management
    • Error recovery, retries, and graceful degradation in the face of transient failures
  6. Solid REST API design and implementation experience:
    • Flask or a similar Python web framework
    • Pagination, filtering, JSON request/response handling
    • Building APIs for analytical/CRUD‑style operations
  7. Ability to debug complex production issues:
    • Reading logs and metrics to trace failing workflows
    • Quickly identifying root causes in multi‑component systems (Python services, cloud storage, Kafka, task queues)
  8. Strong communication skills and comfort working in a remote, highly collaborative team.
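As a small illustration of the generator and type-hint skills listed above (a hedged sketch; the helper name is invented for the example):

```python
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")

def chunked(items: Iterable[T], size: int) -> Iterator[list[T]]:
    """Lazily yield fixed-size chunks, so arbitrarily large inputs
    can be streamed without loading everything into memory."""
    batch: list[T] = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial chunk

print(list(chunked(range(7), 3)))  # → [[0, 1, 2], [3, 4, 5], [6]]
```

This is the kind of pattern that keeps memory flat when exporting datasets far larger than RAM.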

Nice‑to‑have

  1. Experience designing and operating cloud storage integrations with resumable/multipart uploads, retry logic, and failure handling for large files (tens of GB).
  2. Experience integrating with SFTP/SSH.
  3. gRPC service development with Protocol Buffers.
  4. Background in analytics systems (reports, funnels, segmentations) or similar data‑heavy products.
  5. Hands‑on experience with data privacy / GDPR implementation:
    • Schema‑driven PII detection and masking
    • Permission‑aware data access
  6. Performance optimization:
    • Streaming and transforming very large datasets (hundreds of GB or more)
    • Memory usage optimization and throughput improvements in Python services
  7. Kafka or other event streaming systems for driving asynchronous workflows.
  8. Experience with GCP and Kubernetes‑based deployments (e.g., understanding how your services run in containers, how they scale, and how they are monitored).
  9. Experience with or interest in the Go programming language.
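The schema-driven PII masking mentioned above can be sketched in a few lines (the schema format, field names, and placeholder are invented for this illustration):

```python
# Hypothetical schema: field name -> whether the field holds PII.
SCHEMA = {"email": True, "name": True, "country": False}

def mask_pii(record: dict, schema: dict) -> dict:
    """Return a copy of record with PII fields replaced by a placeholder.
    Fields missing from the schema are masked defensively."""
    return {
        key: "***MASKED***" if schema.get(key, True) else value
        for key, value in record.items()
    }

event = {"email": "jane@example.com", "name": "Jane", "country": "SK"}
print(mask_pii(event, SCHEMA))
# → {'email': '***MASKED***', 'name': '***MASKED***', 'country': 'SK'}
```

Real pipelines add nuance (nested documents, tokenization instead of masking, permission-aware unmasking), but the schema-driven shape is the same.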

Our Stack

You don’t need to know everything on day one, but you should be comfortable ramping up on this stack:

  • Language & Frameworks: Python 3.11+, Flask, Celery
  • Data Stores: MongoDB, Redis, Kafka
  • Cloud Storage Integrations: AWS S3, Google Cloud Storage, Azure Blob Storage, SFTP
  • APIs: REST (Flask), gRPC (Protocol Buffers)
  • Validation & Models: Schematics / Pydantic or similar
  • Testing: pytest and related tooling

You’ll work closely with other components written in Go, and familiarity with it is a plus, but not required.

Your Success Story

First 30 days

  • Get to know the team, our product, and how analytics features fit into the bigger picture.
  • Set up your local environment and run key services and test suites.
  • Familiarize yourself with core modules such as exports, data expiration, and analytics CRUD.
  • Deliver your first small changes (bugfixes or low‑risk improvements) and get them to production with support from the team.

First 90 days

  • Take ownership of medium‑sized features or improvements.
  • Design and implement changes that touch multiple components (Python services, cloud storage, task queues), with guidance from senior teammates.
  • Participate in incident investigations, learning how we debug cloud storage issues, stuck workflows, and performance regressions.
  • Contribute to improving test coverage and reliability for the modules you own.

First 180 days

  • Become a go‑to person for at least one major area (e.g., multi‑cloud exports or data expiration pipeline).
  • Lead the design and delivery of larger, multi‑milestone projects that significantly improve stability, performance, or developer experience.
  • Optionally participate in L3 rotations for the areas you own, including production incident mitigation and follow‑up improvements.
  • Help shape our technical roadmap for Python services: propose refactors, performance improvements, and new capabilities based on your experience.

If you are excited by complex Python systems, multi‑cloud integrations, and distributed systems at scale, and you want to work on critical analytics features that power real customer value, we’d love to hear from you.

#LI-KP1
