Build the systems
that move data
at scale.
We don't hire for roles that could exist anywhere. Every position at Dapter puts you on production data infrastructure processing petabytes for real clients: you're embedded inside their teams, owning real outcomes. If you're serious about distributed systems and modern data engineering, read on.
Why engineers
choose to stay.
We built Dapter for engineers who are tired of synthetic exercises, ticket-driven work, and managers who have never written production code. Here's what working here actually means.
Production work from day one
No warm-up sprints. No internal demo environments. You join a client team, attend their standups, and own real infrastructure that processes real data. The first week will be intense; that's by design.
Senior-only engineering teams
Every Dapter engineer is senior by hire. There is no layer of juniors to delegate to and no project managers between you and the technical decisions. You work with peers who can challenge and improve your thinking.
Real investment in your growth
Dapter covers certification costs across AWS, GCP, Azure, Snowflake, and Databricks. You get structured time for internal R&D, conference attendance, and a personal learning roadmap reviewed quarterly with your lead.
Current
openings.
Both roles are full-time, hybrid positions based in our Cluj-Napoca office. Working from home is limited to a maximum of 2 days per week; team presence and a collaborative working rhythm matter to us, so we expect engineers to be in the office at least 3 days each week. Fully remote arrangements are not available for these roles. Apply by sending your CV and a short note on what you're looking for to hello@dapter.com.
Data Engineer
AWS Big Data
Design and build large-scale batch and real-time data pipelines on AWS. You'll embed directly within client data platform teams, own production workloads in Python and PySpark, and be hands-on across the full data pipeline lifecycle, from ingestion through transformation to serving.
What you'll do
- Design and maintain PySpark batch jobs running on Amazon EMR, processing terabytes of structured and semi-structured data daily.
- Build and operate real-time streaming pipelines using Amazon Kinesis and Apache Kafka, feeding downstream analytics and ML feature stores.
- Architect S3 data lakes with correct partitioning strategies, Glue Data Catalog schemas, and cost-efficient storage lifecycle policies.
- Implement dbt transformation layers on top of Redshift or Athena, writing modular, tested, documented SQL models.
- Orchestrate end-to-end pipeline workflows with Apache Airflow (MWAA) or AWS Step Functions, with proper retry logic and alerting (a minimal sketch follows this list).
- Participate in architecture reviews, contribute to ADRs, and share knowledge across client and Dapter teams through documented decisions.
- Monitor pipeline health with CloudWatch, set up custom metrics and alerting, and own on-call rotation for critical pipeline SLAs.
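To make the orchestration and alerting points concrete, here is a minimal sketch of the pattern described above, assuming Airflow 2.x. The DAG name, the alert callback, and the EMR submission step are illustrative placeholders, not a prescribed implementation.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_on_failure(context):
    # Illustrative alert hook: in practice this would publish to SNS, Slack,
    # or PagerDuty with the failed task and run identifiers.
    print(f"ALERT: {context['task_instance'].task_id} failed for {context['ds']}")


default_args = {
    "owner": "data-platform",
    "retries": 3,                            # absorb transient EMR/S3 failures
    "retry_delay": timedelta(minutes=10),
    "retry_exponential_backoff": True,
    "on_failure_callback": notify_on_failure,
}

with DAG(
    dag_id="daily_events_batch",             # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:

    def submit_emr_step(**_):
        # Placeholder: submit the PySpark step to EMR and poll it to completion.
        ...

    ingest = PythonOperator(
        task_id="ingest_raw_events",
        python_callable=submit_emr_step,
    )
```

In a real deployment the failure callback would typically be wired to CloudWatch or SNS so that SLA breaches page the on-call engineer, matching the monitoring responsibility above.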
What we're looking for
- 3+ years of Python experience in a data engineering context: clean, idiomatic, testable code, not just scripting.
- Solid hands-on PySpark and Apache Spark knowledge: partitioning, shuffling, broadcast joins, memory management, and cluster tuning (see the short Spark sketch after this list).
- Production experience on AWS: EMR, Glue, S3, Kinesis, Lambda, Redshift or Athena. You've debugged things at 3am in these systems.
- Understanding of distributed computing fundamentals: data skew, fault tolerance, idempotent processing, exactly-once semantics.
- Experience with workflow orchestration tools (Airflow preferred) and building DAGs that are maintainable, not just functional.
- Strong written English for client documentation, architecture proposals, and async communication across time zones.
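As a rough illustration of the Spark fundamentals listed above, the following sketch joins a large event table to a small dimension with an explicit broadcast to avoid a full shuffle, then writes date-partitioned Parquet. Bucket names and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-enrichment").getOrCreate()

events = spark.read.parquet("s3://example-lake/raw/events/")          # large fact data
countries = spark.read.parquet("s3://example-lake/dims/countries/")   # small dimension

# Broadcast the small side so the join runs map-side instead of shuffling
# the large events table across the cluster.
enriched = events.join(F.broadcast(countries), on="country_code", how="left")

(
    enriched
    .withColumn("event_date", F.to_date("event_ts"))
    .repartition("event_date")                # align in-memory partitions with the output layout
    .write.mode("overwrite")
    .partitionBy("event_date")                # date-partitioned layout for cheap pruning
    .parquet("s3://example-lake/curated/events/")
)
```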
Nice to have
- AWS Certified Data Engineer – Associate or AWS Solutions Architect – Associate.
- Experience with Delta Lake or Apache Iceberg table formats in a lakehouse architecture.
- Familiarity with dbt Cloud, including model testing, exposures, and semantic layer configuration.
- Infrastructure-as-code using Terraform or AWS CDK for pipeline infrastructure provisioning.
- Kafka Streams or Apache Flink for stateful stream processing beyond basic consumer/producer patterns.
.NET Full Stack
Engineer
Build the data-facing applications and internal platform tooling that sit on top of the pipelines our data engineers build. You'll design clean REST APIs in ASP.NET Core, build performant React frontends, and work directly with data engineers and client stakeholders to ship features that people actually use.
What you'll do
- Design and build REST APIs in ASP.NET Core 8: clean architecture, properly versioned, with OpenAPI documentation and authentication via Amazon Cognito or OAuth2.
- Build React (TypeScript) frontends that surface real-time data: dashboards, monitoring views, self-service data exploration interfaces.
- Implement SignalR or WebSocket connections for live pipeline status feeds and streaming data visualisations.
- Own data access layers using Entity Framework Core alongside optimised raw SQL queries where ORM abstraction introduces unacceptable overhead.
- Write unit and integration tests with xUnit and Playwright: reviewable code that a colleague can maintain 12 months from now without asking questions.
- Deploy to AWS Elastic Beanstalk, AWS Lambda, or containerised environments via Docker on Amazon ECS/Fargate; participate in CI/CD pipeline design with AWS CodePipeline or GitHub Actions.
- Collaborate directly with data engineers to design schemas and APIs that serve analytics results, model outputs, and operational metrics surfaced from S3, Redshift, and Athena to end users.
What we're looking for
- 3+ years of C# and ASP.NET Core development: not just familiarity, but real ownership of services running in production.
- Strong frontend skills in React with TypeScript: component design, state management (Zustand or Redux), and performance profiling.
- Solid SQL proficiency across SQL Server and/or PostgreSQL, including query optimisation, indexing strategy, and schema design for analytical workloads.
- Experience designing RESTful APIs with proper resource modelling, error handling, pagination, and API versioning strategies.
- Comfort with Git-based workflows, code review culture, and writing technical documentation as a first-class deliverable.
- Strong written English for async communication with international client teams and for writing architecture decision records.
Nice to have
- AWS Certified Developer – Associate or AWS Solutions Architect – Associate, demonstrating hands-on cloud development experience.
- Experience with GraphQL APIs using Hot Chocolate or a comparable .NET implementation.
- Familiarity with data warehousing concepts: you don't need to build pipelines, but understanding what a fact table is matters here.
- Background in real-time systems using SignalR or message brokers (Amazon SQS, Amazon SNS, or RabbitMQ) for event-driven UI updates.
- Docker and AWS ECS/Fargate experience for containerised deployment of ASP.NET Core services.
We're always open to
exceptional engineers.
Dapter grows through referrals and speculative applications more than through job boards. If you're a senior data engineer, platform architect, analytics engineer, or data-facing application developer who wants to work on serious infrastructure, send us a note; we read every one. Write to hello@dapter.com with a short paragraph on what you've built and what you're looking for.
No open role that fits? Tell us anyway.
Include your CV, a brief note on your specialisation, and what kind of engagement you're looking for. If nothing fits immediately, we keep strong profiles on file and reach out when something relevant opens.
Send speculative CV