Don't think outside the box. Think like there is no box.
Some of our current openings:
As a Senior Data Engineer @Dapter, you will be responsible for developing, optimizing, and maintaining the ETL data pipeline. This involves working with infrastructure built on AWS, including Spark on EMR, S3, and DynamoDB. Additionally, this role will help build analytical tools, develop unit and stress tests, and create automation around the orchestration of the ETL data pipeline.
As a Data Optimization Analyst @Dapter, you will have the opportunity to combine programming, algorithms, and quantitative analysis to optimize our data quality and key metrics for high-volume data pipelines integrating multiple sources. You will be responsible for leveraging data mining approaches to systematically recognize patterns and data anomalies that inform data cleaning, searchability, and identity resolution. This is a hands-on development/analytics position focused on automating processes that ensure the integrity of the data we provide to our customers. You should be open to working with diverse types of big data and technologies such as AWS, MapReduce/Spark, Tableau, and Pentaho.
As an ETL Developer @Dapter, you will be responsible for building and maintaining a high-volume workflow for collecting, transforming, and analyzing customer behavior data. This includes running and monitoring production ETL jobs, handling and investigating errors, and developing new ETL processes.