Adaptiq is a technology hub specialising in building, scaling, and supporting R&D teams for high-end, fast-growing product companies across a wide range of industries.
About the Product:
Our client, Finaloop, is reshaping bookkeeping to fit the needs of e-commerce, building a fully automated, real-time accounting platform that replaces traditional bookkeeping for e-commerce and DTC brands. That means handling vast volumes of financial data with precision, at scale, and with zero margin for error.
To support this, we’re investing in our platform’s core infrastructure: the foundation that powers real-time financial insight across thousands of businesses globally.

About the Role:
We’re seeking an outstanding and passionate Data Platform Engineer to take part in shaping Finaloop’s data infrastructure at the forefront of fintech and AI.
You’ll join a high-impact R&D team in a fast-paced startup environment, building scalable pipelines and robust data systems that empower e-commerce businesses to make smarter decisions.
Key Responsibilities:
* Designing, building, and maintaining scalable data pipelines and ETL processes for our financial data platform
* Developing and optimizing data infrastructure to support real-time analytics and reporting
* Implementing data governance, security, and privacy controls to ensure data quality and compliance
* Creating and maintaining documentation for data platforms and processes
* Collaborating with data scientists and analysts to deliver actionable insights to our customers
* Troubleshooting and resolving data infrastructure issues efficiently
* Monitoring system performance and implementing optimizations
* Staying current with emerging technologies and implementing innovative solutions
Required Competence and Skills:
* 5+ years of experience in Data Engineering or Platform Engineering roles
* Strong programming skills in Python and SQL
* Experience with orchestration platforms and tools (Airflow, Dagster, Temporal, or similar)
* Experience with MPP platforms (e.g., Snowflake, Redshift, Databricks)
* Hands-on experience with cloud platforms (AWS) and their data services
* Understanding of data modeling, data warehousing, and data lake concepts
* Ability to optimize data infrastructure for performance and reliability
* Experience working with containerization (Docker) in Kubernetes environments
* Familiarity with CI/CD concepts and principles
* Fluent English (written and spoken)
Nice-to-have skills:
* Experience with big data processing frameworks (Apache Spark, Hadoop)
* Experience with stream processing technologies (Flink, Kafka, Kinesis)
* Knowledge of infrastructure as code (Terraform)
* Experience building analytics platforms or clickstream pipelines
* Familiarity with ML workflows and MLOps
* Experience working in a startup environment or the fintech industry
The main components of our current technology stack: AWS Serverless, Python, Airflow, Airbyte, Temporal, PostgreSQL, Snowflake, Kubernetes, Terraform, Docker.
Why Us?
We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).
We provide full accounting and legal support in all countries where we operate.
We operate a fully remote work model, providing a powerful workstation and access to a co-working space if you need it.
We offer a highly competitive package with yearly performance and compensation reviews.