Role Advantages:
* Room to grow → opportunity to expand into data architecture, streaming, and cloud optimization.
* Modern environment → no legacy monoliths; a focus on automation, reliability, and clean data delivery.
Requirements:
* 3–5 years of experience in data engineering.
* Strong hands-on experience with API integrations (REST/GraphQL, authentication, pagination, rate limits); see the sketch after this list.
* Proficiency in Python, SQL, and data pipeline frameworks.
* Solid knowledge of Airflow, Redshift, and S3.
* Experience with Kafka and event-based data ingestion.
* Familiarity with CI/CD for data workflows is a plus.
* Foreign language proficiency: Intermediate level.
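To make the API-integration expectation concrete, here is a minimal sketch of cursor-based pagination with rate-limit backoff in Python. Everything in it is a hypothetical illustration, not a description of any actual API: the endpoint URL, the bearer token, the `next_cursor` field, and the 429/Retry-After handling are all assumptions.

```python
import time
import requests

API_URL = "https://api.example.com/v1/events"  # hypothetical endpoint
API_TOKEN = "..."  # assumption: bearer-token authentication

def fetch_all_events():
    """Page through a cursor-paginated REST API, backing off on HTTP 429."""
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    cursor = None
    while True:
        params = {"limit": 100}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(API_URL, params=params, headers=headers, timeout=30)
        if resp.status_code == 429:
            # Rate limited: honor Retry-After if the server sends it.
            time.sleep(int(resp.headers.get("Retry-After", "5")))
            continue
        resp.raise_for_status()
        payload = resp.json()
        yield from payload["data"]           # assumption: records under "data"
        cursor = payload.get("next_cursor")  # assumption: cursor-based pagination
        if not cursor:
            break
```

The generator form keeps memory usage flat regardless of result size, which matters once the same pattern feeds a downstream pipeline task.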
Responsibilities:
* Develop and maintain data ingestion pipelines from external APIs and streaming sources.
* Implement scalable ETL/ELT workflows using Airflow (a minimal DAG sketch follows this list).
* Automate data validation and error handling to ensure accuracy.
* Collaborate with senior data engineers and architects to align solutions with the overall data architecture.
* Support BI and analytics teams by ensuring the availability of clean, reliable datasets.
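As referenced above, here is a minimal sketch of the kind of Airflow workflow this role involves, assuming Airflow 2.4+ (where the `schedule` parameter is available). The DAG name and task callables are hypothetical placeholders, not real pipeline code.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables: real tasks would pull from an API, run checks,
# and copy into Redshift (e.g. via S3 staging).
def extract_events(**context):
    print("extract: pull new records from the source API")

def validate_events(**context):
    print("validate: schema and row-count checks before loading")

def load_events(**context):
    print("load: stage to S3, then COPY into Redshift")

with DAG(
    dag_id="api_events_to_redshift",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_events)
    validate = PythonOperator(task_id="validate", python_callable=validate_events)
    load = PythonOperator(task_id="load", python_callable=load_events)

    extract >> validate >> load  # explicit extract → validate → load ordering
```

Placing validation as its own task, rather than inside the load step, keeps failures visible in the DAG view and lets bad batches stop before they reach the warehouse.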
We offer:
* Competitive compensation
* Remote-friendly environment
* Professional growth opportunities & knowledge sharing
* Modern tech stack and challenging projects
Interested? Apply now and let’s build robust data pipelines together!