We are hiring on behalf of one of our clients, a U.S.-based company with a strong presence in the tech market and partnerships with several Fortune 500 software engineering firms.
They are seeking a highly skilled and experienced Data Engineer to help shape and scale their supply chain and operations analytics infrastructure. In this role, you will work closely with cross-functional teams, including Operations, Finance, and Analytics, to design, build, and monitor scalable, production-grade data pipelines. Your work will be critical to driving data-informed decisions across the business.
Important: This role requires full-time availability from 9:00 AM to 5:00 PM Central Time (CT).

What You’ll Do:
* Develop and maintain automated ETL pipelines using Python, Snowflake SQL, and related technologies.
* Ensure robust data quality through unit testing, validation, and continuous monitoring.
* Collaborate with stakeholders to ingest and transform large healthcare datasets with accuracy and efficiency.
* Leverage AWS services such as S3, DynamoDB, Batch, and Step Functions for data integration and deployment.
* Optimize performance for pipelines processing large-scale datasets (1GB+).
* Translate business requirements into reliable, scalable data solutions.
What You Bring:
* 4+ years of hands-on experience as a Data Engineer or in a similar role.
* Proven expertise in Python, SQL, and Snowflake for data engineering tasks.
* Strong experience building and maintaining production-grade ETL pipelines.
* Solid understanding of data validation, transformation, and debugging practices.
* Prior experience with healthcare or claims datasets is highly preferred.
* Practical knowledge of AWS technologies: S3, DynamoDB, Batch, and Step Functions.
* Experience working with large datasets and complex data environments.
* Excellent verbal and written English communication skills.
Interested? We look forward to receiving your application.