Job Description
- Design and develop data pipelines to source data from multiple sources, including RDBMS, Parquet files, Excel workbooks, and text files.
- Evaluate, debug, and modify existing complex Python, Databricks, and SSIS packages in accordance with business requirements.
- Analyze existing ETL jobs; develop ELT pipelines and data warehouse applications; formulate logic and devise algorithms for new systems.
- Apply current development methods and techniques, including CI/CD pipelines, and establish development standards (coding, documentation, and testing) to ensure the quality and maintainability of automated solutions.
The position is 100% remote but requires 3 consecutive days of travel per quarter.
Requires 3 years of experience (or 5 years with a Bachelor's degree) as a software/data engineer or in a related occupation.
Experience must include:
- Using ETL processes in data engineering.
- Using Azure and AWS for cloud-based data solutions.
- Using Hadoop for large-scale big data and distributed computing.
- Managing databases in MS SQL Server and Azure SQL Database.
- Using Python, SQL, and PySpark for data engineering, analytics, and automation.
- Using Git and Bitbucket for version control, deployment automation, and workflow orchestration.
- Using Power BI and Tableau for data visualization and dashboard development.
Requires the following certifications: AWS Certified Cloud Practitioner and Databricks Certified Data Engineer Associate.
Master's or Bachelor's degree in Computer Science or a related field. Foreign degree equivalents are accepted.
Please copy and paste your resume into the body of the email (do not send attachments; we cannot open them) and send it to candidates@placementservicesusa.com with reference #093915 in the subject line.
Thank you.