[W2] Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a 6-month [W2] Data Engineer contract; the pay rate is not specified in the listing. It requires 5+ years of ETL experience, high proficiency in Python, and expertise in at least one cloud platform. The position is remote; familiarity with Dagster is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
August 22, 2025
🕒 - Project duration
6 months
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Analysis #dbt (data build tool) #Data Science #AWS (Amazon Web Services) #SQL (Structured Query Language) #Computer Science #Code Reviews #Metadata #Visualization #Data Engineering #Docker #ETL (Extract, Transform, Load) #Data Processing #Data Quality #Big Data #Azure #Kubernetes #Datasets #Data Lineage #Tableau #Data Orchestration #JSON (JavaScript Object Notation) #GCP (Google Cloud Platform) #Documentation #Cloud #Python
Role description
We are seeking a highly skilled and experienced Data ETL Engineer Contractor to join our small but impactful data processing team. You will play a crucial role in designing, building, and implementing robust, efficient ETL (Extract, Transform, Load) processes for large datasets. Due to the sensitive nature of the data we handle, access is strictly controlled, and you may not be fully briefed on the context of the data being processed. This role requires a proven ability to design and implement ETL/ELT pipelines, high proficiency in Python, and comfort with remote development. A strong foundation in data science and analytics is not required, but experience in these areas is a plus, as opportunities may arise to contribute to reporting tools and data analysis. You will work closely with our team, receiving clear technical instructions, to implement and optimize data processing pipelines and help ensure the timely, accurate delivery of data to multiple internal teams.
Responsibilities:
• Design, develop, and implement ETL processes using a suitable framework (familiarity with Dagster and other data orchestration frameworks is a plus).
• Collaborate with the data processing team, working from technical instructions and specifications.
• Document ETL processes, including data lineage and transformation logic.
• Troubleshoot and resolve data-related issues using strong analytical and problem-solving skills.
• Monitor and maintain the performance of ETL pipelines, identifying and resolving bottlenecks.
• Participate in code reviews and contribute to best practices within the team.
• Contribute to the improvement of our data infrastructure and processes.
Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 5+ years of experience in data engineering, with a strong focus on ETL/ELT processes.
• Proven experience working with large datasets.
• High proficiency in Python.
• Expertise in at least one cloud platform (e.g., AWS, Azure, GCP) and its relevant services.
• Experience with media data formats (e.g., photos and video) and metadata formats such as CSV and JSON.
• Experience with big data formats and tooling such as Parquet and Iceberg.
• Experience with data quality and validation techniques.
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.
• Ability to work effectively with limited contextual information and follow detailed technical specifications.
Bonus Points:
• Experience with Dagster.
• Experience with dbt.
• Experience with containerization technologies (e.g., Docker, Kubernetes).
• Experience with CI/CD pipelines.
• Experience with data visualization tools (e.g., Tableau, Superset).
• Experience with SQL, data science, and analytics.
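For candidates new to the orchestration tooling named above, the sketch below shows what a minimal Dagster ETL pipeline of the kind this role describes might look like: extract a CSV of media metadata, apply a basic data-quality check, and load the result to Parquet. This is an illustrative sketch only; the file names, the asset_id column, and the uniqueness rule are hypothetical placeholders, not details of the actual project.

    import pandas as pd
    from dagster import asset, materialize

    @asset
    def raw_metadata() -> pd.DataFrame:
        # Extract: read a CSV of media metadata (hypothetical path)
        return pd.read_csv("media_metadata.csv")

    @asset
    def validated_metadata(raw_metadata: pd.DataFrame) -> pd.DataFrame:
        # Transform: basic data-quality checks before publishing
        cleaned = raw_metadata.dropna(subset=["asset_id"])  # hypothetical required column
        if not cleaned["asset_id"].is_unique:
            raise ValueError("duplicate asset_id values found")
        return cleaned

    @asset
    def metadata_parquet(validated_metadata: pd.DataFrame) -> None:
        # Load: write validated records to Parquet for downstream teams
        validated_metadata.to_parquet("metadata.parquet", index=False)

    if __name__ == "__main__":
        # Execute the three assets in dependency order
        materialize([raw_metadata, validated_metadata, metadata_parquet])

Dagster infers the dependency graph from the asset function parameters, so validated_metadata runs after raw_metadata and metadata_parquet runs last; materialize() is Dagster's in-process API for executing a set of assets.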