Confidential Jobs

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 6-month remote contract, offered as a W2 position. It requires 10+ years of ETL development experience, proficiency in SQL, and hands-on experience with ETL tools and cloud platforms. Familiarity with data mesh concepts is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
December 6, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Integration #Kubernetes #Python #Automation #ADF (Azure Data Factory) #Batch #SQL (Structured Query Language) #Cloud #Databricks #SSIS (SQL Server Integration Services) #Docker #Scala #Informatica #Data Processing #Data Modeling #GCP (Google Cloud Platform) #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #Data Warehouse #API (Application Programming Interface) #Spark (Apache Spark) #DevOps #Data Engineering #Data Lake #Talend #Databases #Java #Azure
Role description
Role: ETL Data Engineer
Duration: 6-month contract
Location: Remote
Note: This is a W2 position. Hiring 5+ ETL developers or data engineers who are flexible and willing to learn new areas such as data mesh and build automation.

Job Description

Required Skills & Experience
• 10+ years of experience in ETL development, data engineering, or related roles.
• Strong hands-on experience with ETL tools (Informatica, Talend, ADF, Pentaho, SSIS, etc.) or cloud-based ETL.
• Proficiency in SQL and experience working with relational and non-relational databases.
• Experience in Python/Scala/Java for data processing.
• Familiarity with cloud platforms such as AWS, Azure, or GCP.
• Understanding of data warehouse concepts, data modeling, and batch/real-time processing.
• Ability to quickly learn and adapt to new technologies and frameworks.

Good to Have
• Exposure to data mesh concepts or modern data platform architecture.
• Experience with Docker, Kubernetes, or containerized environments.
• Knowledge of DevOps practices and CI/CD automation.
• Experience with data lakes, lakehouses, or distributed processing systems (Spark, Databricks, EMR, etc.).
• Understanding of API-based data integration.