

BrothersTech
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a full-time, 100% remote Data Engineer contract lasting more than 6 months, offering competitive pay. It requires 4–8+ years of experience, strong SQL skills, proficiency in Python, Java, or Scala, and hands-on experience with cloud platforms such as AWS, Azure, or GCP.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 17, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Scala #Computer Science #Data Modeling #Snowflake #SQL (Structured Query Language) #Datasets #Database Systems #AWS (Amazon Web Services) #Airflow #DevOps #Cloud #Data Science #ETL (Extract, Transform, Load) #ML (Machine Learning) #Data Warehouse #Data Lake #GCP (Google Cloud Platform) #Azure #SQL Queries #Database Performance #dbt (data build tool) #Java #Security #Redshift #BigQuery #Compliance #Data Quality #Programming #Data Pipeline #Hadoop #Data Governance #Kafka (Apache Kafka) #Monitoring #Data Processing #Spark (Apache Spark) #Big Data #Informatica #BI (Business Intelligence) #Python #Docker #Data Engineering #Databricks
Role description
Location: United States (100% Remote)
Employment Type: Full-Time / Contract (W2)
Experience Required: 4–8+ Years
Key Responsibilities
• Design, build, and maintain scalable ETL/ELT data pipelines for processing large datasets (see the pipeline sketch after this list).
• Develop and manage data warehouses and data lakes to support business intelligence and analytics.
• Write efficient SQL queries and optimize database performance for large-scale data environments.
• Partner with data scientists, analysts, and product teams to understand data requirements.
• Implement data quality checks, validation frameworks, and monitoring processes.
• Build and maintain cloud-based data infrastructure using platforms such as AWS, Azure, or GCP.
• Optimize data workflows and automate data processing tasks.
• Ensure data governance, security, and compliance standards are followed.
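The responsibilities above center on orchestrated ETL/ELT with built-in quality gates. As a rough illustration only, here is a minimal sketch of what such a pipeline might look like as an Airflow DAG (assuming Airflow 2.4+); the DAG name, schedule, and all task logic are hypothetical placeholders, not part of the posting.
```python
# Minimal sketch of a daily ETL pipeline with a data quality gate.
# Assumes Airflow 2.4+; dag_id, schedule, and task bodies are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw records from a source system.
    ...


def transform():
    # Placeholder: clean, deduplicate, and reshape the extracted data.
    ...


def quality_check():
    # Placeholder: fail the run when validation rules are violated,
    # e.g. an empty load or null join keys.
    rows_loaded = 1  # hypothetical count returned by the load step
    if rows_loaded == 0:
        raise ValueError("quality check failed: no rows loaded")


with DAG(
    dag_id="daily_orders_etl",        # hypothetical pipeline name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_check = PythonOperator(task_id="quality_check", python_callable=quality_check)

    t_extract >> t_transform >> t_check
```
Placing the quality gate last means a failed validation blocks downstream consumers instead of silently publishing bad data.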
Required Skills & Qualifications
• Bachelor’s degree in Computer Science, Engineering, or a related field.
• 4–8+ years of experience as a Data Engineer or in a similar role.
• Strong experience with SQL and database systems.
• Proficiency in programming languages such as Python, Java, or Scala.
• Hands-on experience with ETL tools and data pipeline frameworks (Airflow, dbt, Informatica, etc.).
• Experience with Big Data technologies such as Spark, Hadoop, or Kafka (see the Spark sketch after this list).
• Strong knowledge of data modeling, data warehousing, and distributed systems.
• Experience with cloud platforms like AWS, Azure, or Google Cloud.
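To ground the Spark requirement, here is a minimal PySpark sketch of the kind of large-scale filter-and-aggregate job the role describes; the S3 paths and column names are hypothetical.
```python
# Minimal PySpark sketch: filter and aggregate a large order dataset.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_revenue").getOrCreate()

# Hypothetical raw input location.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

daily = (
    orders
    .filter(F.col("status") == "completed")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("daily_revenue"))
)

# Hypothetical curated output location.
daily.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_revenue/")
```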
Preferred Qualifications
• Experience with Snowflake, Redshift, BigQuery, or Databricks (see the query sketch after this list).
• Familiarity with containerization and DevOps tools (Docker, CI/CD).
• Experience working in remote or distributed teams.
• Exposure to machine learning data pipelines.
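As one illustration of the cloud warehouse experience listed above, here is a minimal sketch of querying BigQuery from Python using the google-cloud-bigquery client; the dataset, table, and column names are hypothetical, and credentials are assumed to come from the environment.
```python
# Minimal sketch of running an analytical query against a cloud warehouse,
# here BigQuery via the google-cloud-bigquery client.
# The dataset, table, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # credentials resolved from the environment

QUERY = """
    SELECT order_date, SUM(amount) AS daily_revenue
    FROM `analytics.orders`  -- hypothetical dataset.table
    GROUP BY order_date
    ORDER BY order_date
"""

for row in client.query(QUERY).result():
    print(row.order_date, row.daily_revenue)
```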