

ACR Technology
Data Engineer (USC & GC Only | NO H1, OPT, CPT....)
Featured Role | Apply direct with Data Freelance Hub
This is a Data Engineer contract of more than 6 months, paying $40.00 - $50.00 per hour. Key skills include SQL, Python, and Spark, along with experience in ETL processes and cloud platforms such as Google BigQuery and AWS. U.S. citizenship or a Green Card is required.
Country
United States
Currency
$ USD
Day rate
400
Date
February 20, 2026
Duration
More than 6 months
Location
On-site
Contract
Unknown
Security
Unknown
Location detailed
California City, CA
Skills detailed
#Data Transformations #Programming #AWS (Amazon Web Services) #REST (Representational State Transfer) #Azure #Spark (Apache Spark) #Airflow #Data Quality #Big Data #Computer Science #Datasets #ETL (Extract, Transform, Load) #Scala #Databases #REST API #BigQuery #GCP (Google Cloud Platform) #Data Pipeline #SQL (Structured Query Language) #Python #Kafka (Apache Kafka) #SQL Queries #Apache Airflow #Data Engineering #Batch #Data Governance #PySpark #Data Modeling #Cloud #Compliance
Role description
Job Description: Data Engineer
Eligibility: U.S. Citizens or Green Card holders only
Responsibilities
Design, build, and maintain scalable ETL pipelines, data workflows, and data warehousing solutions for analytics and reporting.
Develop and optimize data modeling solutions, databases, and large-scale datasets using platforms such as Google BigQuery, Hive, and cloud-native technologies.
Write and optimize complex SQL queries and implement Python, Spark, and Scala-based data transformations.
Build and maintain batch and streaming data pipelines using Spark, PySpark, Scala, and messaging systems such as Kafka (a minimal PySpark batch sketch follows this list).
Orchestrate workflows and automate data processes using Apache Airflow (an Airflow DAG sketch also follows this list).
Integrate data from multiple sources, including third-party systems via REST APIs, ensuring data quality, consistency, governance, and compliance.
Collaborate with product, engineering, and business teams to translate requirements into scalable technical solutions.
Test, deploy, monitor, and support data solutions across cloud environments (AWS, Azure, or GCP).
Document processes, contribute to continuous improvement initiatives, and provide technical guidance to users and cross-functional teams.
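
To give a concrete sense of the Spark work described above, here is a minimal PySpark batch sketch: read raw events, apply basic data-quality filters, aggregate, and write a curated dataset. The bucket paths, column names, and aggregation are hypothetical placeholders, not details from this role.

```python
# Minimal PySpark batch transformation sketch (hypothetical paths and schema).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_orders_rollup").getOrCreate()

# Read one day of raw order events (assumed columns: order_id, customer_id, amount, event_ts).
raw = spark.read.parquet("s3://example-bucket/raw/orders/dt=2026-02-20/")

# Basic data-quality filters before aggregating.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
)

# One row per customer per day for downstream reporting.
daily = (
    clean.withColumn("order_date", F.to_date("event_ts"))
         .groupBy("customer_id", "order_date")
         .agg(
             F.count("order_id").alias("order_count"),
             F.sum("amount").alias("total_amount"),
         )
)

daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_orders/")
spark.stop()
```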
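And as a sketch of the Airflow orchestration duty, the DAG below schedules the transformation daily and follows it with a simple quality check. It assumes Airflow 2.4+ (for the `schedule` argument); the DAG id, task names, and commands are hypothetical.

```python
# Minimal Apache Airflow DAG sketch (hypothetical DAG id, tasks, and commands).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_orders_rollup",
    start_date=datetime(2026, 2, 20),
    schedule="@daily",
    catchup=False,
) as dag:
    # Run the PySpark job sketched above.
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit /opt/jobs/daily_orders_rollup.py",
    )

    # Simple post-load data-quality check.
    quality_check = BashOperator(
        task_id="row_count_check",
        bash_command="python /opt/jobs/check_row_counts.py",
    )

    transform >> quality_check
```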
Qualifications
5+ years of experience in data engineering, analytics, or related fields.
Advanced proficiency in SQL and strong programming skills in Python and/or Scala.
Hands-on experience with ETL processes, data modeling, and data warehousing concepts.
Experience working with cloud data platforms such as Google BigQuery, AWS, or Azure.
Strong experience with Spark (PySpark/Scala) and big data technologies such as Hive and Kafka.
Experience building and managing workflows using Apache Airflow.
Experience integrating systems using REST APIs.
Knowledge of data governance, data quality, and compliance best practices.
Bachelor's degree in Computer Science, Engineering, or equivalent experience.
Job Types: Full-time, Contract, Permanent
Pay: $40.00 - $50.00 per hour
Work Location: In person






