

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 3+ years of experience in cloud environments, offering a hybrid work model and a day rate of $640. Key skills include Python, SQL, Spark, and ETL/ELT workflows. Experience in healthcare, retail, or finance is preferred.
Country
United States
Currency
$ USD
Day rate
640
Date discovered
July 6, 2025
Project duration
Unknown
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Princeton, NJ
Skills detailed
#GIT #Data Governance #Automated Testing #Azure #GCP (Google Cloud Platform) #Kubernetes #Terraform #Documentation #Docker #Security #Cloud #Datasets #Automation #Storage #Data Science #Jenkins #AI (Artificial Intelligence) #Data Warehouse #Spark (Apache Spark) #Compliance #Schema Design #Snowflake #Scala #Data Manipulation #Data Quality #Apache Airflow #Batch #Airflow #Observability #SQL (Structured Query Language) #DevOps #Data Engineering #Data Modeling #GitHub #Python #ML (Machine Learning) #AWS (Amazon Web Services) #BigQuery #GDPR (General Data Protection Regulation) #PySpark #Infrastructure as Code (IaC) #dbt (data build tool) #ETL (Extract, Transform, Load) #Kafka (Apache Kafka) #Redshift
Role description
About The Opportunity
Operating at the intersection of Information Technology and Digital Transformation services, we architect and deliver cloud-native data platforms that power analytics, AI, and business automation for clients in healthcare, retail, and finance. Our mission is to turn raw, high-velocity data into trusted, actionable insights through secure, scalable, and cost-efficient engineering practices.
Role & Responsibilities
• Design, build, and maintain batch and streaming pipelines that ingest, transform, and curate terabyte-scale datasets for reporting and machine learning use cases.
• Create robust ETL/ELT workflows using Python, SQL, and Spark, orchestrated via Apache Airflow or similar schedulers (see the sketch after this list).
• Model relational and dimensional schemas, optimizing storage, query performance, and documentation within modern data warehouses (Snowflake, Redshift, BigQuery).
• Implement data quality, lineage, and observability frameworks to ensure accuracy, consistency, and regulatory compliance.
• Collaborate cross-functionally with product managers, analysts, and data scientists to translate business requirements into scalable technical solutions.
• Champion DevOps best practices (CI/CD, infrastructure as code, automated testing) to accelerate delivery and reduce operational risk.
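For illustration, a minimal sketch of the kind of Airflow-orchestrated ELT pipeline described above (assumes Airflow 2.4+; the DAG id, task names, and step contents are hypothetical placeholders, not this team's actual pipeline):

```python
# Minimal Airflow ELT sketch; DAG id, tasks, and steps are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    """Pull the previous day's raw orders (placeholder for an S3/API read)."""
    ds = context["ds"]  # Airflow's logical date, e.g. "2025-07-06"
    print(f"extracting raw orders for {ds}")


def transform_orders(**context):
    """Clean and conform the extract (placeholder for a Spark or dbt step)."""
    print("transforming orders into the curated schema")


def load_orders(**context):
    """Load curated rows into the warehouse (placeholder for a COPY/MERGE)."""
    print("loading curated orders into the warehouse")


with DAG(
    dag_id="daily_orders_elt",      # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)

    # Linear extract -> transform -> load dependency chain.
    extract >> transform >> load
```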
Skills & Qualifications
Must-Have
• 3+ years of hands-on data engineering experience in cloud environments (AWS, Azure, or GCP).
• Advanced proficiency in Python and SQL for data manipulation and pipeline automation.
• Expertise with Spark or PySpark for distributed processing of large datasets (see the sketch after this list).
• Proven track record designing ETL/ELT workflows and scheduling with Airflow, Prefect, or similar tools.
• Deep understanding of data warehousing principles, schema design, and performance tuning.
• Git-centric workflow with Docker or Kubernetes exposure.
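As a rough illustration of the PySpark proficiency called for above, a minimal batch transform (the input path, column names, and output location are hypothetical):

```python
# Minimal PySpark batch transform; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_batch_transform").getOrCreate()

# Read a partitioned Parquet dataset (placeholder path).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Typical curation steps: filter bad rows, derive columns, aggregate.
daily_revenue = (
    orders
    .filter(F.col("status") == "completed")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("customers"),
    )
)

# Write back partitioned by date for efficient downstream queries.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)
```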
Preferred
• Experience implementing real-time streaming solutions with Kafka or Kinesis (see the sketch after this list).
• Knowledge of dbt, Terraform, and CI/CD pipelines in GitHub Actions or Jenkins.
• Background in data governance, security, and compliance frameworks (GDPR, HIPAA).
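And a minimal sketch of consuming a real-time Kafka stream, assuming the kafka-python client (topic name, broker address, and payload fields are hypothetical):

```python
# Minimal Kafka consumer sketch; topic, brokers, and payload are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders.events",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # hypothetical broker list
    group_id="orders-etl",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # A real pipeline would validate, enrich, and forward each event to the
    # warehouse or a downstream topic rather than print it.
    print(event.get("order_id"), event.get("status"))
```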
Benefits & Culture Highlights
• Hybrid work model offering flexible on-site collaboration and remote focus days.
• Access to a cutting-edge tech stack and sponsored certifications to accelerate career growth.
• Inclusive, results-driven culture that values innovation, autonomy, and continuous learning.
Skills: terraform, azure, kinesis, airflow, python, data modeling, sql, dbt, pyspark, etl, docker, github actions, apache airflow, spark, jenkins, git, data warehousing, kafka, elt, ci/cd, kubernetes