

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is a Senior Data Engineer contract position lasting "X months" at a pay rate of "$X/hour". Key skills include Python, Java, SQL, and experience with data warehousing technologies. A Bachelor's or Master's degree in a relevant field is required.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
June 12, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
New York City Metropolitan Area
Skills detailed
#Data Processing #Scala #AI (Artificial Intelligence) #Airflow #Data Quality #Code Reviews #Docker #Java #Data Lake #BigQuery #Automated Testing #AWS (Amazon Web Services) #GCP (Google Cloud Platform) #Cloud #Monitoring #Computer Science #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #ML (Machine Learning) #Data Science #Azure #Automation #Redshift #Data Warehouse #Snowflake #Programming #dbt (data build tool) #Version Control #Deployment #Terraform #Data Engineering #Unit Testing #Spark (Apache Spark) #Python #Kubernetes #Data Pipeline #GIT
Role description
We are seeking a Senior Data Engineer with a strong background in software development and machine learning engineering to design, build, and optimize scalable data pipelines and infrastructure. This role is critical in enabling advanced analytics, AI/ML initiatives, and data-driven decision-making across the organization. You will work closely with data scientists, ML engineers, and software developers to deliver production-grade data products and intelligent systems.
Key Responsibilities:
• Design and build robust, scalable, and efficient data pipelines to support analytics and machine learning workflows.
• Develop and maintain data models, ETL/ELT processes, and data lake/data warehouse architectures.
• Collaborate with ML engineers to productionize machine learning models, ensuring efficient integration with data infrastructure.
• Build and maintain CI/CD pipelines for data and ML workflows, ensuring automation and reproducibility.
• Write clean, modular, and testable code using software engineering best practices (e.g., version control, code reviews, automated testing).
• Monitor data systems for reliability, performance, and data quality; implement monitoring and alerting solutions.
• Work cross-functionally with data scientists, analysts, and product teams to understand data needs and deliver solutions.
• Contribute to architectural decisions and technology evaluations, and mentor junior engineers.
Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• Familiarity with LLMs and their ecosystem.
• 5+ years of experience in data engineering, software engineering, or ML engineering roles.
• Proficiency in programming languages such as Python, Java, or Scala.
• Experience with distributed data processing frameworks such as Spark, Flink, or Beam.
• Deep knowledge of data warehousing technologies (e.g., Snowflake, BigQuery, Redshift) and SQL.
• Familiarity with cloud platforms (AWS, GCP, or Azure) and tools like Airflow, dbt, and Terraform.
• Solid understanding of machine learning workflows, including model training, versioning, deployment, and monitoring.
• Experience with software development practices: Git, unit testing, CI/CD, and containerization (Docker, Kubernetes).
• Strong written and verbal communication skills.