

W2 Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This is a W2 Data Engineer contract position in Houston, TX, offering competitive pay. It requires 7+ years of data engineering experience; expertise in ETL/ELT workflows, AWS, and Databricks; and strong SQL skills. Preferred certifications include Databricks Certified Associate Developer for Apache Spark and AWS Certified Solutions Architect.
Country: United States
Currency: $ USD
Day rate: $520
Date discovered: July 9, 2025
Project duration: Unknown
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Houston, TX
Skills detailed: #Data Encryption #Computer Science #EC2 #Deployment #Apache Airflow #Spark (Apache Spark) #Storage #S3 (Amazon Simple Storage Service) #Databricks #R #Databases #Dataiku #Java #Airflow #Kafka (Apache Kafka) #Data Integration #AWS (Amazon Web Services) #Azure cloud #Scala #Data Pipeline #Data Quality #Data Processing #Data Governance #Data Engineering #Lambda (AWS Lambda) #Terraform #SQL (Structured Query Language) #RDBMS (Relational Database Management System) #ETL (Extract, Transform, Load) #ML (Machine Learning) #Data Science #Security #Automation #Data Lake #Azure #Programming #Model Deployment #NoSQL #Big Data #Data Access #RDS (Amazon Relational Database Service) #Data Modeling #Python #Apache Spark #Hadoop #Talend #Compliance #Cloud
Role description
Job Title: Senior Data Engineer
Location: Houston, TX
Position Type: Contract
Job Description:
As a Senior Data Engineer within the Data and Analytics organization, you will play a crucial role in architecting, implementing, and managing robust, scalable data infrastructure. This position demands a blend of systems engineering, data integration, and data analytics skills to enhance the organization's data capabilities, supporting advanced analytics, machine learning projects, and real-time data processing needs.
QUALIFICATIONS
• Bachelor's or master's degree in computer science, MIS, or another business discipline, or an equivalent combination of education and/or experience.
• 7+ years of experience in data engineering, with a proven track record in designing and operating large-scale data pipelines and architectures.
• Expertise in developing ETL/ELT workflows (a minimal PySpark sketch of such a workflow follows this list).
• Fluency in infrastructure-as-code paradigms (e.g., Terraform).
• Comprehensive knowledge of platforms and services such as Databricks, Dataiku, and AWS native data offerings.
• Solid experience with big data technologies (Databricks, Apache Spark, Hadoop, Kafka) and cloud services (AWS, Azure) related to data processing and storage.
• Strong experience with AWS (preferred) and Azure cloud services, with hands-on experience integrating cloud storage and compute services with Databricks.
• Proficiency in SQL and programming languages relevant to data engineering (Python, Java, Scala).
• Hands-on RDBMS experience (data modeling, analysis, programming, stored procedures).
• Familiarity with machine learning model deployment and management practices is a plus.
• Fluency with CI/CD workflows and automation.
• Strong communication skills, capable of collaborating effectively across technical and non-technical teams.
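To make the ETL/ELT expectation concrete, here is a minimal PySpark sketch of a batch workflow of the kind described above: read raw files from S3, standardize types, deduplicate, and write curated, partitioned output. The bucket paths, column names, and app name are illustrative assumptions, not details taken from this posting.

from pyspark.sql import SparkSession, functions as F

# Spark session; on Databricks a session named `spark` is provided automatically.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: hypothetical raw CSV drop zone in S3.
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: cast types, derive a partition column, deduplicate, drop bad rows.
clean = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_ts"))
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
)

# Load: curated zone, partitioned by date for downstream analytics.
(
    clean.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)

The same shape applies to an ELT variant: land the raw data first, then run the transformations as SQL inside the warehouse or lakehouse instead of in Spark.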
CERTIFICATES, LICENSES, REGISTRATIONS
• Preferred: Databricks Certified Associate Developer for Apache Spark, AWS Certified Solutions Architect, or other relevant certifications.
ESSENTIAL FUNCTIONS:
• Design and implement scalable and reliable data pipelines to ingest, process, and store diverse data at scale, using technologies such as Databricks, Apache Spark, Hadoop, and Kafka.
• Work within cloud environments such as AWS (preferred) or Azure to leverage services including, but not limited to, EC2, RDS, S3, Lambda, and Azure Data Lake for efficient data handling and processing.
• Develop and optimize data models and storage solutions (SQL, NoSQL, data lakes) to support operational and analytical applications, ensuring data quality and accessibility.
• Utilize ETL tools and frameworks (e.g., Apache Airflow, Talend) to automate data workflows, ensuring efficient data integration and timely availability of data for analytics (a minimal Airflow sketch follows this list).
• Implement pipelines with a high degree of automation.
• Collaborate closely with data scientists, providing the data infrastructure and tools needed for complex analytical models, leveraging Python or R for data processing scripts.
• Ensure compliance with data governance and security policies, implementing best practices in data encryption, masking, and access controls within a cloud environment (see the boto3 sketch after this list).
• Monitor and troubleshoot data pipelines and databases for performance issues, applying tuning techniques to optimize data access and throughput.
• Stay abreast of emerging technologies and methodologies in data engineering, advocating for and implementing improvements to the data ecosystem.
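As an illustration of the orchestration work in the Airflow bullet above, here is a minimal Apache Airflow DAG sketch: three Python tasks chained as extract, transform, and load on a daily schedule. The DAG id, task bodies, and schedule are placeholder assumptions; the `schedule` argument assumes Airflow 2.4+ (older releases use `schedule_interval`).

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders():
    """Hypothetical extract step: land raw files in a staging area."""
    print("extracting raw orders")

def transform_orders():
    """Hypothetical transform step: e.g., trigger the Spark job sketched earlier."""
    print("transforming orders")

def load_orders():
    """Hypothetical load step: publish curated data for analytics."""
    print("loading curated orders")

with DAG(
    dag_id="orders_etl",               # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)

    # Linear dependency chain: each task runs only after the previous succeeds.
    extract >> transform >> load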
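Likewise, for the data encryption and access control bullet, a small boto3 sketch of two common practices: writing an S3 object with SSE-KMS server-side encryption, and granting time-limited read access via a presigned URL instead of broad bucket permissions. Bucket and key names are hypothetical.

import boto3

s3 = boto3.client("s3")

# Encryption at rest: ask S3 to encrypt this object with the account's KMS key.
s3.put_object(
    Bucket="example-curated-bucket",            # hypothetical bucket
    Key="orders/2025-07-09/orders.parquet",
    Body=b"...parquet bytes...",                # real code would stream a file here
    ServerSideEncryption="aws:kms",
)

# Scoped access: a URL allowing reads of this one object for one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-curated-bucket", "Key": "orders/2025-07-09/orders.parquet"},
    ExpiresIn=3600,
)
print(url)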
Thanks!