GBIT (Global Bridge InfoTech Inc)

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a W2 contract in Dallas, TX, requiring 5+ years of experience, strong Python and SQL skills, and expertise in AWS, Azure, or GCP. Familiarity with ETL/ELT workflows and data governance is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 2, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#GCP (Google Cloud Platform) #Scala #Docker #Data Storage #Data Governance #Data Integrity #Azure #PySpark #Data Management #Data Engineering #Spark (Apache Spark) #Data Quality #Python #Kubernetes #Documentation #GitHub #Storage #AWS (Amazon Web Services) #Batch #Monitoring #Data Warehouse #Data Architecture #Data Modeling #ETL (Extract, Transform, Load) #Airflow #Computer Science #Security #Kafka (Apache Kafka) #Apache Spark #Data Pipeline #Data Science #SQL (Structured Query Language) #Automation #Data Processing #Programming #Metadata #Cloud
Role description
Senior Data Engineer | W2 Contract | Dallas, TX (Onsite)

Overview
We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize scalable data pipelines and analytics solutions. The ideal candidate has strong hands-on experience with Python, SQL, Spark, and modern cloud platforms (AWS, Azure, or GCP). You will collaborate with cross-functional teams to enhance data flow, ensure data quality, and support data-driven initiatives across the organization.

Key Responsibilities
• Design, build, and maintain scalable and reliable data pipelines for batch and real-time processing.
• Develop and optimize ETL/ELT workflows, ensuring high performance and data integrity.
• Work closely with Data Scientists, Analysts, and Product teams to understand data needs and deliver robust solutions.
• Implement data quality checks, monitoring, and automated validation frameworks.
• Optimize data storage and processing across cloud platforms (AWS/Azure/GCP).
• Develop reusable components, frameworks, and documentation to support data operations.
• Troubleshoot, tune, and improve existing pipelines for performance, efficiency, and cost control.
• Ensure proper governance, metadata management, and security best practices.

Required Qualifications
• Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
• 5+ years of experience as a Data Engineer or in a similar role.
• Strong programming skills in Python (data processing, automation, APIs).
• Advanced proficiency in SQL and experience working with data warehouses/lakes.
• Hands-on experience with Apache Spark (PySpark preferred).
• Expertise with at least one major cloud platform: AWS, Azure, or GCP.
• Experience building and optimizing ETL/ELT pipelines and workflows.
• Solid understanding of data modeling, distributed systems, and data architecture.

Preferred Qualifications
• Experience with modern orchestration tools (e.g., Airflow, Dagster, Prefect).
• Familiarity with containerization and CI/CD (e.g., Docker, Kubernetes, GitHub Actions).
• Exposure to streaming technologies (e.g., Kafka, Kinesis, Pub/Sub).
• Knowledge of data governance, lineage, and cataloging tools.
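For candidates gauging fit, the day-to-day work the responsibilities describe looks roughly like the sketch below: a minimal PySpark batch ETL job with a simple data quality gate before the load step. The paths, column names, and the 95% retention threshold are hypothetical placeholders for illustration, not details from the posting.

```python
# Minimal PySpark ETL sketch: read raw events, validate, write a curated table.
# All paths, column names, and thresholds are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: load a raw batch (placeholder path)
raw = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: deduplicate, drop incomplete rows, derive a partition column
clean = (
    raw
    .dropDuplicates(["order_id"])
    .filter(F.col("order_total").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)

# Data quality gate: fail the run if too many rows were dropped
raw_count, clean_count = raw.count(), clean.count()
if raw_count > 0 and clean_count / raw_count < 0.95:
    raise ValueError(f"Data quality check failed: kept {clean_count}/{raw_count} rows")

# Load: write the curated output partitioned by date (placeholder path)
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```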
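The preferred qualifications call out orchestration tools such as Airflow. As a rough sketch of how a job like the one above might be scheduled, assuming Airflow 2.x and a hypothetical spark-submit entry point:

```python
# Minimal Airflow 2.x DAG sketch; dag_id, schedule, and the spark-submit
# command are hypothetical placeholders, not details from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="orders_etl_daily",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",  # run the batch ETL once per day
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="run_orders_etl",
        # submit the PySpark job sketched above
        bash_command="spark-submit orders_etl.py",
    )
```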