Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a long-term, hybrid contract based in Dallas, TX or Miramar, FL, offering a competitive pay rate. Key skills include Python, AWS, Databricks, and ETL development, with 5+ years of relevant experience required.
🌎 - Country
United States
-
💱 - Currency
$ USD
-
💰 - Day rate
584
-
🗓️ - Date discovered
September 26, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Miami-Fort Lauderdale Area
-
🧠 - Skills detailed
#Deployment #Security #SageMaker #Spark (Apache Spark) #dbt (data build tool) #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Databricks #ETL (Extract, Transform, Load) #ML (Machine Learning) #Lambda (AWS Lambda) #DataOps #Data Ingestion #Cloud #Monitoring #SQL (Structured Query Language) #Python #Fivetran #PySpark #Delta Lake #Data Pipeline #Leadership #Scala #Kafka (Apache Kafka) #Computer Science #Programming #Redshift #Documentation #Data Engineering #Amazon Redshift #Apache Spark
Role description
We are seeking a highly skilled Sr. Data Platform Engineer to join our team in a long-term contract, hybrid role based in Dallas, TX or Miramar, FL. Are you passionate about building and supporting modern data platforms in the cloud? We’re looking for a Sr. Data Platform Engineer who thrives in a hybrid role (60% administration, 40% development/support) to help us scale our data and DataOps infrastructure. You’ll work with cutting-edge technologies like Databricks, Apache Spark, Delta Lake, and AWS CloudOps and cloud security while supporting mission-critical data pipelines and integrations. If you’re a hands-on engineer with strong Python skills, deep AWS experience, and a knack for solving complex data challenges, we want to hear from you.

Responsibilities:
• Design, develop, and maintain scalable ETL pipelines and integration frameworks.
• Administer and optimize Databricks and Apache Spark environments for data engineering workloads.
• Build and manage data workflows using AWS services such as Lambda, Glue, Redshift, SageMaker, and S3.
• Support and troubleshoot DataOps pipelines, ensuring reliability and performance across environments.
• Automate platform operations using Python, PySpark, and infrastructure-as-code tools.
• Collaborate with cross-functional teams to support data ingestion, transformation, and deployment.
• Provide technical leadership and mentorship to junior developers and third-party teams.
• Create and maintain technical documentation and training materials.
• Troubleshoot recurring issues and implement long-term resolutions.

Requirements:
• Bachelor’s or Master’s degree in Computer Science or a related field.
• 5+ years of experience in data engineering or platform administration.
• 3+ years of experience in integration framework development, with a strong emphasis on Databricks, AWS, and ETL.
• Strong AWS skills: a fundamental understanding of CloudTrail, CloudWatch, and S3, plus ML platform experience (Glue, Lambda, AWS console management).
• Experience managing Databricks on the AWS platform and integrating it with Redshift.
• Strong programming skills in Python and PySpark.
• Expertise in Databricks, Apache Spark, and Delta Lake.
• Proficiency in AWS CloudOps and cloud security, including configuration, deployment, and monitoring.
• Strong SQL skills and hands-on experience with Amazon Redshift.
• Experience with ETL development, data transformation, and orchestration tools.
• Experience with Kafka for real-time data streaming and integration.
• Experience with Fivetran and dbt for data ingestion and transformation.
• Familiarity with DataOps practices and open-source data tooling.
• Experience with integration tools such as Apache Camel and MuleSoft.
• Understanding of RESTful APIs, message queuing, and event-driven architectures.
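As a rough illustration of the ETL work described above, here is a minimal PySpark/Delta Lake sketch. It assumes a Databricks (or otherwise Delta-enabled) Spark session; the S3 paths and column names are hypothetical, not details of this role.

```python
# Minimal sketch of a PySpark ETL step: read raw data from S3, apply simple
# cleanup, and write a partitioned Delta Lake table. Paths and columns are
# illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (
    spark.read.format("json")
    .load("s3://example-raw-bucket/orders/")  # hypothetical source path
)

cleaned = (
    raw.dropDuplicates(["order_id"])                 # hypothetical key column
    .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
    .filter(F.col("amount") > 0)                      # drop invalid rows
)

(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("s3://example-curated-bucket/orders_delta/")  # hypothetical Delta target
)
```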
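On the AWS side of the role, a common pattern behind the Lambda/Glue/S3 workflows mentioned above is an event-driven trigger. The sketch below is an assumption-laden example, not this team's actual setup: the Glue job name and argument keys are invented for illustration.

```python
# Minimal sketch of an S3-triggered AWS Lambda that starts a Glue ETL job for
# a newly landed object. Job name and argument keys are hypothetical.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Pull the bucket/key of the new object from the S3 event notification.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Kick off the downstream Glue job, passing the object location as job arguments.
    response = glue.start_job_run(
        JobName="orders-curation-job",  # hypothetical Glue job name
        Arguments={"--source_bucket": bucket, "--source_key": key},
    )
    return {"glue_job_run_id": response["JobRunId"]}
```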