Role: Senior Data Engineer - Only W2 Candidates (H4EAD, L2EAD, and OPT EAD)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Plano, TX, for a long-term contract at a pay rate of "TBD." Candidates must have 7+ years of experience, expertise in AWS Kinesis, Spark Streaming, and cloud environments, and strong skills in SQL, Python, and Scala.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 21, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Irving, TX
🧠 - Skills detailed
#Datasets #ADF (Azure Data Factory) #Data Engineering #Apache Kafka #ETL (Extract, Transform, Load) #Storage #Monitoring #Synapse #NoSQL #JSON (JavaScript Object Notation) #Apache Airflow #SQL (Structured Query Language) #AWS (Amazon Web Services) #Migration #Python #Data Processing #Visualization #Data Lake #Azure cloud #Databricks #Cloud #Azure Data Factory #Airflow #Data Storage #BI (Business Intelligence) #Spark (Apache Spark) #Kafka (Apache Kafka) #AWS Kinesis #Microsoft Power BI #Azure #Automation #Scala #Data Transformations #Data Management #Data Pipeline #Apache Spark #Delta Lake #Kubernetes
Role description
Role: Senior Data Engineer - Only W2 Candidates (H4EAD, L2EAD, and OPT EAD)
Location: Plano, TX (Onsite)
Duration: Long term

We are looking for a highly skilled Senior Data Engineer to design, develop, and optimize streaming data pipelines and cloud-based solutions. The ideal candidate will have extensive experience with AWS Kinesis, AWS ECS, Spark Streaming, Databricks, Apache Kafka, and cloud environments, enabling seamless real-time data processing and analytics.

Key Responsibilities:
• Design, develop, and maintain streaming data pipelines using AWS Kinesis, ECS, Spark Streaming, and Databricks (Scala/Python, SQL) in a cloud environment (a minimal streaming sketch follows below).
• Implement orchestration workflows using Apache Airflow to manage end-to-end data processing (see the DAG skeleton below).
• Deploy Apache Spark workloads on Kubernetes (K8s) or AWS ECS, optimizing for scalability and cost efficiency.
• Work with Azure cloud services such as Azure Data Factory (ADF), Synapse Analytics, Fabric, Event Hubs, and Stream Analytics, as well as AWS services such as Kinesis, Firehose, and ECS, to manage large datasets.
• Optimize data storage strategies using Azure Data Lake, Delta Lake, JSON, Parquet, SQL, and NoSQL stores for efficient data management.
• Implement real-time data transformations and structured streaming for low-latency applications.
• Develop dashboards and monitoring solutions using Sumo Logic, AWS QuickSight, Databricks Dashboards, Power BI, Azure Monitor, and Log Analytics for enhanced data visualization and tracking.

Required Skills & Qualifications:
• 7+ years of experience in data engineering or a related field.
• Experience with Databricks migrations across major cloud providers.
• Strong proficiency in SQL, Python, and Scala for data processing.
• Expertise in streaming data technologies such as Kafka, Spark Streaming, and Kinesis.
• Hands-on experience with cloud platforms (Azure, AWS) and their data services.
• Knowledge of containerization and orchestration (Kubernetes, AWS ECS).
• Deep understanding of ETL workflows and data pipeline automation.
• Experience with real-time analytics and structured streaming.

Preferred Qualifications:
• Strong problem-solving and communication skills.
• Ability to work collaboratively across teams to support business intelligence initiatives.
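
To give a sense of the streaming work described above, here is a minimal PySpark Structured Streaming sketch that parses JSON events from a Kafka topic and appends them to a Delta table. The broker address, topic name, schema, and paths are illustrative placeholders, not details from this posting.

```python
# Minimal sketch only: parse JSON events from Kafka and append to Delta.
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Assumed shape of the incoming JSON payload.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw byte stream from Kafka.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
)

# Decode and flatten the JSON payload into columns.
events = (
    raw.select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append to a Delta table, with checkpointing for recovery on restart.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")  # placeholder path
    .outputMode("append")
    .start("/mnt/delta/events")                               # placeholder path
)
query.awaitTermination()
```

On Databricks the Kafka and Delta connectors are preinstalled; on plain Spark the equivalent packages would need to be supplied via --packages. On Databricks, a Kinesis source follows the same pattern via its kinesis reader format.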
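
The Airflow orchestration responsibility typically comes down to defining DAGs along these lines. The skeleton below is a hypothetical example (Airflow 2.4+ assumed); the dag_id, schedule, and task bodies are not taken from this role.

```python
# Illustrative Airflow DAG skeleton (assumes Airflow 2.4+). The dag_id,
# schedule, and task bodies are hypothetical, not from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Pull a batch of source data (placeholder)."""

def transform():
    """Apply data transformations (placeholder)."""

def load():
    """Write results to the lake/warehouse (placeholder)."""

with DAG(
    dag_id="example_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load.
    t_extract >> t_transform >> t_load
```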