Sira Consulting

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer in Plano, TX, offering a 12+ month contract at $50/HR on W2. Requires 8+ years of experience, expertise in Azure, AWS, GCP, SQL, Python, and strong knowledge of data security and governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
🗓️ - Date
February 18, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Plano, TX
-
🧠 - Skills detailed
#Datasets #Data Security #ML (Machine Learning) #Redshift #SQL (Structured Query Language) #Lambda (AWS Lambda) #Scala #AWS S3 (Amazon Simple Storage Service) #Data Modeling #Data Ingestion #Data Warehouse #ETL (Extract, Transform, Load) #Infrastructure as Code (IaC) #Airflow #BigQuery #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Data Architecture #Azure Data Factory #Cloud #ADF (Azure Data Factory) #Dataflow #Data Quality #Kubernetes #Synapse #Azure #Data Engineering #Data Science #Databricks #Security #Data Pipeline #DevOps #Schema Design #Storage #Docker #Compliance #Data Lake #Data Processing #GCP (Google Cloud Platform) #Spark (Apache Spark) #Deployment #Python
Role description
Job Title: Sr. Data Engineer
Location: Plano, TX (Onsite)
Duration: 12+ Months Contract (W2)
Hourly Pay: $50/HR on W2

Job Summary
We are seeking an experienced Data Engineer with strong hands-on expertise in cloud-based data platforms across Azure, AWS, and GCP. The ideal candidate will design, develop, and maintain scalable data pipelines, optimize data architecture, and support analytics and reporting solutions for enterprise applications.

Key Responsibilities
• Design, build, and maintain scalable ETL/ELT data pipelines across Azure, AWS, and GCP environments.
• Develop and manage data ingestion frameworks for structured and unstructured data.
• Implement data lake and data warehouse architectures using modern cloud-native tools.
• Ensure data quality, governance, security, and compliance across platforms.
• Optimize query performance, storage, and compute costs in cloud environments.
• Collaborate with data scientists, analysts, and application teams to deliver reliable datasets.
• Automate deployments using CI/CD pipelines and Infrastructure as Code.
• Monitor and troubleshoot data workflows, job failures, and performance issues.

Required Skills & Qualifications
• 8+ years of experience in Data Engineering or related roles.
• Strong hands-on experience with:
   • Azure: Data Factory, Synapse, Data Lake, Databricks.
   • AWS: S3, Glue, Redshift, Lambda, EMR.
   • GCP: BigQuery, Dataflow, Pub/Sub, Cloud Storage.
• Expertise in SQL, Python, and Spark for large-scale data processing.
• Experience with ETL/ELT tools, workflow orchestration (Airflow), and streaming pipelines.
• Knowledge of data modeling, schema design, and performance tuning.
• Familiarity with DevOps, CI/CD, and containerization (Docker/Kubernetes).
• Strong understanding of data security, governance, and compliance.

Preferred Qualifications
• Experience with real-time data processing and event-driven architectures.
• Exposure to machine learning data pipelines.
• Cloud certifications in Azure, AWS, or GCP.
• Strong communication and stakeholder management skills.