

My3Tech
Sr. Lead Data Engineer - ONLY W2
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Lead Data Engineer on a 9+ month W2 contract, preferably based in Huntsville, TX. Requires 10+ years in data engineering, expertise in SQL, Python, cloud platforms, and data warehousing tools, with strong leadership experience.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
November 11, 2025
Duration
More than 6 months
-
Location
Hybrid
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
Texas, United States
-
Skills detailed
#Security #NoSQL #ML (Machine Learning) #Data Lineage #BigQuery #Spark (Apache Spark) #Databricks #Kubernetes #Kappa Architecture #Computer Science #Redshift #Data Architecture #ETL (Extract, Transform, Load) #HDFS (Hadoop Distributed File System) #Metadata #Data Quality #Data Engineering #SQL (Structured Query Language) #GDPR (General Data Protection Regulation) #ADLS (Azure Data Lake Storage) #AWS (Amazon Web Services) #Data Processing #Storage #Data Governance #Cloud #Data Ingestion #Databases #Python #GCP (Google Cloud Platform) #Data Lifecycle #Data Management #Scala #Snowflake #S3 (Amazon Simple Storage Service) #Graph Databases #Azure #Kafka (Apache Kafka) #Airflow #Visualization #MLflow #Data Science #Data Warehouse #Docker #Compliance #Data Privacy #Data Modeling #Lambda (AWS Lambda) #Big Data #Batch #Data Lake
Role description
Job Title: Sr. Lead Data Engineer (Only W2)
Duration: 9+ Months Contract with possibility of extension
LOCATION:
• PREFERRED: Home office in Huntsville, TX. May work remotely, but must be able to report to the office with advance notice.
Job Description:
We are seeking a highly skilled and experienced professional to lead the design, implementation, and management of end-to-end enterprise-grade data solutions. This role involves expertise in building and optimizing data warehouses, data lakes, and lakehouse platforms, with a strong emphasis on data engineering, data science, and machine learning. You will work closely with cross-functional teams to create scalable and robust architectures that support advanced analytics and machine learning use cases while adhering to industry standards and best practices.
• Education: Bachelor's degree in Computer Science, Data Science, Engineering, or a related field.
• Experience: Minimum 10 years in data engineering, data architecture, or a similar role, with at least 3 years in a lead capacity.
Responsibilities Include:
• Architect, design, and manage the entire data lifecycle, from data ingestion, transformation, storage, and processing to advanced analytics and machine learning databases and large-scale processing systems.
• Implement robust data governance frameworks, including metadata management, lineage tracking, security, compliance, and business glossary development.
• Identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
• Ensure high data quality and reliability through automated data validation and testing, and deliver clean, usable data from datasets in varying states of disorder.
• Develop and enforce architecture standards, patterns, and reference models for large-scale data platforms.
• Architect and implement Lambda and Kappa architectures for real-time and batch data processing workflows, applying strong data modeling capabilities.
• Identify and implement the most appropriate data management system, and enable integration capabilities for external tools to perform ingestion, compilation, analytics, and visualization.
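To illustrate the automated data validation mentioned above, here is a minimal, hypothetical sketch (not taken from the employer's stack): rule-based checks that flag records before they reach the warehouse. The field names and rules are illustrative assumptions only.

```python
# Hypothetical rule-based validation sketch. Field names ("id", "amount",
# "email") and rules are illustrative, not from the posting.

def validate_record(record, rules):
    """Return the names of all rules this record violates."""
    return [name for name, check in rules.items() if not check(record)]

RULES = {
    "id_present": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: isinstance(r.get("amount"), (int, float))
                                     and r["amount"] >= 0,
    "email_has_at": lambda r: "@" in str(r.get("email", "")),
}

def partition_clean(records, rules):
    """Split records into clean rows and rejected rows with reasons."""
    clean, rejected = [], []
    for r in records:
        violations = validate_record(r, rules)
        if violations:
            rejected.append((r, violations))
        else:
            clean.append(r)
    return clean, rejected
```

In a production pipeline, the rejected partition would typically be routed to a quarantine table for review rather than silently dropped.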
REQUIRED SKILLS:
• Proficient in SQL, Python, and big data processing frameworks (e.g., Spark, Flink).
• Strong experience with cloud platforms (AWS, Azure, GCP) and related data services.
• Hands-on experience with data warehousing tools (e.g., Snowflake, Redshift, BigQuery), Databricks running on multiple cloud platforms (AWS, Azure, and GCP), and data lake technologies (e.g., S3, ADLS, HDFS).
• Expertise in containerization and orchestration tools like Docker and Kubernetes.
• Knowledge of MLOps frameworks and tools (e.g., MLflow, Kubeflow, Airflow).
• Experience with real-time streaming architectures (e.g., Kafka, Kinesis).
• Familiarity with Lambda and Kappa architectures for data processing.
• Enable integration capabilities for external tools to perform ingestion, compilation, analytics, and visualization.
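The Lambda architecture named in the skills list can be sketched in a few lines: a complete but slow batch view merged with a speed layer covering recent events. This is a generic illustration of the pattern under assumed event shapes, not the employer's implementation.

```python
# Toy Lambda-architecture sketch: batch view + speed layer + serving merge.
# Event shape ({"key": ...}) is an assumption for illustration only.
from collections import Counter

def batch_view(events):
    """Recompute per-key totals from the full (slow, complete) batch dataset."""
    return Counter(e["key"] for e in events)

def speed_layer(recent_events):
    """Count events that arrived after the last batch run (fast, approximate)."""
    return Counter(e["key"] for e in recent_events)

def serve(batch, speed):
    """Serving layer: merge batch and real-time views per key."""
    return dict(batch + speed)
```

In practice the batch view would be rebuilt periodically (e.g., by Spark) while the speed layer consumes a stream (e.g., from Kafka); a Kappa architecture instead drops the batch path and reprocesses everything through the stream.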
PREFERRED SKILLS:
• Certifications in cloud platforms or data-related technologies.
• Familiarity with graph databases, NoSQL, or time-series databases.
• Knowledge of data privacy regulations (e.g., GDPR, CCPA) and compliance requirements.
• Experience in implementing and managing business glossaries, data governance rules, and metadata lineage, and in ensuring data quality.
• Highly experienced with the AWS cloud platform and Databricks Lakehouse.






