Chelsoft Solutions Co.

Lead Data Engineer / Data Architect (W2 only)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer/Data Architect on a W2 contract, requiring 10+ years of experience in data engineering/architecture, with 3+ years in a lead role. Essential skills include SQL, Python, cloud platforms, and big data frameworks.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
November 11, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Huntsville, TX
-
🧠 - Skills detailed
#ML (Machine Learning) #BigQuery #Spark (Apache Spark) #Databricks #Kubernetes #Kappa Architecture #Computer Science #Redshift #Data Architecture #HDFS (Hadoop Distributed File System) #SQL (Structured Query Language) #Data Quality #Data Engineering #ADLS (Azure Data Lake Storage) #AWS (Amazon Web Services) #Data Processing #Data Governance #Cloud #Python #GCP (Google Cloud Platform) #Data Lake #Scala #Snowflake #S3 (Amazon Simple Storage Service) #Azure #Kafka (Apache Kafka) #Data Pipeline #Data Science #Docker #Data Modeling #Lambda (AWS Lambda) #Big Data #Batch #Data Warehouse
Role description
Position: Lead Data Engineer / Data Architect (W2 only)
Overview
We are seeking an experienced professional to lead the design, implementation, and management of enterprise-grade data solutions. The ideal candidate will have deep expertise in data engineering, data architecture, and cloud-based data platforms, enabling scalable analytics and machine learning solutions.
Education & Experience
• Bachelor's degree in Computer Science, Data Science, Engineering, or a related field.
• Minimum 10 years in data engineering or architecture roles, with at least 3 years in a lead capacity.
Must-Have Skills & Expertise
• Proficiency in SQL and Python.
• Strong experience with cloud platforms (AWS, Azure, or GCP) and their associated data services.
• Hands-on experience with data warehouses (Snowflake, Redshift, BigQuery), Databricks, and data lakes (S3, ADLS, HDFS).
• Expertise in big data processing frameworks (Spark, Flink).
• Knowledge of real-time streaming architectures (Kafka, Kinesis) and Lambda/Kappa architectures.
• Experience in data modeling, data governance, and ensuring high data quality.
• Hands-on experience with containerization and orchestration (Docker, Kubernetes).
• Ability to design and implement end-to-end data pipelines for batch and real-time processing supporting analytics and ML (see the sketch after this description).
Key Responsibilities
• Lead the architecture, design, and management of enterprise data platforms.
• Ensure reliable, clean, and usable data across the organization.
• Implement scalable data workflows and enforce data governance standards.
• Collaborate with cross-functional teams to enable analytics and machine learning initiatives.
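For context on the batch-pipeline requirement above, here is a minimal illustrative PySpark sketch of the kind of work described. It is not part of the original posting: the bucket paths, column names (order_id, order_ts, customer_id, amount), and job name are hypothetical placeholders chosen only to show the read-clean-aggregate-write pattern on a data lake.

```python
# Illustrative only: a minimal PySpark batch pipeline sketch.
# All paths, columns, and names below are hypothetical, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders_daily_batch")  # hypothetical job name
    .getOrCreate()
)

# Read raw events from a data lake location (S3 here; ADLS/HDFS work the same way).
raw = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical path

# Basic data-quality filtering and modeling into a curated aggregate.
curated = (
    raw
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "customer_id")
    .agg(F.sum("amount").alias("daily_spend"))
)

# Write the curated output back to the lake, partitioned for downstream analytics.
(
    curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/daily_spend/")  # hypothetical path
)

spark.stop()
```

In a Lambda-style design like the one the posting references, a job of this shape would serve the batch layer, while a streaming counterpart (for example, Spark Structured Streaming reading from Kafka) would cover the real-time path.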