

SoftStandard Solutions
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer W2 contract position located onsite in TX, CA, VA, NJ, or AZ, requiring 9–12+ years of experience with Python, SQL, Apache Spark, and cloud platforms (AWS, Azure, GCP). Onsite work and willingness to relocate per client requirements are mandatory.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
January 13, 2026
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Azure #Synapse #Batch #Apache Spark #Data Science #Kafka (Apache Kafka) #SQL (Structured Query Language) #GCP (Google Cloud Platform) #Data Ingestion #Compliance #Snowflake #Data Management #Python #AWS (Amazon Web Services) #Agile #Delta Lake #Metadata #Kubernetes #Terraform #MySQL #MLflow #Databricks #Airflow #PostgreSQL #Data Pipeline #MongoDB #Data Warehouse #Scala #BigQuery #PySpark #AI (Artificial Intelligence) #Docker #Data Quality #Automation #ML (Machine Learning) #NoSQL #Storage #Redshift #S3 (Amazon Simple Storage Service) #Security #Dataflow #Data Lake #ETL (Extract, Transform, Load) #Data Engineering
Role description
Job Title: Data Engineer
Location: TX, CA, VA, NJ, AZ
Experience: 9–12+ Years
Employment Type: Contract / W2
Roles & Responsibilities:
• Design, build, and maintain scalable, high-performance data pipelines for structured and unstructured data
• Develop and optimize ETL/ELT workflows to support analytics, AI, and machine learning use cases (a minimal PySpark sketch follows this list)
• Work closely with Data Scientists and ML Engineers to productionize AI/ML models
• Design and manage data lakes, data warehouses, and lakehouse architectures
• Ensure data quality, reliability, governance, and security across platforms
• Optimize data ingestion, transformation, and storage for large-scale, real-time, and batch processing
• Implement streaming data pipelines for near-real-time analytics
• Enable feature engineering and feature stores for AI/ML workflows
• Collaborate with Product, Analytics, and Engineering teams in an Agile environment
• Mentor junior data engineers and drive data engineering best practices
• Support CI/CD pipelines and infrastructure automation for data platforms
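To make the batch side of these responsibilities concrete, here is a minimal PySpark sketch of the kind of pipeline described above: ingest raw JSON events, apply a basic data-quality gate, and write a date-partitioned Delta table. The bucket paths, column names, and table layout are hypothetical, and the Delta write assumes the delta-spark connector is available on the cluster.

# Minimal PySpark batch ETL sketch. Paths and columns are hypothetical;
# the Delta write assumes the delta-spark connector is installed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

raw = spark.read.json("s3://example-raw/events/")          # hypothetical source

cleaned = (
    raw
    .filter(F.col("event_id").isNotNull())                 # basic data-quality gate
    .withColumn("event_date", F.to_date("event_ts"))       # derive partition column
    .dropDuplicates(["event_id"])                          # remove exact duplicates
)

(cleaned.write
    .format("delta")                                       # lakehouse table format
    .mode("overwrite")
    .partitionBy("event_date")                             # keeps downstream reads selective
    .save("s3://example-lake/events_clean/"))              # hypothetical target

Partitioning by event date keeps downstream scans selective; the same skeleton extends to ELT by landing raw data first and pushing transformations into the warehouse.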
Tools, Skills & Technologies:
• Python, SQL, Scala, Apache Spark, PySpark, Databricks, Airflow, Prefect, Dagster
• Apache Kafka, Spark Streaming, Flink, Kinesis, Pub/Sub, real-time & batch processing (see the streaming sketch after this list)
• AWS, Azure, GCP (S3, Glue, EMR, Redshift, Data Factory, Synapse, Data Lake, BigQuery, Dataflow)
• Data Lakes, Lakehouse Architecture, Delta Lake, Iceberg, Hudi, Snowflake, Redshift, BigQuery
• SQL & NoSQL, PostgreSQL, MySQL, MongoDB, Cassandra
• Feature Engineering, Feature Stores (Feast, Databricks), ML Pipelines (MLflow, Kubeflow), LLM & GenAI Data Preparation
• CI/CD, Docker, Kubernetes, Terraform, Data Quality, Metadata Management, Security & Compliance
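For the streaming entries above (Kafka plus Spark Structured Streaming), a minimal sketch of a near-real-time aggregation might look like the following. The broker address, topic, schema, and sink paths are all hypothetical, and the Kafka source requires the spark-sql-kafka package on the cluster.

# Minimal Spark Structured Streaming sketch: Kafka in, windowed counts out.
# Broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")                                   # requires the spark-sql-kafka package
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

counts = (
    events
    .withWatermark("event_ts", "10 minutes")           # bound state for late events
    .groupBy(F.window("event_ts", "1 minute"), "event_type")
    .count()
)

(counts.writeStream
    .outputMode("append")                              # emit only finalized windows
    .format("parquet")                                 # hypothetical sink
    .option("path", "s3://example-lake/event_counts/")
    .option("checkpointLocation", "s3://example-lake/_chk/event_counts/")
    .start()
    .awaitTermination())

The watermark bounds the streaming state so the job can run indefinitely, and the checkpoint location lets the query recover its progress after a restart.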
Note:
• Candidates must be willing to work onsite and relocate per client requirements.
• Interested candidates can apply by sending their resume to garima.patankar@softstandard.com




