

Saransh Inc
Senior Data Engineer - Remote (US) - Only W2
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a W2 contract, remote in the US, with an undisclosed pay rate. Requires 6-10 years of experience, expertise in Hadoop, Apache Spark, Airflow, and AWS, plus a Bachelor's degree in a related field.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 12, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Athena #Data Lake #Hadoop #Datasets #Batch #Data Processing #Data Pipeline #Apache Spark #Redshift #Version Control #Airflow #Agile #Cloud #Big Data #Spark (Apache Spark) #Data Engineering #Data Orchestration #Scrum #DevOps #AWS (Amazon Web Services) #Data Architecture #Kafka (Apache Kafka) #Data Quality #Apache Airflow #Scala #Computer Science #PySpark #Data Catalog #Data Warehouse #SQL (Structured Query Language)
Role description
Role: Senior Data Engineer
Remote (US) - Anywhere in the US (EST and CST preferred)
Job Type: W2 Contract
Note: Visa Independent candidates are highly preferred (Third party or C2C is not accepted)
Required Skills (Primary)
Hadoop, Apache Spark, Apache Airflow, CI/CD, Kafka
Short Overview Of The Role
• Data Engineer with 6-10 years of experience in building and maintaining scalable, high-performance data pipelines and processing frameworks.
• The ideal candidate will have strong hands-on expertise in orchestration using Apache Airflow and distributed data processing with Apache Spark (EMR).
• This role requires a solid understanding of big data architecture, data engineering best practices, and a commitment to delivering efficient, reliable, and maintainable data solutions that align with business and technical requirements.
Qualifications Required
• Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
• 6-10 years of experience in data engineering or related roles.
• Strong experience with Apache Airflow for data orchestration and workflow management.
• Proven expertise in building and tuning distributed data processing applications using Apache Spark (PySpark), including both Structured Streaming and Batch.
• Experience with AWS cloud-based data ecosystems, particularly Athena and Redshift.
• Proficient in SQL (Athena dialect) and experienced in working with large datasets from various sources (structured and unstructured).
• Experience with data lakes, data warehouses, and batch/streaming data architectures.
• Familiarity with CI/CD pipelines, version control, and DevOps practices in a data engineering context.
Preferred Qualifications
• Experience working in Agile/Scrum environments.
• Knowledge of data quality frameworks and validation engines.
• Experience with data catalog tools.
