

Data Engineer - REMOTE - W2 Contract
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer / PySpark Developer on a 12+ month W2 contract; the pay rate is unspecified. It requires 3-8 years of experience with PySpark and AWS services, plus strong Python and SQL skills.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 10, 2025
Project duration: More than 6 months
Location type: Unknown
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: California, United States
Skills detailed: #PostgreSQL #Airflow #Data Engineering #Data Ingestion #ETL (Extract, Transform, Load) #Data Manipulation #Scala #NiFi (Apache NiFi) #Spark (Apache Spark) #Consulting #Data Lake #Aurora PostgreSQL #Python #Data Pipeline #PySpark #AWS (Amazon Web Services) #SQL (Structured Query Language) #Aurora #S3 (Amazon Simple Storage Service) #Apache NiFi
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, CaritaTech LLC., is seeking the following. Apply via Dice today!
Hi,
Greetings from CaritaTech.
Title: Data Engineer / PySpark Developer
Duration: 12+ months
Contract type: W2 contract (NO C2C/C2H)
Visa Status: Any visa
Experience range: 3 to 8 years
• Expertise in PySpark with a strong background in building scalable data pipelines
• Solid hands-on experience with Spark Streaming for real-time data ingestion and processing (a minimal streaming sketch follows this list)
• Practical knowledge of AWS services including EMR, EKS, and Airflow for orchestrating and managing data workflows (see the Airflow sketch below)
• Proficiency with Apache NiFi for efficient data ingestion, routing, and transformation
• Hands-on experience with Iceberg tables and managing S3-based data lake architectures (see the Iceberg sketch below)
• Familiarity with AWS Aurora PostgreSQL and the ability to integrate with external systems via APIs (see the JDBC sketch below)
• Strong command of Python, SQL, and T-SQL for data manipulation and querying
• A proven track record of optimizing and tuning the performance of distributed data jobs
• Previous experience in client-facing or consulting roles, with the ability to understand and address customer needs
• Excellent interpersonal and communication skills, with the ability to explain complex technical concepts clearly
• A self-motivated individual who is enthusiastic about innovation and continuous improvement in data engineering practices
• A strong analytical mindset with a keen eye for problem-solving and attention to detail
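As a concrete reference point for the first two bullets, here is a minimal PySpark Structured Streaming sketch; the Kafka broker, topic name, event schema, and S3 paths are hypothetical placeholders, not details of this engagement.

```python
# Minimal Structured Streaming sketch: Kafka -> parse JSON -> Parquet on S3.
# All endpoints and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("streaming-ingest").getOrCreate()

# Assumed event schema, for illustration only.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "events")                     # hypothetical topic
       .load())

# Kafka delivers bytes; cast the value to a string and parse against the schema.
parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("e"))
          .select("e.*"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/landing/events/")            # hypothetical
         .option("checkpointLocation", "s3://example-bucket/checkpoints/") # hypothetical
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()
```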
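For the EMR/Airflow bullet, here is a sketch of a DAG that submits a spark-submit step to a running EMR cluster, assuming Airflow 2.x with the Amazon provider package installed; the cluster ID, script path, and schedule are placeholders.

```python
# Minimal Airflow DAG sketch: submit a PySpark step to an existing EMR cluster.
# Assumes apache-airflow-providers-amazon is installed; IDs and paths are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator

with DAG(
    dag_id="daily_pyspark_ingest",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    run_spark_job = EmrAddStepsOperator(
        task_id="run_spark_job",
        job_flow_id="j-XXXXXXXXXXXXX",  # hypothetical EMR cluster ID
        steps=[{
            "Name": "ingest",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://example-bucket/jobs/ingest.py"],
            },
        }],
    )
```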
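For the Iceberg bullet, here is a sketch of creating and appending to an Iceberg table over an S3 warehouse, assuming the iceberg-spark-runtime package is on the Spark classpath; the catalog name, warehouse path, and table are hypothetical.

```python
# Minimal Iceberg-on-S3 sketch: create a partitioned table and append a batch.
# Requires the iceberg-spark-runtime jar; names and paths are placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("iceberg-demo")
         .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
         .config("spark.sql.catalog.lake.type", "hadoop")
         .config("spark.sql.catalog.lake.warehouse",
                 "s3://example-bucket/warehouse/")  # hypothetical warehouse path
         .getOrCreate())

# days(event_time) is an Iceberg hidden partition transform.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.db.events (
        user_id    STRING,
        event_type STRING,
        event_time TIMESTAMP
    ) USING iceberg
    PARTITIONED BY (days(event_time))
""")

# Append a batch, e.g. the landing data from the streaming sketch above.
df = spark.read.parquet("s3://example-bucket/landing/events/")  # hypothetical path
df.writeTo("lake.db.events").append()
```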
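Finally, for the Aurora PostgreSQL bullet, a sketch of reading a table over JDBC from Spark; the host, database, table, and credentials are placeholders, and the PostgreSQL JDBC driver must be on the classpath.

```python
# Minimal JDBC read sketch against Aurora PostgreSQL; all connection
# details below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("aurora-read").getOrCreate()

customers = (spark.read.format("jdbc")
             .option("url", "jdbc:postgresql://aurora-host:5432/appdb")  # hypothetical
             .option("dbtable", "public.customers")                     # hypothetical
             .option("user", "etl_user")
             .option("password", "***")  # use a secrets manager in practice
             .option("driver", "org.postgresql.Driver")
             .load())

customers.show(5)
```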