

Senior PySpark ETL Developer @ Onsite_Only W2
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior PySpark ETL Developer in Charlotte, NC, for 12 months+ on a W2 contract. Requires strong PySpark, ETL, and DW/BI experience, proficiency in Informatica and SQL, and familiarity with cloud platforms like Azure or GCP.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 18, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Charlotte, NC
Skills detailed: #Deployment #S3 (Amazon Simple Storage Service) #Cloud #Version Control #Data Pipeline #Data Processing #Python #Agile #Informatica #Scrum #Data Ingestion #Snowflake #Oracle #Azure #Code Reviews #Spark (Apache Spark) #Data Lake #GIT #Data Extraction #GCP (Google Cloud Platform) #Batch #Teradata #Data Analysis #SQL (Structured Query Language) #Unix #PySpark #Dremio #Data Quality #Slowly Changing Dimensions #Data Aggregation #ETL (Extract, Transform, Load) #Data Warehouse #BI (Business Intelligence)
Role description
Senior PySpark ETL Developer
Location: Charlotte, NC
Duration: 12 Months+
Please note: This position is strictly W2, and no sponsorship is available.
Strong expertise in PySpark, ETL development, and Data Warehousing/Business Intelligence (DW/BI) projects is required. The resource will be responsible for end-to-end development covering Financial Attribution, Slowly Changing Dimensions (SCD), Booking and Referring Agreements, Data Aggregations, and SOR Onboarding.
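As a rough illustration of the data-aggregation side of this work, here is a minimal PySpark sketch; the bucket paths and column names (booking_unit, attribution_amt, account_id) are hypothetical placeholders, not details from the posting.

```python
# Minimal aggregation sketch; all paths and column names are assumed for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("attribution-aggregation").getOrCreate()

# Read curated account-level records from the data lake (hypothetical location).
accounts = spark.read.parquet("s3a://example-bucket/curated/accounts/")

# Aggregate attributed amounts by booking unit and reporting period.
attribution = (
    accounts
    .groupBy("booking_unit", "period")
    .agg(
        F.sum("attribution_amt").alias("total_attribution"),
        F.countDistinct("account_id").alias("account_count"),
    )
)

# Persist the aggregate for downstream reporting.
attribution.write.mode("overwrite").parquet("s3a://example-bucket/curated/attribution_summary/")
```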
Key Responsibilities:
• Design, develop, and optimize ETL pipelines using PySpark, S3, and Dremio (see the ingestion sketch after this list).
• Work on ProfitView Modernization, which requires PySpark, Python, Dremio, ETL, and financial-domain experience.
• Work with large-scale structured and unstructured data from various sources.
• Implement data ingestion, transformation, and loading processes into data lakes and data warehouses.
• Collaborate with BI developers, data analysts, and business stakeholders to understand data requirements.
• Ensure data quality, integrity, and governance across all data pipelines.
• Monitor and troubleshoot performance issues.
• Participate in code reviews, testing, and deployment processes.
• Document technical solutions, data flows, and architecture.
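A minimal sketch of what the ingestion, transformation, and load responsibility could look like in PySpark, assuming S3 landing and curated zones and a Parquet data-lake layout queried through Dremio; all bucket paths, column names, and quality rules below are assumptions for illustration only.

```python
# Ingest -> transform -> load sketch; locations, columns, and rules are assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("agreements-etl").getOrCreate()

# Ingest: raw CSV files landed in S3 (hypothetical path).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3a://example-bucket/landing/referring_agreements/")
)

# Transform: typing, deduplication, and a simple data-quality filter.
clean = (
    raw
    .withColumn("agreement_amt", F.col("agreement_amt").cast("decimal(18,2)"))
    .withColumn("load_dt", F.current_date())
    .dropDuplicates(["agreement_id"])
    .filter(F.col("agreement_id").isNotNull())
)

# Load: partitioned Parquet in the curated zone, queryable from Dremio.
(clean.write
      .mode("overwrite")
      .partitionBy("load_dt")
      .parquet("s3a://example-bucket/curated/referring_agreements/"))
```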
Required Skills & Qualifications:
• Strong hands-on experience with PySpark for data processing and transformation.
• Proficiency in ETL tooling: Informatica, Oracle PL/SQL, and Teradata.
• Experience with enterprise frameworks and UNIX shell scripting.
• Experience with job scheduling, batch processing, data analysis, and defect resolution.
• Solid understanding of Data Warehousing concepts (e.g., star/snowflake schema, slowly changing dimensions; see the SCD sketch after this list).
• Experience with cloud platforms (Azure or GCP) and storage services such as S3.
• Strong SQL skills for data extraction, transformation, and analysis.
• Experience with version control systems (e.g., Git) and CI/CD pipelines.
• Excellent problem-solving and communication skills.
• Agile/Scrum knowledge is a plus.
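For the slowly-changing-dimension concept called out above, the following is a simplified SCD Type 2 sketch in PySpark; the business key, tracked attributes, and effective-dating columns are assumptions for illustration, not the project's actual design.

```python
# Simplified SCD Type 2: close out changed rows and append new current versions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Existing dimension and an incoming extract (tiny hypothetical samples).
dim = spark.createDataFrame(
    [(1, "Alice", "NC", "2024-01-01", None, True)],
    "cust_id INT, name STRING, state STRING, eff_start STRING, eff_end STRING, is_current BOOLEAN",
)
incoming = spark.createDataFrame(
    [(1, "Alice", "SC"), (2, "Bob", "GA")],
    "cust_id INT, name STRING, state STRING",
)

today = F.current_date().cast("string")
current = dim.filter("is_current")

# Detect new keys and keys whose tracked attributes changed.
joined = incoming.alias("i").join(
    current.alias("c"), F.col("i.cust_id") == F.col("c.cust_id"), "left"
)
changed = joined.filter(
    F.col("c.cust_id").isNull()
    | (F.col("i.name") != F.col("c.name"))
    | (F.col("i.state") != F.col("c.state"))
).select("i.cust_id", "i.name", "i.state")

# Close out the current versions of changed keys; leave everything else untouched.
current_changed = current.join(changed.select("cust_id"), "cust_id", "left_semi")
untouched = dim.exceptAll(current_changed)
closed = (current_changed
          .withColumn("eff_end", today)
          .withColumn("is_current", F.lit(False)))

# Append the new current versions with open-ended effective dates.
new_versions = (changed
                .withColumn("eff_start", today)
                .withColumn("eff_end", F.lit(None).cast("string"))
                .withColumn("is_current", F.lit(True)))

scd2 = untouched.unionByName(closed).unionByName(new_versions)
scd2.orderBy("cust_id", "eff_start").show()
```

In a warehouse such as Teradata or Snowflake this pattern would typically be expressed as a MERGE against the dimension table, but the compare-close-append idea is the same.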