

PySpark Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior PySpark Engineer with a contract length of "unknown" and a pay rate of "unknown." Candidates must have 5+ years of experience in data engineering, strong skills in PySpark and SQL, and preferably experience in the financial services industry.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
June 14, 2025
Project duration
Unknown
Location type
Unknown
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
Charlotte, NC
Skills detailed
#SQL (Structured Query Language) #Deployment #Spark (Apache Spark) #Data Pipeline #Data Architecture #PySpark #Compliance #Python #Code Reviews #SQL Queries #Data Engineering #Spark SQL #Cloud #Data Extraction #Data Quality #Database Systems #ETL (Extract, Transform, Load) #Scala #Big Data
Role description
Job Description:
Our client is seeking a highly skilled and experienced Senior PySpark Developer to join their data engineering team. This role is ideal for a motivated professional with a strong background in big data technologies, ETL processes, and database systems. The successful candidate will play a key role in designing, developing, and optimizing data pipelines and analytics solutions that support critical business operations.
Note: Only candidates eligible to work on a W2 basis will be considered.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using PySpark and Spark SQL (a minimal sketch follows this list).
• Collaborate with data architects, analysts, and business stakeholders to understand data requirements and deliver robust solutions.
• Optimize and troubleshoot complex ETL workflows and data transformation processes.
• Write efficient SQL queries for data extraction, transformation, and loading.
• Ensure data quality, integrity, and compliance with enterprise standards.
• Participate in code reviews, testing, and deployment activities.
• Monitor and maintain production workflows, including job scheduling and performance tuning.
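To give these responsibilities some concrete shape, below is a minimal PySpark extract-transform-load sketch. Every path, table, and column name is a hypothetical placeholder, not something taken from this posting.

from pyspark.sql import SparkSession, functions as F

# All names and paths below are illustrative assumptions.
spark = SparkSession.builder.appName("daily_transactions_etl").getOrCreate()

# Extract: read raw transaction records (placeholder path).
raw = spark.read.parquet("/data/raw/transactions/")

# Transform: basic data-quality rules plus a Spark SQL aggregation.
clean = (
    raw.dropDuplicates(["transaction_id"])        # enforce key uniqueness
       .filter(F.col("amount").isNotNull())       # drop rows failing a quality rule
       .withColumn("trade_date", F.to_date("event_ts"))
)
clean.createOrReplaceTempView("transactions")

daily_totals = spark.sql("""
    SELECT trade_date, account_id, SUM(amount) AS total_amount
    FROM transactions
    GROUP BY trade_date, account_id
""")

# Load: write the curated output, partitioned for downstream reads.
(daily_totals.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet("/data/curated/daily_totals/"))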
Required Skills & Qualifications:
• 5+ years of experience in data engineering or software development.
• Strong proficiency in PySpark, Python, and Spark SQL (see the broadcast-join sketch after this list).
• Solid experience with SQL and relational database systems.
• Proven expertise in ETL development and data pipeline orchestration.
• Familiarity with job scheduling tools; Autosys experience is a plus.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration abilities.
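As one concrete illustration of the PySpark and Spark SQL proficiency asked for above, here is a hedged sketch of a common optimization idiom: broadcasting a small dimension table so Spark can avoid a shuffle join. All data and names are made up for the example.

from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_sketch").getOrCreate()

# Tiny in-memory frames; a real job would read these from storage.
facts = spark.createDataFrame(
    [(1, "2025-06-01", 120.0), (2, "2025-06-01", 80.0)],
    ["account_id", "trade_date", "amount"],
)
dim_accounts = spark.createDataFrame(
    [(1, "retail"), (2, "commercial")],
    ["account_id", "segment"],
)

# Broadcasting the small table lets Spark use a BroadcastHashJoin
# instead of shuffling both sides across the cluster.
enriched = facts.join(broadcast(dim_accounts), on="account_id", how="left")

# Inspect the physical plan to confirm the join strategy.
enriched.explain()

Printing the plan with explain() is a routine troubleshooting step when tuning joins, the kind of optimization work this role describes.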
Preferred Qualifications:
• Experience working in the financial services industry.
• Familiarity with cloud platforms and big data ecosystems.