

Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 12+ month remote contract paying up to $70/hr. It requires 15+ years of experience; expertise in Python, PySpark, AWS, GIS, and Palantir Foundry; and strong data engineering skills.
Country: United States
Currency: $ USD
Day rate: $560
Date discovered: July 14, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Remote
Skills detailed: #Data Engineering #GitLab #AWS (Amazon Web Services) #Security #Automation #Jenkins #Version Control #Compliance #Data Processing #Data Governance #Data Pipeline #GIT #Linux #PySpark #Spark (Apache Spark) #SQL (Structured Query Language) #BitBucket #Agile #Monitoring #Palantir Foundry #Scala #Python #DevOps #Cloud
Role description
Position: Lead Data Engineer (15+ Years Experience)
Job Type: 12+ Months Contract
Location: Remote
Rate: Competitive / hr. W2 or C2C
Role Synopsis:
We are seeking a highly skilled Lead Data Engineer with over 15 years of experience and deep expertise in Python (Data Structures and Algorithms), PySpark, AWS, GIS, and Palantir Foundry. The ideal candidate will be a subject matter expert in Python development and data engineering, with a proven track record of building scalable data pipelines and collaborating with cross-functional teams. This role involves enhancing and maintaining data processing systems while leveraging open-source technologies and cloud platforms to drive innovation and operational efficiency.
Must-Have Requirements:
- 5+ years of hands-on experience with Python, PySpark, and SQL in large-scale distributed data environments (see the illustrative sketch after this list).
- Strong command of data structures and algorithms, with the ability to optimize complex data workflows.
- Proficiency working in a Linux environment.
- Hands-on experience with AWS services and automation tools such as GitLab, Jenkins/CodeBuild, and CodePipeline.
- Prior experience with Palantir Foundry and GIS platforms.
- Strong experience with platform monitoring, metrics, and alerting tools.
- Background in supporting and collaborating with cross-functional product and engineering teams.
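For a sense of the day-to-day work these requirements describe, here is a minimal PySpark sketch of a batch transformation. It is illustrative only; the S3 paths and column names ("region", "value") are hypothetical placeholders, not details from this posting.

```python
# Illustrative sketch only: a minimal PySpark batch transformation.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Read raw records from a (hypothetical) S3 location.
raw = spark.read.parquet("s3://example-bucket/raw/events/")

# Aggregate per region, dropping null measurements first.
summary = (
    raw.filter(F.col("value").isNotNull())
       .groupBy("region")
       .agg(
           F.avg("value").alias("avg_value"),
           F.count("*").alias("n_events"),
       )
)

# Write the curated result back to S3 for downstream consumers.
summary.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/region_summary/"
)
```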
Key Responsibilities:
- Design, develop, and optimize data-processing, orchestration, and monitoring solutions using open-source tools and AWS.
- Partner with product and tech teams to validate and evolve the capabilities of the data platform.
- Implement process improvements that automate manual tasks, enhance usability, and improve scalability.
- Provide technical support and guidance to users of the platform's services.
- Lead the development of metrics, monitoring, and alerting mechanisms to ensure robust visibility into production systems (see the metrics sketch after this list).
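As one small example of the metrics and alerting work mentioned above, the sketch below publishes a custom metric to Amazon CloudWatch with boto3; a CloudWatch alarm on such a metric can then flag stalled or undersized runs. The namespace and metric name ("ExamplePipeline", "RecordsProcessed") are hypothetical, not from the posting.

```python
# Illustrative sketch only: emit a custom pipeline-health metric to CloudWatch.
# Namespace and metric name are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

def report_records_processed(count: int) -> None:
    """Publish a run-level record count; alarms on this metric surface stalled runs."""
    cloudwatch.put_metric_data(
        Namespace="ExamplePipeline",
        MetricData=[{
            "MetricName": "RecordsProcessed",
            "Value": float(count),
            "Unit": "Count",
        }],
    )

# Example usage after a pipeline run:
# report_records_processed(summary.count())
```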
Desired Qualifications (Optional):
- Experience with Git, Bitbucket, and other version control systems.
- Familiarity with data governance, security, and compliance best practices in cloud environments.
- Prior experience working in agile teams and DevOps cultures.
Job Type: Contract
Pay: Up to $70.00 per hour
Schedule:
8-hour shift
People with a criminal record are encouraged to apply
Work Location: Remote