Lorven Technologies Inc.

Python Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Python, PySpark, Java) on a long-term remote contract. Requires 10-15 years of experience, including 5+ years in data engineering, with expertise in PySpark, Python, and familiarity with Energy and Utility projects.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 21, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
California, United States
-
🧠 - Skills detailed
#Pandas #API (Application Programming Interface) #Scala #Data Management #Infrastructure as Code (IaC) #Spark (Apache Spark) #Data Quality #Agile #PySpark #Python #Metadata #Scrum #Data Access #Leadership #TypeScript #Java #Palantir Foundry #Database Design #ETL (Extract, Transform, Load) #Automation #Data Engineering #Computer Science
Role description
Hi, our client is looking for a Data Engineer (Python, PySpark, Java) for a long-term remote contract project; the detailed requirements are below.

Role: Data Engineer (Python, PySpark, Java)
Location: Remote
Duration: Long-term contract

Job description:
MS or equivalent experience in Computer Science, MIS, or related technical fields. 10–15+ years of overall experience, including 5+ years in data engineering/ETL ecosystems using PySpark, Python, and Java. Experience in Energy and Utility projects is a significant advantage.

Responsibilities:
Translate business requirements into technical solutions using PySpark and Python frameworks.
Lead data engineering initiatives addressing moderately complex to highly complex data and analytics challenges.
Plan and execute tasks to meet shared objectives, maintain progress tracking, and document work following best practices.
Identify and implement internal process improvements, including scalable infrastructure design, optimized data distribution, and automation of manual workflows.
Participate actively in Agile/Scrum ceremonies such as stand-ups, sprint planning, and retrospectives.
Contribute to the evolution of data systems and architecture, recommending enhancements to pipelines and frameworks.
Provide technical guidance to team members on complex challenges spanning multiple functional and technical domains.
Build infrastructure that supports large-scale data access and analysis, ensuring data quality and proper metadata management.
Collaborate with leadership to strengthen data-driven decision making through demos, mentorship, and best-practice sharing.

Minimum Qualifications / Required Skills:
Strong expertise in PySpark and Python.
Experience with Pandas, APIs, and Spark Streaming.
Solid understanding of database design fundamentals.
Familiarity with CI/CD tools and infrastructure-as-code frameworks.
Experience writing production-grade code.
Experience with unit tests, integration tests, schema validations, and health checks.
Knowledge of Palantir Foundry (Ontology modeling, API configuration, Foundry Typescript) is a strong plus.
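To illustrate the "schema validations and health checks" requirement above, here is a minimal, hypothetical sketch in plain Python. All names here (`EXPECTED_SCHEMA`, `validate_schema`, `health_check`, and the meter-reading fields) are invented for illustration only; a real PySpark pipeline would more likely enforce schemas with `StructType` definitions or a dedicated data-quality framework.

```python
# Hypothetical sketch of a schema validation + health check step.
# Field names and thresholds are illustrative, not from the job posting.

EXPECTED_SCHEMA = {"meter_id": str, "reading_kwh": float, "timestamp": str}

def validate_schema(record: dict) -> list[str]:
    """Return a list of human-readable schema violations for one record."""
    errors = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(record[field]).__name__}"
            )
    return errors

def health_check(records: list[dict], max_error_rate: float = 0.05) -> bool:
    """Pass the batch only if the share of invalid records stays under a threshold."""
    invalid = sum(1 for r in records if validate_schema(r))
    return (invalid / max(len(records), 1)) <= max_error_rate
```

The same record-level checks would typically also back the unit and integration tests the posting asks for, with the error-rate threshold tuned per pipeline.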