

Enexus Global Inc.
Jr. Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Jr. Data Engineer, lasting 12 months+, remote. Required skills include strong Python and PySpark and a solid grasp of data structures. Candidates should have 2-4 years of experience building data pipelines, plus proficiency in Linux and SQL.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: October 2, 2025
Duration: More than 6 months
Location: Remote
Contract: W2 Contractor
Security: Unknown
Location detailed: United States
Skills detailed: #AWS (Amazon Web Services) #Linux #GitLab #Spark (Apache Spark) #Monitoring #Scala #PySpark #Data Pipeline #Jenkins #Python #Automation #BitBucket #SQL (Structured Query Language) #GIT #Data Engineering
Role description
Jr. Data Engineer
Duration: 12 months+
Location: Remote
Contract type: W2/C2C
Skills: Strong Python and PySpark; good with data structures and algorithms
Experience: 2 to 4 years
Responsibilities
• Develop and enhance data-processing, orchestration, monitoring, and related capabilities by leveraging popular open-source software, AWS, and GitLab automation.
• Collaborate with product and technology teams to design and validate the capabilities of the data platform.
• Identify, design, and implement process improvements: automating manual processes, optimizing for usability, and re-designing for greater scalability.
• Provide technical support and usage guidance to the users of our platform's services.
• Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services (a minimal monitoring sketch follows this list).
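To make the metrics-and-alerting responsibility concrete, here is a minimal sketch of publishing a pipeline health metric after a batch run. It assumes AWS CloudWatch as the metrics backend; the namespace, metric name, and job name are hypothetical, since the posting only mentions AWS in general.

```python
# Minimal sketch: publish a row-count metric for a pipeline run to CloudWatch.
# CloudWatch, the namespace, and the metric/job names are illustrative assumptions.
import boto3

def publish_row_count(job_name: str, row_count: int, region: str = "us-east-1") -> None:
    """Emit a simple health metric that alarms and dashboards can be built on."""
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Pipelines",          # hypothetical namespace
        MetricData=[
            {
                "MetricName": "OutputRowCount",      # hypothetical metric name
                "Dimensions": [{"Name": "JobName", "Value": job_name}],
                "Value": float(row_count),
                "Unit": "Count",
            }
        ],
    )

if __name__ == "__main__":
    # Example usage after a batch job finishes.
    publish_row_count("daily_orders_load", 123_456)
```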
Qualifications
• Experience building and optimizing data pipelines in a distributed environment.
• Experience supporting and working with cross-functional teams.
• Proficiency working in a Linux environment.
• Working knowledge of SQL, Python, and PySpark (see the sketch after this list).
• Familiarity with Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline.
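As an illustration of the SQL/Python/PySpark qualification, below is a minimal sketch of the kind of batch pipeline the role describes: reading raw data, aggregating it with Spark SQL, and writing partitioned Parquet. The bucket, paths, columns, and app name are hypothetical and only stand in for a real job.

```python
# Minimal PySpark sketch: read raw events, aggregate with Spark SQL, write Parquet.
# Paths, column names, and the implied schema are hypothetical, for illustration only.
from pyspark.sql import SparkSession

def run_pipeline(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("events_daily_rollup").getOrCreate()

    # Read raw JSON events and expose them to Spark SQL.
    events = spark.read.json(input_path)
    events.createOrReplaceTempView("events")

    # Aggregate with plain SQL: one row per (event_date, event_type).
    daily = spark.sql(
        """
        SELECT event_date, event_type, COUNT(*) AS event_count
        FROM events
        GROUP BY event_date, event_type
        """
    )

    # Write partitioned Parquet for downstream consumers.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(output_path)
    spark.stop()

if __name__ == "__main__":
    run_pipeline(
        "s3://example-bucket/raw/events/",
        "s3://example-bucket/curated/daily_rollup/",
    )
```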