S3 Connections LLC

Data Engineer - Python

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer - Python in Pasadena, CA, on a hybrid work model. Contract length and pay rate are unspecified. Requires 7+ years of experience and expertise in Python, PySpark, Databricks, and PostgreSQL.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
December 16, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Pasadena, CA
🧠 - Skills detailed
#DevOps #DataEngineering #Debugging #Databricks #DataProcessing #Cloud #PySpark #DataPipeline #Scala #AWS (Amazon Web Services) #Databases #API (Application Programming Interface) #Microservices #Python #Spark (Apache Spark) #PostgreSQL #Azure #GCP (Google Cloud Platform)
Role description
Role: Python Developer / Data Engineer
Location: Pasadena, CA (Onsite/Hybrid)
Note: California locals only; an in-person interview is required.
Skills: Python, PySpark, Databricks

Key Responsibilities:
• Design, develop, and maintain backend services and high-performance data pipelines using Python and PySpark (a minimal sketch follows this description).
• Architect and implement scalable APIs and microservices to support business-critical applications and integrations.
• Design and optimize PostgreSQL data models and queries for performance and reliability.
• Collaborate with cross-functional teams, including data engineers, architects, and DevOps, to ensure system robustness and scalability.
• Lead technical design discussions, mentor junior engineers, and enforce best practices in backend development.
• Participate in performance tuning, debugging, and production support.
• Integrate with external systems such as OpenText (experience helpful but not required).

Required Skills and Experience:
• 7+ years of professional software development experience, with a focus on backend systems.
• Expert-level proficiency in Python and PySpark for backend and data processing workloads.
• Experience with Databricks.
• Strong understanding of backend architecture, distributed systems, and API design.
• Experience with PostgreSQL or other relational databases.
• Familiarity with cloud environments (AWS, Azure, or GCP) and CI/CD pipelines.
• Strong problem-solving skills and the ability to work independently on complex projects.
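
To make the pipeline side of the role concrete, here is a minimal sketch of the kind of job it describes: PySpark reading a PostgreSQL table over JDBC and writing a curated Delta output, as it might run on Databricks. The connection URL, credentials, table name, and output path are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

# Read source rows over JDBC; in a real Databricks workspace the URL and
# credentials would come from a secrets manager, not literals.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/appdb")  # hypothetical host/db
    .option("dbtable", "public.orders")                     # hypothetical table
    .option("user", "etl_user")
    .option("password", "***")
    .option("driver", "org.postgresql.Driver")
    .load()
)

# A representative transformation: roll raw orders up to daily totals.
daily = (
    orders.withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.count("*").alias("order_count"), F.sum("total").alias("revenue"))
)

# On Databricks this would typically land in a Delta table; the path is illustrative.
daily.write.format("delta").mode("overwrite").save("/mnt/curated/orders_daily")
```

Nothing here reflects the employer's actual stack; it simply exercises the Python, PySpark, PostgreSQL, and Databricks combination the posting asks for.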