

Optomi
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer; the contract length is unspecified and the listed day rate is $624 USD. It requires expertise in data engineering, SQL, Python/Java/Scala, Spark, and AWS. Experience with data pipelines, APIs, and Agile methodologies is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
624
-
🗓️ - Date
December 10, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Los Angeles, CA
-
🧠 - Skills detailed
#Kubernetes #Documentation #Agile #Python #Programming #Infrastructure as Code (IaC) #Spark (Apache Spark) #API (Application Programming Interface) #Java #Data Modeling #Data Quality #Data Engineering #Cloud #Delta Lake #Airflow #Databricks #Data Science #Datasets #GraphQL #SQL (Structured Query Language) #Scala #AWS (Amazon Web Services) #Scrum #Data Governance #Data Processing #Data Pipeline #ETL (Extract, Transform, Load)
Role description
Description:
• As a Senior Data Engineer, you will play a pivotal role in the transformation of data into actionable insights. Collaborate with our dynamic team of technologists to develop cutting-edge data solutions that drive innovation and fuel business growth. Your responsibilities will include managing complex data structures and delivering scalable and efficient data solutions. Your expertise in data engineering will be crucial in optimizing our data-driven decision-making processes. If you're passionate about leveraging data to make a tangible impact, we welcome you to join us in shaping the future of our organization.
Qualifications
• 5+ years of data engineering experience developing large data pipelines.
• Proficiency in at least one major programming language (e.g., Python, Java, Scala).
• Strong SQL skills and the ability to write queries that analyze complex datasets.
• Hands-on production environment experience with distributed processing systems such as Spark.
• Experience efficiently ingesting data from API data sources.
• Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks (see the sketch after this list).
• Hands-on production experience creating and maintaining pipelines with orchestration systems such as Airflow.
• Experience developing APIs with GraphQL.
• Deep understanding of AWS or another major cloud provider, as well as infrastructure as code.
• Familiarity with data modeling techniques and data warehousing standards and best practices.
• Strong algorithmic problem-solving expertise.
• Excellent written and verbal communication.
• Advanced understanding of OLTP vs. OLAP environments.
• Willingness and ability to learn and pick up new skill sets.
• Self-starting problem solver with an eye for detail and excellent analytical and communication skills.
• Strong background in at least one of the following: distributed data processing, software engineering of data services, or data modeling.
• Familiarity with Scrum and Agile methodologies.
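As a rough illustration of the Spark DataFrame work in Databricks that the qualifications above describe, here is a minimal PySpark sketch. The table names, column names, and app name are hypothetical placeholders, not taken from this posting.

```python
# Minimal sketch of a Spark DataFrame workflow of the kind described above.
# Table and column names are hypothetical; on Databricks, `spark` is provided
# by the runtime, so the explicit SparkSession builder is only needed elsewhere.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

# Read raw events from a (hypothetical) table registered in the metastore.
orders = spark.read.table("raw.orders")

# Transform: filter, derive, and aggregate with the DataFrame API.
daily_revenue = (
    orders
    .filter(F.col("status") == "completed")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("revenue"),
    )
)

# Write the result back as a Delta table for downstream consumers.
(
    daily_revenue.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.daily_revenue")
)
```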
Key Responsibilities:
• Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines (a minimal orchestration sketch follows this list)
• Build tools and services to support data discovery, lineage, governance, and privacy
• Collaborate with other software/data engineers and cross-functional teams
• Work within a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS
• Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
• Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
• Ensure high operational efficiency and quality of Core Data platform datasets so that our solutions meet SLAs and deliver reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)
• Be an active participant in and advocate of agile/scrum ceremonies to collaborate and improve processes for our team
• Engage with our customers, forming relationships that allow us to understand and prioritize both innovative new offerings and incremental platform improvements
• Maintain detailed documentation of your work and changes to support data quality and data governance requirements
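As a rough illustration of the pipeline orchestration work described in the responsibilities above, here is a minimal Airflow DAG sketch. The DAG id, schedule, and task are hypothetical placeholders, not taken from this posting.

```python
# Minimal sketch of an Airflow DAG of the kind the responsibilities describe.
# The DAG id, schedule, and task body are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_daily_rollup(**context):
    # Placeholder for the real work, e.g. triggering a Databricks job or a
    # Spark submit; here we only log the logical date of the run.
    print(f"Running rollup for {context['ds']}")


with DAG(
    dag_id="core_data_daily_rollup",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # `schedule_interval` on Airflow versions before 2.4
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    rollup = PythonOperator(
        task_id="daily_rollup",
        python_callable=run_daily_rollup,
    )
```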