

Professional Search Group
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a contract of unspecified length and pay rate. Key skills include Python, Scala, SQL, and experience with Airflow and Spark. Preferred candidates have clickstream data experience and strong problem-solving abilities.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 4, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Delta Lake #Libraries #Snowflake #Data Science #Python #Cloud #AWS (Amazon Web Services) #Datasets #S3 (Amazon Simple Storage Service) #Scrum #Data Engineering #Scripting #Databricks #Programming #Documentation #Data Governance #Spark (Apache Spark) #SQL (Structured Query Language) #Airflow #Data Pipeline #Agile #Scala #Data Quality
Role description
The Data Solutions team is seeking to grow its team of world-class Data Engineers who share their charisma and enthusiasm for making a positive impact.
Responsibilities:
• Contribute to maintaining, updating, and expanding existing Python / Spark data pipelines while meeting strict uptime SLAs
• Architect, design, and code shared libraries in Scala and Python that abstract complex business logic to allow consistent functionality across all data pipelines
• Tech stack includes Airflow, Spark, Databricks, Delta Lake, Snowflake, Scala, and Python (a minimal pipeline sketch follows this list)
• Collaborate with product managers, architects, and other engineers to drive the success of Product Performance Data and support key business stakeholders
• Contribute to developing and documenting both internal and external standards for pipeline configurations, naming conventions, partitioning strategies, and more
• Ensure high operational efficiency and dataset quality so that our solutions meet SLAs and deliver reliability and accuracy to all our partners (Engineering, Data Science, Operations, and Analytics teams)
• Be an active participant in and advocate of agile/scrum ceremonies to collaborate on and improve our team's processes
• Engage with our customers, forming relationships that allow us to understand and prioritize both innovative new offerings and incremental platform improvements
• Maintain detailed documentation of your work and changes to support data quality and data governance requirements
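For illustration only (not part of the original posting): a minimal sketch of the kind of Airflow-orchestrated Spark pipeline described above, assuming Airflow 2.4+ with the Apache Spark provider installed. The DAG id, schedule, and application path are hypothetical placeholders.

```python
# Minimal, illustrative Airflow DAG that submits a Spark job on a daily schedule.
# All names and paths below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="product_performance_daily",  # hypothetical pipeline name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submits a PySpark application; the script path and connection are placeholders.
    transform = SparkSubmitOperator(
        task_id="transform_product_events",
        application="/opt/jobs/transform_product_events.py",
        conn_id="spark_default",
    )
```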
Basic Qualifications
• At least 5 years of data engineering experience developing large data pipelines
• Strong algorithmic problem-solving expertise
• Strong fundamental Python programming skills
• Basic understanding of AWS or other cloud provider resources (S3)
• Strong SQL skills and ability to create queries to analyze complex datasets
• Hands-on production environment experience with distributed processing systems such as Spark
• Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
• Some scripting language experience
• Willingness and ability to learn and pick up new skills
• Self-starting problem solver with an eye for detail and excellent analytical and communication skills
Preferred Qualifications
• Candidates with clickstream / user-browse data experience are highly preferred (see the sketch below)
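For illustration only (not part of the original posting): a minimal PySpark sketch of the kind of clickstream / user-browse analysis the preferred qualification refers to. The input path and column names are hypothetical placeholders.

```python
# Minimal, illustrative PySpark aggregation over clickstream events.
# The S3 path and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream_example").getOrCreate()

# Hypothetical clickstream events with user_id, page, and event_ts columns.
events = spark.read.parquet("s3://example-bucket/clickstream/")

# Daily page views per user: the kind of complex-dataset aggregation
# the SQL / Spark qualifications refer to.
daily_views = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(F.count("page").alias("page_views"))
)

daily_views.show(10)
```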
We need an expert in SQL with extensive Scala experience: a proven self-starter (expected to identify the outcome and then chase after it) who can not only speak technically but also clearly articulate that information to the business.
NO C2C






