

Compunnel Inc.
Sr. Data Engineer -- MAZDC5731489
Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer; the contract length and pay rate are unspecified. Key skills include generative AI models, SQL, Python, and experience with Snowflake or PostgreSQL. A bachelor's degree in a related field is required.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: January 30, 2026
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: Orlando, FL
Skills detailed: #Knowledge Graph #Computer Science #Kubernetes #Docker #DevOps #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Python #Airflow #GitLab #Data Engineering #AI (Artificial Intelligence) #Databases #Deployment #Mathematics #GitHub #Data Pipeline #Apache Airflow #PostgreSQL #Snowflake
Role description
Sales Representative -- Anindya Mazumdar
Important: Experience with generative AI models
Job Description:
Responsibilities/What You Will Do:
Plan and lead research and development of advanced analytic data solutions that leverage GenAI.
Work with business and technology leaders to understand scope, requirements, business needs, and data from across the company in order to design and deliver the data pipelines needed for the best solutions.
Consult and collaborate with project team members, lead design reviews, do hands-on development, and communicate with colleagues and leaders.
Requirements:
7+ years of experience in data engineering development across multiple environments (Dev, QA, Prod, etc.), using DevOps procedures for code deployment
Experience with a variety of GenAI models, tools, and concepts
5+ years of demonstrated experience and expertise with SQL and Python
3+ years using, designing, and building relational databases (preferably Snowflake or PostgreSQL); a SQL-from-Python sketch follows this list
Experience translating project scope and high-level requirements into technical data engineering tasks
Experience defining solutions to complex data engineering problems in support of advanced analytic processes
Understanding of Knowledge Graphs, Data Mesh, and other data-sharing platforms
3+ years of experience managing and deploying code using a source control product such as GitLab or GitHub
2+ years of experience with job scheduling software such as Apache Airflow, Amazon MWAA, GitLab Runners, or UC4; a minimal Airflow sketch also follows this list
Multiple years of experience with ELT/ETL data pipeline development and maintenance
Experience with container and orchestration technologies such as Docker and Kubernetes
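
To give a concrete sense of the SQL-and-Python expectation above, here is a minimal sketch of a parameterized query against PostgreSQL using psycopg2. The DSN, table, and column names are hypothetical examples, not details from this posting:

    # Minimal sketch: parameterized SQL from Python against PostgreSQL (psycopg2).
    # The DSN, table name, and columns below are hypothetical examples.
    import psycopg2


    def daily_totals(dsn: str, since: str) -> list[tuple]:
        """Return per-day totals from a hypothetical orders table."""
        query = """
            SELECT order_date, SUM(amount) AS total
            FROM orders
            WHERE order_date >= %s
            GROUP BY order_date
            ORDER BY order_date;
        """
        # Parameter binding (%s) keeps the query safe from SQL injection.
        with psycopg2.connect(dsn) as conn:
            with conn.cursor() as cur:
                cur.execute(query, (since,))
                return cur.fetchall()


    if __name__ == "__main__":
        for order_date, total in daily_totals("dbname=analytics user=etl", "2026-01-01"):
            print(order_date, total)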
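
And for the scheduling requirement, a minimal sketch of a daily ELT pipeline written as an Apache Airflow DAG (TaskFlow API, Airflow 2.4+). The DAG name, task logic, and data are hypothetical examples:

    # Minimal sketch: a daily ELT pipeline as an Airflow DAG (TaskFlow API, Airflow 2.4+).
    # The dag_id, task logic, and data below are hypothetical examples.
    from datetime import datetime, timezone

    from airflow.decorators import dag, task


    @dag(
        dag_id="example_elt_pipeline",
        schedule="@daily",
        start_date=datetime(2026, 1, 1),
        catchup=False,
        tags=["example"],
    )
    def example_elt_pipeline():
        @task
        def extract() -> list[dict]:
            # Stand-in for pulling rows from a source system (API, file, database).
            return [{"id": 1, "amount": 42.0}]

        @task
        def transform(rows: list[dict]) -> list[dict]:
            # Stand-in business logic: stamp each row with a load timestamp.
            now = datetime.now(timezone.utc).isoformat()
            return [{**row, "loaded_at": now} for row in rows]

        @task
        def load(rows: list[dict]) -> None:
            # A real pipeline would write to Snowflake or PostgreSQL via a
            # provider hook; here we only report the row count.
            print(f"would load {len(rows)} rows")

        load(transform(extract()))


    example_elt_pipeline()

The same DAG structure carries over to Amazon MWAA, since MWAA runs managed Apache Airflow.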
Education:
Bachelor's degree (Computer Science, Mathematics, Software Engineering, or a related field, or equivalent experience)






