

The Intersect Group
Data Engineer (Mid-level)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Mid-Level Data Engineer on a 12-month contract, paying competitive rates. Candidates need 3-5 years of data engineering experience, proficiency in Python, SQL, and cloud platforms, and must work onsite five days a week.
Country: United States
Currency: $ USD
Day rate: 480
Date: March 24, 2026
Duration: More than 6 months
Location: On-site
Contract: W2 Contractor
Security clearance: Unknown
Location detailed: Irving, TX
Skills detailed: #Big Data #Data Lake #Visualization #Azure #Data Modeling #Python #Microsoft Power BI #Scala #Cloud #Database Design #Data Pipeline #Batch #"ETL (Extract, Transform, Load)" #Documentation #AWS (Amazon Web Services) #Tableau #BI (Business Intelligence) #Computer Science #Mathematics #Data Engineering #SQL (Structured Query Language) #Airflow #PySpark #Databricks #Spark (Apache Spark)
Role description
Mid-Level Data Engineer
Duration: 12-month contract, likely to be extended
Schedule: 5 days onsite
Interview process: 1st round virtual; 2nd round is one hour onsite; possible 3rd round
At The Intersect Group, we are partnering with a nationally recognized retail and consumer services organization that is investing heavily in modern data engineering, MarTech platforms, and customer experience innovation. The company values experimentation, technical rigor, and data-driven decision making while building next-generation capabilities that support millions of daily users. Their teams operate in a fast-paced, highly collaborative environment where engineers are empowered to own meaningful solutions end to end.
Summary
We are seeking a Mid-Level Data Engineer to join the MarTech team responsible for building the data pipelines and platform capabilities that power customer personalization, marketing intelligence, and digital engagement. This role is highly technical and requires strong experience in designing, developing, and optimizing data pipelines within cloud and big data environments.
You will collaborate closely with senior engineers, analysts, and product leaders to support new customer-facing data capabilities, build modern data workflows, and ensure reliable, scalable integrations. This role requires strong communication skills, advanced analytical thinking, and the ability to translate business needs into technical solutions.
Needs:
• Bachelor's degree in Computer Science, Mathematics, Engineering, or a related quantitative field
• 3 to 5 years of experience in data engineering with a strong background in data modeling and analytics
• Strong proficiency in Python, SQL, and Spark or PySpark, and experience with cloud platforms such as AWS or Azure
• Deep understanding of ETL concepts, data lakes, data warehousing, streaming technologies, and database design
• Hands-on experience supporting analytics workflows and business intelligence teams
• Experience with orchestration tools such as Airflow and processing frameworks such as Spark or Databricks
• Familiarity with visualization platforms such as Power BI or Tableau
• Strong communication skills and the ability to translate business requirements into technical designs
• Ability to work five days on site at the client location
• Experience with Databricks is a strong plus
• Experience with customer data platforms such as HighTouch is a plus
• Candidates must be willing and able to work on a W2 contract; no sponsorship is available for this opportunity
Duties:
• Design and develop data pipelines, ETL workflows, and data engineering capabilities aligned to MarTech initiatives
• Optimize existing pipelines for performance, scalability, and cost efficiency
• Build and maintain data infrastructure supporting analytics, metrics development, and business intelligence
• Collaborate with product managers, architects, and analysts to translate requirements into technical implementations
• Work with senior engineers to apply best practices in data modeling, governance, and cloud development
• Support both batch and streaming data workflows across cloud environments
• Participate in cross-functional projects to deliver high-quality data solutions for customer experience and marketing use cases
• Communicate effectively with technical and non-technical partners to articulate design choices and integration plans
• Stay current with industry trends and contribute ideas for new tools, frameworks, and engineering approaches
• Follow and contribute to data engineering processes, standards, and documentation
