Motion Recruitment

Data Pipeline Engineer / Irving / Charlotte / Hybrid

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Pipeline Engineer in Irving, TX or Charlotte, NC (Hybrid) on an 18-month contract, requiring expertise in SQL, Python, Spark, and data orchestration tools. Experience with cloud platforms, particularly GCP, is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
January 15, 2026
🕒 - Duration
18 months
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Irving, TX or Charlotte, NC
🧠 - Skills detailed
#Airflow #SAS #Cloud #Databases #Data Pipeline #Data Processing #SSIS (SQL Server Integration Services) #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Migration #Python #SQL (Structured Query Language) #Data Analysis #Scala #Ab Initio #Data Engineering #Data Quality #GCP (Google Cloud Platform) #Data Lake
Role description
Outstanding long-term contract opportunity! A well-known financial services company is looking for a Data Pipeline Engineer in Irving, TX or Charlotte, NC (Hybrid). We are seeking an experienced Data Pipeline Engineer to design, architect, and maintain scalable data pipelines that support reporting and downstream applications. The ideal candidate will bring strong expertise in cloud-based and open-source data technologies, modern data lake architectures, and data engineering best practices.

Contract Duration: 18 months

Required Skills & Experience
• Expertise in SQL for data analysis, transformation, and performance tuning.
• Strong hands-on experience with Python and Spark for large-scale data processing.
• Experience with data pipelining and orchestration tools (e.g., Airflow, Cloud Composer, or equivalent).
• Solid understanding of databases, data warehousing, and data lake architectures.
• Proven experience designing and architecting data pipelines for analytics and reporting.
• Experience working with legacy ETL tools such as SSIS, Ab Initio, SAS, or equivalent.
• Strong analytical, critical thinking, and problem-solving skills.
• Ability to adapt quickly to evolving technologies and business requirements.

Desired Skills & Experience
• Expertise in cloud platforms, with Google Cloud Platform (GCP) strongly preferred.
• Experience building cloud-native data lake solutions using open-source technologies.

What You Will Be Doing
• Design, architect, and implement scalable data pipelines for reporting and downstream applications using open-source tools and cloud platforms.
• Build and support cloud-based data lake architectures for both operational and analytical data stores.
• Apply strong database, SQL, and reporting concepts to design efficient, high-performance data solutions.
• Develop data processing and transformation logic using Python and Spark (see the PySpark sketch below).
• Work with and interpret legacy ETL code and workflows from tools such as SSIS, Ab Initio, and SAS to support modernization and migration initiatives.
• Use modern data pipelining and orchestration tools to automate, monitor, and optimize data workflows (see the Airflow sketch below).
• Troubleshoot data pipeline issues, ensure data quality, and optimize performance and reliability.
• Demonstrate critical thinking, adaptability to change, and strong problem-solving skills.

Posted By: Rachel LeClair
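
As a rough illustration of the Python and Spark transformation work described above, here is a minimal PySpark sketch. Everything in it is hypothetical: the dataset name (raw_txns), the columns (txn_id, amount, txn_ts), and the storage paths are illustrative placeholders, not details from this role.

```python
# Illustrative PySpark job: deduplicate raw records and aggregate daily
# transaction totals for a downstream report. All names and paths below
# are hypothetical placeholders, not taken from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-txn-rollup").getOrCreate()

# Hypothetical raw-zone input in a cloud data lake (e.g., GCS on GCP).
raw = spark.read.parquet("gs://example-lake/raw/raw_txns/")

daily = (
    raw
    .dropDuplicates(["txn_id"])                   # guard against replayed events
    .withColumn("txn_date", F.to_date("txn_ts"))  # truncate timestamp to day
    .groupBy("txn_date")
    .agg(
        F.count("txn_id").alias("txn_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Write a partitioned curated dataset for reporting consumers.
daily.write.mode("overwrite").partitionBy("txn_date").parquet(
    "gs://example-lake/curated/daily_txn_rollup/"
)
```

Deduplicating on a business key before aggregating is a common guard against double-delivered events in the kind of reporting pipelines this role covers.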
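And a minimal sketch of the orchestration side, assuming Airflow 2.4+ syntax (Cloud Composer, named in the posting, is GCP's managed Airflow service). The DAG id, schedule, script path, and validation step are all hypothetical.

```python
# Illustrative Airflow DAG: run the Spark rollup daily, then gate
# publication on a simple data-quality check. IDs and paths are
# hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def check_output_row_count(**context):
    # Placeholder quality gate: a real pipeline would query the curated
    # table and raise an exception if the row count is implausible,
    # failing the task and halting downstream publication.
    print("row-count check passed")


with DAG(
    dag_id="daily_txn_rollup",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_rollup = BashOperator(
        task_id="run_spark_rollup",
        bash_command="spark-submit /opt/jobs/daily_txn_rollup.py",  # hypothetical path
    )
    validate = PythonOperator(
        task_id="validate_row_count",
        python_callable=check_output_row_count,
    )

    # The quality gate only runs after the Spark job succeeds.
    run_rollup >> validate
```

The `run_rollup >> validate` line expresses the dependency ordering: the data-quality check runs only after the Spark job completes, which is the kind of automated monitoring the responsibilities above call for.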