

IMCS Group
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Ft. Worth, TX, for 6+ months at a competitive pay rate. Requires 10+ years of experience in Apache Spark, Scala, and Azure Databricks, with strong data engineering and analytics skills. Hybrid work model.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 28, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Fort Worth, TX
-
🧠 - Skills detailed
#Data Quality #Agile #Data Wrangling #Business Analysis #Visualization #Strategy #Spark (Apache Spark) #Data Modeling #Cloud #Apache Spark #Computer Science #Databricks #Data Strategy #Azure Databricks #Data Engineering #Leadership #Compliance #Scala #Azure #Data Pipeline #SQL (Structured Query Language)
Role description
Job Title: Sr. Data Engineer
Location – Ft. Worth, TX 76155
Onsite – Hybrid for 3 days
Duration – 6+ months
Must Have Skills:
• Apache Spark (10+ yrs of exp)
• Scala (10+ yrs of exp)
• Azure Databricks (10+ yrs of exp)
Nice to Have Skills:
• SQL (5+ yrs of exp)
• CI/CD (5+ yrs of exp)
Job Description:
This list is intended to reflect the current job but there may be additional essential functions (and certainly non-essential job functions) that are not referenced. Management will modify the job or require other tasks be performed whenever it is deemed appropriate to do so, observing, of course, any legal obligations including any collective bargaining obligations.
Provide product technical leadership for teams: analyze and take ownership of long-term opportunities, design cutting-edge data solutions, and help with core development when needed.
⦁ Act as Product Technical Lead.
⦁ Act as subject matter expert for the data domain within the IT organization and work with the business to author self-service data products.
⦁ Collaborate with leaders, business analysts, project managers, IT architects, technical leads, and other developers, along with internal customers and cross-functional teams, to implement the data strategy.
⦁ Design and build data engineering pipeline frameworks that are reusable, scalable, efficient, and maintainable, recover gracefully from failures, and make data reprocessing easy.
⦁ Drive data quality, best practices, coding standards, Test-Driven Development, identification of a single source of truth for data across systems, and quality analytics (Mean Time to Recover, Mean Time Between Failures, patterns causing failures).
⦁ Utilize data pipelines to provide actionable insights into data quality and product performance.
⦁ Identify, design, and implement internal process improvements such as automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
⦁ Contribute to the continuous improvement of data engineering across the enterprise by researching industry best practices and determining best usage of specific cloud services and tools.
⦁ Work with data squads to ensure data products are designed with privacy and compliance baked in (Privacy by design).
⦁ Work with product teams to help prioritize team objectives and initiatives/team features.
⦁ Conduct road shows on the data products across the organization.
⦁ Advocate the agile process and test-driven development, using data engineering development tools to analyze, model, design, construct, and test reusable components.
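The "reusable, failure-tolerant pipeline frameworks" responsibility above can be sketched in Scala. This is a minimal, hypothetical illustration (the names `Step`, `withRetry`, and `andThen` are invented for this sketch, not from any specific framework): each step is a pure function returning `Try`, a retry wrapper recovers from transient failures, and reprocessing is simply re-running a step on the same input.

```scala
// Hypothetical sketch of a reusable pipeline-step abstraction.
// Steps are pure functions, so re-running (reprocessing) is safe and easy.
import scala.util.{Try, Success, Failure}

object PipelineFramework {
  // A step transforms input A to output B and may fail.
  type Step[A, B] = A => Try[B]

  // Wrap a step so it retries up to `attempts` times before giving up,
  // recovering gracefully from transient failures.
  def withRetry[A, B](step: Step[A, B], attempts: Int): Step[A, B] = { a =>
    def loop(remaining: Int): Try[B] = step(a) match {
      case Success(b)                  => Success(b)
      case Failure(_) if remaining > 1 => loop(remaining - 1)
      case failure                     => failure
    }
    loop(attempts)
  }

  // Compose two steps; the pipeline short-circuits on the first failure.
  def andThen[A, B, C](first: Step[A, B], second: Step[B, C]): Step[A, C] =
    a => first(a).flatMap(second)
}

object Demo extends App {
  import PipelineFramework._
  val parse: Step[String, Int] = s => Try(s.trim.toInt)
  val double: Step[Int, Int]   = n => Success(n * 2)
  val pipeline = andThen(withRetry(parse, 3), double)
  println(pipeline(" 21 ")) // Success(42)
  println(pipeline("oops")) // a Failure wrapping NumberFormatException
}
```

A production framework would add metrics, checkpointing, and backoff between retries, but the same shape (small composable steps with an explicit failure channel) is what makes pipelines maintainable and reprocessable.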
Minimum Qualifications – Education & Prior Job Experience
⦁ Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or related technical discipline, or equivalent experience/training
⦁ 9+ years of full Software Development Life Cycle (SDLC) experience designing, developing, and implementing large-scale applications in data analytics, warehousing, and data engineering.
⦁ Working experience in data analytics (data wrangling, mining, integration, analysis, visualization, data modeling, analysis/analytics, and reporting).
⦁ At least 5 years of experience optimizing Spark jobs for performance and cost efficiency using advanced techniques such as partitioning, caching, cluster configuration tuning, and troubleshooting bottlenecks.
⦁ A strong candidate has extensive hands-on tuning experience, can conduct independent research to solve challenging problems, and takes a proactive approach.
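One concrete piece of the Spark-tuning skill set mentioned above is diagnosing key skew, a common cause of slow shuffle stages. A minimal, hypothetical sketch (the object `SkewCheck` and its `skewFactor` helper are invented for illustration) using plain Scala collections on sampled `(key, count)` pairs:

```scala
// Hypothetical sketch: estimating key skew from sampled (key, count) pairs.
// A heavily skewed key usually means one partition does most of the work;
// common remedies are salting the key or choosing a different partition column.
object SkewCheck {
  // Ratio of the largest key's count to a perfectly even per-key share.
  // ~1.0 means balanced; large values flag a skewed partitioning key.
  def skewFactor(keyCounts: Map[String, Long]): Double = {
    val total     = keyCounts.values.sum.toDouble
    val evenShare = total / keyCounts.size // ideal count per key
    keyCounts.values.max / evenShare
  }
}

// Usage:
//   SkewCheck.skewFactor(Map("a" -> 100L, "b" -> 100L))            == 1.0
//   SkewCheck.skewFactor(Map("a" -> 980L, "b" -> 10L, "c" -> 10L)) == 2.94
```

In a real Spark job the counts would come from something like `df.groupBy("key").count()` on a sample; the decision threshold (often a factor of 5 to 10) depends on cluster size and job shape.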
Minimum Years of Experience:
⦁ 10+ years
Interview Process (Is face-to-face required?)
⦁ 2 Onsite interviews with coding





