

Senior Data Engineer – Databricks & PySpark Expert
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer – Databricks & PySpark Expert, offering a 6+ month contract in Irving, TX (Hybrid). It requires 8+ years of hands-on experience with Python, Databricks, PySpark, big data technologies, and CI/CD pipelines. Agile experience is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
440
🗓️ - Date discovered
June 27, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Irving, TX
🧠 - Skills detailed
#SQL (Structured Query Language) #Code Reviews #Data Engineering #Kafka (Apache Kafka) #SQL Server #Automation #Infrastructure as Code (IaC) #Azure Data Factory #Data Pipeline #.Net #Scala #Hadoop #Databricks #PySpark #ADF (Azure Data Factory) #Stories #Data Lake #Terraform #Spark (Apache Spark) #Big Data #Azure #Python #Delta Lake #Agile #MS SQL (Microsoft SQL Server)
Role description
🚀 Hiring Now: Senior Data Engineer – Databricks & PySpark Expert
📍 Location: Irving, TX (Hybrid – 3 days onsite)
⏳ Duration: 6+ Months Contract
🎯 Interview: In-Person (Single Round)
We’re seeking an experienced Data Engineer who excels in Databricks, PySpark, and big data technologies. This is a hands-on role for someone who can design and deliver scalable data solutions and enjoys working in a fast-paced Agile environment.
🔍 Key Responsibilities:
• Design, develop, and test scalable data solutions.
• Work on large and complex stories across multiple technology stacks.
• Collaborate in requirement gathering, design sessions, and sprint ceremonies.
• Write clean, maintainable code and conduct thorough code reviews.
• Ensure timely delivery of project milestones.
• Promote coding best practices and participate in knowledge-sharing sessions.
• Develop solutions that process millions of records and optimize data pipelines for performance.
• Contribute to CI/CD pipeline development and infrastructure automation.
✅ Must-Have Qualifications:
• 8+ years of hands-on experience in Python, MS SQL Server, and T-SQL.
• Strong experience with Databricks, PySpark, and Azure Data Factory.
• Proficiency with big data technologies: Hadoop, Spark, and Kafka.
• Solid understanding of data lake and Delta Lake architectures.
• Experience with CI/CD pipelines and Infrastructure as Code (Terraform, Pulumi).
• Agile development background with strong communication skills.
➕ Nice to Have:
• Exposure to or knowledge of .NET technologies.
📩 Ready to build enterprise-grade data pipelines?
Apply now or DM for more details!
#DataEngineer #Databricks #PySpark #Azure #BigData #SQL #DataLake #Kafka #Spark #HiringNow #IrvingJobs #HybridJobs #ContractJobs