

Pearlsoft Solutions Inc.
Sr. Data Engineer with Databricks (Only Local to Dallas)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer with Databricks, requiring 11+ years of IT experience, strong SQL and Python skills, and expertise in data pipelines, AWS services, and Kafka. Location: Dallas, TX; hybrid work with 3-4 days onsite.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
April 30, 2026
Duration
Unknown
-
Location
Hybrid
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Dallas-Fort Worth Metroplex
-
Skills detailed
#Python #Data Pipeline #Datasets #Debugging #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Lambda (AWS Lambda) #Data Engineering #Automation #Databricks #ETL (Extract, Transform, Load) #Redshift #SQL (Structured Query Language) #Kafka (Apache Kafka) #Athena
Role description
Job Title: Sr. Data Engineer with Databricks (Only local to Dallas)
Location: Dallas, TX (local candidates only), Hybrid
Mode of Interview: Client round will be face-to-face
Mode of Job: 3-4 days onsite per week required
No Third Party
Required Qualifications
• 11+ years of overall IT experience required
• Strong hands-on experience with SQL for complex data validation and analysis.
• Proficiency in Python for test automation and data validation.
• Experience testing data pipelines and ETL/ELT workflows.
• Hands-on experience with Kafka or other streaming platforms.
• Solid understanding of AWS data services (S3, Glue, Redshift, Lambda, Athena, etc.).
• Experience working with large datasets and distributed systems.
• Strong debugging, analytical, and problem-solving skills.
