

My3Tech
Databricks Certified - Data Engineer W/ State/Federal Client Exp
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer 2 with state client experience, offered as a contract based in Austin, TX (remote within TX). The pay rate is unknown. Key skills include Databricks, ETL/ELT workflows, Python, Azure, and strong SQL; Databricks certification is preferred.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: October 16, 2025
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: Austin, TX
Skills detailed: #Data Lake #Data Storage #Data Warehouse #TensorFlow #Compliance #Scala #Data Management #Deployment #Apache Spark #Metadata #Cloud #Data Pipeline #Spark (Apache Spark) #Data Quality #Delta Lake #Python #Azure ADLS (Azure Data Lake Storage) #Spark SQL #Version Control #Databricks #Azure Data Factory #Debugging #DevOps #Data Engineering #ML (Machine Learning) #Data Lineage #ETL (Extract, Transform, Load) #Libraries #Security #Storage #Data Manipulation #Azure #Data Science #MLflow #Programming #Big Data #SQL (Structured Query Language) #Agile #ADLS (Azure Data Lake Storage) #Data Security #Azure cloud #Data Processing #R #Data Catalog #ADF (Azure Data Factory) #Data Governance
Role description
Hello,
Position: Data Engineer 2
Location: Austin, TX 78751 (Remote within TX)
This position is with a State client.
We are seeking a Data Engineer 2 who will be responsible for developing, maintaining, and optimizing big data solutions on the Databricks Unified Analytics Platform. The role supports data engineering, machine learning, and analytics initiatives within an organization that relies on large-scale data processing.
Duties include:
• Designing and developing scalable data pipelines
• Implementing ETL/ELT workflows
• Optimizing Spark jobs
• Integrating with Azure Data Factory
• Automating deployments
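The duties above center on building scalable Spark pipelines on Databricks and landing data in Delta Lake. The snippet below is a minimal, illustrative PySpark sketch of one such ETL step, not the client's actual pipeline; the storage path, table name, and column names are hypothetical placeholders.

# Minimal ETL step on Databricks: read raw files from ADLS, clean them, write a Delta table.
# All paths, table names, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already available as `spark` in a Databricks notebook

# Extract: raw CSV files from Azure Data Lake Storage
raw = (spark.read
       .option("header", "true")
       .csv("abfss://raw@examplestorage.dfs.core.windows.net/claims/"))

# Transform: cast types, drop duplicates, and filter out unusable rows
cleaned = (raw
           .withColumn("claim_amount", F.col("claim_amount").cast("double"))
           .withColumn("claim_date", F.to_date("claim_date"))
           .dropDuplicates(["claim_id"])
           .filter(F.col("claim_amount").isNotNull()))

# Load: write the result as a managed Delta table for downstream analytics
(cleaned.write
 .format("delta")
 .mode("overwrite")
 .saveAsTable("analytics.claims_cleaned"))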
Candidate Skills and Qualifications:
4 - Required - Implement ETL/ELT workflows for both structured and unstructured data
4 - Required - Automate deployments using CI/CD tools
4 - Required - Collaborate with cross-functional teams including data scientists, analysts, and stakeholders
4 - Required - Design and maintain data models, schemas, and database structures to support analytical and operational use cases
4 - Required - Evaluate and implement appropriate data storage solutions, including data lakes (Azure Data Lake Storage) and data warehouses.
4 - Required - Implement data validation and quality checks to ensure accuracy and consistency
4 - Required - Contribute to data governance initiatives, including metadata management, data lineage, and data cataloging
4 - Required - Implement data security measures, including encryption, access controls, and auditing; ensure compliance with regulations and best practices
4 - Required - Proficiency in Python and R programming languages
4 - Required - Strong SQL querying and data manipulation skills
4 - Required - Experience with Azure cloud platform
4 - Required - Experience with DevOps, CI/CD pipelines, and version control systems
4 - Required - Working in agile, multicultural environments
4 - Required - Strong troubleshooting and debugging capabilities
3 - Required - Design and develop scalable data pipelines using Apache Spark on Databricks
3 - Required - Optimize Spark jobs for performance and cost-efficiency
3 - Required - Integrate Databricks solutions with cloud services (Azure Data Factory)
3 - Required - Ensure data quality, governance, and security using Unity Catalog or Delta Lake
3 - Required - Deep understanding of Apache Spark architecture, RDDs, DataFrames, and Spark SQL
3 - Required - Hands-on experience with Databricks notebooks, clusters, jobs, and Delta Lake
1 - Preferred - Knowledge of ML libraries (MLflow, Scikit-learn, TensorFlow)
1 - Preferred - Databricks Certified Associate Developer for Apache Spark
1 - Preferred - Azure Data Engineer Associate
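Several of the required items above involve data validation and quality checks on Delta tables with Spark. Below is a small, illustrative sketch of such a check in PySpark, assuming a table like the one produced in the earlier sketch; the table and column names are made up for illustration.

# Illustrative data quality checks on a Delta table (PySpark).
# Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.table("analytics.claims_cleaned")

# Count rows that violate simple expectations
null_keys = df.filter(F.col("claim_id").isNull()).count()
negative_amounts = df.filter(F.col("claim_amount") < 0).count()
duplicate_keys = df.groupBy("claim_id").count().filter(F.col("count") > 1).count()

# Fail the job if any expectation is violated so bad data never reaches consumers
assert null_keys == 0, f"{null_keys} rows are missing claim_id"
assert negative_amounts == 0, f"{negative_amounts} rows have a negative claim_amount"
assert duplicate_keys == 0, f"{duplicate_keys} claim_id values are duplicated"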
---
Thanks
Srujana
Email: srujana.b@my3tech.com