

Ampstek
Azure Databricks Engineer (Visa: GC, USC, TN)
Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Databricks Engineer on a contract basis, requiring 8-9 years of experience and strong proficiency in Databricks, DLT, and PySpark. The location is Houston, with remote work and on-site travel every other month. The pay rate is unknown.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: October 10, 2025
Duration: Unknown
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Houston, TX
Skills detailed: #Hadoop #Data Science #Kubernetes #Azure #Data Warehouse #Luigi #Data Governance #Spark (Apache Spark) #Redshift #Scala #NoSQL #Python #Compliance #Snowflake #Cloud #Apache Airflow #ETL (Extract, Transform, Load) #Data Architecture #Databricks #GCP (Google Cloud Platform) #Java #dbt (data build tool) #Deployment #Data Quality #AWS Glue #Data Lake #Data Analysis #PySpark #Airflow #Monitoring #Kafka (Apache Kafka) #AWS (Amazon Web Services) #Storage #Data Security #PostgreSQL #Data Engineering #Data Pipeline #Data Processing #Security #SQL (Structured Query Language) #Computer Science #Docker #Big Data #SQL Queries #Data Modeling #BigQuery #Databases
Role description
Title: Azure Databricks Engineer
Location: Houston, TX; remote, but the candidate needs to travel on site once every other month
Job Type: Contract
Visa: GC, USC, TN
Relevant experience: more than 8-9 years, with strong proficiency in Databricks, the DLT (Delta Live Tables) framework, and PySpark; excellent communication skills are required.
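For context on the core skill set (Databricks, the DLT framework, and PySpark), here is a minimal sketch of a Delta Live Tables pipeline with data-quality expectations. The table names, landing path, and columns are illustrative assumptions rather than details from this posting, and the code runs only inside a Databricks DLT pipeline, where the dlt module and the spark session are provided.

# Minimal Delta Live Tables sketch; runs only inside a Databricks DLT pipeline.
# Table names, the landing path, and column names are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested incrementally from cloud storage with Auto Loader.")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")      # Auto Loader for incremental file ingestion
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders/")              # hypothetical landing path
    )

@dlt.table(comment="Cleansed orders with data-quality expectations enforced.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
@dlt.expect_or_drop("positive_amount", "amount > 0")
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("order_date", F.to_date("order_ts"))
        .select("order_id", "customer_id", "amount", "order_date")
    )

@dlt.table(comment="Daily revenue aggregate for analytics consumers.")
def daily_revenue_gold():
    return (
        dlt.read("orders_silver")
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total_revenue"))
    )

The bronze/silver/gold layering shown here is a common lakehouse convention; the actual pipeline design would depend on the client's data architecture.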
Key Responsibilities:
• Design, develop, and maintain ETL/ELT pipelines for structured and unstructured data (an incremental-load sketch follows this list).
• Build and optimize data warehouses, data lakes, and analytical data models.
• Implement data quality, validation, and governance processes.
• Collaborate with data analysts, data scientists, and business teams to ensure data availability and accuracy.
• Optimize performance of SQL queries, data pipelines, and storage systems.
• Develop and maintain CI/CD workflows for data pipeline deployment and monitoring.
• Work with cloud platforms (AWS, Azure, or GCP) for scalable and secure data solutions.
• Ensure compliance with data security and privacy regulations.
• Mentor junior engineers and contribute to data engineering best practices and the architecture roadmap.
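As referenced in the first responsibility above, the following is a minimal sketch of an incremental ELT load that upserts new records into a Delta table using the delta-spark MERGE API. The paths, key column, and validation rules are hypothetical assumptions, and a Spark session with the Delta Lake extensions configured (the default on Databricks) is assumed.

# Minimal sketch of an incremental upsert (MERGE) into a Delta table with PySpark.
# Paths, table locations, and columns are hypothetical; assumes a Spark session
# with the delta-spark package configured (available by default on Databricks).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_upsert").getOrCreate()

# New or changed rows arriving from an upstream source (hypothetical landing path).
updates = (
    spark.read.format("json")
    .load("/mnt/landing/orders_incremental/")
    .filter(F.col("order_id").isNotNull())     # basic validation: key must be present
    .dropDuplicates(["order_id"])              # one row per business key
)

target = DeltaTable.forPath(spark, "/mnt/lakehouse/silver/orders")

# Upsert: update rows whose keys already exist, insert the rest.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)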
Required Skills & Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• 5+ years of hands-on experience in data engineering or data architecture.
• Strong proficiency in SQL and experience with relational and NoSQL databases (e.g., PostgreSQL, Snowflake, Redshift, BigQuery, Cassandra).
• Experience with data pipeline frameworks (e.g., Apache Airflow, Luigi, AWS Glue, dbt); see the orchestration sketch after this list.
• Proficiency in Python, Scala, or Java for data processing.
• Strong experience with big data technologies (e.g., Spark, Hadoop, Kafka).
• Experience with cloud data platforms (AWS, Azure, or GCP).
• Familiarity with containerization and orchestration (Docker, Kubernetes).
• Strong understanding of data modeling, warehousing concepts, and data governance.
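Several of the pipeline frameworks listed above can orchestrate Databricks workloads. As one hedged example, the sketch below shows an Apache Airflow DAG that submits a Databricks notebook run on a daily schedule; the connection id, cluster specification, and notebook path are hypothetical, and the apache-airflow-providers-databricks package is assumed to be installed.

# Minimal Airflow DAG sketch that submits a Databricks notebook run once a day.
# Connection id, cluster spec, and notebook path are hypothetical; requires the
# apache-airflow-providers-databricks provider package.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2025, 10, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_orders_notebook = DatabricksSubmitRunOperator(
        task_id="run_orders_notebook",
        databricks_conn_id="databricks_default",    # hypothetical Airflow connection
        json={
            "new_cluster": {
                "spark_version": "14.3.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",  # Azure VM type (illustrative)
                "num_workers": 2,
            },
            "notebook_task": {"notebook_path": "/Repos/data/orders_pipeline"},
        },
    )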