

Data Engineer (Contract)
Dunwoody, GA, United States (Remote)
Contract (8 months 11 days)
Published 4 hours ago
Snowflake
Azure Data Factory (ADF)
SQL
Databricks
ETL/ELT
Azure cloud
We are seeking a highly motivated and experienced Data Engineer to join our growing data team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable and reliable data pipelines and data warehousing solutions on the Azure cloud platform. You will work closely with data scientists, analysts, and other stakeholders to transform raw data into valuable insights that drive business decisions. The ideal candidate possesses a strong understanding of data engineering principles, proficiency in Azure cloud technologies, and hands-on experience with ADF, Databricks, Snowflake, and Control-M.
Responsibilities:
Design, develop, and maintain robust and scalable data pipelines using Azure Data Factory (ADF).
Build and optimize data warehousing solutions on Snowflake, ensuring performance, scalability, and data quality.
Leverage Databricks for data processing, transformation, and advanced analytics using Spark and Python/Scala.
Implement and manage data orchestration and scheduling using Control-M.
Collaborate with data scientists and analysts to understand their data requirements and provide efficient data solutions.
Monitor and troubleshoot data pipelines and data warehouse performance, ensuring data integrity and reliability.
Implement data quality checks and validation processes to ensure accuracy and consistency of data.
Develop and maintain data models and schemas for optimal data storage and retrieval.
Implement and maintain data security and governance policies.
Stay up-to-date with the latest advancements in Azure cloud technologies and data engineering best practices.
Document data pipelines, data models, and ETL processes.
Participate in code reviews and contribute to the development of data engineering standards and best practices.
Troubleshoot and resolve data-related issues in a timely manner.
Required Technical Skills:
Azure Cloud: Deep understanding of Azure cloud services, particularly in the data and analytics domain.
Azure Data Factory (ADF): Proven experience in designing, building, deploying, and managing complex data pipelines using ADF, including data flows and mapping data flows.
Databricks: Hands-on experience with Databricks, including Spark programming (Python or Scala), Delta Lake, and optimizing Spark jobs for performance.
Snowflake: Extensive experience in designing, developing, and administering data warehouses on Snowflake, including data modeling, SQL development, performance tuning, and security features.
Control-M: Experience in using Control-M for workflow orchestration, scheduling, monitoring, and managing batch processes.
SQL: Strong proficiency in SQL for data querying, manipulation, and analysis across different database systems.
Data Modeling: Solid understanding of different data modeling techniques (e.g., relational, dimensional).
ETL/ELT Concepts: Comprehensive understanding of ETL and ELT principles and best practices.
Scripting: Proficiency in at least one scripting language such as Python for automation and data manipulation.
Version Control: Experience with Git and related version control workflows.
Data Quality: Understanding of data quality principles and experience implementing data quality checks and processes.
Desired Technical Skills:
Experience with other Azure data services such as Azure Synapse Analytics, Azure Data Lake Storage (ADLS), Azure Event Hubs, and Azure Functions.
Knowledge of data governance frameworks and tools.
Experience with CI/CD pipelines for data engineering deployments.
Familiarity with agile development methodologies.
Experience with data visualization tools (e.g., Power BI, Tableau).
Knowledge of NoSQL databases.
Technical Certifications (Preferred):
Microsoft Certified: Azure Data Engineer Associate
Snowflake SnowPro Core Certification
Databricks Certified Associate Developer for Apache Spark
Control-M certification
The pay range the employer in good faith reasonably expects to pay for this position is $32.36/hour to $50.56/hour. Benefits include medical, dental, vision, and retirement plans. Applications will be accepted on an ongoing basis.
Tundra Technical Solutions is among North America’s leading providers of Staffing and Consulting Services. Our success and our clients’ success are built on a foundation of service excellence. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic. Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Unincorporated LA County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: safeguarding client-provided property, including hardware (which may contain data) entrusted to you, from theft, loss, or damage; returning all portable client computer hardware in your possession (including the data contained therein) upon completion of the assignment; and maintaining the confidentiality of client proprietary, confidential, or non-public information. In addition, job duties require access to secure and protected client information technology systems and carry related data security obligations.