

Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer; the contract length and pay rate are not specified. Key skills include Databricks, Azure, AWS, Snowflake, Python, SQL, and Apache Spark. A Bachelor’s degree and relevant experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
June 20, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Snowflake #Computer Science #Apache Spark #DevOps #Automation #Database Design #AWS (Amazon Web Services) #Data Engineering #GIT #ML (Machine Learning) #Python #BigQuery #Data Manipulation #Mathematics #Cloud #Data Pipeline #Data Modeling #SQL (Structured Query Language) #Big Data #Data Governance #Spark (Apache Spark) #Azure DevOps #Databricks #Data Processing #Datasets #Redshift #Security #Azure #Terraform
Role description
Required Skills & Experience:
• Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related field.
• Proven experience with Databricks for big data processing and analytics.
• Strong experience with cloud platforms, particularly Azure and AWS, and their data services.
• Hands-on experience with Snowflake, including data loading, transformation, and optimization.
• Proficiency in Python for data manipulation, pipeline development, and automation.
• Strong knowledge of SQL for querying and optimizing large datasets.
• Experience with Apache Spark for distributed data processing and optimization (see the sketch after this list).
• Familiarity with ETL/ELT processes, data warehousing, and data pipeline orchestration.
• Experience with data modeling and database design to support large-scale analytics.
• Strong problem-solving skills and attention to detail.
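To make the Apache Spark bullet above concrete, here is a minimal PySpark sketch of a typical cleanup-and-publish pipeline. This is an illustration only, not part of the posting; the paths and column names (/data/raw/events, event_id, event_ts) are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Read a raw dataset landed by an upstream ingestion job (hypothetical path).
raw = spark.read.parquet("/data/raw/events")

# Basic cleanup: de-duplicate, cast the timestamp, drop rows that fail the cast.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write partitioned by date so downstream SQL can prune partitions
# instead of scanning the full dataset.
clean.write.mode("overwrite").partitionBy("event_date").parquet("/data/curated/events")

Partitioning the curated output by date is a common design choice for the large-dataset SQL work the requirements mention, since date filters then touch only the relevant partitions.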
Preferred Skills:
• Experience with other cloud-based data platforms (e.g., Google Cloud, AWS Redshift, BigQuery).
• Familiarity with DevOps practices and automation using tools like Azure DevOps, Terraform, or Git.
• Knowledge of machine learning and its integration with data pipelines is a plus (a brief sketch follows this list).
• Experience in data governance and security practices.
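Purely as a hedged sketch of the machine-learning bullet above, the snippet below trains a scikit-learn model on the output of an upstream data pipeline. The file name (features.parquet) and the label column are invented for the example.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load features produced by an upstream pipeline (hypothetical file).
df = pd.read_parquet("features.parquet")
X, y = df.drop(columns=["label"]), df["label"]

# Hold out 20% of rows to estimate generalization.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

In practice the training step would be scheduled alongside the data pipeline (for example as a downstream task), which is the kind of integration the bullet refers to.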