Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Dallas, TX (hybrid), with a contract of unspecified duration. The day rate is $520 USD. Candidates should have 8-12+ years of experience in data engineering, particularly in MarTech and CDP, and proficiency in Snowflake, SQL, and Python.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
πŸ—“οΈ - Date discovered
September 13, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Cloud #Azure #Data Governance #Data Quality #NoSQL #SQL (Structured Query Language) #Apache Airflow #Data Ingestion #Scala #Batch #Computer Science #Data Engineering #Snowflake #Delta Lake #MongoDB #Spark (Apache Spark) #Python #Programming #Automation #Data Warehouse #Data Architecture #DynamoDB #Data Lake #Data Processing #Data Integration #Redis #Airflow #Security #Azure cloud #Kafka (Apache Kafka) #ETL (Extract, Transform, Load) #Azure Event Hubs #Data Pipeline #Big Data
Role description
Title: Data Engineer

Location: Dallas, TX (Hybrid/Onsite)

8-12+ years of experience in data engineering, with at least 3-5 years focused on MarTech, CDP, and data warehousing.

Job Requirements:
1. Bachelor's or master's degree in Computer Science, Information Systems, or a related field
2. Hands-on experience with the Snowflake cloud data platform, including data ingestion, transformation, and orchestration
3. Strong background in building and maintaining data warehouse solutions on Snowflake
4. Proficiency in SQL, Python, or other programming languages for data processing and automation
5. Experience with ETL/ELT tools, data pipeline development, and Apache Airflow workflow management
6. Proficiency in real-time data processing (Spark Streaming, Flink, Kafka Streams)
7. Experience with cloud data warehouses, Snowflake, and data lakes (Delta Lake, Iceberg)
8. Familiarity with NoSQL databases (MongoDB, Cassandra) and key-value stores (Redis, DynamoDB) is highly desirable
9. Experience with batch and streaming pipelines (Kafka, Kinesis, Pub/Sub)
10. Experience with Azure cloud platforms, Azure Event Hubs, and their integration with Snowflake
11. Understanding of marketing technologies, customer data platforms, and data integration challenges
12. Knowledge of data quality, data governance, and security practices in data engineering
13. Strong problem-solving skills and the ability to optimize data processes for performance and scalability
14. Good communication and teamwork skills to collaborate with data architects, analysts, and marketing teams
15. Relevant certifications (e.g., Snowflake, Azure Cloud, Big Data) are a plus