

SVT IT INFOTECH
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 5+ years of experience, focusing on scalable ETL/ELT pipelines and cloud platforms (AWS, Azure, GCP). Key skills include Python, SQL, Kafka, and data modeling. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
December 11, 2025
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Dallas, TX
🧠 - Skills detailed
#Cloud #Terraform #GIT #AWS Glue #Redshift #SQL (Structured Query Language) #Data Science #PySpark #Python #DynamoDB #NoSQL #Data Architecture #Version Control #Azure #Snowflake #Informatica #Databases #BigQuery #Data Analysis #Data Engineering #Automation #Batch #Compliance #dbt (data build tool) #Security #Scala #DevOps #AWS (Amazon Web Services) #GCP (Google Cloud Platform) #Kafka (Apache Kafka) #Spark (Apache Spark) #Data Modeling #Data Pipeline #Data Governance #ETL (Extract, Transform, Load) #Airflow #Data Quality #Hadoop
Role description
We are seeking a highly skilled Data Engineer with 5+ years of hands-on experience to design, build, and optimize scalable data pipelines and modern data platforms. The ideal candidate will have strong expertise in cloud data engineering, ETL/ELT development, real-time streaming, and data modeling, along with a solid understanding of distributed systems and engineering best practices.
• Design, develop, and maintain scalable ETL/ELT pipelines for ingestion, transformation, and processing of structured and unstructured data.
• Build real-time and batch data pipelines using tools such as Kafka, Spark, AWS Glue, Kinesis, or similar technologies (see the PySpark sketch after this list).
• Develop and optimize data models, warehouse layers, and high-performance data architectures.
• Implement data quality checks and data validation frameworks, ensuring data reliability and consistency across systems.
• Collaborate with Data Analysts, Data Scientists, and cross-functional teams to deliver efficient and accessible data solutions.
• Deploy and manage data infrastructure using AWS, Azure, or GCP cloud services.
• Write clean, efficient, and reusable code in Python/Scala/SQL.
• Monitor pipeline performance, troubleshoot issues, and drive continuous improvement.
• Implement CI/CD pipelines, version control, and automation for production workloads.
• Ensure data governance, security, and compliance in all engineering workflows.
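To make the pipeline responsibilities above concrete, here is a minimal PySpark batch ETL sketch. Everything in it is a hypothetical placeholder rather than a detail of this role: the S3 paths, the events dataset, and all column names are invented for illustration.

```python
# A minimal batch ETL sketch in PySpark, assuming a hypothetical
# "events" dataset on S3. Paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Ingest: read raw, semi-structured event data (hypothetical location).
raw = spark.read.json("s3://example-bucket/raw/events/")

# Transform: type the timestamp, derive a partition column, and apply
# simple data quality gates (null check, de-duplication).
clean = (
    raw
    .filter(F.col("event_id").isNotNull())
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("event_date", F.to_date("event_ts"))
    .dropDuplicates(["event_id"])
)

# Load: publish a partitioned, query-friendly curated layer.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/events/"))
```

A production pipeline would typically add schema enforcement, monitoring, and incremental loads rather than a full overwrite, but the ingest, validate, transform, and load shape is the same.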
Required Skills & Qualifications
• 5+ years of experience as a Data Engineer in a production environment.
• Strong proficiency in Python, SQL, and distributed processing frameworks (Spark, PySpark, Hadoop).
• Hands-on experience with cloud platforms: AWS, Azure, or GCP.
• Experience with at least one streaming technology: Kafka, Kinesis, Spark Streaming, or Flink.
• Strong understanding of data warehousing concepts (Star/Snowflake schemas, dimensional modeling).
• Experience with ETL/ELT tools such as Glue, Airflow, dbt, and Informatica (see the Airflow sketch after this list).
• Solid understanding of DevOps practices: Git and CI/CD; Terraform/CloudFormation experience is a bonus.
• Experience working with relational and NoSQL databases (Redshift, Snowflake, BigQuery, DynamoDB, etc.).
• Excellent problem-solving, communication, and analytical skills.
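For the orchestration requirement, here is a minimal sketch of an Apache Airflow DAG (Airflow being one of the ETL/ELT tools named above). The DAG id, schedule, and task bodies are hypothetical illustrations, not specifics of this role.

```python
# A minimal Apache Airflow DAG sketch (Airflow 2.x style). The DAG id,
# schedule, and task bodies are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Pull raw data from a source system (placeholder)."""

def transform():
    """Apply cleansing and modeling logic (placeholder)."""

def load():
    """Publish results to the warehouse layer (placeholder)."""

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load.
    extract_task >> transform_task >> load_task
```

Each placeholder callable would wrap real extract, transform, or load logic; the >> operator declares the task ordering that Airflow schedules and retries.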





