Niktor Inc

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 4-8 years of experience, offering a remote contract in the U.S. at a competitive pay rate. Key skills include SQL, Python, Snowflake, and cloud platforms. Certifications in AWS, Azure, or GCP are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
October 24, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Data Warehouse #Data Engineering #Compliance #ML (Machine Learning) #Data Modeling #dbt (data build tool) #Apache Spark #Data Governance #AWS (Amazon Web Services) #Infrastructure as Code (IaC) #Kubernetes #Azure #Terraform #GCP (Google Cloud Platform) #Delta Lake #Data Science #Snowflake #Databricks #Data Integrity #Kafka (Apache Kafka) #Scala #Data Analysis #Python #Schema Design #Spark (Apache Spark) #Data Quality #Security #SQL (Structured Query Language) #Version Control #ETL (Extract, Transform, Load) #Azure Data Factory #GitHub #Cloud #Git #ADF (Azure Data Factory) #Docker #Redshift #Documentation #Airflow #Data Pipeline #GDPR (General Data Protection Regulation)
Role description
Job Title: Data Engineer
Location: Remote (United States)
Experience: 4–8 Years

About the Role:
We are seeking a highly skilled Data Engineer to design, develop, and optimize data pipelines and architectures that enable advanced analytics and real-time data insights. The ideal candidate is passionate about data quality, scalability, and performance, and brings innovative solutions to complex data challenges.

Key Responsibilities:
• Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data using modern tools and cloud platforms.
• Develop and manage data warehouse and lakehouse solutions using Snowflake, Redshift, or Databricks.
• Implement Data Mesh architecture principles to decentralize data ownership and empower domain-based data teams.
• Optimize data models for analytics and reporting, ensuring data integrity, lineage, and performance.
• Work closely with data analysts, data scientists, and product teams to support data-driven decision-making.
• Integrate streaming data using Kafka, Spark Streaming, or Kinesis for near-real-time insights.
• Automate workflows and CI/CD for data pipelines using Airflow, GitHub Actions, or Azure Data Factory.
• Apply dbt (data build tool) best practices for transformations, documentation, and testing.
• Ensure compliance with data governance, security, and privacy policies (GDPR, HIPAA, etc.).

Required Skills:
• 4–8 years of experience as a Data Engineer or in a similar data-intensive role.
• Strong expertise in SQL, Python, and at least one cloud platform (AWS / Azure / GCP).
• Proven experience with Snowflake and dbt, including performance optimization.
• Hands-on experience with Apache Spark, Kafka, or Flink.
• Solid understanding of data modeling, data warehousing, and schema design.
• Familiarity with containerization (Docker/Kubernetes) and version control (Git).
• Exposure to Data Mesh architecture or domain-oriented data ownership models (rare but preferred).
• Excellent communication, problem-solving, and analytical thinking skills.

Preferred Qualifications:
• Experience with Delta Lake or Iceberg tables.
• Familiarity with Terraform or CloudFormation for infrastructure as code.
• Knowledge of machine learning data pipelines or feature store design.
• Certification in AWS Data Analytics, Azure Data Engineer, or GCP Professional Data Engineer.