Niktor Inc

Data Engineer

โญ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 4-8 years of experience, offering a remote contract in the U.S. at a competitive pay rate. Key skills include SQL, Python, Snowflake, and cloud platforms. Certifications in AWS, Azure, or GCP are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 24, 2025
🕒 - Duration
Unknown
-
๐Ÿ๏ธ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
๐Ÿ“ - Location detailed
United States
-
🧠 - Skills detailed
#Data Warehouse #Data Engineering #Compliance #ML (Machine Learning) #Data Modeling #dbt (data build tool) #Apache Spark #Data Governance #AWS (Amazon Web Services) #Infrastructure as Code (IaC) #Kubernetes #Azure #Terraform #GCP (Google Cloud Platform) #Delta Lake #Data Science #Snowflake #Databricks #Data Integrity #Kafka (Apache Kafka) #Scala #Data Analysis #Python #Schema Design #Spark (Apache Spark) #Data Quality #Security #SQL (Structured Query Language) #Version Control #ETL (Extract, Transform, Load) #Azure Data Factory #GitHub #Cloud #GIT #ADF (Azure Data Factory) #Docker #Redshift #Documentation #Airflow #Data Pipeline #GDPR (General Data Protection Regulation)
Role description
Job Title: Data Engineer
Location: Remote (United States)
Experience: 4–8 Years

About the Role:
We are seeking a highly skilled Data Engineer to design, develop, and optimize data pipelines and architectures that enable advanced analytics and real-time data insights. The ideal candidate is passionate about data quality, scalability, and performance, and brings innovative solutions to complex data challenges.

Key Responsibilities:
• Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data using modern tools and cloud platforms.
• Develop and manage data warehouse and lakehouse solutions using Snowflake, Redshift, or Databricks.
• Implement Data Mesh architecture principles to decentralize data ownership and empower domain-based data teams.
• Optimize data models for analytics and reporting, ensuring data integrity, lineage, and performance.
• Work closely with data analysts, data scientists, and product teams to support data-driven decision-making.
• Integrate streaming data using Kafka, Spark Streaming, or Kinesis for near-real-time insights (see the streaming sketch after this description).
• Automate workflows and CI/CD for data pipelines using Airflow, GitHub Actions, or Azure Data Factory (see the orchestration sketch after this description).
• Apply dbt (data build tool) best practices for transformations, documentation, and testing.
• Ensure compliance with data governance, security, and privacy policies (GDPR, HIPAA, etc.).

Required Skills:
• 4–8 years of experience as a Data Engineer or in a similar data-intensive role.
• Strong expertise in SQL, Python, and at least one cloud platform (AWS / Azure / GCP).
• Proven experience with Snowflake and dbt, including performance optimization.
• Hands-on experience with Apache Spark, Kafka, or Flink.
• Solid understanding of data modeling, data warehousing, and schema design.
• Familiarity with containerization (Docker/Kubernetes) and version control (Git).
• Exposure to Data Mesh architecture or domain-oriented data ownership models (rare but preferred).
• Excellent communication, problem-solving, and analytical thinking skills.

Preferred Qualifications:
• Experience with Delta Lake or Iceberg tables.
• Familiarity with Terraform or CloudFormation for infrastructure as code.
• Knowledge of machine learning data pipelines or feature store design.
• Certification in AWS Data Analytics, Azure Data Engineer, or GCP Professional Data Engineer.
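For the orchestration responsibility above, here is a minimal sketch of the kind of work described, assuming Airflow 2.4+ with a dbt project already deployed on the worker; the DAG id, schedule, and project/profile paths are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: a daily Airflow DAG that runs dbt models, then dbt tests.
# Paths, DAG id, and schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_refresh",        # hypothetical DAG name
    schedule="@daily",                 # requires Airflow 2.4+; older versions use schedule_interval
    start_date=datetime(2025, 1, 1),
    catchup=False,
    tags=["dbt", "elt"],
) as dag:
    # Build the dbt models first.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )
    # Then run dbt tests so bad data fails the pipeline rather than landing in reports.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )
    dbt_run >> dbt_test
```

The same run-then-test ordering can equally be expressed in GitHub Actions or Azure Data Factory if those are the team's orchestrators; the posting lists all three as acceptable.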
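For the streaming responsibility, a minimal sketch of Kafka ingestion with Spark Structured Streaming, assuming PySpark with the spark-sql-kafka connector on the classpath; the broker address, topic name, and sink paths are hypothetical placeholders.

```python
# Minimal sketch: read a Kafka topic as a stream and land raw records in a bronze layer.
# Broker, topic, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("orders_stream").getOrCreate()

# Subscribe to a Kafka topic as a streaming DataFrame.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka keys/values arrive as bytes; cast to strings before downstream parsing.
parsed = events.select(col("key").cast("string"), col("value").cast("string"))

# Append to a file sink with a checkpoint so the query can recover after restarts.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/bronze/orders")
    .option("checkpointLocation", "/data/checkpoints/orders")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```

The checkpoint location is what lets the stream resume from the last committed offsets, which is the usual baseline for the near-real-time pipelines the role calls for.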