

Niktor Inc
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 4-8 years of experience, offering a remote contract in the U.S. at a competitive pay rate. Key skills include SQL, Python, Snowflake, and cloud platforms. Certifications in AWS, Azure, or GCP are preferred.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: October 24, 2025
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed:
#Data Warehouse #Data Engineering #Compliance #ML (Machine Learning) #Data Modeling #dbt (data build tool) #Apache Spark #Data Governance #AWS (Amazon Web Services) #Infrastructure as Code (IaC) #Kubernetes #Azure #Terraform #GCP (Google Cloud Platform) #Delta Lake #Data Science #Snowflake #Databricks #Data Integrity #Kafka (Apache Kafka) #Scala #Data Analysis #Python #Schema Design #Spark (Apache Spark) #Data Quality #Security #SQL (Structured Query Language) #Version Control #ETL (Extract, Transform, Load) #Azure Data Factory #GitHub #Cloud #GIT #ADF (Azure Data Factory) #Docker #Redshift #Documentation #Airflow #Data Pipeline #GDPR (General Data Protection Regulation)
Role description
Job Title: Data Engineer
Location: Remote (United States)
Experience: 4–8 Years
About the Role:
We are seeking a highly skilled Data Engineer to design, develop, and optimize data pipelines and architectures that enable advanced analytics and real-time data insights. The ideal candidate is passionate about data quality, scalability, and performance, and brings innovative solutions to complex data challenges.
Key Responsibilities:
• Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data using modern tools and cloud platforms.
• Develop and manage data warehouse and lakehouse solutions using Snowflake, Redshift, or Databricks.
• Implement Data Mesh architecture principles to decentralize data ownership and empower domain-based data teams.
• Optimize data models for analytics and reporting, ensuring data integrity, lineage, and performance.
• Work closely with data analysts, data scientists, and product teams to support data-driven decision-making.
• Integrate streaming data using Kafka, Spark Streaming, or Kinesis for near-real-time insights.
• Automate workflows and CI/CD for data pipelines using Airflow, GitHub Actions, or Azure Data Factory (see the orchestration sketch after this list).
• Apply dbt (Data Build Tool) best practices for transformations, documentation, and testing.
• Ensure compliance with data governance, security, and privacy policies (GDPR, HIPAA, etc.).
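To make the orchestration and dbt responsibilities concrete, here is a minimal sketch, assuming Airflow 2.x, of a DAG that stages raw data and then runs dbt transformations and tests. The DAG id, task names, dbt project path, and model selection are hypothetical placeholders, not details from this posting.

```python
# A minimal sketch, assuming Airflow 2.x: one DAG that stages raw data and
# then runs dbt transformations and tests. DAG id, task names, the dbt
# project path, and the model selection are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    """Placeholder extract step: a real task would pull from a source
    system (API, database, files) and stage the data in the warehouse."""
    print(f"extracting orders for {context['ds']}")


with DAG(
    dag_id="orders_elt",              # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_orders",
        python_callable=extract_orders,
    )

    # Run dbt after the raw data lands; assumes a dbt project at /opt/dbt
    # with a configured warehouse profile (e.g., Snowflake).
    transform = BashOperator(
        task_id="dbt_run_and_test",
        bash_command="cd /opt/dbt && dbt run --select orders && dbt test --select orders",
    )

    extract >> transform
```

In a real pipeline the extract step would land raw data in the warehouse, and the dbt step would carry the transformations, documentation, and tests this role calls for.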
Required Skills:
• 4–8 years of experience as a Data Engineer or in a similar data-intensive role.
• Strong expertise in SQL, Python, and at least one cloud platform (AWS, Azure, or GCP).
• Proven experience with Snowflake and dbt, including performance optimization.
• Hands-on experience with Apache Spark, Kafka, or Flink (see the streaming sketch after this list).
• Solid understanding of data modeling, data warehousing, and schema design.
• Familiarity with containerization (Docker/Kubernetes) and version control (Git).
• Exposure to Data Mesh architecture or domain-oriented data ownership models (uncommon, but preferred).
• Excellent communication, problem-solving, and analytical thinking skills.
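As one illustration of the Spark and Kafka skills above, here is a minimal PySpark Structured Streaming sketch that parses Kafka events and appends them to a Delta table. The broker address, topic, event schema, and paths are hypothetical, and the job assumes the spark-sql-kafka connector and delta-spark packages are on the classpath.

```python
# A minimal sketch of a near-real-time ingest: PySpark Structured Streaming
# reads JSON events from Kafka and appends them to a Delta table. Broker,
# topic, schema, and paths are hypothetical; the job assumes the
# spark-sql-kafka connector and delta-spark are available.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Assumed event shape; a production job would source this from a schema registry.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
    # Kafka delivers the payload as bytes; cast and parse it into columns.
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append parsed events to a Delta table, tracking progress via a checkpoint.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")  # hypothetical path
    .start("/tmp/delta/orders")                               # hypothetical path
)
query.awaitTermination()
```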
Preferred Qualifications:
• Experience with Delta Lake or Iceberg tables.
• Familiarity with Terraform or CloudFormation for infrastructure as code.
• Knowledge of Machine Learning data pipelines or feature store design.
• Certification in AWS Data Analytics, Azure Data Engineer, or GCP Professional Data Engineer.