

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 8-10 years of experience in AWS, SQL databases, and Python. Contract length is unspecified, with a competitive pay rate. Key skills include ETL frameworks, Tableau, and CI/CD pipelines.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 19, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Newark, NJ
Skills detailed: #Security #Complex Queries #Data Pipeline #GitHub #Programming #Version Control #Kubernetes #PostgreSQL #S3 (Amazon Simple Storage Service) #Tableau #Data Engineering #Git #BI (Business Intelligence) #Oracle #AWS (Amazon Web Services) #API (Application Programming Interface) #Docker #AWS Lambda #MySQL #Redshift #Compliance #Databases #SQS (Simple Queue Service) #Datasets #Deployment #Agile #Scala #Automation #Scripting #Data Migration #Data Governance #Computer Science #Cloud #Data Modeling #Python #ETL (Extract, Transform, Load) #SNS (Simple Notification Service) #Lambda (AWS Lambda) #SQL (Structured Query Language) #Migration #Bash
Role description
Experienced Data Engineer with 8 to 10 years in designing and optimizing data pipelines, cloud infrastructure, and BI solutions. Strong expertise in AWS (Lambda, Glue, S3, EMR, Redshift), SQL/relational databases (Oracle, MySQL, PostgreSQL), and Python for data engineering and automation. Skilled in Salesforce data models, Tableau dashboards, and implementing ETL frameworks, CI/CD pipelines, and containerization (Docker/Kubernetes). Proven track record of delivering scalable, secure, and business-focused data solutions in Agile environments.
Skills & Expertise
β’ A robust background in AWS services such as Lambda, Glue, S3, EMR, SNS, SQS, CloudWatch, Redshift, and Bedrock, with hands-on experience in designing and managing cloud-based data workflows.
β’ Strong expertise in SQL and relational databases including Oracle, MySQL, and PostgreSQL, with proven ability to write complex queries, optimize performance, and handle large datasets.
β’ Familiarity with the Salesforce platform, including data models, standard and custom objects, and integration methods for data migration and reporting.
β’ Proficiency in Python programming for data engineering tasks such as ETL pipeline development, data validation, and API integration.
β’ Skilled in scripting (Bash, Shell, Python) to automate repetitive tasks and streamline processes across data environments.
β’ Hands-on experience with Tableau dashboards, including design, customization, and troubleshooting performance issues for better business insights.
β’ Knowledge of data modeling and ETL frameworks, with experience in building scalable pipelines for structured and unstructured data.
β’ Experience with containerization and orchestration tools such as Docker and Kubernetes for deploying and managing applications in cloud environments.
β’ Strong understanding of CI/CD pipelines and version control (Git/GitHub) to ensure efficient and reliable deployment of code and infrastructure changes.
• Familiarity with data governance, quality checks, and security best practices, ensuring compliance and reliability across sensitive datasets.
Education:
Bachelor's degree in Computer Science