

SPECTRAFORCE
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are unspecified. Key skills include Python, Spark/PySpark, AWS services, and complex SQL queries in Snowflake. Experience in data engineering and production troubleshooting is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 24, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Virginia, United States
-
🧠 - Skills detailed
#Lambda (AWS Lambda) #IAM (Identity and Access Management) #Datasets #AWS Glue #Security #Cloud #Automated Testing #S3 (Amazon Simple Storage Service) #Scripting #EC2 #Data Engineering #Python #ETL (Extract, Transform, Load) #DevOps #Data Science #Automation #Data Pipeline #Data Quality #Data Access #Snowflake #PySpark #Compliance #Scala #Data Processing #Java #Spark (Apache Spark) #Agile #SQL Queries #AWS (Amazon Web Services) #Debugging #GIT #Code Reviews #SQL (Structured Query Language) #Golang
Role description
Key Responsibilities:
• Design, develop, and maintain data pipelines leveraging Python, Spark/PySpark, and cloud-native services (a minimal PySpark sketch follows this list).
• Build and optimize data workflows, ETL processes, and transformations for large-scale structured and semi-structured datasets.
• Write advanced and efficient SQL queries against Snowflake, including joins, window functions, and performance tuning.
• Develop backend and automation tools using Golang and/or Python as needed.
• Implement scalable, secure, and high-quality data solutions across AWS services such as S3, Lambda, Glue, Step Functions, EMR, and CloudWatch.
• Troubleshoot complex production data issues, including pipeline failures, data quality gaps, and cloud environment challenges.
• Perform root-cause analysis and implement automation to prevent recurring issues.
• Collaborate with data scientists, analysts, platform engineers, and product teams to enable reliable, high-quality data access.
• Ensure compliance with enterprise governance, data quality, and cloud security standards.
• Participate in Agile ceremonies, code reviews, and DevOps practices to ensure high engineering quality.
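To make the pipeline, ETL, and window-function bullets above concrete, here is a minimal PySpark sketch of the common read-dedupe-write shape they describe. It is an illustration only, not this team's actual pipeline: the S3 bucket names, the order_id/updated_at schema, and all paths are hypothetical assumptions.

```python
# Hypothetical PySpark ETL sketch. Bucket names, paths, and the
# order_id/updated_at schema are illustrative, not from this posting.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read semi-structured JSON events from S3 (hypothetical bucket/prefix).
raw = spark.read.json("s3://example-raw-bucket/orders/2025/12/")

# Keep only the latest record per order_id using a window function --
# a typical dedupe step before downstream transformations.
latest = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
deduped = (
    raw.withColumn("rn", F.row_number().over(latest))
       .filter(F.col("rn") == 1)
       .drop("rn")
)

# Light transformation, then write partitioned Parquet back to S3.
(
    deduped
    .withColumn("order_date", F.to_date("updated_at"))
    .write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```

On AWS a job of this shape would typically run on EMR or AWS Glue; the window-based dedupe step is the same either way.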
Key Requirements and Technology Experience:
• Core skill set: Python, Spark/PySpark, Golang, Java, and AWS (Glue, EC2, Lambda); able to write complex SQL queries against Snowflake tables and troubleshoot issues.
• Proficiency in Python with experience building scalable data pipelines or ETL processes.
• Strong hands-on experience with Spark/PySpark for distributed data processing.
• Experience writing complex SQL queries (Snowflake preferred), including optimization and performance tuning (see the query sketch after this list).
• Working knowledge of AWS cloud services used in data engineering (S3, Glue, Lambda, EMR, Step Functions, CloudWatch, IAM).
• Experience with Golang for scripting, backend services, or performance-critical processes.
• Strong debugging, troubleshooting, and analytical skills across cloud and data ecosystems.
• Familiarity with CI/CD workflows, Git, and automated testing.
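For the Snowflake requirement above, the sketch below shows the kind of join-plus-window-function query the posting has in mind, run from Python with the snowflake-connector-python package. The connection parameters, the ANALYTICS_WH/ANALYTICS_DB names, and the ORDERS table are all hypothetical assumptions for illustration.

```python
# Hypothetical sketch: run a window-function query against Snowflake.
# Credentials come from the environment; warehouse, database, and the
# ORDERS table are illustrative assumptions, not from this posting.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    warehouse="ANALYTICS_WH",   # hypothetical warehouse
    database="ANALYTICS_DB",    # hypothetical database
    schema="PUBLIC",
)

# Rank each customer's orders from the last 30 days by amount and keep
# the top three -- a typical ROW_NUMBER window-function pattern.
QUERY = """
SELECT customer_id, order_id, amount
FROM (
    SELECT o.customer_id,
           o.order_id,
           o.amount,
           ROW_NUMBER() OVER (
               PARTITION BY o.customer_id
               ORDER BY o.amount DESC
           ) AS rnk
    FROM ORDERS o
    WHERE o.order_date >= DATEADD(day, -30, CURRENT_DATE)
) ranked
WHERE rnk <= 3
"""

try:
    cur = conn.cursor()
    # Snowflake cursors are iterable; execute() returns the cursor.
    for customer_id, order_id, amount in cur.execute(QUERY):
        print(customer_id, order_id, amount)
finally:
    conn.close()
```

Performance tuning for a query like this usually means checking the query profile for pruning and spilling before reaching for a larger warehouse.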






