

The Fountain Group
Data Engineer - REMOTE
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a 1-year contract, paying $75-80 per hour. It requires 5+ years of experience, advanced SQL and PySpark skills, and knowledge of AWS services, Snowflake, and ETL processes. Remote work during EST hours is expected.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
640
-
🗓️ - Date
October 23, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Atlanta, GA
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Airflow #Data Engineering #PySpark #QlikView #Deployment #SQL (Structured Query Language) #Data Management #ETL (Extract, Transform, Load) #Statistics #Monitoring #AWS (Amazon Web Services) #Data Architecture #Python #Datasets #Spark (Apache Spark) #Hadoop #Snowflake #AWS Glue #Storage #Visualization #Cloud #Mathematics #Computer Science #Data Modeling #Big Data #Lambda (AWS Lambda) #S3 (Amazon Simple Storage Service) #Tableau
Role description
Pay: $75-80 per hour. W2 ONLY
REMOTE - EST business hours
Duration: 1 year
Qualifications:
• 5+ years of Data Engineering experience is required.
• Advanced SQL and PySpark (and therefore Python) are the top two primary skills required.
• AWS (Glue, Lambda, S3) and Snowflake are also required; data modeling / ETL experience beyond the admin level would be great.
• Problem-solving skills beyond just coding; domain knowledge in HR / People / Finance is a plus.
• Candidates should be hands-on coders, not just managers - they will write the ETL logic that powers the data ecosystem.
Technologies:
• PySpark & SQL: For crunching and transforming big datasets (a brief sketch follows this list).
• AWS Glue / Lambda / S3: For automating data movement and storage.
• Snowflake / Hadoop / Hive: For hosting and processing massive data.
• Airflow: For scheduling and monitoring data jobs.
• Tableau / QlikView: For visualization and reporting.
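As a rough illustration only (not part of the posting): the snippet below sketches the kind of PySpark ETL logic this role calls for - reading raw data from S3, transforming it with SQL-style expressions, and writing a curated dataset back out for downstream tools such as Snowflake or Tableau. All bucket, column, and table names are hypothetical placeholders.

# Minimal PySpark ETL sketch; paths and fields are made-up examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw event data from S3 (hypothetical bucket).
events = spark.read.parquet("s3://example-raw-bucket/events/")

# Transform: drop bad rows and aggregate with SQL-style expressions.
daily_totals = (
    events
    .where(F.col("amount").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "department")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("employee_id").alias("headcount"),
    )
)

# Load: write the curated dataset back to S3 for downstream consumers
# (e.g., a Snowflake external stage or a BI extract).
daily_totals.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/daily_totals/"
)

spark.stop()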
Responsibilities:
You will be a core member of the team responsible for extracting large quantities of data from enterprise source systems, developing efficient ETL and data management processes, and building architectures for rapid ingestion and dissemination of key data.
You will be exposed to an enormous variety of topics across many different industries, such as a world-leading bank figuring out how best to handle customer care across all of its channels, a railway determining optimal crew deployment, or a medical care provider understanding the main drivers of delays in surgery scheduling. In this role, you will be the point person for data architecture and management on our cloud platform and other high-tech initiatives.
Education:
Bachelor's degree in a quantitative field such as Computer Science, Engineering, Statistics, Mathematics, or a related field is required. An advanced degree is a strong plus.
By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy at Privacy Policy






