

Brooksource
Data Engineer III
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer III with a contract length of over 6 months, paying $60/hr. Located in Denver, CO (hybrid), it requires 5+ years of experience, strong Python and SQL skills, and AWS proficiency.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
March 13, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Denver, CO
-
🧠 - Skills detailed
#RDS (Amazon Relational Database Service) #Apache Spark #AWS (Amazon Web Services) #Database Design #Documentation #Terraform #Storage #IAM (Identity and Access Management) #Data Processing #AWS IAM (AWS Identity and Access Management) #SQL (Structured Query Language) #Data Governance #ETL (Extract, Transform, Load) #Data Architecture #Data Engineering #GIT #Programming #S3 (Amazon Simple Storage Service) #Database Management #Athena #Compliance #DevOps #EC2 #Data Lake #GitLab #Indexing #Lambda (AWS Lambda) #Pandas #Python #Spark (Apache Spark) #Data Storage #Scripting #Libraries #GCP (Google Cloud Platform) #Logging #Docker #Infrastructure as Code (IaC) #Cloud #Monitoring #Automation #NumPy #PySpark #Code Reviews #Version Control #BigQuery #Databases #Data Pipeline #Security #Business Analysis
Role description
We are seeking an experienced Data Engineer to join our Fortune 100 client's team to support data engineering and infrastructure optimization. You will work alongside the Lead Data Developer on the Data team to build and maintain data pipelines, harden existing infrastructure, and establish cross-cloud connectivity. This role requires strong technical skills in Python and SQL, combined with excellent communication, adaptability, and collaborative problem-solving abilities.
Required Skills
Core Technical Competencies (Required)
• Python (Critical)
• Strong proficiency in Python for data engineering and automation
• Experience with data processing libraries (pandas, PySpark, NumPy)
• Script development for ETL/ELT workflows
• Object-oriented programming and code optimization
• Error handling and logging best practices
• SQL (Critical)
• Advanced SQL skills for data transformation and analysis
• Experience with SQL databases
• Complex query development and optimization
• Performance tuning and execution plan analysis
• Stored procedures, functions, and database scripting
• Understanding of indexing and database design
• AWS Services (Highly Recommended)
• Hands-on experience with core AWS services:
• S3 (data storage and lifecycle management)
• RDS (relational database management)
• Lambda (serverless computing)
• EC2 (compute instances)
• Glue (ETL service)
• Athena (query service)
• AWS IAM for security and access management
• CloudWatch for monitoring and logging
• Understanding of AWS cost optimization strategies
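The Python and SQL competencies above can be sketched together in a small, self-contained example. This is an illustrative snippet only, not taken from the client's codebase; it uses Python's built-in sqlite3 module as a stand-in for a production database, and shows an ETL-style load, a SQL aggregation, and the error-handling and logging practices the posting asks for.

```python
import logging
import sqlite3

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def load_and_transform(rows):
    """Load raw (region, amount) rows and run a SQL aggregation.

    Table and column names here are hypothetical, for illustration.
    """
    conn = sqlite3.connect(":memory:")
    try:
        conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
        cur = conn.execute(
            "SELECT region, SUM(amount) AS total "
            "FROM orders GROUP BY region ORDER BY total DESC"
        )
        return cur.fetchall()
    except sqlite3.Error:
        # Error handling and logging best practice: record context, re-raise
        log.exception("ETL step failed")
        raise
    finally:
        conn.close()

totals = load_and_transform([("west", 120.0), ("east", 75.5), ("west", 30.0)])
print(totals)  # [('west', 150.0), ('east', 75.5)]
```

In a real engagement the same pattern would target RDS or Athena rather than an in-memory database, with parameterized queries and connection management handled the same way.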
Data Engineering (Required)
• Strong ETL/ELT pipeline development and maintenance
• Apache Spark and distributed data processing (PySpark)
• Experience with medallion architecture or similar data patterns
• Data lake and warehouse concepts
• Understanding of data governance, quality, and security
• Ability to learn and work within existing data architectures
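The medallion architecture mentioned above organizes data into raw (bronze), cleaned (silver), and aggregated (gold) layers. The sketch below illustrates the idea with plain Python; the record shapes and field names are invented for the example and are not from the posting or the client's systems.

```python
# Bronze -> silver: clean raw records; silver -> gold: aggregate for analysts.

def to_silver(bronze):
    """Clean: drop malformed records, normalize types and casing."""
    silver = []
    for rec in bronze:
        try:
            silver.append({"user": rec["user"].strip().lower(),
                           "clicks": int(rec["clicks"])})
        except (KeyError, TypeError, ValueError, AttributeError):
            continue  # a real pipeline would quarantine bad records
    return silver

def to_gold(silver):
    """Aggregate: total clicks per user."""
    gold = {}
    for rec in silver:
        gold[rec["user"]] = gold.get(rec["user"], 0) + rec["clicks"]
    return gold

bronze = [{"user": " Ana ", "clicks": "3"},
          {"user": "bob", "clicks": "2"},
          {"user": None, "clicks": "1"},   # malformed: dropped in silver
          {"user": "ana", "clicks": "4"}]
print(to_gold(to_silver(bronze)))  # {'ana': 7, 'bob': 2}
```

At production scale the same bronze/silver/gold steps would typically be PySpark jobs reading from and writing to S3-backed data lake tables, but the layering logic is the same.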
Soft Skills & Professional Competencies (Critical)
• Communication
• Clear and concise written and verbal communication
• Ability to explain technical concepts to non-technical stakeholders
• Proactive in providing status updates and raising concerns
• Strong documentation skills
• Collaboration & Teamwork
• Works effectively with distributed teams (onshore and offshore)
• Receptive to feedback and code reviews
• Willing to mentor junior team members
• Contributes to team knowledge sharing
• Adaptability & Learning
• Quick learner who can understand existing systems and processes
• Comfortable working in ambiguous situations
• Flexible and open to changing priorities
• Self-directed with minimal supervision
• Problem-Solving
• Analytical thinking and troubleshooting skills
• Proactive in identifying and addressing issues
• Balances quick fixes with long-term solutions
• Focus on root cause analysis
DevOps & Infrastructure (Nice to Have)
• CI/CD pipeline experience
• Infrastructure as Code (Terraform)
• Git workflows and version control (GitLab)
• Container technologies (Docker)
• GCP services (BigQuery, Cloud Storage)
Responsibilities
Technical Execution
• Learn and understand existing AWS infrastructure and data pipelines
• Build and maintain Spark-based ETL/ELT pipelines using Python and SQL
• Optimize existing data processing workflows for performance and cost
• Harden existing infrastructure and improve operational stability
• Refactor and optimize current processes for better maintainability
• Troubleshoot data pipeline and connectivity issues
• Ensure security and compliance across cloud platforms
Collaboration & Communication
• Collaborate with Lead Data Developer on architecture decisions
• Work closely with Data team, DevOps team, Frontend team, and Business Analysts
• Participate in code reviews and knowledge sharing sessions
• Communicate progress, blockers, and technical decisions clearly
• Support offshore team members and mentor junior team members
Documentation & Best Practices
• Document processes, patterns, and best practices
• Create runbooks and troubleshooting guides
• Maintain clear and up-to-date technical documentation
• Contribute to team standards and coding guidelines
Experience Level
• 5+ years in software engineering, data engineering, or related technical role
• 3+ years hands-on Python development (required)
• 3+ years advanced SQL experience (required)
• 2+ years working with AWS services (highly recommended)
• 2+ years with Apache Spark (recommended)
• 2+ years of ETL pipeline experience
• Proven ability to learn and adapt to existing systems quickly
• Previous work with infrastructure optimization and technical debt reduction
• Experience working in collaborative, cross-functional teams
Team Structure
• Reports to: Lead Data Developer
• Team: Data Team
• Collaborates with: DevOps Team, Frontend Team, Business Analysts
• Team size: 4-5 data engineers and analysts
Additional Details:
• Pay: $60/hr
• Location: Denver, CO
• Hybrid position (4 days/week on site)
• W2 contract position with potential for extension or full-time conversion
• W2 Only - We are unable to provide sponsorship currently (US Citizens & GC holders only)
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.