

Lead Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Engineer in Greenwood Village, CO, with a contract length of over 6 months and a pay rate of "X". Key requirements include 7+ years of data engineering experience, strong Spark/PySpark, AWS services, and JSON messaging.
Country
United States
Currency
$ USD
Day rate
616
Date discovered
July 2, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Englewood, CO
Skills detailed
#Spark (Apache Spark) #AWS (Amazon Web Services) #Data Pipeline #GitLab #S3 (Amazon Simple Storage Service) #Kafka (Apache Kafka) #Terraform #AWS EMR (Amazon Elastic MapReduce) #Data Engineering #ML (Machine Learning) #Airflow #Python #ETL (Extract, Transform, Load) #Leadership #PySpark #AI (Artificial Intelligence) #EC2 #Deployment #SQL (Structured Query Language) #DevOps #JSON (JavaScript Object Notation)
Role description
Kforce has a client in Greenwood Village, CO, that is seeking a Lead Data Engineer to drive the technical planning and delivery roadmap for a data engineering initiative centered on AWS infrastructure, Spark-based transformations, and orchestration tools. This role requires a hands-on leader who can manage the workflow of a team of engineers while contributing technically to build out pipelines, endpoints, and ETL workflows. The ideal candidate has a background in corporate data engineering, strong Spark/PySpark expertise, and experience working with JSON-based messaging systems and AWS-native services.
Responsibilities
β’ Lead and plan the data engineering delivery roadmap (approx. 70% leadership, 30% execution)
β’ Manage a team of engineers, providing technical guidance and coaching
• Build and optimize data pipelines to process JSON messages using PySpark and SparkSQL (see the PySpark sketch after this list)
β’ Create and manage new endpoints with specific schemas, orchestrated through RabbitMQ and Kafka
• Work with JSON objects, parse messages, and send messages via Kafka (MSK) to S3
β’ Execute transformations from JSON to RDDs using Spark on AWS EMR/EC2
• Support orchestration through AWS Step Functions, with a future transition to Airflow (see the Airflow sketch after this list)
β’ Query data into dashboards using SQL; participate in AI engineering workflows
β’ Collaborate with DevOps to maintain GitLab CI/CD pipelines; manage code branches, testing, deployment, and destruction workflows via Terraform
β’ Host ETL code in GitLab and coordinate delivery through their existing CI/CD structure
• Work closely with other CI/CD experts to align on best practices and delivery timelines
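As a rough illustration of the kind of pipeline described above, here is a minimal PySpark sketch that parses JSON messages landed in S3 (for example by a Kafka-to-S3 sink), transforms them with SparkSQL, and writes curated output back to S3. It is not taken from the client's codebase; the schema fields, bucket names, and paths are hypothetical placeholders.

```python
# Minimal sketch: parse JSON messages from S3, aggregate with SparkSQL,
# write the result back to S3. All names and paths below are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("json-message-pipeline").getOrCreate()

# Hypothetical schema for the incoming JSON messages.
message_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("payload", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read raw JSON landed in S3 (e.g. by a Kafka -> S3 sink).
raw = (
    spark.read
    .schema(message_schema)
    .json("s3://example-raw-bucket/messages/")  # placeholder path
)

# Register a view and run the transformation step in SparkSQL.
raw.createOrReplaceTempView("messages")
daily_counts = spark.sql("""
    SELECT event_type,
           date_trunc('day', event_ts) AS event_day,
           count(*)                    AS message_count
    FROM messages
    GROUP BY event_type, date_trunc('day', event_ts)
""")

# Write curated output back to S3 for downstream dashboards.
daily_counts.write.mode("overwrite").parquet("s3://example-curated-bucket/daily_counts/")
```

On EMR, a job like this would typically be packaged and submitted via spark-submit as a cluster step, with the curated output queried into dashboards through SQL.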
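Likewise, here is a minimal Airflow sketch of how such a Spark job might be scheduled once orchestration moves from Step Functions to Airflow. It assumes a recent Airflow 2.x install with the Amazon provider package; the DAG id, EMR cluster id, and script location are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: a daily DAG that submits the PySpark job above as an EMR
# step and waits for it to finish. Cluster id and script path are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator
from airflow.providers.amazon.aws.sensors.emr import EmrStepSensor

SPARK_STEP = [{
    "Name": "json_message_pipeline",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",
        "Args": [
            "spark-submit",
            "--deploy-mode", "cluster",
            "s3://example-artifacts/jobs/json_message_pipeline.py",  # placeholder
        ],
    },
}]

with DAG(
    dag_id="json_message_pipeline",   # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                # requires Airflow 2.4+
    catchup=False,
) as dag:
    # Submit the Spark step to an existing EMR cluster.
    add_step = EmrAddStepsOperator(
        task_id="submit_spark_step",
        job_flow_id="j-EXAMPLECLUSTER",  # placeholder EMR cluster id
        steps=SPARK_STEP,
    )

    # Block until the submitted step completes.
    wait_for_step = EmrStepSensor(
        task_id="wait_for_spark_step",
        job_flow_id="j-EXAMPLECLUSTER",
        step_id="{{ task_instance.xcom_pull(task_ids='submit_spark_step')[0] }}",
    )

    add_step >> wait_for_step
```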
Requirements
β’ 7+ years of experience in data engineering, with recent experience in a leadership or team lead capacity
β’ Demonstrated experience with Spark and PySpark, including working with SparkSQL
β’ Strong Python and SQL skills
β’ Experience working with AWS services, particularly EMR, EC2, and S3
β’ Proven ability to work with JSON-based messaging systems
β’ Familiarity with Kafka, MSK, or similar messaging technologies
β’ Hands-on experience interacting with GitLab and Terraform in a CI/CD environment
β’ Ability to parse complex JSON objects and transform data as needed
β’ Comfortable working onsite 2-3 days per week (flexible based on team needs)
β’ Must be local and open to conversion to full-time
Preferred Skills
β’ Prior experience working in corporate/enterprise data engineering teams
β’ Experience orchestrating workflows with Airflow
β’ Background in building AI/ML or advanced analytics pipelines
β’ Understanding of end-to-end ETL code lifecycle management, including staging, deployment, and destruction phases
The pay range is the lowest to highest compensation we reasonably and in good faith believe we would pay for this role at the time of posting. We may ultimately pay more or less than this range. Employee pay is based on factors such as relevant education, qualifications, certifications, experience, skills, seniority, location, performance, union contract, and business needs. This range may be modified in the future.
We offer comprehensive benefits including medical/dental/vision insurance, HSA, FSA, 401(k), and life, disability & AD&D insurance to eligible employees. Salaried personnel receive paid time off. Hourly employees are not eligible for paid time off unless required by law. Hourly employees on a Service Contract Act project are eligible for paid sick leave.
Note: Pay is not considered compensation until it is earned, vested and determinable. The amount and availability of any compensation remains in Kforce's sole discretion unless and until paid and may be modified in its discretion consistent with the law.
This job is not eligible for bonuses, incentives or commissions.
Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
By clicking "Apply Today" you agree to receive calls, AI-generated calls, text messages or emails from Kforce and its affiliates, and service providers. Note that if you choose to communicate with Kforce via text messaging, the frequency may vary, and message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You will always have the right to cease communicating via text by using key words such as STOP.