

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown," located in Greenwood Village, CO. The pay rate is "unknown." Key skills include AWS, Spark, PySpark, SQL, and experience with JSON messaging systems. A minimum of 5 years in data engineering is required.
Country
United States
Currency
$ USD
Day rate
520
Date discovered
July 31, 2025
Project duration
Unknown
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
Englewood, CO
Skills detailed
#Data Transformations #Kafka (Apache Kafka) #AWS EMR (Amazon Elastic MapReduce) #PySpark #Python #AI (Artificial Intelligence) #Spark (Apache Spark) #AWS (Amazon Web Services) #Programming #GitLab #Airflow #Scala #Deployment #Data Pipeline #Cloud #ETL (Extract, Transform, Load) #Data Engineering #EC2 #DevOps #Terraform #Monitoring #ML (Machine Learning) #Data Processing #S3 (Amazon Simple Storage Service) #SQL (Structured Query Language) #JSON (JavaScript Object Notation)
Role description
Responsibilities
Kforce has a client in Greenwood Village, CO that is seeking a skilled Data Engineer to support a high-impact data engineering initiative focused on AWS infrastructure, Spark-based transformations, and orchestration tools. This individual contributor role requires hands-on experience building data pipelines, processing large-scale JSON messages, and deploying solutions in cloud environments. The ideal candidate has a solid foundation in data engineering within enterprise environments, strong Spark/PySpark expertise, and experience with AWS-native services and modern orchestration tools.
• Build and optimize scalable data pipelines to process and transform structured and semi-structured data using PySpark and SparkSQL
• Work with JSON objects to parse messages and send MDK messages via Kafka to S3
• Create and manage data endpoints with defined schemas, orchestrated through RabbitMQ and Kafka
• Execute data transformations from JSON to RDDs using Spark on AWS EMR/EC2
• Support orchestration through AWS Step Functions with a future transition to Airflow
• Use SQL to extract, transform, and load data into reporting and dashboarding tools
• Collaborate with DevOps and CI/CD teams to maintain GitLab pipelines and automate infrastructure deployment using Terraform
• Maintain version-controlled ETL code in GitLab and participate in testing, deployment, and monitoring workflows
• Participate in cross-functional engineering discussions, aligning with best practices and timelines
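The parse-and-land step described above (JSON messages arriving via Kafka and written to S3) can be sketched in plain Python; in the role itself this logic would run inside a PySpark job, and the message schema and bucket layout here are hypothetical, invented for illustration:

```python
import json
from datetime import datetime, timezone

def parse_message(raw: str) -> dict:
    """Parse a raw JSON message and pull out the fields downstream steps need.

    The field names (id, type, payload, ts) are a hypothetical schema.
    """
    msg = json.loads(raw)
    return {
        "event_id": msg["id"],
        "event_type": msg.get("type", "unknown"),
        "payload": msg.get("payload", {}),
        "received_at": msg["ts"],
    }

def s3_key_for(event: dict, prefix: str = "raw-events") -> str:
    """Build a date-partitioned S3 object key for a parsed event."""
    ts = datetime.fromtimestamp(event["received_at"], tz=timezone.utc)
    return f"{prefix}/dt={ts:%Y-%m-%d}/{event['event_id']}.json"

# Example: one message flowing through the parse step.
raw = '{"id": "abc123", "type": "click", "payload": {"page": "/home"}, "ts": 1722384000}'
event = parse_message(raw)
key = s3_key_for(event)
```

Date-partitioned keys like this are a common S3 layout because they let Spark and SQL engines prune partitions when reading back.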
Requirements
• 5+ years of experience in data engineering with strong hands-on development skills
• Proficiency in Spark and PySpark, including SparkSQL for distributed data processing
• Strong programming experience in Python and solid SQL query skills
• Experience working with AWS services such as EMR, EC2, and S3
• Familiarity with JSON-based messaging systems, message parsing, and stream processing
• Exposure to Kafka, MSK, or RabbitMQ for message queueing and data delivery
• Experience working with GitLab and Terraform in a CI/CD environment
• Ability to work independently and contribute effectively in a fast-paced team setting
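The SQL extract/transform/load work called out in the responsibilities can be sketched with the stdlib sqlite3 module standing in for the actual warehouse; the table and column names are hypothetical:

```python
import sqlite3

# Minimal ETL sketch: roll raw events up into a reporting table.
# sqlite3 stands in for the warehouse; all names are invented.
conn = sqlite3.connect(":memory:")

# Extract: a small raw-events table seeded with sample rows.
conn.execute("CREATE TABLE events (event_type TEXT, page TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("click", "/home"), ("click", "/home"), ("view", "/docs")],
)

# Transform + load: aggregate per-type counts for a dashboarding tool.
conn.execute(
    """CREATE TABLE event_counts AS
       SELECT event_type, COUNT(*) AS n
       FROM events
       GROUP BY event_type"""
)
rows = dict(conn.execute("SELECT event_type, n FROM event_counts"))
```

The same GROUP BY shape carries over directly to SparkSQL or a warehouse engine; only the connection and table DDL change.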
Preferred Skills
• Background in enterprise-level data engineering environments
• Experience with Airflow for workflow orchestration
• Familiarity with AI/ML pipelines or advanced analytics systems
• Understanding of ETL lifecycle management, including staging, deployment, and teardown processes
The pay range is the lowest to highest compensation we reasonably and in good faith believe we would pay at posting for this role. We may ultimately pay more or less than this range. Employee pay is based on factors like relevant education, qualifications, certifications, experience, skills, seniority, location, performance, union contract and business needs. This range may be modified in the future.
We offer comprehensive benefits including medical/dental/vision insurance, HSA, FSA, 401(k), and life, disability & AD&D insurance to eligible employees. Salaried personnel receive paid time off. Hourly employees are not eligible for paid time off unless required by law. Hourly employees on a Service Contract Act project are eligible for paid sick leave.
Note: Pay is not considered compensation until it is earned, vested and determinable. The amount and availability of any compensation remains in Kforce's sole discretion unless and until paid and may be modified in its discretion consistent with the law.
This job is not eligible for bonuses, incentives or commissions.
Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
By clicking "Apply Today" you agree to receive calls, AI-generated calls, text messages or emails from Kforce and its affiliates, and service providers. Note that if you choose to communicate with Kforce via text messaging the frequency may vary, and message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You will always have the right to cease communicating via text by using key words such as STOP.