

Kforce Inc
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of "$X - $Y". It is a hybrid position located in Pittsburgh, PA, requiring strong experience with AWS services, SQL, Python, and ETL processes.
Country
United States
Currency
$ USD
-
Day rate
480
-
Date
April 25, 2026
Duration
Unknown
-
Location
Hybrid
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Wexford, PA
-
Skills detailed
#Jira #Scala #Data Quality #Data Catalog #Data Accuracy #Data Lake #GIT #Amazon Redshift #Linux #ETL (Extract, Transform, Load) #Data Ingestion #AWS EMR (Amazon Elastic MapReduce) #Metadata #Storage #Data Engineering #Python #AWS (Amazon Web Services) #Documentation #AI (Artificial Intelligence) #Data Access #Logging #Data Management #Monitoring #Version Control #SQL (Structured Query Language) #Data Modeling #Redshift #Spark (Apache Spark) #Data Pipeline #Agile #S3 (Amazon Simple Storage Service) #Data Storage #Datasets #AWS Glue
Role description
Responsibilities
Kforce has a client seeking a Data Engineer who will design, build, and maintain scalable data pipelines and core data workflows using AWS-native services, including Redshift, Glue, S3, Step Functions, and the Glue Data Catalog. This is a hybrid role in Pittsburgh, PA. You will own data ingestion, transformation, and validation processes and will be responsible for delivering reliable, well-modeled datasets that support analytics and application use cases. You will collaborate closely with engineering, product, and business stakeholders to ensure data is accurate, accessible, and actionable, while helping shape the future state of the data platform. Duties:
β’ Design, build, and maintain scalable ETL pipelines using AWS Glue, Crawlers, Apache Hudi, and Step Functions
β’ Manage and optimize data storage and organization in Amazon S3
β’ Participate in data modeling and transformation in Amazon Redshift to support analytics, reporting, and downstream applications
β’ Own metadata management and schema discovery using AWS Glue Data Catalog
β’ Ensure data quality, integrity, and consistency across all pipelines and datasets
β’ Develop data access controls and governance practices to ensure secure and appropriate data use
β’ Implement monitoring, logging, and alerting for data workflows
β’ Partner with engineering and business teams to define data requirements and deliver trusted datasets
β’ Enable analytics and reporting through well-documented, well-modeled, and reliable data assets
β’ Establish best practices for data engineering, documentation, naming conventions, and governance
β’ Contribute to the roadmap and evolution of the data platform as the organization scales
β’ Perform other data-related responsibilities as needed
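The data-quality duties above (validating ingested rows before they land in the warehouse) can be sketched as a minimal pre-load check. This is an illustrative sketch only; the schema, field names, and types below are hypothetical and not taken from the posting:

```python
from dataclasses import dataclass, field

# Hypothetical target schema -- field names and types are assumptions
# for illustration, not part of the role description.
SCHEMA = {"order_id": int, "customer": str, "amount": float}


@dataclass
class ValidationResult:
    valid: list = field(default_factory=list)       # rows that passed all checks
    rejected: list = field(default_factory=list)    # (row, reason) pairs


def validate_rows(rows):
    """Split incoming rows into valid and rejected, recording a reason per reject."""
    result = ValidationResult()
    for row in rows:
        missing = [f for f in SCHEMA if f not in row]
        if missing:
            result.rejected.append((row, f"missing fields: {missing}"))
            continue
        bad_types = [f for f, t in SCHEMA.items() if not isinstance(row[f], t)]
        if bad_types:
            result.rejected.append((row, f"wrong types: {bad_types}"))
            continue
        result.valid.append(row)
    return result
```

In a Glue or Step Functions pipeline, a check like this would typically run after ingestion and route rejected rows to a quarantine location (for example, a separate S3 prefix) for review, so bad records never reach downstream consumers.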
Requirements
β’ Strong experience with AWS data services including Redshift, Spark, Glue ETL, Glue Crawlers, S3, Step Functions, and Glue Data Catalog
β’ Solid understanding of data modeling, ETL/ELT patterns, and downstream consumption needs
β’ Proficiency in SQL and Python with experience working with large, complex datasets
β’ Experience building and supporting production-grade data pipelines
β’ Strong analytical skills with the ability to translate business needs into scalable data solutions
β’ Ability to work independently, take ownership, and build systems from scratch
β’ High attention to detail with a strong focus on data accuracy and reliability
β’ Experience with Git-based workflows and version control
β’ Clear communication skills and the ability to collaborate across technical and non-technical teams
β’ Structured, methodical approach to problem solving
β’ A builder mindset and comfort operating without an established playbook
Preferred Qualifications
β’ Relevant technical certifications
β’ Experience with AWS EMR, data lakes, and Lake Formation
β’ Experience coaching engineers or acting in a technical lead capacity
β’ Background or exposure to data analytics
β’ Experience working in small or agile teams
β’ Familiarity with Linux-based environments
β’ Hands-on experience with Jira and Confluence
The pay range is the lowest to highest compensation we reasonably in good faith believe we would pay at posting for this role. We may ultimately pay more or less than this range. Employee pay is based on factors like relevant education, qualifications, certifications, experience, skills, seniority, location, performance, union contract and business needs. This range may be modified in the future.
We offer comprehensive benefits including medical/dental/vision insurance, HSA, FSA, 401(k), and life, disability & AD&D insurance to eligible employees. Salaried personnel receive paid time off. Hourly employees are not eligible for paid time off unless required by law. Hourly employees on a Service Contract Act project are eligible for paid sick leave.
Note: Pay is not considered compensation until it is earned, vested and determinable. The amount and availability of any compensation remains in Kforce's sole discretion unless and until paid and may be modified in its discretion consistent with the law.
This job is not eligible for bonuses, incentives or commissions.
Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
By clicking "Apply Today" you agree to receive calls, AI-generated calls, text messages or emails from Kforce and its affiliates, and service providers. Note that if you choose to communicate with Kforce via text messaging the frequency may vary, and message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You will always have the right to cease communicating via text by using key words such as STOP.





