

Knowledge Services
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a contract through Dec 2026, at an undisclosed pay rate, based in Indianapolis, IN. It requires 7–10 years of data engineering experience, strong proficiency in Python and SQL, and expertise in AWS, Lakehouse architecture, and containerization.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 3, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Indianapolis, IN
-
🧠 - Skills detailed
#Data Engineering #Logging #Docker #SQS (Simple Queue Service) #Delta Lake #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Programming #Databricks #Data Pipeline #IAM (Identity and Access Management) #Data Security #Python #API (Application Programming Interface) #Kubernetes #Minio #ML (Machine Learning) #Spark (Apache Spark) #Redshift #S3 (Amazon Simple Storage Service) #Scala #Infrastructure as Code (IaC) #AWS (Amazon Web Services) #Security #SQL (Structured Query Language) #Apache Iceberg #Terraform #Compliance #Trino #Datadog #Storage #Data Processing #Monitoring #Virtualization #Data Governance #Datasets #dbt (data build tool) #Data Ingestion #Cloud #Snowflake
Role description
Knowledge Services is seeking a Senior Data Engineer for a contract through the end of Dec 2026 (potential for extension) in Indianapolis, IN.
• This role will work a hybrid schedule and be onsite Tuesday, Wednesday, and Thursday each week in Indianapolis.
Senior Data Engineer Overview:
• This role supports our client’s enterprise Data Platform by building and maintaining a Lakehouse architecture on AWS, leveraging Amazon EKS for container orchestration (a brief illustrative sketch follows this overview).
• The platform will enable secure, scalable, and high-performance data processing for diverse agricultural and scientific datasets.
• It will integrate cloud-native technologies and distributed systems to manage data ingestion, transformation, and curation, supporting advanced analytics, machine learning, and operational workflows.
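For candidates less familiar with the Lakehouse pattern, here is a minimal, hypothetical sketch of the kind of curation step such a platform runs: a PySpark job writing an Apache Iceberg table to S3. All catalog, bucket, and table names are illustrative (not taken from the client's environment), and the job assumes the Iceberg Spark runtime and S3 connector are on the classpath.

```python
# Hypothetical sketch: curating an Iceberg table on S3 with PySpark.
# Names (lake, example-bucket, agronomy.field_trials) are invented.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("curate-field-trials")
    # Register an Iceberg catalog backed by an S3 warehouse path
    # (assumes iceberg-spark-runtime and hadoop-aws jars are available).
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3a://example-bucket/warehouse")
    .getOrCreate()
)

# Ingest raw files, then apply simple curation rules.
raw = spark.read.parquet("s3a://example-bucket/raw/field_trials/")
curated = raw.dropDuplicates(["trial_id"]).filter("yield_kg_ha IS NOT NULL")

# Iceberg tables are addressed as <catalog>.<namespace>.<table>.
curated.writeTo("lake.agronomy.field_trials").createOrReplace()
```

Iceberg (or Delta Lake) is what gives the platform ACID tables and schema evolution on top of plain S3 objects, which is the core of the Lakehouse approach referenced throughout this posting.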
Responsibilities:
• The engineer will design and implement scalable, cloud-native data solutions to support an enterprise Data Platform.
• This includes building and maintaining data pipelines and infrastructure on AWS, leveraging Amazon EKS for container orchestration, S3 for storage, and Trino for distributed query processing.
• The role involves developing workflows using Dagster for orchestration and dbt for transformation, ensuring efficient ingestion, curation, and delivery of data across the Lakehouse architecture (see the sketch after this list).
• The engineer will focus on performance optimization, security, and reliability while enabling advanced analytics and machine learning capabilities for diverse agricultural and scientific datasets.
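As a rough illustration of the Dagster side of that workflow, the sketch below defines two software-defined assets: an ingestion step feeding a curation step. All asset and column names are hypothetical; in a real project the curation layer would typically be dbt models wired in through Dagster's dbt integration rather than hand-written pandas.

```python
# Hypothetical sketch of a Dagster asset graph for ingestion + curation.
import pandas as pd
from dagster import Definitions, asset


@asset
def raw_sensor_readings() -> pd.DataFrame:
    # Ingestion step: in production this might pull from S3 or an API.
    return pd.DataFrame({"sensor_id": [1, 2], "reading": [20.5, 21.1]})


@asset
def curated_sensor_readings(raw_sensor_readings: pd.DataFrame) -> pd.DataFrame:
    # Curation step: dbt models would normally replace logic like this;
    # Dagster's dbt integration loads them as assets in the same graph.
    return raw_sensor_readings.dropna()


defs = Definitions(assets=[raw_sensor_readings, curated_sensor_readings])
```

Pointing `dagster dev` at a module like this surfaces both assets, their lineage, and their materialization history in the Dagster UI.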
Senior Data Engineer Requirements:
• 7–10 years in data engineering
• Programming: Strong proficiency in Python and SQL
• Data Platform & Lakehouse:
• Experience with Lakehouse architecture and modern open-source data platforms
• Familiarity with Apache Iceberg, Delta Lake, and distributed processing frameworks (e.g., Spark)
• Ability to design and implement cloud-native data pipelines and infrastructure on AWS
• Experience with Dagster for orchestration and workflow management
• Knowledge of Trino for federated query processing across heterogeneous data sources (see the example query after this list)
• Familiarity with MinIO for object storage in Lakehouse environments
• Data Virtualization experience for integrating multiple data sources seamlessly
• Cloud Technologies: Hands-on experience with AWS services, including:
• S3
• ECS
• Lambda
• CloudFormation or Terraform
• EKS
• API Gateway
• SQS
• CloudWatch
• Networking: Solid understanding of networking fundamentals for secure and optimized data platform communication
• Security & Access Control: Experience implementing role-based access control (RBAC) and attribute-based access control (ABAC)
• Knowledge of data security policies, encryption, and identity and access management (IAM)
• Containerization: Experience with Docker and Kubernetes (EKS)
• Monitoring & Logging: Proficiency with Datadog
• Data Warehousing: Solid understanding of modern solutions (e.g., Snowflake, Redshift)
• Infrastructure as Code: Proficiency with Terraform or CloudFormation
• Stakeholder Collaboration: Ability to translate complex requirements into scalable data solutions
• Soft Skills: Excellent problem-solving, attention to detail, and the ability to communicate with end users and troubleshoot issues
• Data Platform Expertise:
• Experience with Lakehouse architecture and technologies like Delta Lake or Apache Iceberg
• Familiarity with AWS services (S3, Glue, EMR, Lambda) and Amazon EKS for container orchestration
• Proficiency in Databricks, Spark, and distributed data processing frameworks
• Understanding of data governance, security, and compliance in cloud environments
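To make the Trino requirement concrete, here is a small, hypothetical example using Trino's Python client (the `trino` package) to run a federated query. The host, catalog, schema, and table names are invented for illustration.

```python
# Hypothetical sketch: a federated query through Trino's Python client.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",
    port=8080,
    user="data-platform",
    catalog="iceberg",
    schema="agronomy",
)
cur = conn.cursor()

# A single Trino query can join tables from different catalogs,
# e.g. an Iceberg table on S3 with a PostgreSQL operational store.
cur.execute("""
    SELECT t.trial_id, s.site_name, avg(t.yield_kg_ha) AS avg_yield
    FROM iceberg.agronomy.field_trials t
    JOIN postgresql.public.sites s ON t.site_id = s.site_id
    GROUP BY t.trial_id, s.site_name
""")
for row in cur.fetchall():
    print(row)
```

This is the point of Trino in the architecture described above: one SQL statement can join Lakehouse tables with operational databases, without copying data between systems first.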