

TalentOla
AWS Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Chicago, IL, on a contract of unspecified duration. The pay rate is not provided. Key skills include 10+ years of experience with Python, PySpark, and AWS, plus extensive big data expertise with EMR, Spark, and Kafka/Kinesis.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 4, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#Redis #Spark (Apache Spark) #SQL (Structured Query Language) #Data Pipeline #IAM (Identity and Access Management) #Datasets #PySpark #AWS (Amazon Web Services) #Athena #Kafka (Apache Kafka) #NoSQL #Lambda (AWS Lambda) #Knowledge Graph #Databases #Elasticsearch #Terraform #Python #Complex Queries #Big Data #PostgreSQL #GIT #Airflow #Agile #Docker #Data Engineering
Role description
Role - Senior Data Engineer
Location: Chicago, IL (Onsite)
Mandatory skills
• 10+ years of experience
• Python, PySpark, AWS
• EMR, Spark, Kafka/Kinesis
• Technical certification in multiple technologies is desirable.
Responsibilities:
• Proficient in multiple programming languages (Python is a must), with the ability to quickly learn new ones.
• Expertise in SQL: complex queries, relational databases (preferably PostgreSQL), and NoSQL databases (Redis and Elasticsearch).
• Extensive big data experience, including EMR, Spark, Kafka/Kinesis, and optimizing data pipelines, architectures, and datasets.
• AWS expert with hands-on experience in Lambda, Glue, Athena, Kinesis, IAM, EMR/PySpark, and Docker.
• Proficient in CI/CD development using Git, Terraform, and agile methodologies.
• Comfortable with stream-processing systems (Storm, Spark Streaming) and workflow management tools (Airflow).
• Exposure to knowledge graph technologies (graph databases, OWL, SPARQL) is a plus.






