

InfoVision Inc.
Sr Data Engineer-ETL
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr Data Engineer-ETL in Denver, CO, for a long-term contract. Requires 10+ years in software development, strong ETL skills with Spark and SQL, and experience with AWS, Kafka, and data pipeline management.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 13, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Denver, CO
-
🧠 - Skills detailed
#Apache Airflow #ETL (Extract, Transform, Load) #GitHub #RDS (Amazon Relational Database Service) #Batch #Databricks #Kafka (Apache Kafka) #SQL (Structured Query Language) #SNS (Simple Notification Service) #Cloud #ThoughtSpot #Snowflake #Python #Big Data #AWS (Amazon Web Services) #Computer Science #Monitoring #Airflow #Data Pipeline #SQS (Simple Queue Service) #Scala #EC2 #Spark (Apache Spark) #Hadoop #Spark SQL #PySpark #Databases #NoSQL #S3 (Amazon Simple Storage Service) #Observability #JSON (JavaScript Object Notation) #Data Engineering
Role description
Job Title: Sr. Data Engineer-ETL
Location: Denver, CO
Duration: Long-term
Main Skill:
• 10+ years of experience in the software development industry.
• Data engineering experience: building ETLs with Spark and SQL, building real-time and batch pipelines with Kafka/Firehose, building pipelines on Databricks/Snowflake, and ingesting multiple data formats (JSON, Parquet, Delta, etc.).
Job Description:
About You
• You have a BS or MS in Computer Science or similar relevant field
• You work well in a collaborative, team-based environment
• You are an experienced engineer with 3+ years of experience
• You have a passion for big data structures
• You possess strong organizational and analytical skills related to working with structured and unstructured data operations
• You have experience implementing and maintaining high performance / high availability data structures
• You are most comfortable operating within cloud-based ecosystems
• You enjoy leading projects and mentoring other team members
Specific Skills:
• 10+ years of experience in the software development industry
• Experience or knowledge of relational SQL and NoSQL databases
• High proficiency in Python, PySpark, SQL, and/or Scala
• Experience in designing and implementing ETL processes
• Experience in managing data pipelines for analytics and operational use
• Strong understanding of in-memory processing and data formats (Avro, Parquet, JSON, etc.)
• Experience or knowledge of AWS cloud services: EC2, MSK, S3, RDS, SNS, SQS
• Experience or knowledge of stream-processing systems, e.g., Storm, Spark Structured Streaming, Kafka consumers
• Experience or knowledge of data pipeline and workflow management tools, e.g., Apache Airflow, AWS Data Pipeline
• Experience or knowledge of big data tools, e.g., Hadoop, Spark, Kafka
• Experience or knowledge of software engineering tools/practices, e.g., GitHub, VS Code, CI/CD
• Experience or knowledge in data observability and monitoring
• Hands-on experience designing and maintaining data schema lifecycles
• Bonus: experience with tools like Databricks, Snowflake, and ThoughtSpot
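To make the ETL requirement above concrete, here is a minimal, framework-free sketch of the extract-transform-load pattern the role centers on. In practice this shape would be built with PySpark/Spark SQL against Kafka or S3 sources; stdlib Python with an in-memory SQLite target (and made-up sample records) stands in here purely for illustration.

```python
import json
import sqlite3

# Extract: raw JSON records, e.g. as they might arrive from a Kafka topic
# or an S3 landing bucket (sample data invented for this sketch).
raw_records = [
    '{"user_id": 1, "event": "click", "ts": "2026-01-13T10:00:00Z"}',
    '{"user_id": 2, "event": "view",  "ts": "2026-01-13T10:00:05Z"}',
    '{"user_id": 1, "event": "view",  "ts": "2026-01-13T10:00:09Z"}',
]

def transform(line: str) -> tuple:
    """Parse one JSON record and project the columns the target table needs."""
    rec = json.loads(line)
    return (rec["user_id"], rec["event"], rec["ts"])

# Load: write the transformed rows into a relational target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event TEXT, ts TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)",
                 (transform(r) for r in raw_records))

# A downstream analytics query, the kind later served to BI tools.
counts = dict(conn.execute(
    "SELECT event, COUNT(*) FROM events GROUP BY event").fetchall())
print(sorted(counts.items()))  # [('click', 1), ('view', 2)]
```

The same extract/transform/load split applies at scale: with PySpark, the extract step becomes a `spark.read.json(...)` or a Kafka stream, the transform a DataFrame projection, and the load a write to Parquet/Delta or a warehouse table.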




