Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Senior Data Engineer contract (6-12+ months) in Minneapolis, MN, requiring on-site work 4 days a week. Key skills include SQL, Python, and experience with structured/unstructured data. Familiarity with AWS, DBT, and Snowflake is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 19, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
On-site
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Greater Minneapolis-St. Paul Area
-
🧠 - Skills detailed
#Snowflake #Data Pipeline #Version Control #Code Reviews #Data Quality #Visualization #S3 (Amazon Simple Storage Service) #Data Engineering #GIT #Documentation #Airflow #AWS (Amazon Web Services) #Looker #Redshift #dbt (data build tool) #Scala #Cloud #Data Modeling #Python #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #SQL (Structured Query Language)
Role description
• This is a 6-12+ month contract opportunity with our client located in Minneapolis, MN. The on-site requirement is 4 days per week. Candidates must be authorized to work in the United States without the need for sponsorship.
• We are seeking a Senior Data Engineer to join a fast-paced, growing data engineering team within a forward-thinking investment firm. This role is ideal for a technically skilled data professional with strong experience in SQL and Python and a passion for building data pipelines, data modeling, and delivering modern data solutions. You must be comfortable working with both structured and unstructured data and have a strong interest in cloud technologies. While experience with AWS, dbt, and Snowflake is a plus, the team is open to training the right candidate in these tools.
Primary Duties
• Build and maintain scalable data pipelines using SQL and Python.
• Work with both structured and unstructured data to support analytical and operational use cases.
• Support cloud data engineering projects, especially those leveraging AWS (training available).
• Collaborate on data modeling and transformation efforts across teams.
• Assist with documentation, QA, and code reviews to ensure data quality and system reliability.
• Communicate effectively with engineers, analysts, and business partners to deliver timely data solutions.
• Continuously identify opportunities to optimize and improve data workflows.
Required Qualifications
• Strong experience with SQL and Python.
• Ability to work confidently with structured and unstructured data.
• Strong analytical thinking and problem-solving skills.
• Excellent written and verbal communication skills.
• Self-starter who is eager to learn and grow in a fast-moving environment.
Nice to Have Skills
• Experience with dbt (data build tool).
• Familiarity with Snowflake or similar cloud data platforms.
• Exposure to AWS services (e.g., S3, Redshift, Glue, Lambda).
• Experience with data visualization tools such as Looker (training provided).
• Knowledge of version control (Git), CI/CD, and workflow orchestration tools like Airflow.
• Background in financial services or data-intensive business environments.