Wavicle Data Solutions

Senior Data Engineer - Contractor

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer - Contractor, a remote position requiring 8+ years of experience in AWS, Python, and ETL pipeline implementation. The contract length is unspecified; the day rate is $560. Key skills include Apache Spark, SQL, and data warehousing solutions.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
🗓️ - Date
January 23, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#Data Integration #Lambda (AWS Lambda) #MongoDB #DynamoDB #Data Pipeline #Data Engineering #Computer Science #Apache Spark #Programming #AI (Artificial Intelligence) #Scala #SNS (Simple Notification Service) #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #AWS Lambda #S3 (Amazon Simple Storage Service) #PySpark #Redshift #Azure #Amazon Redshift #Python #Snowflake #Talend #dbt (data build tool) #GCP (Google Cloud Platform) #AWS (Amazon Web Services) #Consulting #RDS (Amazon Relational Database Service) #Databricks #Automation #Cloud #Kafka (Apache Kafka) #Spark (Apache Spark) #Hadoop
Role description
A BIT ABOUT WAVICLE

Wavicle Data Solutions is a founder-led, high-growth consulting firm helping organizations unlock the full potential of cloud, data, and AI. We're known for delivering real business results through intelligent transformation: modernizing data platforms, enabling AI-driven decision-making, and accelerating time-to-value across industries.

At the heart of our approach is WIT, the Wavicle Intelligence Framework. WIT brings together our proprietary accelerators, delivery models, and partner expertise into one powerful engine for transformation. It's how we help clients move faster, reduce costs, and create lasting impact, and it's where your ideas, skills, and contributions can make a real difference.

Our work is deeply rooted in strong partnerships with AWS, Databricks, Google Cloud, and Azure, enabling us to deliver cutting-edge solutions built on the best technologies the industry has to offer. With over 500 team members across 42 cities in the U.S., Canada, and India, Wavicle offers a flexible, digitally connected work environment built on collaboration and growth.

We invest in our people through:
- Competitive compensation and bonuses
- Unlimited paid time off
- Health, retirement, and life insurance plans
- Long-term incentive programs
- Meaningful work that blends innovation and purpose

If you're passionate about solving complex problems, exploring what's next in AI, and being part of a team that values delivery excellence and career development, you'll feel right at home here.

THE OPPORTUNITY

Wavicle is hiring a Senior Data Engineer with strong hands-on experience building data pipelines using emerging technologies.

WHAT YOU WILL GET TO DO
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources such as Hadoop, Spark, and AWS Lambda.
- Work on AWS Cloud data integration with Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda across S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.
- Apply strong hands-on Python development experience, especially PySpark in an AWS Cloud environment.
- Design, develop, test, deploy, maintain, and improve data integration pipelines.
- Develop pipeline objects using Apache Spark, PySpark, Python, or Scala.
- Design and develop data pipeline architectures using Hadoop, Spark, and related AWS services.
- Load- and performance-test data pipelines built with the technologies above.

WHAT YOU BRING TO THE TEAM
- Bachelor's or Master's degree in Computer Science or a related field is required.
- 8+ years of hands-on professional experience with AWS and Python programming; experience with Python frameworks is required.
- Hands-on expertise with cloud platforms, including AWS and GCP.
- Expert-level knowledge of SQL, including writing complex, highly optimized queries across large volumes of data.
- Working experience implementing ETL pipelines using AWS services such as Glue, Lambda, EMR, Shell, S3, SNS, and PySpark is required.
- Strong knowledge of data warehousing solutions, particularly Amazon Redshift.
- Hands-on professional experience with emerging technologies (Snowflake, Talend, and/or Databricks) is highly desirable.
- Proficiency in dbt for data transformation and modeling.
- Experience with automation of data workflows and processes.
- Strong problem-solving and troubleshooting skills, with the ability to exercise mature judgment.

EQUAL OPPORTUNITY EMPLOYER

Wavicle is an Equal Opportunity Employer, committed to creating an inclusive environment for all employees. We welcome and encourage diversity in the workplace regardless of race, color, religion, national origin, gender, pregnancy, sexual orientation, gender identity, age, physical or mental disability, genetic information, or veteran status.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.