

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer, fully remote, on a 1-year W2-only contract. It requires 3-6 years of experience, strong AWS expertise, proficiency in Python and SQL, and knowledge of ETL tools. Familiarity with AI/ML is preferred.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 26, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Cloud #Data Warehouse #Lambda (AWS Lambda) #Data Pipeline #Scala #Logging #Data Integrity #GIT #SQL (Structured Query Language) #Deployment #Monitoring #Data Lake #REST API #Security #Spark (Apache Spark) #SSIS (SQL Server Integration Services) #EC2 #Automation #Linux #Version Control #Airflow #AWS (Amazon Web Services) #Data Processing #Storage #Informatica #AI (Artificial Intelligence) #Big Data #REST (Representational State Transfer) #Data Architecture #S3 (Amazon Simple Storage Service) #Redshift #Athena #AWS Lambda #Python #AWS Glue #ML (Machine Learning) #ETL (Extract, Transform, Load) #Data Engineering
Role description
Role: Data Engineer
Location: Remote
Duration: 1 year
Note: W2 candidates only (no C2C, no OPT)
THE ROLE:
Key Responsibilities
• Develop and maintain scalable data pipelines and workflows using automation and orchestration tools such as Airflow (see the sketch after this list).
• Build and optimize data architectures and models to support analytics and reporting needs.
• Work extensively with AWS services such as Lambda, Glue, Athena, S3, Redshift, and EC2 for data processing and storage.
• Ensure data integrity, quality, and security by implementing robust ETL processes and monitoring solutions.
• Debug and troubleshoot data pipeline issues with strong analytical and problem-solving skills.
• Implement modern data practices, including data lakes and real-time stream processing capabilities.
• Collaborate with cross-functional teams and adapt to rapidly changing technological landscapes.
• Leverage tools like Git and CI/CD pipelines for version control and deployment automation.
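For illustration, a minimal sketch of the kind of Airflow pipeline the first responsibility describes, assuming Airflow 2.x; the DAG id, task ids, and extract/transform callables are hypothetical placeholders, not part of this posting.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull raw records from a source system (placeholder).
    return [{"id": 1, "value": 42}]


def transform():
    # Clean and model the extracted data (placeholder).
    pass


with DAG(
    dag_id="example_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # assumes Airflow 2.4+; earlier 2.x uses schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform runs only after extract succeeds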
Required Qualifications
• 3-6 years of experience in Data Engineering or related fields.
• Strong expertise in AWS cloud services (AWS Lambda, Glue, Athena, S3, etc.); a short example follows this list.
• Proficiency in Python and SQL.
• Solid understanding of data architecture and modeling concepts.
• Experience with ETL tools (e.g., Pentaho, SSIS, Informatica, HVR).
• Knowledge of database, data warehouse, and big data technologies.
• Experience with monitoring and logging solutions.
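As a minimal sketch of the AWS and SQL work these qualifications describe, here is a boto3 call that runs an Athena query over data in S3; the region, database, table, and results bucket are hypothetical placeholders.

import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Submit a SQL query against a table cataloged in Athena (backed by S3).
resp = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query finishes (simplified; a real pipeline would add
# backoff and a timeout).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]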
Preferred Skills
• Knowledge of AI/ML and large language models (LLMs).
• Experience with REST APIs and Salesforce APIs.
Technologies
• AWS Lambda, AWS Glue, Athena, S3, Redshift, EC2, Airflow, Spark, Linux