Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Data Integration) on a contract of unspecified duration with an undisclosed pay rate. The position is fully remote, requires strong skills in Python, SQL, and cloud technologies, and is open to US citizens or GC holders only.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
September 3, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
πŸ“ - Location detailed
United States
🧠 - Skills detailed
#Airflow #Data Lake #AWS (Amazon Web Services) #Data Processing #GitHub #Version Control #AWS Lambda #Scala #Data Integration #Data Orchestration #Azure #Data Ingestion #Lambda (AWS Lambda) #SQL (Structured Query Language) #Data Engineering #Data Pipeline #ADF (Azure Data Factory) #Data Warehouse #DevOps #AWS Glue #Snowflake #Data Quality #Azure Data Factory #BigQuery #Cloud #Python
Role description
Job Description: Data Engineer (Data Integration)

Position: Data Engineer (Data Integration)
Location: Fully Remote (No NY or CA; preference for EST or CST candidates)
Start Date: ASAP
Visa Restrictions: No sponsorship available (US Citizens / GC holders only)

About the Role
We are seeking a highly skilled Data Engineer with strong experience in real-world data ingestion and integration. The ideal candidate will design and implement scalable data pipelines, create and optimize data lakes, and apply modern frameworks to ensure data quality and performance. This is a hands-on technical role requiring expertise across cloud platforms, orchestration tools, and modern data warehouse technologies.

Responsibilities
• Design, build, and maintain data ingestion pipelines from multiple structured and unstructured sources.
• Develop and optimize data lake and data warehouse solutions (Snowflake, BigQuery, etc.).
• Leverage orchestration tools (Airflow, Dagster, ADF, Glue) to manage and monitor workflows; a minimal orchestration sketch follows this description.
• Implement serverless computing solutions (Azure Functions, AWS Lambda) for scalable data processing; a serverless sketch also appears below.
• Ensure adherence to best practices in data quality, validation, testing, scalability, and performance; see the validation sketch below.
• Work with stakeholders to translate business requirements into technical solutions.
• Collaborate with DevOps teams on CI/CD pipelines, GitHub workflows, and version control.

Required Qualifications
• Proven real-world experience with data ingestion and integration.
• Strong proficiency in Python and SQL (8+/10).
• Experience with Azure or AWS data engineering technologies (7+/10).
• Knowledge of data orchestration tools: Airflow, Dagster, Azure Data Factory, or AWS Glue (7/10).
• Experience with Azure Functions or AWS Lambda (7+/10).
• Hands-on expertise with Snowflake, BigQuery, or other modern data warehouses (8+/10).
• Strong understanding of data quality, validation, and testing practices.
• Experience with GitHub or other version control systems.
• Strong awareness of scalability and performance optimization in data solutions.
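
To give candidates a concrete sense of the orchestration work described above, here is a minimal sketch of a daily ingestion workflow using the Airflow 2.x TaskFlow API (Airflow 2.4+ assumed, for the `schedule` argument). The DAG name, task bodies, and record shapes are hypothetical placeholders, not part of the posting.

```python
# Minimal Airflow 2.x DAG: extract -> validate -> load, run daily.
# All names and task bodies below are illustrative placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 9, 1), catchup=False, tags=["ingestion"])
def daily_ingestion():
    @task
    def extract() -> list[dict]:
        # Pull raw records from a source system (stubbed here).
        return [{"id": 1, "amount": 42.0}]

    @task
    def validate(records: list[dict]) -> list[dict]:
        # Basic data-quality gate: drop records missing required fields.
        return [r for r in records if r.get("id") is not None]

    @task
    def load(records: list[dict]) -> None:
        # A real pipeline would write to Snowflake/BigQuery here.
        print(f"loading {len(records)} records")

    load(validate(extract()))


daily_ingestion()
```

Dagster, Azure Data Factory, or AWS Glue would express the same extract → validate → load shape with their own primitives.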
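
Likewise, the serverless bullet might look like the following in practice: an AWS Lambda handler triggered by an S3 ObjectCreated event. The line-count "processing" is a hypothetical stand-in; a real pipeline would transform the data and land it in a lake or warehouse stage.

```python
# Minimal AWS Lambda handler for event-driven ingestion from S3.
# The processing step is an illustrative placeholder.
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")  # created once per Lambda container, reused across invocations


def handler(event, context):
    """Triggered by an S3 ObjectCreated event; reads the new object and
    reports how many lines it contains (stand-in for real processing)."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    line_count = body.count(b"\n")

    return {"statusCode": 200, "body": json.dumps({"key": key, "lines": line_count})}
```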
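
Finally, a small self-contained example of the data-quality and validation practices the qualifications call for: null, uniqueness, and range checks over a pandas DataFrame. The column names and rules are invented for illustration.

```python
# Sketch of basic data-quality checks: nulls, uniqueness, value ranges.
# Column names and thresholds are hypothetical.
import pandas as pd


def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures (empty = clean)."""
    failures = []
    if df["order_id"].isna().any():
        failures.append("order_id contains nulls")
    if df["order_id"].duplicated().any():
        failures.append("order_id is not unique")
    if (df["amount"] < 0).any():
        failures.append("amount contains negative values")
    return failures


if __name__ == "__main__":
    sample = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, -5.0, 3.0]})
    print(validate_orders(sample))
    # ['order_id is not unique', 'amount contains negative values']
```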