

Deliverse Consulting, LLC
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position for a luxury rail transportation client, offering a remote contract for 6 months at a competitive pay rate. Requires 2-5 years of experience with AWS, Databricks, Fivetran, SQL, and Power BI.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
288
-
🗓️ - Date
February 15, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Florida, United States
-
🧠 - Skills detailed
#PySpark #Security #Fivetran #Data Engineering #Data Pipeline #Version Control #Redshift #Lambda (AWS Lambda) #Data Warehouse #Databricks #Documentation #BI (Business Intelligence) #Bitbucket #Data Quality #Spark (Apache Spark) #Data Modeling #Data Governance #Data Ingestion #Datasets #Microsoft Power BI #Python #Cloud #ETL (Extract, Transform, Load) #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #SQL (Structured Query Language)
Role description
About the Role
Our client, the leader in luxury, high-speed rail transportation, is seeking a detail-oriented, customer-focused, hands-on contract Data Engineer to join their data team and help build, maintain, and optimize their cloud-based data platform. In this role, you will work with modern technologies including AWS, Databricks, Fivetran, Adobe, and Power BI to ensure high-quality, reliable data is available for analytics and reporting.
You will collaborate closely with analytics, marketing, and business teams to support data-driven decision-making while continuing to grow your technical expertise in a modern data stack.
Key Responsibilities
• Build and maintain data pipelines using AWS and Databricks
• Manage and optimize ELT processes
• Ingest, transform, and model data from various sources, such as Adobe and other marketing platforms
• Write efficient, maintainable SQL and transformation logic
• Support and optimize datasets used in Power BI dashboards and reports
• Ensure data quality, integrity, and documentation
• Monitor data pipelines and troubleshoot performance or reliability issues
• Collaborate with analysts and business stakeholders to translate requirements into data solutions
• Contribute to improving data engineering best practices and standards
Required Qualifications
• 2–5 years of experience in Data Engineering or a related field
• Hands-on experience with AWS (e.g., S3, Redshift, Glue, Lambda, or similar services)
• Experience working with Databricks (Spark, PySpark, or SQL)
• Experience using Fivetran or similar data ingestion tools
• Strong SQL skills and understanding of data modeling concepts
• Experience working with marketing or analytics data sources such as Adobe
• Experience supporting or developing data models for Power BI; building dashboards in Power BI is a plus
• Basic knowledge of Python for data transformation
• Familiarity with Bitbucket or other version control systems
Nice to Have
• Experience with data warehouse design principles
• Experience with performance optimization in cloud environments
• Exposure to CI/CD practices in data workflows
• Understanding of data governance and security best practices
Note: This position is remote, but the candidate is expected to work East Coast hours.






