

Intelliswift Software
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer focused on streaming and data platform migration: a 6-month remote contract in the U.S., with potential extensions. Required skills include SQL, ETL/ELT pipeline experience, and familiarity with data warehouses. A bachelor's degree and 5+ years of experience are mandatory.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
May 15, 2026
Duration
More than 6 months
Location
Remote
Contract
Unknown
Security
Unknown
Location detailed
United States
Skills detailed
#Data Accuracy #Data Lifecycle #Monitoring #Databricks #SaaS (Software as a Service) #SQL (Structured Query Language) #Data Migration #Databases #Snowflake #Data Engineering #MySQL #CRM (Customer Relationship Management) #Data Access #GraphQL #GitHub #Data Quality #BigQuery #Trino #ETL (Extract, Transform, Load) #Scala #Spark (Apache Spark) #Kafka (Apache Kafka) #Data Pipeline #Data Warehouse #Computer Science #Data Modeling #AI (Artificial Intelligence) #Migration #Data Documentation #Documentation
Role description
Job Title: Data Engineer (Streaming & Data Platform Migration)
Location: Remote (United States)
Duration: 6 months, with potential extensions
We are looking for a skilled Data Engineer to support a large-scale data platform migration initiative, transitioning business-critical systems from a SaaS CRM environment to an internally managed data platform.
This role will focus on designing and building scalable data pipelines, real-time data flows, and robust data models, ensuring high data quality, consistency, and reliability across systems. You'll play a key role in modernizing how data is structured, processed, and consumed across the organization.
Responsibilities:
• Design and build bidirectional data pipelines between CRM systems, data warehouses, and internal operational data stores
• Develop real-time streaming pipelines using distributed event-streaming frameworks (e.g., Kafka, Kinesis, Pulsar); a first sketch of this kind of pipeline follows this list
• Define and manage data schemas and entity models, including access controls and data lifecycle rules
• Build and maintain data validation, reconciliation, and monitoring frameworks to ensure data accuracy during migration; the second sketch after this list illustrates one such check
• Develop and maintain data documentation (schema definitions, transformations, mappings, data dictionaries)
• Collaborate with engineering, business, and CRM stakeholders to define data contracts, SLAs, and migration strategies
• Investigate and resolve data quality issues, pipeline failures, and inconsistencies
• Leverage AI-assisted development tools to improve efficiency in SQL development, pipeline creation, and schema management
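
For illustration only, a minimal sketch of the kind of real-time pipeline described above: a Spark Structured Streaming job in Scala that reads CRM change events from Kafka and lands them in a warehouse staging area. The broker address, topic name, event schema, and paths are illustrative assumptions, not details of this engagement, and running it requires the spark-sql-kafka connector on the classpath.

// Sketch: consume CRM change events from Kafka and stage them as Parquet.
// All names (broker, topic, schema fields, paths) are hypothetical placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object CrmChangeStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("crm-change-stream").getOrCreate()
    import spark.implicits._

    // Assumed event shape; a real schema would come from the agreed data contract.
    val eventSchema = new StructType()
      .add("contact_id", StringType)
      .add("email", StringType)
      .add("updated_at", TimestampType)

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
      .option("subscribe", "crm.contacts.changes")      // hypothetical topic
      .load()
      .select(from_json($"value".cast("string"), eventSchema).as("e"))
      .select("e.*")

    // Append parsed events to a staging location; the checkpoint directory
    // lets the stream resume safely after restarts.
    events.writeStream
      .format("parquet")
      .option("path", "/warehouse/staging/crm_contacts")
      .option("checkpointLocation", "/warehouse/checkpoints/crm_contacts")
      .outputMode("append")
      .start()
      .awaitTermination()
  }
}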
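
And a minimal sketch of the validation/reconciliation idea, under the assumption that both sides are queryable from Spark: compare row counts and an order-independent checksum over a business key between the source extract and the migrated target. Table and column names are hypothetical.

// Sketch: reconcile a migrated table against its source by count + checksum.
// Table names and the key column are hypothetical placeholders.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object ReconcileContacts {
  // Row count plus an order-independent CRC32 checksum over the key column.
  private def fingerprint(df: DataFrame, keyCol: String): (Long, Long) = {
    val row = df.agg(
      count(lit(1)).as("rows"),
      coalesce(sum(crc32(col(keyCol).cast("string"))), lit(0L)).as("checksum")
    ).head()
    (row.getLong(0), row.getLong(1))
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("reconcile-contacts").getOrCreate()

    val source = spark.table("crm_export.contacts") // hypothetical source table
    val target = spark.table("warehouse.contacts")  // hypothetical target table

    val (srcRows, srcSum) = fingerprint(source, "contact_id")
    val (tgtRows, tgtSum) = fingerprint(target, "contact_id")

    if (srcRows != tgtRows || srcSum != tgtSum)
      println(s"MISMATCH: source=($srcRows, $srcSum) target=($tgtRows, $tgtSum)")
    else
      println(s"OK: $srcRows rows reconciled")
  }
}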
Required Skills & Experience
• Strong proficiency in SQL and data transformation logic
• Hands-on experience building ETL/ELT pipelines in distributed data environments
• Experience with data warehouses and query engines such as Snowflake, BigQuery, Databricks, Hive, Spark, or Trino
• Experience with real-time streaming systems (Kafka, Pulsar, Kinesis, or similar)
• Solid understanding of data modeling (entity modeling, dimensional modeling)
• Experience with GraphQL and backend data access layers
• Familiarity with ORM frameworks and relational databases (MySQL or similar)
• Strong experience with data quality engineering (validation, monitoring, reconciliation)
• Experience with data migration across heterogeneous systems
• Ability to work independently and manage multiple priorities
Nice to Have
• Experience working with CRM platforms (e.g., Salesforce)
• Exposure to AI-assisted development tools (GitHub Copilot, Cursor, etc.)
• Experience with event-driven architectures and subscription-based data systems
Basic Qualifications
• Bachelor's degree in Computer Science, Data Engineering, or a related field
• 5+ years of data engineering experience






