

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an AI Data Engineer on a 12-18 month W2 contract, 100% remote (PST preferred); the pay rate is not disclosed. It requires 5+ years in data engineering; proficiency in MS Fabric, Python, and SQL; and familiarity with big data technologies.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 5, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: United States
Skills detailed
#Data Pipeline #Computer Science #Datasets #Python #Data Science #Azure #ADF (Azure Data Factory) #Monitoring #DevOps #Compliance #Spark (Apache Spark) #Storage #Synapse #Data Engineering #SQL (Structured Query Language) #Hadoop #Big Data #Programming #ETL (Extract, Transform, Load) #Scala #YAML (YAML Ain't Markup Language) #Data Access #Data Storage #Java #Data Governance #ML (Machine Learning) #Data Modeling #Azure DevOps #Data Quality #Databricks #AI (Artificial Intelligence)
Role description
Job Description: (NO C2C OR C2H)
Job Title: AI Data Engineer
Contract on W2
Location: 100% remote, PST preferred
Duration: 12-18 months
Typical Day in the Role
• Purpose of the Team: This team is part of the Devices Operations group within E+D (Engineering and Devices). This is the Digital Transformation and Services (DTS) team, responsible for all engineering work related to device data. This includes managing the data platform and data engineering, and extends to AI and Copilot integration with data interfaces. They handle the entire lifecycle of MS-manufactured devices and provide insights to end users.
• Key projects: outlined in the job description
• Typical task breakdown and operating rhythm: The role consists of roughly 70% task execution and 30% meetings and ad hoc calls
Compelling Story & Candidate Value Proposition
• What makes this role interesting? This role offers the opportunity to be part of the development of the Vnext platform for data engineering and its integration with new Copilot agents.
• Unique Selling Points: The engineer will be working with the latest MS Copilot data engineering technology.
• Degrees or certifications required: Nice to have: a Bachelor's degree in Computer Science, Engineering, or a related field; OR a Master's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field; OR equivalent experience.
• Disqualifiers: N/A
• Best vs. Average: The ideal resume shows familiarity with big data technologies (e.g., Hadoop, Spark) and experience in programming languages such as Scala and Java.
• Performance Indicators: Performance will be assessed on biweekly sprint execution, monitoring the deliverables and timelines agreed upon for each sprint.
Top Hard Skills Required + Years of Experience
1. 5+ years of experience building data pipelines and data stores.
2. 5+ years of experience with MS Fabric technology.
3. 5+ years of proficiency in programming languages such as Python.
4. 5+ years of strong experience with SQL and database technologies.
Summary:
The main function of an AI Data Engineer is to construct and maintain data pipelines, ETL processes, and data storage systems to facilitate efficient data collection, processing, and analysis. They ensure data quality and accessibility for data-driven decision-making, collaborating closely with data scientists and analysts.
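To make that pattern concrete, here is a minimal, self-contained sketch of an extract-transform-load flow in Python, using only the standard library. The file, table, and column names (raw_events.csv, device_events, device_id, and so on) are hypothetical placeholders for illustration, not part of the role's actual stack.

```python
"""Minimal ETL sketch: CSV source -> cleaned rows -> SQLite data store.
All names here are hypothetical placeholders."""
import csv
import sqlite3
from pathlib import Path


def extract(path: Path) -> list[dict]:
    """Extract: read raw rows from a CSV source."""
    with path.open(newline="") as f:
        return list(csv.DictReader(f))


def transform(rows: list[dict]) -> list[tuple]:
    """Transform: drop rows missing a device id, normalize casing."""
    return [
        (r["device_id"].strip(), r["event_type"].lower(), r["event_time"])
        for r in rows
        if (r.get("device_id") or "").strip()
    ]


def load(rows: list[tuple], db: Path) -> None:
    """Load: write cleaned rows into a SQLite table."""
    with sqlite3.connect(db) as con:
        con.execute(
            "CREATE TABLE IF NOT EXISTS device_events "
            "(device_id TEXT, event_type TEXT, event_time TEXT)"
        )
        con.executemany("INSERT INTO device_events VALUES (?, ?, ?)", rows)


if __name__ == "__main__":
    load(transform(extract(Path("raw_events.csv"))), Path("warehouse.db"))
```

In a production setting the same three stages would be expressed in services like ADF or Spark rather than the standard library; the sketch only shows the shape of the work.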
Job Responsibilities:
• As a Data Engineer in MDO, you will drive critical initiatives to help our data platform scale to the needs of business priorities and technology advancements
• Build scalable data models and data pipelines to extract, load, and transform data, ensuring secure data storage with a focus on data quality and compliance, on Azure using services such as ADF, HDI, Databricks, and Synapse/Fabric
• You will play a critical role in developing and building datasets and integrating with AI/ML and Copilot applications
• Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions
• Implement and manage CI/CD (YAML + Classic) pipelines for data engineering projects, leveraging tools like Azure DevOps (a hedged sketch of triggering such a pipeline from Python follows this list)
• You will anticipate data governance needs, designing data modeling and handling procedures to ensure compliance with all applicable laws and policies. You'll also govern data accessibility within your assigned pipelines.
• Act as a Designated Responsible Individual (DRI) and guide other engineers by developing and following the playbook; work on call to monitor systems, products, and services for degradation, downtime, or interruptions; alert stakeholders on status; and initiate actions to restore service for both simple and complex problems when appropriate.
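As one illustration of the CI/CD point above, the sketch below queues an Azure DevOps pipeline run from Python via the service's REST Runs endpoint. This is a hedged example, not the team's actual setup: the organization, project, pipeline id, and branch are placeholder assumptions, and it expects a personal access token in the AZDO_PAT environment variable.

```python
"""Hypothetical sketch: queue an Azure DevOps pipeline run from Python."""
import os

import requests  # third-party: pip install requests

ORG = "your-org"          # placeholder organization
PROJECT = "your-project"  # placeholder project
PIPELINE_ID = 42          # placeholder pipeline id

url = (
    f"https://dev.azure.com/{ORG}/{PROJECT}"
    f"/_apis/pipelines/{PIPELINE_ID}/runs?api-version=7.1"
)

resp = requests.post(
    url,
    # PAT auth: empty username, token as password.
    auth=("", os.environ["AZDO_PAT"]),
    # Run against main; the branch name is an assumption.
    json={"resources": {"repositories": {"self": {"refName": "refs/heads/main"}}}},
    timeout=30,
)
resp.raise_for_status()
print("Queued run:", resp.json()["id"])
```

The pipeline definition itself would normally live in an azure-pipelines.yml checked into the repository; this sketch only shows the queuing side.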
Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field
• OR Master's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field
• OR equivalent experience
• Experience with building data pipelines and data stores
• 5+ years of experience in data engineering or a similar role
• Proficiency in programming languages such as Python, Scala, or Java
• Strong experience with SQL and database technologies
• Familiarity with big data technologies (e.g., Hadoop, Spark)
If this sounds like something you would love to do, please send me your updated resume. Feel free to reach out to me at harshiv@ifgpr.com if you have any questions.
Thanks