

IFG - International Financial Group
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on an 18-month contract, working remotely, paid on W2 at $480/day. Key skills include Python, SQL, Airflow, and experience in data pipeline development. A Bachelor’s degree in a related field is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
February 19, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Science #Scala #Data Quality #Azure #Data Engineering #Data Modeling #Data Warehouse #Data Governance #Python #Security #Data Pipeline #Cloud #Airflow #Kubernetes #Data Accuracy #Documentation #BigQuery #Data Security #Observability #Redshift #AI (Artificial Intelligence) #ETL (Extract, Transform, Load) #Spark (Apache Spark) #AWS (Amazon Web Services) #Snowflake #GCP (Google Cloud Platform) #Requirements Gathering #SQL (Structured Query Language) #Monitoring #Data Analysis #Computer Science
Role description
Job Title: AI Data Engineer 1
Location: Remote
Contract on W2
Duration: 18 months
Top IT Firm
Typical Day in the Role
• Purpose of the Team: This team is responsible for designing, developing, and maintaining data platforms.
• Key projects: This role will contribute to designing, developing, and maintaining efficient and reliable data pipelines.
Candidate Requirements
• Disqualifiers: Candidates with low tenure and constant job hopping will not be eligible for the role.
• Degree or Certification: Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field is preferred.
Hard Skills Assessments
• Expected Dates that Hard Skills Assessments will be scheduled: after 2/23.
• Hard Skills Assessment Process: The assessment process will include 1-2 rounds.
• Required Candidate Preparation: Candidates should come to the assessment prepared to speak about their prior experience.
Top Skills:
• Data Engineering background that includes:
• Python
• SQL
• Kubernetes
• Airflow
• Scala
Key Responsibilities:
Data Platform: Design, build, and maintain scalable data platforms and pipelines using Python, SQL, Airflow, and Spark (see the sketch after this list).
Business Requirements Gathering: Collaborate with stakeholders to understand and translate business requirements into technical specifications.
Data Modeling: Develop and implement data models that support analytics and reporting needs.
Data Quality and Governance: Ensure data accuracy, consistency, and reliability by implementing robust data validation and quality checks.
Stakeholder Collaboration: Work with cross-functional teams, including data analysts, data scientists, and business leaders, to deliver high-quality data solutions.
Performance Optimization: Continuously monitor and optimize data pipelines for performance, scalability, and cost-efficiency.
Monitoring and Observability: Implement monitoring and observability metrics to ensure data quality and detect anomalies in data pipelines.
Documentation and Communication: Maintain clear and comprehensive documentation of data processes and communicate technical concepts effectively to non-technical stakeholders.
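By way of illustration (not part of the original posting): a minimal sketch of the kind of Python + Airflow pipeline with a built-in data quality gate that the Data Platform and Data Quality responsibilities above describe. It uses the Airflow 2.x TaskFlow API; the DAG name, sample data, and validation rule are hypothetical.

```python
# Illustrative only -- a hypothetical DAG, not this team's actual codebase.
# Requires Apache Airflow 2.4+ (TaskFlow API with the `schedule` argument).
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False)
def orders_pipeline():
    @task
    def extract() -> list[dict]:
        # Stand-in for a real source read (SQL hook, Spark job, API pull).
        return [{"order_id": 1, "amount": 42.0}, {"order_id": 2, "amount": 17.5}]

    @task
    def validate(rows: list[dict]) -> list[dict]:
        # Data quality gate: reject nulls and negative amounts so bad
        # records never reach the warehouse; a failure here fails the run.
        bad = [r for r in rows if r["amount"] is None or r["amount"] < 0]
        if bad:
            raise ValueError(f"{len(bad)} of {len(rows)} rows failed validation")
        return rows

    @task
    def load(rows: list[dict]) -> None:
        # Stand-in for a warehouse write (Snowflake, Redshift, BigQuery, ...).
        print(f"loaded {len(rows)} validated rows")

    load(validate(extract()))


orders_pipeline()
```

Keeping validation as its own task means a quality failure surfaces as a failed task in the Airflow UI and blocks the load, which is one common way teams wire monitoring and data quality checks directly into the pipeline.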
Qualifications:
Experience: 2 years of experience in data engineering and infrastructure.
Technical Skills: Proficiency in data warehouse management, Python, SQL, Airflow, and Spark.
Data Pipeline Expertise: Strong experience in building and maintaining robust data pipelines and ETL processes.
Analytical Skills: Ability to gather business requirements and debug ingestion issues or problems in other areas of the data warehouse.
Communication: Excellent verbal and written communication skills, with the ability to convey technical information to non-technical audiences.
Collaboration: Proven ability to work effectively in a collaborative, cross-functional environment.
Education: Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
Preferred Qualifications:
Experience with cloud platforms such as AWS, GCP, or Azure.
Familiarity with data warehousing technologies (e.g., Delta Lake, Azure Fabric, Snowflake, Redshift, BigQuery).
Knowledge of data governance and data security best practices.
Please let me know if this is something you would love to do, and send me your updated resume. Feel free to reach out at Harshiv@ifgpr.com if you have any questions.
Thanks






