

Generative AI Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Generative AI Engineer on a contract-to-hire basis, paying $45–47/hr. Required skills include 3–5 years of data engineering experience, proficiency in SQL and Python, and familiarity with ETL tools and cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
480
🗓️ - Date discovered
July 3, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Privacy #SQL (Structured Query Language) #Security #Data Science #Cloud #Data Pipeline #Version Control #Deployment #ML (Machine Learning) #AWS (Amazon Web Services) #Python #Programming #Data Engineering #Talend #Data Modeling #Apache Airflow #Big Data #Informatica #Spark (Apache Spark) #Data Warehouse #ETL (Extract, Transform, Load) #Storage #Java #Data Accuracy #Data Quality #Hadoop #Pandas #Data Governance #AI (Artificial Intelligence) #Airflow #Data Manipulation #Databases #Model Deployment #Computer Science #NoSQL #Data Integration #CRM (Customer Relationship Management) #Data Processing #Compliance #Data Lake
Role description
JOB TITLE: GEN AI Engineer
LOCATION: Remote (excluding CA and HI) — must be available during Eastern Time hours
DURATION: Contract-to-hire
PAY RANGE: $45–47/hr
REQUIREMENTS:
• 3–5 years of experience as a data engineer
• Strong background in ETL processes and data warehousing
• Proficient in SQL and Python
• Experience with data integration tools such as Informatica, Talend, or Apache Airflow
• Technical expertise in:
   • Data processing and pipeline development
   • Relational and NoSQL databases
   • Cloud platforms (AWS, Google Cloud)
   • Big data technologies (e.g., Hadoop, Spark)
• Familiarity with:
   • Programming languages (Python, Java)
   • Data manipulation tools (e.g., Pandas, SQL)
   • Version control systems
• Knowledge of machine learning fundamentals, model deployment, and data preparation
• Understanding of data governance, quality, and security
• Familiarity with data privacy and regulatory compliance in AI applications
• Strong problem-solving, communication, and collaboration skills
• Bachelor's degree in Computer Science, Engineering, or a related field
Key Skills/Experience:
• Pandas, SQL, Java
• Relational database experience
• AI and ML experience in contact center environments
• Measuring customer feedback data
• Using Python for ETL, data integration, or modeling
POSITION OVERVIEW:
This role involves developing and maintaining robust data pipelines that support advanced analytics and AI initiatives. The engineer will extract, transform, and load data from multiple systems into data warehouses and data lakes, ensuring data quality and accessibility for analysis. The position supports operational efficiency and improved customer experience through close collaboration with business and analytics teams.
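As a hedged illustration of that pipeline work, the Python/pandas sketch below extracts a CRM export, standardizes it, and loads it into a warehouse table. The file name, column names, and SQLite target are placeholder assumptions, not details from the posting.
```python
# Minimal hypothetical ETL sketch: extract a CRM export, standardize it,
# and load it into an analytics table. SQLite stands in for the warehouse.
import sqlite3
import pandas as pd

def run_pipeline(csv_path: str = "crm_export.csv") -> None:
    # Extract: read the raw CRM export (file name is an assumption)
    raw = pd.read_csv(csv_path)

    # Transform: normalize column names, parse dates, drop exact duplicates
    raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
    if "created_at" in raw.columns:
        raw["created_at"] = pd.to_datetime(raw["created_at"], errors="coerce")
    clean = raw.drop_duplicates()

    # Load: write to a reporting table for downstream analytics
    with sqlite3.connect("warehouse.db") as conn:
        clean.to_sql("crm_contacts", conn, if_exists="replace", index=False)

if __name__ == "__main__":
    run_pipeline()
```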
PRIMARY RESPONSIBILITIES:
Data Engineering:
• Build and manage data pipelines using modern tools and frameworks
• Extract data from various platforms including CRM and call center systems
• Standardize and transform data for consistency and usability
• Load data into storage environments for analytics and reporting
Data Modeling:
• Create data models that reflect business requirements
• Implement dimensional models for reporting and analysis
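As a loose sketch of the dimensional-modeling responsibility above, the example below derives a small star schema (an agent dimension plus a call fact table) from a flat interactions extract using pandas; all table and column names are illustrative assumptions.
```python
# Hypothetical star-schema sketch: split a flat extract into a dimension
# table with surrogate keys and a fact table of measures.
import pandas as pd

interactions = pd.DataFrame({
    "call_id": [1, 2, 3],
    "agent_name": ["Ana", "Ben", "Ana"],
    "call_date": ["2025-07-01", "2025-07-01", "2025-07-02"],
    "duration_sec": [300, 180, 420],
})

# Dimension: one row per agent, keyed by a surrogate agent_key
dim_agent = (
    interactions[["agent_name"]]
    .drop_duplicates()
    .reset_index(drop=True)
    .rename_axis("agent_key")
    .reset_index()
)

# Fact: measures plus foreign keys into the dimension
fact_calls = interactions.merge(dim_agent, on="agent_name")[
    ["call_id", "agent_key", "call_date", "duration_sec"]
]
print(dim_agent)
print(fact_calls)
```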
Data Integration:
• Combine data from multiple sources, resolving inconsistencies
• Maintain ETL workflows to automate ingestion and transformation
Data Quality:
• Monitor data accuracy and completeness
• Design and enforce data validation protocols
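The validation sketch below shows one way such protocols might look in Python: simple completeness and accuracy checks run before a load. The thresholds and column names are assumptions, not requirements from the posting.
```python
# Hypothetical pre-load validation: flag missing columns, excessive nulls,
# and duplicate keys before data reaches the warehouse.
import pandas as pd

def validate(df: pd.DataFrame, required_cols: list[str]) -> list[str]:
    issues = []
    # Completeness: required columns exist and are mostly populated
    for col in required_cols:
        if col not in df.columns:
            issues.append(f"missing column: {col}")
        elif df[col].isna().mean() > 0.05:  # 5% null threshold is illustrative
            issues.append(f"{col}: more than 5% null values")
    # Accuracy: primary keys should be unique
    if "id" in df.columns and df["id"].duplicated().any():
        issues.append("duplicate ids found")
    return issues

sample = pd.DataFrame({"id": [1, 2, 2], "score": [0.9, None, 0.7]})
print(validate(sample, ["id", "score"]))
```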
Performance Optimization:
• Tune pipelines and queries for optimal performance
• Identify and implement improvements to accelerate data delivery
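As one hedged example of such tuning, the sketch below pushes filtering into SQL, adds an index, and streams results in chunks instead of loading a full table into memory; the warehouse file, table, and column names are hypothetical.
```python
# Hypothetical tuning sketch: index the filter column and stream query
# results in chunks rather than pulling the whole table into pandas.
import sqlite3
import pandas as pd

with sqlite3.connect("warehouse.db") as conn:
    # One-time index so repeated date filters avoid full table scans
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_calls_date ON fact_calls (call_date)"
    )

    # Filter in SQL and read in chunks instead of filtering in pandas
    chunks = pd.read_sql_query(
        "SELECT * FROM fact_calls WHERE call_date >= ?",
        conn,
        params=("2025-07-01",),
        chunksize=50_000,
    )
    total_rows = sum(len(chunk) for chunk in chunks)

print(total_rows)
```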
Collaboration:
• Translate business needs into technical solutions through teamwork with analysts and stakeholders
• Partner with data science teams to support machine learning and AI initiatives