

Generative AI Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Generative AI Engineer in Charlotte, NC, offering $70/hr for a 12-18 month contract. Requires 3-5 years of experience with data platforms, proficiency in GenAI frameworks, Apache Spark, Python, and data modeling methodologies.
Country: United States
Currency: $ USD
Day rate: 560
Date discovered: July 9, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Charlotte Metro
Skills detailed: #Data Management #Spark (Apache Spark) #Data Ingestion #Slowly Changing Dimensions #Databases #Kafka (Apache Kafka) #Data Integration #Visualization #Data Catalog #Scripting #Scala #MongoDB #Data Quality #Data Pipeline #AI (Artificial Intelligence) #Data Processing #API (Application Programming Interface) #Azure Cosmos DB #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Vault #Data Lineage #JavaScript #Azure #Programming #PySpark #Pig #Big Data #Data Vault #Data Architecture #Data Modeling #Datasets #Apache Spark #Collibra #Hadoop #Python #Snowflake #Langchain #Cloud
Role description
Title: Data Solutions Analyst - Generative AI
Location: Charlotte, NC
Pay: $70/hr
Type: 12-18 month W2 Contract
Day to Day
As a Data Solutions Analyst, you will play a key role in designing and implementing large-scale data architectures and platforms. This is a mid-level yet impactful role suited to someone with 3-5 years of experience who wants to be hands-on, learn continuously, and grow into emerging technologies like Generative AI. You will work on end-to-end data solutions, integrate various data systems, and contribute to the development of a robust data fabric and data management ecosystem.
Key Responsibilities:
• Design, develop, and implement scalable big data architectures and data platforms that support analytics, AI, and data-driven decision making.
• Lead end-to-end data integration, including data ingestion, transformation, and orchestration across diverse systems.
• Build and maintain data pipelines, ensuring data quality, lineage, and governance (using tools like Collibra).
• Develop and optimize data transformation processes with Apache Spark and Python for large-scale data processing (a brief PySpark sketch follows this list).
• Create interactive data visualizations and dashboards using JavaScript.
• Collaborate with cross-functional teams to understand data requirements and deliver integrated data solutions.
• Support data cataloging, data lineage, and data management initiatives, ensuring data is discoverable, trustworthy, and compliant.
• Explore and grow knowledge in emerging AI and Generative AI (GenAI) technologies, understanding their potential applications.
• Stay current with industry best practices and contribute to continuous improvement of the data architecture and platform.
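To give a feel for the Spark and Python work described above, here is a minimal PySpark transformation sketch. The dataset, paths, columns, and quality rule are hypothetical, not taken from the posting:

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical example: read a raw dataset, apply a simple data-quality rule,
# derive a curated aggregate, and write it back for downstream analytics.
spark = SparkSession.builder.appName("orders-curation").getOrCreate()

raw = spark.read.parquet("/data/raw/orders")          # illustrative input path

curated = (
    raw
    .filter(F.col("order_id").isNotNull())            # basic data-quality check
    .withColumn("order_date", F.to_date("order_ts"))  # normalize the event timestamp
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("daily_spend"))
)

curated.write.mode("overwrite").parquet("/data/curated/daily_spend")
```

In practice a pipeline like this would also record lineage and quality metrics in the governance tooling mentioned above.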
Must Haves:
• 3-5 years of experience working with data platforms and data applications.
• Hands-on experience developing working prototypes.
• Proficient in GenAI frameworks such as LangChain, LlamaIndex, OpenAI's Assistants API, FAISS, RAG (Retrieval-Augmented Generation), Azure OpenAI Service, Azure Cognitive Search, and vector databases like Pinecone, VectorDB, AstraDB, and Azure Cosmos DB for MongoDB vCore (a minimal RAG sketch follows this list).
• Proficient in data modeling methodologies including dimensional modeling (Star and Snowflake schemas), Data Vault, 3NF, Kimball and Inmon approaches, and designing slowly changing dimensions (SCDs) for tracking historical data changes (a Type 2 SCD sketch also follows this list).
• Expertise in big data tools, including Hive for SQL-like queries on Hadoop, Pig for data flow scripting, Flink for stateful computations, Storm for real-time stream processing, the MapReduce programming paradigm, Kafka for distributed event streaming, and PySpark for Python-based Spark programming, enabling efficient processing of large-scale datasets.
• Experience designing and building big data architectures and data platforms.
• Strong proficiency in Python for data processing and scripting.
• Practical experience with Apache Spark for large-scale data transformation.
• Ability to develop data visualizations and dashboards with JavaScript.
• Familiarity with data pipeline management, data cataloging, data lineage, and data quality tools.
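As a rough illustration of the GenAI frameworks listed above, the sketch below shows a minimal retrieval-augmented generation (RAG) flow using LangChain and FAISS. The documents, question, and model name are made up, and the package layout follows recent LangChain releases, so it may differ from the versions used on the project:

```python
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores import FAISS

# Hypothetical documents to index; a real pipeline would load and chunk real sources.
docs = [
    "Nightly Spark jobs load curated tables into Snowflake.",
    "The enterprise data catalog is maintained in Collibra.",
]

# Embed the documents and store them in an in-memory FAISS index.
store = FAISS.from_texts(docs, OpenAIEmbeddings())

# Retrieve the most relevant snippets for a user question.
question = "Where is the data catalog maintained?"
hits = store.similarity_search(question, k=2)
context = "\n".join(doc.page_content for doc in hits)

# Ask the model to answer grounded only in the retrieved context.
llm = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```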
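Similarly, for the dimensional-modeling requirement, the sketch below shows one way a Type 2 slowly changing dimension update could be expressed in PySpark. The table layout, tracked attribute, and paths are assumptions for illustration only:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-demo").getOrCreate()

dim = spark.read.parquet("/data/dim_customer")   # existing dimension (hypothetical path)
stg = spark.read.parquet("/data/stg_customer")   # latest source snapshot (hypothetical)

current = dim.filter("is_current = true")
history = dim.filter("is_current = false")

# Customers whose tracked attribute ("address" here) changed since the last load.
changed_ids = (
    current.alias("d")
    .join(stg.alias("s"), "customer_id")
    .filter(F.col("d.address") != F.col("s.address"))
    .select("customer_id")
)

# Close out the current rows for changed customers...
closed = (
    current.join(changed_ids, "customer_id", "left_semi")
    .withColumn("valid_to", F.current_date())
    .withColumn("is_current", F.lit(False))
)

# ...and append a new current version taken from the staging snapshot.
new_versions = (
    stg.join(changed_ids, "customer_id", "left_semi")
    .withColumn("valid_from", F.current_date())
    .withColumn("valid_to", F.lit(None).cast("date"))
    .withColumn("is_current", F.lit(True))
)

unchanged = current.join(changed_ids, "customer_id", "left_anti")
updated_dim = (
    history.unionByName(unchanged)
           .unionByName(closed)
           .unionByName(new_versions, allowMissingColumns=True)
)
```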
Pluses:
• Knowledge of graph processing and graph stores is a plus.
• A basic understanding of Java Spring Boot is helpful, but not a requirement.
• Exposure to open data sources and cloud platforms is a huge plus.
• Collibra experience.