

Data Engineer with GenAI
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with GenAI in Charlotte, NC, offering a contract position. Key skills include Spark, ETL, Snowflake, and Generative AI. Experience with Python is required, along with limited knowledge of data pipelines and cloud data warehouses.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
September 3, 2025
Project duration
Unknown
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
Charlotte, NC
Skills detailed
#Redshift #Airflow #Data Management #Indexing #Metadata #Python #Data Lineage #dbt (data build tool) #AI (Artificial Intelligence) #Data Warehouse #Spark (Apache Spark) #Snowflake #Migration #Cloud #Data Engineering #Data Pipeline #ETL (Extract, Transform, Load)
Role description
Role: Data Engineer with GenAI
Location: Charlotte, NC (Onsite)
Skills: Spark, ETL, Data Engineering, Snowflake, GenAI
Job Description:
· Limited experience with query optimization, indexing, and performance profiling. No background in database scaling, migration, or high availability
· Little to no hands-on experience with ETL/ELT tools like Spark, Airflow, or dbt, or with maintaining large-scale ETL systems
· No experience designing end-to-end or real-time data pipelines or event-driven processing
· Limited experience with cloud data warehouses like Snowflake, BigQuery, or Redshift
· Minimal focus on data lineage, metadata management, and complex data transformation and optimization techniques
· Should have experience in Python
· Hands-on working experience with Generative AI patterns and frameworks, including LLMs
· Sound knowledge of RAG (retrieval-augmented generation)
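For context on the RAG requirement above, the pattern can be sketched in a few lines of Python. This is a minimal illustration, not a production design: a toy keyword-overlap retriever stands in for a real vector store, and `call_llm` is a hypothetical placeholder for whatever LLM API the team uses.

```python
# Minimal RAG (retrieval-augmented generation) sketch:
# retrieve relevant documents, then ground the LLM prompt in them.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: a real system would call an LLM API here."""
    return f"[answer grounded in a prompt of {len(prompt)} chars]"

def rag_answer(query: str, documents: list[str]) -> str:
    # Build a context block from the top-k retrieved documents,
    # then ask the model to answer using only that context.
    context = "\n".join(retrieve(query, documents))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return call_llm(prompt)

docs = [
    "Snowflake is a cloud data warehouse.",
    "Airflow schedules data pipelines.",
    "Spark processes data at scale.",
]
print(rag_answer("What schedules data pipelines?", docs))
```

In practice the retriever would use embeddings and a vector index rather than word overlap, but the flow (retrieve, assemble context, generate) is the same.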