

GenAI Developer
Featured Role | Apply direct with Data Freelance Hub
This role is a Generative AI Developer/Engineer for a project-based contract with an immediate start, offering competitive pay. Key skills include ComfyUI, Python, multimodal model development, and experience with AR/VR environments. A degree in a related field is required.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
September 11, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Dallas, TX
Skills detailed
#Python #Batch #Git #Cloud #Documentation #Version Control #Computer Science #Storytelling #REST API #Deployment #TensorFlow #Automated Testing #Hugging Face #ETL (Extract, Transform, Load) #Docker #Transformers #Generative Models #PyTorch #Linux #AI (Artificial Intelligence) #REST (Representational State Transfer) #Scala #Automation #AWS (Amazon Web Services) #Libraries
Role description
Generative AI (GenAI) Developer/Engineer
About Groove Jones:
Groove Jones is a leader in digital innovation, specializing in augmented reality (AR), virtual reality (VR), and immersive interactive media. Our multidisciplinary team delivers cutting-edge solutions for global clients, blending state-of-the-art technology with creative storytelling.
Position Overview:
We are seeking a Generative AI Developer/Engineer with strong experience in designing and deploying scalable generative AI pipelines for image, video, and 3D content. This is a highly technical role focused on implementing, optimizing, and automating multimodal generative models and integrating them into interactive production environments. Proficiency in ComfyUI for workflow management is a key requirement, alongside other foundational skills in model development, prompt engineering, and creative automation.
This is a project-based contract position with immediate start.
Responsibilities:
• Architect, develop, and automate end-to-end generative pipelines for text-to-image, text-to-video, and text-to-3D/scene synthesis using state-of-the-art models (e.g., Stable Diffusion, Sora, DALL-E, diffusion and transformer-based frameworks).
• Implement and optimize Gaussian splatting and point-based rendering pipelines for high-fidelity, interactive 3D scene representation and integration with AR/VR/XR platforms.
• Integrate and extend ComfyUI-based node workflows into broader production toolchains, ensuring robust version control, reproducibility, and pipeline modularity.
• Develop custom nodes, scripts, and tools to interface generative models with creative and real-time engines (Unity / WebGL).
• Prototype and deploy advanced workflows for prompt engineering, multi-model chaining, batch generation, and asset post-processing.
• Apply best practices in model fine-tuning, inference optimization, distributed training, data augmentation, and A/B content testing to improve pipeline performance and output.
• Work with cross-functional teams to translate creative and technical requirements into scalable AI-powered content solutions.
• Maintain thorough documentation of pipeline architecture, dependencies, workflow variants, and key customizations for internal and client use.
• Stay up to date with developments in generative AI, visual computing, neural rendering, and creative automation frameworks.
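The prompt-engineering, batch-generation, and A/B-testing responsibilities above can be sketched, in outline, as a small prompt-expansion helper. This is an illustrative sketch only; the function name and prompt template are hypothetical, not part of the role description:

```python
from itertools import product

def expand_prompts(template: str, variables: dict[str, list[str]]) -> list[str]:
    """Expand one prompt template into a batch of concrete prompts.

    `template` uses str.format placeholders; `variables` maps each
    placeholder to the values to sweep over (a common pattern for
    batch generation and A/B content testing).
    """
    keys = sorted(variables)
    return [
        template.format(**dict(zip(keys, combo)))
        for combo in product(*(variables[k] for k in keys))
    ]

batch = expand_prompts(
    "a {style} render of a {subject}, studio lighting",
    {"style": ["photoreal", "watercolor"],
     "subject": ["sports car", "loft interior"]},
)
# 2 styles x 2 subjects -> 4 prompts, ready to queue against a
# text-to-image backend such as a ComfyUI workflow or Diffusers pipeline.
```

Each generated prompt can then be dispatched to the image/video backend of choice and the outputs compared variant-by-variant.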
Minimum Qualifications:
• Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Computational Imaging, or a closely related field.
• Hands-on experience with ComfyUI for workflow management, node customization, and integration of generative AI models.
• Proficiency in Python and libraries including PyTorch, TensorFlow, Hugging Face (Transformers, Diffusers), RunwayML, Sora, Nerfstudio, and Gaussian Splatting toolkits.
• Strong background in multimodal model development (image, video, 3D), prompt engineering, and creative automation.
• Experience developing and deploying generative 3D workflows involving Gaussian splatting, NeRF, photogrammetry, or point cloud rendering.
• Proven ability to work with REST APIs, containerized environments (Docker), and CI/CD pipelines for scalable deployment.
• Experience integrating generative outputs into production creative environments such as Unity or WebGL.
• Experience with cloud services like AWS and proficiency in Linux system administration and command-line operations.
• Advanced understanding of version control (Git), modular code design, automated testing, and performance profiling in production environments.
• Strong analytical and communication skills with the ability to document and share technical processes across a multidisciplinary team.
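For reference, ComfyUI node customization (called out in the qualifications above) follows a standard class contract: an INPUT_TYPES classmethod, RETURN_TYPES, FUNCTION, and CATEGORY attributes, plus a module-level NODE_CLASS_MAPPINGS dict that ComfyUI scans on startup. The node below is a minimal hypothetical example of that contract, not a node the role itself mentions:

```python
class PromptSuffixNode:
    """Minimal ComfyUI custom node: appends a quality suffix to a prompt."""

    @classmethod
    def INPUT_TYPES(cls):
        # ComfyUI builds the node's input sockets/widgets from this dict.
        return {
            "required": {
                "prompt": ("STRING", {"multiline": True}),
                "suffix": ("STRING", {"default": ", highly detailed, 8k"}),
            }
        }

    RETURN_TYPES = ("STRING",)   # one STRING output socket
    FUNCTION = "apply"           # method ComfyUI calls to execute the node
    CATEGORY = "text/utils"      # menu placement in the ComfyUI node browser

    def apply(self, prompt, suffix):
        # Node methods return a tuple matching RETURN_TYPES.
        return (prompt + suffix,)

# ComfyUI discovers custom nodes via this module-level mapping.
NODE_CLASS_MAPPINGS = {"PromptSuffixNode": PromptSuffixNode}
```

Dropped into ComfyUI's `custom_nodes` directory, a module like this adds the node to the graph editor; the same pattern scales to nodes that wrap model inference or post-processing steps.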