

OpenKyber
MLOps
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Technical Writer in Houston, TX, on a 3+ month contract, offering competitive pay. Key skills include Databricks, data engineering, and strong documentation abilities; familiarity with cloud environments and ML concepts is essential.
🌎 - Country
United States
💱 - Currency
Unknown
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 28, 2026
🕒 - Duration
3+ months
-
🏝️ - Location
Hybrid (on-site 3x per week)
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Houston, TX
-
🧠 - Skills detailed
#GCP (Google Cloud Platform) #Data Lineage #Python #Visualization #ML (Machine Learning) #Snowflake #Security #Strategy #Spark (Apache Spark) #Hugging Face #Cloud #Documentation #Databricks #AWS EMR (Amazon Elastic MapReduce) #PyTorch #Version Control #SharePoint #Data Engineering #GIT #Data Governance #AI (Artificial Intelligence) #Data Science #Scala #Azure #TensorFlow #AWS (Amazon Web Services) #MLflow #Data Lake
Role description
Title: Technical Writer, Information Technology
Location: Houston, TX 77380 (on-site 3x per week)
Duration: 3+ month contract
Work Requirements: U.S. Citizens, Green Card Holders, or candidates otherwise Authorized to Work in the U.S.
Skillset / Experience:
OpenKyber is seeking a candidate to play a central role in developing clear, accurate, and scalable policies and procedures that support the organization's use of Databricks. This position blends deep technical understanding with strong documentation skills to ensure teams can confidently adopt, govern, and optimize Databricks for data engineering, analytics, and machine learning workloads. You will collaborate closely with data engineers, platform administrators, security teams, and business stakeholders to translate complex technical processes into accessible, compliant, and user-friendly documentation.
Responsibilities for this role include developing, writing, and maintaining policies, standard operating procedures (SOPs), and governance documentation related to Databricks usage, including:
Workspace management
Cluster configuration and optimization
Data governance, access control, and security standards
Job orchestration and workflow management
MLflow usage, model lifecycle management, and experiment tracking (see the brief sketch after this list)
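For context on the MLflow item above, here is a minimal sketch of the experiment-tracking pattern that documentation of this kind would typically describe. It is illustrative only and not taken from the posting: the experiment path, run name, parameter, and metric values are hypothetical, and it assumes the open-source MLflow Python client is installed.

# Hypothetical MLflow experiment-tracking sketch (illustrative only; names and
# values are invented for this example, not specified by the posting).
import mlflow

mlflow.set_experiment("/Shared/docs-demo")     # select (or create) an experiment
with mlflow.start_run(run_name="baseline"):    # open one tracked training run
    mlflow.log_param("learning_rate", 0.01)    # record a training parameter
    mlflow.log_metric("rmse", 0.42)            # record an evaluation metric

An SOP in this role would typically pair a snippet like this with the workspace's naming conventions, access-control rules, and model-registration steps.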
Partner with data engineering, data science, visualization, data governance, and security teams to ensure documentation aligns with organizational standards, regulatory requirements, and best practices.
Translate technical concepts into clear, concise, and actionable guidance for both technical and non-technical audiences.
Create documentation templates, style guides, and frameworks to ensure consistency across all Databricks-related materials.
Maintain version control and change management processes for all documentation.
Conduct interviews, research, and hands-on testing within Databricks to validate accuracy and completeness.
Support internal training initiatives by producing supplemental materials such as quick-start guides, FAQs, and process maps.
Continuously evaluate documentation for clarity, usability, and alignment with evolving Databricks features and organizational needs.
Job Qualifications:
Proven experience as a Technical Writer, Policy Writer, or similar role in a technical environment.
Strong understanding of Databricks or similar cloud-based data platforms (e.g., Spark, Azure Data Lake, AWS EMR, Snowflake).
Experience with data products.
Ability to interpret and document complex data engineering, analytics, and machine learning workflows.
Experience writing policies, procedures, or governance documentation in regulated or enterprise environments.
Excellent written communication skills with a focus on clarity, structure, and accuracy.
Familiarity with version control tools (Git), documentation platforms (Confluence, SharePoint), and workflow tools.
Ability to collaborate effectively with cross-functional teams and subject matter experts.
Hands-on experience working directly in Databricks notebooks, clusters, or MLflow.
Experience with ML and generative AI concepts: model training, inference, MLOps workflows, prompt engineering, RAG architectures, or AI agents.
Familiarity with Python and ML frameworks such as PyTorch, TensorFlow, Hugging Face, or MLflow.
Knowledge of data governance frameworks (e.g., Unity Catalog, RBAC, data lineage).
Experience in cloud environments such as Azure, AWS, or Google Cloud Platform.
Background in data engineering, analytics, or software development.
Experience creating visual aids such as diagrams, flowcharts, and process maps.
Working knowledge of Python for data science.
Background in journalism, technical communication, or content strategy, with skills in research and interviewing SMEs.
Experience creating instructional videos and visual media.
What Success Looks Like:
Clear, consistent, and compliant documentation that enables teams to confidently use Databricks.
Well-structured policies and procedures that reduce operational risk and improve platform governance.
Documentation that evolves alongside Databricks features and organizational needs.
Strong partnerships with data engineering, data science, visualization, governance, and security teams.
About OpenKyber Technology:
Technology is our focus, and quality is our commitment. As a national expert in delivering flexible technology and talent solutions, we strategically align industry and technical expertise with our clients' business objectives and cultural needs. Our solutions are tailored to each client and include a wide variety of professional services, projects, and talent solutions. By always striving for excellence and focusing on the human aspect of our business, we work seamlessly with our talent and clients to match the right solutions to the right opportunities.
Learn more about us at OpenKyber.com. OpenKyber provides Equal Employment Opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, OpenKyber complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities.
For applications and inquiries, contact: hirings@openkyber.com






