

Creative Information Technology, Inc.
Cloud Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Cloud Engineer; the contract length and pay rate are unknown. Key skills include Databricks administration, AWS expertise, automation with Terraform, and compliance knowledge. A Bachelor's degree and relevant certifications are required.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: January 15, 2026
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: Washington, DC
Skills detailed: #Data Integration #Storage #IAM (Identity and Access Management) #Terraform #Cloud #Scripting #Security #Monitoring #Microsoft Power BI #Classification #BI (Business Intelligence) #Deployment #AWS (Amazon Web Services) #Data Access #Automation #Data Pipeline #Infrastructure as Code (IaC) #Debugging #CLI (Command-Line Interface) #GitLab #REST (Representational State Transfer) #REST API #ETL (Extract, Transform, Load) #Big Data #Python #AI (Artificial Intelligence) #Delta Lake #Forecasting #ML (Machine Learning) #Metadata #Compliance #SQL (Structured Query Language) #VPC (Virtual Private Cloud) #Data Governance #AutoScaling #DevOps #Databricks #GIT #Scala #Data Engineering #Disaster Recovery #S3 (Amazon Simple Storage Service) #Data Quality #Documentation #Libraries #Data Catalog #Logging
Role description
About Us
Creative Information Technology Inc. (CITI) is an esteemed IT enterprise renowned for its exceptional customer service and innovation. We serve both government and commercial sectors, offering a range of solutions such as Healthcare IT, Human Services, Identity Credentialing, Cloud Computing, and Big Data Analytics. With clients in the US and abroad, we hold key contract vehicles, including GSA IT Schedule 70, NIH CIO-SP3, GSA Alliant, and DHS Eagle II.
Join us in driving growth and seizing new business opportunities!
Position Description:
The Databricks Administrator is the hands-on technical owner of the agency's Databricks platform supporting EDP. This role is accountable for platform operations, security, and governance configuration end to end, ensuring the environment is compliant, reliable, and cost-controlled, and that it enables secure analytics and AI/ML workloads at scale.
The candidate shall also demonstrate the following knowledge and experience:
• Hands-on experience administering Databricks (workspace administration, clusters/compute policies, jobs, SQL warehouses, repos, runtime management) and expertise with the Databricks CLI.
• Strong Unity Catalog administration: metastores; catalogs/schemas; grants; service principals; external locations; storage credentials; governed storage access (a minimal grant sketch follows this list).
• Identity & Access Management proficiency: SSO concepts, SCIM provisioning, group-based RBAC, service principals, least-privilege patterns.
• Security fundamentals: secrets management, secure connectivity, audit logging, access monitoring, and evidence-ready operations.
• Cloud platform expertise (AWS): IAM roles/policies, object storage security patterns, networking basics (VPC concepts), logging/monitoring integration.
• Automation skills: scripting and/or IaC using Terraform/CLI/REST APIs for repeatable configuration and environment promotion (a REST-based sketch also follows this list).
• Experience implementing data governance controls (classification/tagging, lineage/metadata integrations) in partnership with governance teams.
• CI/CD practices for jobs/notebooks/config promotion across SDLC environments.
• Understanding of lakehouse concepts (e.g., Delta, table lifecycle management, separation of storage/compute).
• SQL proficiency and data engineering fundamentals for troubleshooting query performance issues, understanding ETL/ELT workflow patterns, and debugging data pipeline failures; basic Python/Scala familiarity for notebook/code issue diagnosis.
• Experience with compliance and regulatory frameworks (FedRAMP, HIPAA, SOC 2, or similar), including implementation of data residency requirements, retention policies, and audit-ready evidence collection.
• Hands-on experience with AWS security and networking services, including PrivateLink, Secrets Manager/Systems Manager integration, CloudWatch/CloudTrail integration, S3 bucket policies, cross-account access patterns, and KMS encryption key management.
• Experience administering Databricks serverless compute, workspace Git integrations (GitLab), Databricks Asset Bundles (DABs) for deployment automation, and modern workspace features supporting DevOps workflows.
• SLA/SLO management and stakeholder communication skills; ability to define platform service levels, produce operational reports, translate technical issues for business stakeholders, and manage vendor relationships (Databricks account teams).
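As a minimal sketch of the Unity Catalog grant administration described above, the snippet below gives a group read access to a catalog using the databricks-sdk Python package. The "analytics" catalog and "edp-analysts" group are hypothetical names, and authentication is assumed to come from the standard DATABRICKS_HOST/DATABRICKS_TOKEN environment variables or a configured profile.

```python
# pip install databricks-sdk
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.catalog import PermissionsChange, Privilege, SecurableType

# Picks up credentials from environment variables or ~/.databrickscfg.
w = WorkspaceClient()

# Grant read access on a (hypothetical) catalog to a (hypothetical) group.
# USE CATALOG / USE SCHEMA / SELECT granted at the catalog level inherit
# downward to all schemas and tables in the catalog.
w.grants.update(
    securable_type=SecurableType.CATALOG,
    full_name="analytics",
    changes=[
        PermissionsChange(
            principal="edp-analysts",
            add=[Privilege.USE_CATALOG, Privilege.USE_SCHEMA, Privilege.SELECT],
        )
    ],
)
```

The same change could equivalently be expressed as SQL GRANT statements or as a Terraform resource; the SDK form is shown because it composes well with the scripted administration this role calls for.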
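And as a sketch of the REST-based configuration automation mentioned in the Terraform/CLI/REST bullet, the snippet below lists existing cluster policies via the Cluster Policies API so a script can diff the workspace's actual state against a desired state kept in version control. The environment-variable names are assumptions.

```python
import os

import requests

# Assumed environment variables; in practice a service principal token pulled
# from a secrets manager would replace a personal access token.
host = os.environ["DATABRICKS_HOST"].rstrip("/")
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# List current cluster policies so desired state (e.g., JSON files in Git)
# can be diffed against what the workspace actually has.
resp = requests.get(f"{host}/api/2.0/policies/clusters/list",
                    headers=headers, timeout=30)
resp.raise_for_status()
for policy in resp.json().get("policies", []):
    print(policy["policy_id"], policy["name"])
```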
Education / Experience / Certifications / Accreditations
• Bachelor's degree in a related field or equivalent practical experience.
• 7+ years in cloud/data platform administration and operations, including 4+ years supporting Databricks or similar platforms.
• Databricks Platform Administrator / Databricks AWS Platform Architect
• Databricks Certified Data Engineer Associate/Professional
• AWS Certified Solutions Architect Associate or Professional
The Contractor shall deliver, but is not limited to, the following:
• Administer the Databricks account and workspaces across SDLC environments; standardize configuration, naming, and operational patterns.
• Configure and maintain clusters/compute, job compute, SQL warehouses, runtime versions, libraries, repos, and workspace settings.
• Implement platform monitoring/alerting, operational dashboards, and health checks; maintain runbooks and operational procedures (a health-check sketch follows this list).
• Provide Tier 2/3 operational support: troubleshoot incidents, perform root-cause analysis, and drive remediation and preventive actions.
• Manage change control for upgrades, feature rollouts, configuration changes, and integration changes; document impacts and rollback plans.
• Enforce least privilege across platform resources (workspaces, jobs, clusters, SQL warehouses, repos, secrets) using role/group-based access patterns.
• Configure and manage secrets and secure credential handling (secret scopes / key management integrations) for platform and data connectivity (a secret-scope sketch follows this list).
• Enable and maintain audit logging and access/event visibility; support security reviews and evidence requests.
• Administer Unity Catalog governance: metastores, catalogs/schemas/tables, ownership, grants, and environment/domain patterns.
• Configure and manage external locations, storage credentials, and governed access to cloud object storage.
• Partner with governance stakeholders to support metadata/lineage integration, classification/tagging, and retention controls where applicable.
• Coordinate secure connectivity and guardrails with cloud/network teams: private connectivity patterns, egress controls, firewall/proxy needs.
• Configure cloud integrations required for governed data access and service connectivity (roles/permissions, endpoints, storage integrations).
• Implement cost guardrails: cluster policies, auto-termination, scheduling, workload sizing standards, and capacity planning (a cluster-policy sketch follows this list).
• Produce usage/cost insights and optimization recommendations; address waste drivers (idle compute, oversized clusters, inefficient jobs).
• Automate administration and configuration using APIs/CLI/IaC (e.g., Terraform) to reduce manual drift and improve repeatability.
• Maintain platform documentation: configuration baselines, security/governance standards, onboarding guides, and troubleshooting references.
• Design and implement backup and disaster recovery procedures for workspace configurations, notebooks, Unity Catalog metadata, and job definitions; maintain recovery runbooks and perform periodic DR testing aligned to RTO/RPO objectives.
• Monitor and optimize platform performance, including SQL warehouse query tuning, cluster autoscaling configuration, Photon enablement, and Delta Lake optimization guidance (OPTIMIZE, VACUUM, Z-ordering strategies; a table-maintenance sketch follows this list). Administer Delta Live Tables (DLT) pipelines and coordinate with data engineering teams on pipeline health, data quality monitoring, failed-job remediation, and pipeline configuration best practices.
• Manage third-party integrations and ecosystem connectivity, including BI tool integrations (e.g., Power BI) and external metadata catalog integrations.
• Implement Databricks Asset Bundles (DABs) for standardized deployment patterns; automate workspace resource deployment (jobs, pipelines, dashboards) across SDLC environments using bundle-based CI/CD workflows.
• Conduct capacity planning and scalability analysis, including forecasting concurrent user/workload growth, platform scaling strategies, and proactive resource allocation during peak usage periods.
• Facilitate user onboarding and enablement, including new user/team onboarding procedures, training coordination, workspace access provisioning, and creation of self-service documentation/guides.
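The sketches below illustrate, under stated assumptions, a few of the deliverables above. First, a minimal health check: flag running clusters that have no auto-termination configured, a common cost and hygiene finding. It assumes the databricks-sdk package and ambient workspace credentials.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import State

w = WorkspaceClient()

# Flag running clusters with no auto-termination; these tend to be the
# biggest idle-compute waste drivers.
for c in w.clusters.list():
    if c.state == State.RUNNING and not c.autotermination_minutes:
        print(f"No auto-termination: {c.cluster_name} ({c.cluster_id})")
```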
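Next, secret-scope management: create a scope, store a credential, and grant a group read-only access via workspace secret ACLs. Scope, key, and group names are hypothetical, and in practice the secret value would be rotated in from a key management system rather than hard-coded.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.workspace import AclPermission

w = WorkspaceClient()

scope = "edp-platform"  # hypothetical scope name

# One-time setup: create_scope raises an error if the scope already exists.
w.secrets.create_scope(scope=scope)

# Store a connection credential; the value shown is a placeholder.
w.secrets.put_secret(scope=scope, key="warehouse-conn",
                     string_value="<placeholder-rotated-out-of-band>")

# Least privilege: jobs only need READ on the scope, not WRITE/MANAGE.
w.secrets.put_acl(scope=scope, principal="edp-jobs",
                  permission=AclPermission.READ)
```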
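Then, a cost-guardrail cluster policy: the hypothetical policy below forces auto-termination, bounds autoscaling, and restricts node types to an approved allowlist, using the standard cluster policy definition format (attribute paths with type/range/allowlist constraints).

```python
import json

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Hypothetical guardrail policy: force auto-termination, cap autoscaling,
# and limit node types to an approved allowlist.
definition = {
    "autotermination_minutes": {"type": "range", "maxValue": 60, "defaultValue": 30},
    "autoscale.max_workers": {"type": "range", "maxValue": 8},
    "node_type_id": {"type": "allowlist", "values": ["m5.xlarge", "m5.2xlarge"]},
}

policy = w.cluster_policies.create(
    name="edp-cost-guardrails",  # hypothetical policy name
    definition=json.dumps(definition),
)
print(policy.policy_id)
```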
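Finally, Delta Lake table maintenance of the OPTIMIZE/VACUUM/Z-order kind mentioned above, as it might run in a scheduled Databricks notebook or job (where a `spark` session is provided). The table and column names are hypothetical, and VACUUM retention should be agreed with data owners before shortening it.

```python
# Runs inside a Databricks notebook or job, where `spark` is ambient.
table = "analytics.sales.orders"  # hypothetical Unity Catalog table

# Compact small files and co-locate rows on a common filter column.
spark.sql(f"OPTIMIZE {table} ZORDER BY (order_date)")

# Remove data files no longer referenced by the table; the default
# retention threshold is 7 days.
spark.sql(f"VACUUM {table}")
```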






