

Senior Data Modeler - Federal Labor Category
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Modeler on a contract of unspecified duration, paying $60.00 - $64.00 per hour. It requires 10+ years of experience in AI, Data Science, AWS, Databricks, and ETL, plus financial industry experience. The work is on-site.
Country: United States
Currency: $ USD
Day rate: $512
Date discovered: August 16, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Washington, DC 20551
Skills detailed: #Data Manipulation #Tableau #Cloud #Data Integrity #Databricks #Documentation #Data Science #SQL Queries #Data Modeling #Computer Science #Compliance #Databases #Data Lake #AWS (Amazon Web Services) #Physical Data Model #Data Governance #Scala #Spark (Apache Spark) #Python #Data Dictionary #Data Analysis #Data Processing #Deployment #Data Quality #Clustering #Visualization #ETL (Extract, Transform, Load) #Storage #Data Engineering #Security #Data Layers #AI (Artificial Intelligence) #SQL (Structured Query Language) #Data Integration
Role description
Senior Data Modeler
Personnel Qualifications:
• At least ten years of experience in AI, Data Science, and Software Engineering, including knowledge of the data ecosystem
• Bachelor's degree in Computer Science, Information Systems, or another related field, or equivalent related work experience
• Data Modeling: Expertise in designing and implementing data models optimized for storage, retrieval, and analytics within Databricks on AWS, including conceptual, logical, and physical data modeling
• Databricks Proficiency: In-depth knowledge and hands-on experience with the AWS Databricks platform, including Databricks SQL, Runtime, clusters, notebooks, and integrations
• ETL (Extract, Transform, Load) Processes: Proficiency in developing ETL pipelines that extract data from various sources, transform it per business requirements, and load it into the central data lake using Databricks tools and Spark (see the sketch after this list)
• Data Integration: Experience integrating data from heterogeneous sources (relational databases, APIs, files) into Databricks while ensuring data quality, consistency, and lineage
• Performance Optimization: Ability to optimize data processing workflows and SQL queries in Databricks for performance, scalability, and cost-effectiveness, leveraging partitioning, clustering, caching, and Spark optimization techniques
• Data Governance and Security: Understanding of data governance principles and experience implementing security measures to ensure data integrity, confidentiality, and compliance within the centralized data lake environment
• Advanced SQL and Spark Skills: Proficiency in writing complex SQL queries and Spark code (Scala/Python) for data manipulation, transformation, aggregation, and analysis tasks within Databricks notebooks
• Cloud Architecture: Understanding of cloud computing principles, AWS architecture, and AWS services for designing scalable and resilient data solutions
• Data Visualization: Basic knowledge of data visualization tools (e.g., Tableau) to create insightful visualizations and dashboards for data analysis and reporting
• Familiarity with government cloud deployment regulations and compliance policies such as FedRAMP and FISMA
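As a rough illustration of the ETL and partitioning skills listed above, here is a minimal PySpark sketch, assuming a Databricks runtime with Delta Lake available; the source path, column names, and table name are invented for the example and are not from the posting.

    # Minimal ETL sketch: extract from a raw source, transform, load to the lake.
    # Assumptions (hypothetical, not from the posting): an S3 landing path,
    # a trades dataset with trade_id/trade_ts/amount columns, a Delta runtime.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw records from one of the heterogeneous sources (CSV here).
    raw = (spark.read
           .option("header", "true")
           .option("inferSchema", "true")
           .csv("s3://example-bucket/raw/trades/"))  # hypothetical path

    # Transform: enforce types, drop malformed rows, derive a partition column.
    clean = (raw
             .withColumn("trade_ts", F.to_timestamp("trade_ts"))
             .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
             .dropna(subset=["trade_id", "trade_ts"])
             .withColumn("trade_date", F.to_date("trade_ts")))

    # Load: write to the central lake as Delta, partitioned by date so
    # downstream queries can prune partitions instead of scanning everything.
    (clean.write
     .format("delta")
     .mode("append")
     .partitionBy("trade_date")
     .saveAsTable("lake.trades"))  # hypothetical table name

Partitioning by trade_date is one example of the optimization techniques the posting names (partitioning, clustering, caching); the right partition key would depend on the actual query patterns.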
Capabilities:
• Leverage financial industry expertise to define conceptual, logical, and physical data models in Databricks to support new and existing business domains
• Work with product owners, system architects, data engineers, and vendors to create data models optimized for query performance and for compute and storage costs
• Define best practices for implementing the Bronze/Silver/Gold data layers of the lakehouse (see the sketch below)
• Provide data model documentation and artifacts generated from the data, such as a data dictionary and data definitions
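For the Bronze/Silver/Gold layering mentioned above, here is a minimal sketch of the medallion pattern, again assuming Delta Lake; the schemas, table names, and business rules are invented for illustration only.

    # Minimal Bronze/Silver/Gold sketch in a Databricks lakehouse.
    # Assumptions (hypothetical): bronze/silver/gold schemas already exist,
    # and an accounts dataset with account_id/region/balance columns.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

    # Bronze: land raw data as-is, with ingestion metadata for lineage.
    bronze = (spark.read.json("s3://example-bucket/landing/accounts/")  # hypothetical
              .withColumn("_ingested_at", F.current_timestamp()))
    bronze.write.format("delta").mode("append").saveAsTable("bronze.accounts")

    # Silver: cleaned, conformed records; enforce keys and deduplicate.
    silver = (spark.table("bronze.accounts")
              .filter(F.col("account_id").isNotNull())
              .dropDuplicates(["account_id"]))
    silver.write.format("delta").mode("overwrite").saveAsTable("silver.accounts")

    # Gold: business-level aggregates shaped for analytics and dashboards.
    gold = (spark.table("silver.accounts")
            .groupBy("region")
            .agg(F.count("*").alias("account_count"),
                 F.sum("balance").alias("total_balance")))
    gold.write.format("delta").mode("overwrite").saveAsTable("gold.account_summary")

The layer boundaries shown here (raw landing, conformed records, reporting aggregates) follow the common medallion convention; the actual domain models would be defined with the product owners and architects named in the capabilities.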
Job Type: Contract
Pay: $60.00 - $64.00 per hour
Expected hours: 40 per week
Experience:
Data Modeler: 10 years (Required)
Federal: 10 years (Required)
Labor: 10 years (Required)
AI: 10 years (Required)
Data Science: 10 years (Required)
Data Ecosystem: 10 years (Required)
AWS: 10 years (Required)
Databricks: 10 years (Required)
ETL: 10 years (Required)
Spark: 10 years (Required)
Scala: 10 years (Required)
Python: 10 years (Required)
Financial Industry: 10 years (Required)
Work Location: In person