

Data Modeler/Architect (USC/GC/H4/L2 Only)
Featured Role | Apply direct with Data Freelance Hub
This is a 3+ month contract for a Data Modeler/Architect paying $55/hr (W-2). Based on-site in Atlanta, Columbus, or Jersey City, the role requires expert proficiency with Erwin Data Modeler and advanced data modeling skills.
Country: United States
Currency: $ USD
Day rate: 440
Date discovered: September 30, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Atlanta, GA
Skills detailed: #Security #Data Analysis #Data Quality #Computer Science #ERWin #Data Architecture #Data Pipeline #Data Lineage #Physical Data Model #Data Lake #Scala #Databricks #Data Engineering #PySpark #Data Management #Metadata #Data Governance #MDM (Master Data Management) #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Documentation #Version Control #GIT #Data Modeling
Role description
Job Title: Data Modeler/Architect (USC/GC/H4/L2 Only)
Pay Rate: $55 / HR (W-2)
Duration: 3+ months
Location: Atlanta, GA; Columbus, OH; Jersey City, NJ
We are looking for a Data Modeler/Information Architect to lead the redesign and enhancement of enterprise data models, integrate MDM and data quality best practices, and architect optimized data lake and Databricks solutions for one of our Big Four clients. This is a hands-on role requiring expert proficiency with Erwin Data Modeler, advanced data modeling, and the ability to interpret Spark code in support of robust analytics and data pipelines.
Key Responsibilities:
• Redesign, enhance, and govern conceptual, logical, and physical data models to meet evolving business and analytics needs.
• Integrate master data management (MDM) and data quality best practices, including standardization, matching/merging, survivorship, and stewardship workflows.
• Collaborate with Data Engineering, Analytics, Product, and Governance teams to ensure models support reporting, advanced analytics, and broader enterprise requirements.
• Use Erwin Data Modeler for modeling, version control, naming standards, metadata management, and model-to-database synchronization.
• Architect and optimize data lakes and Databricks environments (table design, partitioning, performance tuning, cost optimization).
• Read and interpret Spark code (PySpark/Scala) to understand data lineage and transformations, validate models, and support pipeline design and optimization (an illustrative sketch follows this list).
• Produce documentation: data dictionaries, entity definitions, lineage and mapping specs, model change logs, and standards.
• Champion data architecture best practices, including reusability, security and privacy considerations, and scalability.
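For context on the Spark-reading and Databricks-optimization responsibilities above, here is a minimal, hypothetical PySpark sketch of the kind of transformation a modeler in this role might trace for lineage and review for physical design. It assumes a Databricks/Delta environment; the table and column names are invented for illustration and are not details from the client engagement.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical pipeline fragment -- the sort of code this role reads in order
# to document lineage and validate the physical data model.
spark = SparkSession.builder.appName("customer_monthly_orders").getOrCreate()

# Source tables (names are illustrative only).
orders = spark.read.table("raw.orders")
customers = spark.read.table("raw.customers")

# Lineage to capture: orders joined to customers, rolled up to one row
# per customer per calendar month.
monthly = (
    orders.join(customers, "customer_id", "inner")
          .withColumn("order_month", F.date_trunc("month", F.col("order_ts")))
          .groupBy("customer_id", "order_month")
          .agg(
              F.sum("order_amount").alias("total_amount"),
              F.count("order_id").alias("order_count"),
          )
)

# Physical design decision an architect would review: partitioning by month
# lets downstream analytics prune partitions instead of scanning the table.
(
    monthly.write.format("delta")
           .mode("overwrite")
           .partitionBy("order_month")
           .saveAsTable("curated.customer_monthly_orders")
)
```

The expectation in this role is less about writing such code than about reconstructing its lineage (raw.orders + raw.customers into curated.customer_monthly_orders) and judging the partitioning and naming choices against the enterprise model.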
Requirements:
• Expert-level proficiency in Erwin Data Modeler, enterprise data architecture, and the Databricks ecosystem.
• Advanced data modeling skills across relational and analytical use cases (e.g., 3NF, dimensional).
• Proven experience implementing MDM and data quality frameworks and tooling (a minimal survivorship sketch follows this list).
• Hands-on background with large-scale data lakes and lakehouse patterns.
• Ability to read and interpret Spark code to support and validate data pipelines.
• Strong teamwork, stakeholder management, communication, and documentation skills.
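As a rough illustration of the MDM survivorship concept referenced above, the following is a minimal PySpark sketch of a "most recent update wins" rule. It assumes a prior matching step has already assigned a shared match_key to records believed to describe the same customer; all table and column names are hypothetical.

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("mdm_survivorship_sketch").getOrCreate()

# Hypothetical table of customer records from multiple source systems,
# where matching has tagged duplicate records with the same match_key.
matched = spark.read.table("mdm.matched_customers")

# Survivorship rule (one of many possible): within each match_key, the most
# recently updated record becomes the golden record.
latest_first = Window.partitionBy("match_key").orderBy(F.col("updated_at").desc())

golden = (
    matched.withColumn("rn", F.row_number().over(latest_first))
           .filter(F.col("rn") == 1)
           .drop("rn")
)

golden.write.format("delta").mode("overwrite").saveAsTable("mdm.golden_customers")
```

Real survivorship logic is usually rule-based per attribute (source-system trust scores, completeness checks), but the window-function pattern above is a common starting point.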
Required Skills & Qualifications:
• 6–9 years of enterprise data analysis experience
• 6–9 years of information architecture experience
• 6–9 years of data architecture experience
• Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field (or equivalent experience)
Preferred Skills & Qualifications:
• Experience with data governance frameworks and metadata/lineage tools
• Familiarity with CI/CD for data (e.g., Git-based model versioning)
• Exposure to data quality platforms and MDM suites
• Understanding of security/privacy controls and PII handling within data platforms