
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer based in Alpharetta, GA, with an unknown contract length and a day rate of $520 USD. Key skills include ETL, Snowflake, Kafka, and data quality. The role requires 10+ years of metadata modeling experience and 5 years of project management experience.
Country: United States
Currency: $ USD
Day rate: $520
Date discovered: September 5, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Alpharetta, GA
Skills detailed: #MS SQL (Microsoft SQL Server) #Snowflake #Data Engineering #Data Integration #Kafka (Apache Kafka) #Documentation #Impala #Project Management #Jira #Metadata #SQL (Structured Query Language) #Data Quality #Agile #ETL (Extract, Transform, Load) #Oracle #GitHub #Spark (Apache Spark)
Role description
ENGINEER JOB DESCRIPTION:
Build transformation and load processes that move Kafka-streamed data into a Snowflake database. Analyze layouts and SQL design requirements. Define metadata for identifying and ingesting source files. Create and update source-to-target mapping lineage, transformation rules, and data definitions. Identify PII details and conform to standard naming conventions. Coordinate data integration, conformity, quality, integrity, and consolidation efforts. Analyze source-to-target field-level mappings for data sources. Design and implement transformation rules within the transformation framework. Provide support for resolving data quality issues. Coordinate with technical teams, SMEs, and architects on enhancements and change requests. Provide training and create detailed documentation, implementation plans, and project trackers.
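
As a rough illustration of the kind of pipeline the description covers, the sketch below consumes JSON records from a Kafka topic, applies a declarative source-to-target field mapping (including a simple PII-masking rule), and inserts the results into a Snowflake table. All names are hypothetical placeholders rather than details from this posting: the topic, table, columns, credentials, and mapping are invented, and the libraries assumed are kafka-python and snowflake-connector-python.

# Minimal sketch, not a production pipeline. The topic "transactions",
# the table STAGING.TRANSACTIONS, and the fields in FIELD_MAPPING are
# hypothetical; none of these come from the posting.
import json

from kafka import KafkaConsumer        # pip install kafka-python
import snowflake.connector             # pip install snowflake-connector-python

# Declarative source-to-target mapping: source field -> (target column, rule).
# full_nm is treated as PII and masked to illustrate the PII/naming duties.
FIELD_MAPPING = {
    "cust_id": ("CUSTOMER_ID", int),
    "full_nm": ("CUSTOMER_NAME_MASKED", lambda s: s.strip()[:2] + "***"),
    "txn_amt": ("TRANSACTION_AMOUNT", float),
}

def transform(record):
    """Rename and convert one source record per the mapping's rules."""
    return {target: rule(record[source])
            for source, (target, rule) in FIELD_MAPPING.items()}

def main():
    consumer = KafkaConsumer(
        "transactions",                              # hypothetical topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )
    conn = snowflake.connector.connect(              # placeholder credentials
        account="my_account", user="etl_user", password="...",
        warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
    )
    try:
        with conn.cursor() as cur:
            for message in consumer:                 # runs until interrupted
                row = transform(message.value)
                # Row-at-a-time parameterized insert, kept simple for clarity.
                cur.execute(
                    "INSERT INTO TRANSACTIONS "
                    "(CUSTOMER_ID, CUSTOMER_NAME_MASKED, TRANSACTION_AMOUNT) "
                    "VALUES (%s, %s, %s)",
                    (row["CUSTOMER_ID"], row["CUSTOMER_NAME_MASKED"],
                     row["TRANSACTION_AMOUNT"]),
                )
    finally:
        conn.close()

if __name__ == "__main__":
    main()

A production version of this loop would batch inserts or use Snowpipe rather than inserting row by row, and would pull credentials from a secrets manager instead of hard-coding them.
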
Prior experience must include:
· 10+ years conducting metadata modeling, data transformation processes, and data profiling
· 8 years of data quality, integrity, and data consolidation
· 4 years working with Snowflake, Oracle, MS SQL, SparkSQL, Hive, or Impala
· 5 years designing and optimizing data transformation processes
· 5 years of software development life cycle (SDLC) processes, including Agile methodologies
· 5 years working with technical teams, stakeholders, and project managers to provide instructions and demonstrations on software delivery
· 5 years of project management and documentation using GitHub, JIRA, and Confluence