

Cozen Technology Solutions Inc
Lead Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer with a contract length of "X months" and a pay rate of "$X/hour." Key skills include Teradata, Databricks, Spark/PySpark, and strong SQL proficiency. Requires 13-15+ years of experience and familiarity with AWS services.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 25, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Morris County, NJ
-
🧠 - Skills detailed
#PySpark #Unix #Shell Scripting #EC2 #Cloud #SQL (Structured Query Language) #Delta Lake #Artifactory #Data Warehouse #Jenkins #Teradata #AWS IAM (AWS Identity and Access Management) #Data Modeling #Jira #AWS (Amazon Web Services) #Data Lake #Databricks #Linux #Scripting #AWS CloudWatch #IAM (Identity and Access Management) #Teradata SQL #ETL (Extract, Transform, Load) #Data Engineering #Agile #GIT #SQL Queries #AWS EC2 (Amazon Elastic Compute Cloud) #Scala #AWS Lambda #AWS Glue #Spark (Apache Spark) #SQS (Simple Queue Service) #Azure #AWS S3 (Amazon Simple Storage Service) #Server Administration #Lambda (AWS Lambda) #Spark SQL #DevOps #SNS (Simple Notification Service) #S3 (Amazon Simple Storage Service)
Role description
Must Have Skills:
• Teradata
• Databricks
• Spark/PySpark
• 13-15+ years of experience; takes initiative, has command of the tools, and works in a consultative manner rather than waiting for direction.
• Experience working with both business and IT leaders
Duties:
• Collaborate with business and technical stakeholders to gather and understand requirements.
• Design scalable data solutions and document technical designs.
• Develop production-grade, high-performance ETL pipelines using Spark and PySpark (a minimal sketch follows this list).
• Perform data modeling to support business requirements.
• Write optimized SQL queries using Teradata SQL, Hive SQL, and Spark SQL across platforms such as Teradata and Databricks Unity Catalog.
• Implement CI/CD pipelines to deploy code artifacts to platforms like AWS and Databricks.
• Orchestrate Databricks jobs using Databricks Workflows (see the SDK sketch after this list).
• Monitor production jobs, troubleshoot issues, and implement effective solutions.
• Actively participate in Agile ceremonies including sprint planning, grooming, daily stand-ups, demos, and retrospectives.
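For illustration, a minimal PySpark sketch of the kind of pipeline described above. The table and column names (sales.raw.orders, sales.curated.daily_orders, order_status, order_ts, order_amount) are hypothetical, not from this posting:

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a session already exists; getOrCreate() reuses it.
spark = SparkSession.builder.appName("daily_orders_etl").getOrCreate()

# Read a source table registered in Unity Catalog (hypothetical name).
orders = spark.read.table("sales.raw.orders")

# Transform: keep completed orders, derive a date column, aggregate per day.
daily = (
    orders
    .filter(F.col("order_status") == "COMPLETE")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("order_amount").alias("total_amount"),
    )
)

# Write the result back as a managed Delta table (hypothetical name).
daily.write.format("delta").mode("overwrite").saveAsTable("sales.curated.daily_orders")
```

The same aggregation could equally be expressed in Spark SQL via spark.sql(...); queries that run on Teradata itself would use Teradata's own SQL dialect instead.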
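The Workflows orchestration duty can also be scripted; below is a sketch using the Databricks SDK for Python, where the job name, notebook path, and cluster ID are placeholders rather than details from this posting:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

# Host and credentials are resolved from the environment or a config profile.
w = WorkspaceClient()

# Define a scheduled job with a single notebook task.
# The notebook path and cluster ID below are placeholders.
job = w.jobs.create(
    name="daily_orders_etl",
    tasks=[
        jobs.Task(
            task_key="run_etl",
            notebook_task=jobs.NotebookTask(notebook_path="/Repos/etl/daily_orders"),
            existing_cluster_id="0000-000000-example",
        )
    ],
    schedule=jobs.CronSchedule(
        quartz_cron_expression="0 0 2 * * ?",  # 02:00 daily
        timezone_id="UTC",
    ),
)
print(f"Created job {job.job_id}")
```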
Skills:
• Strong hands-on experience with Spark, PySpark, shell scripting, Teradata, and Databricks.
• Proficiency in writing complex and efficient SQL queries and stored procedures.
• Solid experience with Databricks for data lake/data warehouse implementations.
• Familiarity with Agile methodologies and DevOps tools such as Git, Jenkins, and Artifactory.
• Experience with Unix/Linux shell scripting (KSH) and basic Unix server administration.
• Knowledge of job scheduling tools like CA7 Enterprise Scheduler.
• Hands-on experience with AWS services including S3, EC2, SNS, SQS, Lambda, ECS, Glue, IAM, and CloudWatch.
• Expertise in Databricks components such as Delta Lake, Notebooks, Pipelines, cluster management, and cloud integration (Azure/AWS); see the merge sketch after this list.
• Proficiency with collaboration tools like Jira and Confluence.
• Demonstrated creativity, foresight, and sound judgment in planning and delivering technical solutions.
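As a concrete illustration of the Delta Lake expertise listed above, a minimal upsert (MERGE) sketch using the delta-spark API; the table names and the order_id key are hypothetical:

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Incoming batch of changed rows (hypothetical staging table).
updates = spark.read.table("sales.staging.order_updates")

# Upsert into the target Delta table, keyed on order_id.
target = DeltaTable.forName(spark, "sales.curated.orders")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```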
Required Skills:
• Spark
• PySpark
• Shell Scripting
• Teradata
• Databricks
Additional Skills:
• AWS SQS
• Foresight
• Sound Judgment
• SQL
• Stored Procedures
• Databricks for Data Lake/Data Warehouse Implementations
• Agile Methodologies
• Git
• Jenkins
• Artifactory
• Unix/Linux Shell Scripting
• Unix Server Administration
• CA7 Enterprise Scheduler
• AWS S3
• AWS EC2
• AWS SNS
• AWS Lambda
• AWS ECS
• AWS Glue
• AWS IAM
• AWS CloudWatch
• Databricks Delta Lake
• Databricks Notebooks
• Databricks Pipelines
• Databricks Cluster Management
• Databricks Cloud Integration (Azure/AWS)
• Jira
• Confluence
• Creativity