

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 12-month contract at a pay rate of "$XX/hour". Remote work is available. Key skills include T-SQL, SSRS, Azure Data Factory, and Python (PySpark). A minimum of 10 years of relevant experience is required.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 20, 2025
Project duration: Unknown
Location type: Unknown
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Azure DevOps #DBA (Database Administrator) #PySpark #System Testing #Databases #Database Design #Indexing #Spark (Apache Spark) #Documentation #Databricks #Deployment #Azure SQL Database #Metadata #Data Processing #Python #Data Engineering #Computer Science #Azure #SSIS (SQL Server Integration Services) #Microsoft SQL #Data Lakehouse #Azure SQL #Data Lake #SQL (Structured Query Language) #SSRS (SQL Server Reporting Services) #Data Warehouse #Data Modeling #Visualization #Database Systems #SQL Server #Microsoft Azure #Azure Data Factory #Datasets #Consulting #ETL (Extract, Transform, Load) #Data Quality #ADF (Azure Data Factory) #Data Bricks #SSAS (SQL Server Analysis Services) #.Net #Data Pipeline #Data Management #Microsoft SQL Server #SQL Queries #MS SQL (Microsoft SQL Server) #Data Cleansing #DevOps
Role description
W2 only - no C2C or subcompany referrals
ECCO Select is a talent acquisition and consulting company specializing in people, process and technology solutions. We provide the talent behind the technology enabling our clients to achieve their goals. For more information about ECCO Select, visit us at www.eccoselect.com.
Position Title: Data Engineer
SUMMARY
The purpose of this position is to perform Data Development functions, including designing new or enhancing existing enterprise database systems, maintaining or developing critical database processes, performing unit and system testing, and performing support and help desk tasks. It also requires defining and adopting best practices for each of the data development functions as well as for visualization and ETL processes. The position is responsible for architecting report solutions using SSRS reports in collaboration with the management team. This position is also responsible for architecting ETL functions between a multitude of relational databases and external data files.
ESSENTIAL DUTIES AND RESPONSIBILITIES
• Work with a highly dynamic team focused on Digital Transformation
• Understand the domain and business processes to implement successful data pipelines
• Provide work status and coordinate with Data Engineers
• Manage customer deliverables and regularly report status via weekly/monthly reviews
• Ability to work in a dynamic environment with changing requirements
• Good communication and presentation skills
• Working experience with a wide range of data technologies, data modeling, and metadata management
• Working experience with T-SQL and relational databases, including currently supported versions of Microsoft SQL Server in Azure
• Design, develop, and maintain stored procedures, functions, and views
• Working experience with development and maintenance of Data Factory data pipelines
• Working experience with development and maintenance of data pipelines on the Databricks platform in Azure
• Working experience using DevOps CI/CD in Azure
• Program in T-SQL with relational databases, including currently supported versions of Microsoft SQL Server
• Working experience using SQL and Python (PySpark) for data processing and transformation
• Strong experience working across the full Microsoft tech stack, including Azure Data Factory, Azure SQL, Azure Data Lakehouse, Azure Databricks, and Azure DevOps
• Write high-performance SQL queries using joins, CROSS APPLY, aggregate queries, MERGE, and PIVOT
• Design normalized database tables with proper indexing and constraints
• Perform SQL query tuning and performance optimization on complex, inefficient queries
• Provide guidance on the appropriate use of table variables, temporary tables, and CTEs when working with large datasets
• Collaborate with DBAs on database design and performance enhancements
• Lead in all phases of the software development life cycle in a team environment
• Debug existing code and troubleshoot issues
• Design and provide a framework for maintaining the existing data warehouse for reporting and data analytics
• Following best practices, design, develop, test, and document ETL processes
• Develop data cleansing routines utilizing typical data quality functions involving standardization, transformation, rationalization, linking, and matching
• Keep up to date with the latest database features, techniques, and technologies
• Support current business applications, including the implementation of bug fixes as needed
• Able to multi-task and adapt to shifting priorities
• Able to meet deadlines set in project planning
• Effectively communicate progress through the project execution phase
• Follow industry and company standard coding practices
• Produce technical and application documentation
• Produce quality deliverables upon deployment
• Other duties as assigned
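The data-cleansing duties above (standardization, linking, matching) can be sketched in plain Python. This is a minimal illustration only; the field names and normalization rules here are hypothetical and are not specified in the posting:

```python
import re

def standardize(record):
    """Standardize a raw record: collapse whitespace, lowercase the name,
    and strip all non-digit characters from the phone number."""
    return {
        "name": re.sub(r"\s+", " ", record["name"]).strip().lower(),
        "phone": re.sub(r"\D", "", record["phone"]),
    }

def link_records(left, right):
    """Link two datasets on the standardized phone number, used here as a
    simple matching key; unmatched records pair with None."""
    index = {standardize(r)["phone"]: r for r in right}
    return [(rec, index.get(standardize(rec)["phone"])) for rec in left]

left = [{"name": "  Ada  Lovelace ", "phone": "(555) 123-4567"}]
right = [{"name": "ADA LOVELACE", "phone": "555.123.4567"}]
pairs = link_records(left, right)
```

In practice these functions would typically be expressed as PySpark column transformations rather than per-record Python, but the standardize-then-match structure is the same.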
QUALIFICATIONS AND REQUIREMENTS
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skills, and/or abilities required.
• Minimum 10 years of work-related experience in T-SQL, SSRS, and ETL. If the candidate has relevant education in computer science, computer information systems, or a related field, a lack of experience can be supplemented by education as follows:
• Minimum 7 years of experience with development and maintenance of Azure Data Factory data pipelines
• Minimum 5 years of experience with development and maintenance of data pipelines on the Databricks platform in Azure
• Minimum of 3 years of experience using DevOps CI/CD in Azure
• Minimum of 3 years of supervising data infrastructure and data flows
• Strong proficiency in the Microsoft tech stack, including:
• Microsoft Azure ecosystem (Azure Data Factory, Azure Data Lakehouse, Azure SQL Database)
• Databricks
• SQL Server
• SSRS, SSIS, SSAS
• T-SQL
• Python (PySpark) – required
• Understanding of data lakehouse architecture in Azure
• Experience developing, maintaining, and supporting database processes using Microsoft SQL Server with an emphasis on .NET technologies
• Proficient at a senior level with T-SQL, SSRS, SSIS, SSAS, ETL, data warehousing, and Python
• Demonstrated ability to perform the above-listed essential job functions
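As an illustration of the CTE- and join-based SQL skills listed above, here is a minimal sketch in standard SQL, run through Python's built-in sqlite3 module. The customers/orders schema is a hypothetical example, not part of the role; in the role itself this style of query would target Microsoft SQL Server in T-SQL:

```python
import sqlite3

# In-memory database with a hypothetical two-table schema.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (1, 1, 100.0), (2, 1, 250.0), (3, 2, 75.0);
""")

# A CTE pre-aggregates order totals per customer, then joins back to the
# customer table -- the kind of shape that avoids repeating the aggregate
# subquery inline and keeps complex queries tunable.
rows = con.execute("""
    WITH totals AS (
        SELECT customer_id, SUM(amount) AS total
        FROM orders
        GROUP BY customer_id
    )
    SELECT c.name, t.total
    FROM customers AS c
    JOIN totals AS t ON t.customer_id = c.id
    ORDER BY t.total DESC
""").fetchall()
```

SQL Server additionally offers CROSS APPLY, MERGE, and PIVOT, which SQLite lacks; the CTE-plus-join pattern above carries over to T-SQL unchanged.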
ECCO Select is committed to hiring and retaining a diverse workforce. Our policy is to provide equal opportunity to all people without regard to race, color, religion, national origin, ancestry, marital status, veteran status, age, disability, pregnancy, genetic information, citizenship status, sex, sexual orientation, gender identity or any other legally protected category. Veterans of our United States Uniformed Services are specifically encouraged to apply for ECCO Select opportunities.
Equal Employment Opportunity is The Law
This Organization Participates in E-Verify