Data Engineer (contract)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a 12-month, on-site Data Engineer contract position in Charlotte, NC, at a pay rate of "X". It requires 5+ years of data engineering experience, proficiency in SQL Server and SSIS, and financial services industry experience.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 20, 2025
πŸ•’ - Project duration
More than 6 months
🏝️ - Location type
On-site
πŸ“„ - Contract type
W2 Contractor
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Migration #PySpark #Cybersecurity #Data Governance #Spark SQL #Database Migration #Spark (Apache Spark) #Databricks #Deployment #Metadata #Data Processing #Data Engineering #Azure #Monitoring #Cloud #Security #Data Lake #Compliance #SQL (Structured Query Language) #Logging #Data Ingestion #Automation #SQL Server #Azure Data Factory #Synapse #ETL (Extract, Transform, Load) #Data Quality #ADF (Azure Data Factory) #Data Catalog #Data Analysis #Replication #Data Pipeline #Data Management #SSIS (SQL Server Integration Services) #Scala #DevOps
Role description
Title: Data Engineer
Location: Charlotte, NC
Duration: 12 months
Work Engagement: W-2
Benefits on offer for this contract position: Health Insurance, Life Insurance, 401(k), and Voluntary Benefits

Summary:
In this contingent resource assignment, you may:
• Consult on complex initiatives with broad impact and large-scale planning for Information Security Engineering.
• Review and analyze complex, multi-faceted, larger-scale, or longer-term Information Security Engineering challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented factors.
• Contribute to the resolution of complex, multi-faceted situations requiring a solid understanding of the function, policies, procedures, and compliance requirements that meet deliverables.
• Strategically collaborate and consult with client personnel.

We are seeking a highly skilled Data Engineer to design, build, and maintain robust data pipelines and integration workflows using SQL Server, SSIS, and Microsoft Fabric. This role is critical to enabling secure, scalable, high-performance data solutions that support cybersecurity metrics, reporting, and automation initiatives.

Responsibilities:
• Design and implement ETL pipelines using SSIS and SQL Server to support data ingestion, transformation, and delivery across cybersecurity platforms.
• Develop and optimize data workflows within the Microsoft Fabric ecosystem, including Azure Data Factory, Synapse Analytics, and OneLake.
• Collaborate with data analysts, architects, and cybersecurity SMEs to translate business requirements into scalable data solutions.
• Maintain and enhance SSIS packages, including deployment, configuration, and job scheduling via SQL Server Agent (a scheduling sketch follows the qualifications lists below).
• Participate in database migration and modernization efforts, including Always On configurations and cross-data-center replication.
• Ensure data quality, integrity, and lineage through rigorous validation, logging, and monitoring practices (see the validation sketch below).
• Support hybrid cloud deployments and integration between on-premises SQL environments and Microsoft Fabric services.
• Document data models, workflows, and operational procedures for QA, audit, and knowledge transfer.

Qualifications:
• Applicants must be authorized to work for ANY employer in the U.S. This position is not eligible for visa sponsorship.
• 5+ years of data engineering experience with a focus on SQL Server and SSIS.
• Hands-on experience with Microsoft Fabric components such as Azure Data Factory, Synapse, and Data Lake.
• Strong SQL skills, including performance tuning, stored procedures, and job automation.
• Experience with enterprise-scale ETL processes and data warehousing.
• Familiarity with data governance, security, and compliance in regulated environments.
• Financial services experience, specifically at large banks.

Preferred Qualifications:
• Experience with Spark SQL, PySpark, or Databricks within Microsoft Fabric (see the Spark SQL sketch below).
• Exposure to DevOps practices, including CI/CD pipelines for data workflows.
• Knowledge of structured streaming and real-time data processing.
• Familiarity with metadata management and data cataloging tools.
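For candidates less familiar with "job scheduling via SQL Server Agent," here is a minimal T-SQL sketch of registering a nightly run of a catalog-deployed SSIS package through the standard msdb stored procedures. The job name, SSISDB folder/project/package path, and schedule are all hypothetical; the /ISSERVER command line would vary with your catalog layout.

```sql
USE msdb;
GO
-- Create the Agent job (name is hypothetical)
EXEC dbo.sp_add_job @job_name = N'Nightly_CyberMetrics_Load';

-- Add a step that runs a package from the SSIS catalog;
-- the \SSISDB\... path below is illustrative only
EXEC dbo.sp_add_jobstep
    @job_name  = N'Nightly_CyberMetrics_Load',
    @step_name = N'Run SSIS package',
    @subsystem = N'SSIS',
    @command   = N'/ISSERVER "\"\SSISDB\Security\CyberMetrics\LoadMetrics.dtsx\"" /SERVER "\".\"" /CALLERINFO SQLAGENT /REPORTING E';

-- Run daily at 01:30 (freq_type 4 = daily, interval 1 = every day)
EXEC dbo.sp_add_schedule
    @schedule_name     = N'Daily_0130',
    @freq_type         = 4,
    @freq_interval     = 1,
    @active_start_time = 013000;

EXEC dbo.sp_attach_schedule
    @job_name      = N'Nightly_CyberMetrics_Load',
    @schedule_name = N'Daily_0130';

-- Target the local server so the Agent picks the job up
EXEC dbo.sp_add_jobserver @job_name = N'Nightly_CyberMetrics_Load';
GO
```

In practice, teams often script these jobs once in SSMS and keep the generated T-SQL in source control so deployments stay repeatable.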
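The "validation, logging, and monitoring" responsibility typically reduces to checks like the following: a minimal sketch, assuming hypothetical staging, target, and audit tables, that compares row counts after a load and records the outcome.

```sql
-- Post-load validation: compare staging vs. target row counts and
-- log the result. All table names here are hypothetical.
DECLARE @staged BIGINT = (SELECT COUNT(*) FROM stg.SecurityEvents);
DECLARE @loaded BIGINT = (SELECT COUNT(*) FROM dbo.SecurityEvents);

INSERT INTO audit.LoadLog (LoadDate, StagedRows, LoadedRows, Status)
VALUES (SYSUTCDATETIME(), @staged, @loaded,
        CASE WHEN @staged = @loaded THEN 'OK' ELSE 'MISMATCH' END);

-- Raise an error on mismatch so the Agent job step fails visibly
IF @staged <> @loaded
    THROW 50001, 'Row count mismatch between staging and target.', 1;
```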
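For the preferred Spark SQL qualification, this is the flavor of query involved: a sketch as it might appear in a Microsoft Fabric notebook %%sql cell over a lakehouse table. The table and column names are hypothetical.

```sql
-- Daily cybersecurity event counts by severity from a lakehouse
-- table (names are illustrative, not from the posting)
SELECT CAST(event_time AS DATE) AS event_date,
       severity,
       COUNT(*)                 AS event_count
FROM   security_events
GROUP  BY CAST(event_time AS DATE), severity
ORDER  BY event_date, severity;
```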