

Sr. Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer with a 6+ month contract in Charlotte, NC (Hybrid). Pay ranges from $75-$79/HR. Requires a Bachelor's degree, 5 years in Data/BI Engineering, and expertise in GCP, SQL, and ML model deployment.
Country: United States
Currency: $ USD
Day rate: $632
Date discovered: July 31, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Charlotte, NC
Skills detailed: #Model Deployment #Batch #Automation #Data Science #Business Analysis #Big Data #AI (Artificial Intelligence) #HBase #Oracle #Scala #Database Design #Teradata #Deployment #Data Security #Data Quality #Computer Science #REST API #Visualization #Cloud #Informatica BDM (Big Data Management) #ETL (Extract, Transform, Load) #Data Engineering #Hadoop #Observability #GCP (Google Cloud Platform) #Integration Testing #Monitoring #UAT (User Acceptance Testing) #Workday #ML (Machine Learning) #Agile #Storage #BI (Business Intelligence) #Data Management #Data Ingestion #Data Governance #Logging #Migration #Security #SQL (Structured Query Language) #REST (Representational State Transfer)
Role description
Position: Sr. Data Engineer
Duration: 6+ Months Contract
Location: Charlotte, NC (Hybrid)
Pay Range: $75-$79/HR on W2, 40 hrs/Week
Day-to-Day Responsibilities
• Lead the integration of machine learning models into business-critical applications using GCP Vertex AI.
• Collaborate with data engineers, data scientists, software engineers, and product owners to ensure seamless model deployment and performance in production environments.
• Design and implement scalable, resilient, and secure model inference pipelines using Vertex AI, Vertex Pipelines, and related services.
• Enable continuous delivery and monitoring of models via Vertex AI Model Registry, Prediction Endpoints, and Model Monitoring features.
• Optimize model serving performance, cost, and throughput under high-load, real-time, and batch scenarios.
• Automate model lifecycle management, including CI/CD pipelines, retraining, versioning, rollback, and shadow testing.
• Participate in architecture reviews and advocate best practices in ML model orchestration, resource tuning, and observability.
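The lifecycle-automation duties above (versioning, rollback) can be pictured with a minimal, self-contained sketch. This is a toy stand-in, not Vertex AI's API: `ModelRegistry`, `promote`, and `rollback` are hypothetical names chosen for illustration; in practice the Vertex AI Model Registry and Prediction Endpoints provide the equivalent operations.

```python
# Toy sketch of model version management (hypothetical ModelRegistry class;
# a real deployment would use the Vertex AI Model Registry instead).

class ModelRegistry:
    """Tracks model versions and which one serves production traffic."""

    def __init__(self):
        self._versions = {}   # version string -> model artifact (any object)
        self._history = []    # promotion history, newest last
        self.live = None      # version currently serving traffic

    def register(self, version, artifact):
        self._versions[version] = artifact

    def promote(self, version):
        """Point production traffic at a registered version."""
        if version not in self._versions:
            raise KeyError(f"unknown version: {version}")
        self._history.append(version)
        self.live = version

    def rollback(self):
        """Revert to the previously promoted version."""
        if len(self._history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self._history.pop()           # drop the current version
        self.live = self._history[-1]
        return self.live


registry = ModelRegistry()
registry.register("v1", "model-artifact-v1")
registry.register("v2", "model-artifact-v2")
registry.promote("v1")
registry.promote("v2")
registry.rollback()      # v2 misbehaves -> traffic goes back to v1
print(registry.live)     # -> v1
```

Keeping an explicit promotion history is what makes rollback a cheap pointer swap rather than a redeploy, which is the same reason managed registries version every upload.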
Key Responsibilities
• Translates complex cross-functional business requirements and functional specifications into logical program designs, code modules, stable application systems, and data solutions; partners with the Product Team to understand business needs and functional specifications
• Collaborates with cross-functional teams to ensure specifications are converted into flexible, scalable, and maintainable solution designs; evaluates project deliverables to ensure they meet specifications and architectural standards
• Contributes to the design and build of complex data solutions and ensures the architecture blueprint, standards, target-state architecture, and strategies are aligned with the requirements
• Coordinates, executes, and participates in component integration testing (CIT), systems integration testing (SIT), and user acceptance testing (UAT) scenarios to identify application errors and ensure quality software deployment
• Participates in all end-to-end product lifecycle phases of software development by applying and sharing an in-depth understanding of complex industry methodologies, policies, standards, and controls
• Develops detailed architecture plans for large-scale enterprise architecture projects and drives the plans to fruition
• Solves complex architecture/design and business problems; delivers extensible solutions; works to simplify, optimize, and remove bottlenecks
Data Engineering Responsibilities
• Executes the development, maintenance, and enhancement of data ingestion solutions of varying complexity across data sources such as DBMSs, file systems (structured and unstructured), APIs, and streaming, on on-prem and cloud infrastructure; demonstrates strong acumen in data ingestion toolsets and nurtures and grows junior members in this capability
• Builds, tests, and enhances data curation pipelines, integrating data from a wide variety of sources such as DBMSs, file systems, APIs, and streaming systems, to develop KPIs and metrics with high data quality and integrity
• Supports the development of features/inputs for data models in an Agile manner; hosts models via REST APIs; ensures non-functional requirements such as logging, authentication, error capturing, and concurrency management are accounted for when hosting models
• Works with the Data Science team to understand mathematical models and algorithms; recommends improvements to analytic methods, techniques, standards, policies, and procedures
• Ensures the manipulation and administration of data and systems are secure and in accordance with enterprise governance by staying compliant with industry best practices, enterprise standards, corporate policy, and department procedures; handles the manipulation (extract, load, transform), visualization, and administration of data and systems securely and in accordance with enterprise data governance standards
• Maintains the health and monitoring of assigned data engineering capabilities that span analytic functions by triaging maintenance issues; ensures high availability of the platform; works with Infrastructure Engineering teams to maintain the data platform; serves as an SME for one or more applications
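The model-hosting duty above (REST serving with logging and error capture as non-functional requirements) can be sketched in plain Python. `fake_model` and `predict_handler` are illustrative names, not part of this role's actual stack; a real deployment would sit behind a web framework and an authentication layer, with the model supplied by the data science team.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model-host")


def fake_model(features):
    # Stand-in for a real scoring function supplied by the data science team.
    return {"score": sum(features) / len(features)}


def predict_handler(request_body: str) -> tuple:
    """Validate input, score it, and capture errors; returns (status, body)."""
    try:
        payload = json.loads(request_body)
        features = payload["features"]
        if not isinstance(features, list) or not features:
            raise ValueError("'features' must be a non-empty list")
        result = fake_model(features)
        logger.info("scored %d features", len(features))   # success logging
        return 200, json.dumps(result)
    except (json.JSONDecodeError, KeyError, ValueError) as exc:
        logger.error("bad request: %s", exc)               # error capture
        return 400, json.dumps({"error": str(exc)})


status, body = predict_handler('{"features": [1, 2, 3]}')
print(status, body)   # 200 {"score": 2.0}
```

The point of the wrapper is that every request either logs a success or logs and returns a structured error, so monitoring can alert on the 4xx rate rather than on unhandled exceptions.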
BI Engineering Responsibilities
• Responsible for the development, maintenance, and enhancement of BI solutions of varying complexity across data sources such as DBMSs and file systems (structured and unstructured), on on-prem and cloud infrastructure; creates level metrics and other complex metrics; uses custom groups, consolidations, drilling, and complex filters
• Demonstrates database skills (Teradata/Oracle/Db2/Hadoop) by writing views for business requirements; uses freeform SQL and pass-through functions; analyzes and resolves errors in SQL generation; creates RSDs and dashboards
• Responsible for building, testing, and enhancing BI solutions from a wide variety of sources such as Teradata, Hive, HBase, Google BigQuery, and file systems; develops solutions with optimized data performance and data security
• Works with business analysts to understand requirements and create dashboard/dossier wireframes; makes use of widgets and Vitara charts to make dashboards/dossiers visually appealing
• Coordinates and takes necessary actions on the DART side for application upgrades (e.g., Teradata, Workday), storage migration, and user management automation; supports areas such as cluster management and project configuration settings
Competencies:
• Technical Competencies
• Agile Development
• Big Data Management and Analytics
• Cloud Computing
• Database Design (Physical)
• Release Management
Minimum Qualifications:
• Bachelor's degree in Engineering/Computer Science, CIS, or a related field (or equivalent work experience in a related field)
• 5 years of experience in Data or BI Engineering, Data Warehousing/ETL, or Software Engineering
• 4 years of experience on projects involving the implementation of solutions applying the software development life cycle (SDLC)