Motion Recruitment

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a 6-month hybrid Senior Data Engineer position in Alpharetta, GA, requiring 4+ years of experience, proficiency in Python/Java, SQL, and cloud technologies (preferably Azure). Experience with ETL tools and data virtualization is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
528
-
🗓️ - Date
October 3, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Alpharetta, GA
-
🧠 - Skills detailed
#Data Warehouse #Security #Mathematics #SAP BODS (BusinessObjects Data Services) #Migration #ETL (Extract, Transform, Load) #Azure #Data Governance #SAP #Data Engineering #Java #Statistics #API (Application Programming Interface) #Documentation #Scala #Git #Microservices #Data Integration #Data Migration #Azure Data Factory #BI (Business Intelligence) #SQL (Structured Query Language) #Agile #Computer Science #Data Pipeline #Virtualization #Python #Cloud #Data Architecture #ADF (Azure Data Factory) #NoSQL #Data Analysis #Databases #SAP BusinessObjects
Role description
Exciting contract Senior Data Engineer opportunity with an established firm. This is a hybrid role local to Alpharetta, GA. The Senior Data Engineer will design, build, and implement data integration solutions, including data pipelines, data APIs, and ETL jobs, to meet the data needs of applications, services, microservices, data assets, and business intelligence and analytical tools. Working with data architects, application development teams, data analytics teams, business analysts, product managers, and the data governance COE, the Senior Data Engineer will design and develop interfaces between applications, databases, data assets, external partners, and third-party systems across a combination of cloud and on-premises platforms.

Contract Duration: 6 months
Hybrid - Wed/Thurs in office, Alpharetta, GA

Required Skills & Experience
• 4+ years of experience with a Bachelor's degree in Computer Science, Mathematics, Statistics, or another related technical field
• Experience in Python or Java development
• Experience in SQL (NoSQL experience is a plus)
• Preferred: experience in data virtualization, e.g. TIBCO Data Virtualization
• Knowledge of best practices and IT operations in an always-up, always-available service
• Deep understanding of cloud technologies (preferably Azure) and security, and how they can be combined to design scalable cloud solutions
• Experience with Git repository management tools (on-prem and cloud)
• Experience with or knowledge of Agile software development methodologies
• Excellent problem-solving and troubleshooting skills
• Process-oriented with strong documentation skills
• Excellent oral and written communication skills with a keen sense of customer service

Desired Skills & Experience
• Experience with an ETL tool; SAP BusinessObjects Data Services and MS Azure Data Factory preferred

What You Will Be Doing
Daily Responsibilities
• Develops and maintains scalable data pipelines and builds out new integrations to support continuing increases in data volume and complexity
• Designs and develops scalable ETL packages for point-to-point integration of data between source systems, and for extraction and integration of data into various data assets, including the data warehouse and fit-for-purpose data repositories, both on-prem and in the cloud
• Designs and develops scalable data APIs to provide data as a service to microservices, applications, and analytical tools
• Designs and develops data migrations in support of enterprise application and system implementations from legacy systems
• Writes functional specifications for data pipelines and APIs, and writes and performs unit/integration tests
• Performs data analysis required to troubleshoot data-related issues and assists in their resolution