

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Warren, MI, lasting over 10 months, with an unspecified pay rate. Candidates should have 5-7 years of ETL experience, proficiency in Azure, and skills in Scala, Spark, Python, and SQL.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: June 13, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Warren, MI
Skills detailed: #API (Application Programming Interface) #REST (Representational State Transfer) #Agile #Computer Science #Data Engineering #Spark (Apache Spark) #AI (Artificial Intelligence) #Data Integration #Data Architecture #Scala #Mathematics #SSIS (SQL Server Integration Services) #BI (Business Intelligence) #Big Data #Databricks #Databases #ML (Machine Learning) #SQL Server #Python #Leadership #Monitoring #Data Pipeline #Strategy #Azure #SQL (Structured Query Language) #Code Reviews #.Net #Data Science #REST API #ETL (Extract, Transform, Load) #Continuous Deployment #Deployment #Angular #Automation #Azure cloud #PostgreSQL #Cloud
Role description
Title: Data Engineer
Duration: 10+ Months
Work Location: Warren, MI
Top Skills:
1. 5-7 years of experience building ETL
2. Working with Azure cloud technologies
3. Understanding databases, database structures, data models, and data architecture
Job Description:
As a Data Engineer, you will build industrialized data assets and data pipelines in support of Business Intelligence and Advanced Analytics objectives. The Data Engineer leads and delivers new, innovative, data-driven solutions that are elegant and professional. You will work closely with our forward-thinking Data Scientists, BI developers, System Architects, and Data Architects to deliver value to our vision for the future. Our team focuses on writing maintainable tests and code that meet the customer's needs and scale without rework. Our engineers and architects work in highly collaborative environments across many disciplines (user experience, databases, streaming technology, custom rules engines, AI/ML, and most web technologies). We work on innovative technologies, understanding and inventing modern designs and integration patterns along the way. Our Data Engineers must:
• Assemble large, complex data sets that meet functional and non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Lead and deliver exceptional data-driven solutions across many different languages, tools, and technologies.
• Develop a culture that takes challenging and complex ideas and turns them into production solutions.
• Have a broad, enterprise-wide view of the business and an appreciation for strategy, process, capabilities, enablers, and governance.
• Apply themselves critically to solving problems in an interconnected and integrated environment.
• Think strategically about adopting innovative technologies that are beyond the horizon and deliver on sound solution design.
• Create high-level models that can be leveraged in future analysis to extend and mature the business architecture.
• Work toward continuously raising our standards of engineering excellence in quality, efficiency, and repeatable design.
• Perform hands-on development, lead code reviews and testing, create automation tools, and create proofs of concept.
• Work alongside the operations team during any production issues related to the platform.
• Apply best practices such as agile methodologies, design thinking, and continuous deployment that allow you to innovate fast.
• Deliver solutions across Big Data applications to support business strategies and deliver business value.
• Build tools and automation to make deployment and monitoring of the production environment more repeatable.
• Build strong relationships with business and technology partners and provide leadership, direction, best practices, and coaching to technology development teams.
• Own all technical aspects of development for assigned applications.
• Drive continuous improvement in applications through consistent development practices and tools, and through ongoing design and code refactoring.
• Collaborate with stakeholders through ongoing product and platform releases to solve existing needs, identify exciting opportunities, and predict future challenges.
• Work closely with product managers and architects to prioritize and manage features, technical requirements, known defects, and issues.
• Manage and appropriately escalate delivery impediments, risks, issues, and changes tied to product development initiatives.
• Data Integration: design and develop new source-system integrations from a variety of formats, including files, database extracts, and APIs.
• Data Pipelines: design and develop highly scalable data pipelines that incorporate complex transformations and efficient code.
• Data Delivery: design and develop solutions for delivering data that meet SLAs (a sketch of these three responsibilities follows this list).
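Those last three responsibilities fit together as one flow: integrate a source, transform it at scale, and publish it for consumers. Below is a minimal PySpark sketch of that flow, assuming a Spark environment such as the Databricks platform named in the skills list; the landing and serving paths, the orders schema, and the column names are hypothetical stand-ins, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

# Sketch only: paths, schema, and column names below are hypothetical.
spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Data Integration: ingest a new source system from files (CSV here;
# JSON exports, database extracts, or REST API payloads land similarly).
raw = spark.read.option("header", True).csv("/landing/orders/*.csv")

# Data Pipeline: express the transformation as lazy column operations,
# which Spark optimizes and scales across the cluster.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("order_date", F.to_date("order_date"))
)
daily = clean.groupBy("order_date").agg(F.sum("amount").alias("daily_total"))

# Data Delivery: publish partitioned Parquet that downstream BI tools
# can query within the agreed SLA.
daily.write.mode("overwrite").partitionBy("order_date").parquet("/serving/daily_orders")
```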
Degree requirement: Bachelor's degree in the data space or a related field (computer science, information technology, and mathematics are good fields).
Years of experience: 5-7 years of experience building ETL (extract, transform, load).
In practice, ETL code connects to a source database and extracts the tables and fields, performs the transformation and cleaning, then connects to the target database and loads the result.
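As a minimal sketch of that extract-transform-load flow in Python (one of the languages the role calls for): the source and target here are in-memory SQLite stand-ins, and the orders table and its fields are hypothetical, seeded only so the example runs end to end.

```python
import sqlite3
import pandas as pd

# Hypothetical in-memory source and target databases; the table and
# field names are illustrative, not taken from the posting.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

# Seed the source so the sketch runs end to end.
source.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, order_date TEXT)")
source.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 19.99, "2025-06-01"), (2, 5.00, "2025-06-01"), (3, None, "2025-06-02")],
)

# Extract: connect to the source database and pull the table and fields out.
orders = pd.read_sql_query("SELECT order_id, amount, order_date FROM orders", source)

# Transform: clean the extracted rows (drop records with no amount)
# and reshape them into daily totals.
orders = orders.dropna(subset=["amount"])
daily = orders.groupby("order_date", as_index=False)["amount"].sum()

# Load: connect to the target database and write the transformed result.
daily.to_sql("daily_order_totals", target, if_exists="replace", index=False)
print(pd.read_sql_query("SELECT * FROM daily_order_totals", target))
```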
• Working with Azure cloud technologies.
• Understanding databases, database structures, data models, and data architecture.
• Expert in multiple tools and technologies, including Azure and Databricks.
• Scala, Spark, Python, SQL, .NET, REST API, Angular.
• SQL Server, PostgreSQL, SQL Server Integration Services (SSIS), complex ETL with massive data, Power BI.