

Millennium Software and Staffing
Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a contract lasting more than 6 months, paying from $80.00 per hour. Key skills include Databricks, Azure data services, PySpark, SQL, and Power BI. Only USC/GC (US Citizen or Green Card) candidates are eligible.
Country: United States
Currency: $ USD
Day rate: $640
Date: February 26, 2026
Duration: More than 6 months
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Greenville, SC 29616
Skills detailed: #Data Engineering #Storage #Automation #Databases #Cloud #Data Warehouse #Documentation #PySpark #Java #Python #Data Pipeline #Scripting #Spark (Apache Spark) #Data Quality #Oracle #Apache Hive #Looker #VBA (Visual Basic for Applications) #Bash #Database Performance #SQL (Structured Query Language) #Unix #Data Architecture #Data Lake #Microsoft Power BI #MS SQL (Microsoft SQL Server) #Spark SQL #Shell Scripting #SQL Queries #Microsoft SQL Server #Microsoft SQL #Informatica #Databricks #Talend #Azure #AWS (Amazon Web Services) #Scala #SQL Server #Data Integration #Agile #ML (Machine Learning) #Hadoop #Big Data #ETL (Extract, Transform, Load) #BI (Business Intelligence) #Datasets #Database Design
Role description
Overview
We are seeking a skilled Data Engineer with strong expertise in Databricks, Azure data services, PySpark, SQL, and Power BI. The ideal candidate will be responsible for building scalable ETL/ELT pipelines, optimizing data models, and developing high-performance dashboards while ensuring governance and best practices.
Only USC/GC candidates (US Citizens or Green Card holders) are eligible.
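For illustration only, here is a minimal sketch of the kind of ETL pipeline this role centers on: PySpark on Databricks reading raw files from Azure Data Lake and writing a cleaned Delta table. The storage path, column names, and table name are hypothetical, not part of this posting.

from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists as `spark`;
# getOrCreate() keeps the sketch self-contained elsewhere.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV landed in Azure Data Lake Storage (hypothetical path).
raw = spark.read.csv(
    "abfss://raw@examplelake.dfs.core.windows.net/orders/",
    header=True,
    inferSchema=True,
)

# Transform: basic cleanup and a derived date column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: persist as a Delta table for downstream Power BI dashboards.
clean.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")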
Duties
Develop and maintain robust data pipelines using ETL (Extract, Transform, Load) processes to ensure high-quality data flow across systems
Design and implement scalable data architectures leveraging cloud platforms such as AWS and Azure Data Lake to support big data initiatives
Collaborate with cross-functional teams to gather requirements and translate them into efficient database designs and data models
Optimize SQL queries and database performance for Microsoft SQL Server, Oracle, and other relational databases
Build and deploy analytics dashboards using Looker, providing stakeholders with clear insights into key metrics
Utilize Hadoop, Apache Hive, Spark, and other big data tools to process large datasets efficiently
Write clean, efficient code in Java, Python, and shell scripts (Bash or other Unix shells) to automate workflows and enhance system integration
Support model training activities by preparing datasets and ensuring data quality for machine learning projects (see the sketch after this list)
Maintain comprehensive documentation of data pipelines, architecture diagrams, and technical procedures
Follow Agile methodologies to ensure iterative development, continuous improvement, and timely delivery of projects
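As a hedged example of the model-training support mentioned above, the sketch below runs simple null-rate data-quality checks on a feature table before it is handed to an ML team. The table name, columns, and 5% threshold are assumptions for illustration only.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("feature_quality").getOrCreate()

features = spark.table("analytics.customer_features")  # hypothetical table

# Compute the null rate per feature column in a single pass.
checks = features.select(
    *[
        (F.count(F.when(F.col(c).isNull(), c)) / F.count(F.lit(1))).alias(c)
        for c in ["tenure_days", "avg_order_value", "churn_label"]
    ]
)

null_rates = checks.first().asDict()
failed = {col: rate for col, rate in null_rates.items() if rate > 0.05}

if failed:
    # Fail fast so bad data never reaches model training.
    raise ValueError(f"Null rate above 5% in columns: {failed}")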
Skills
Extensive experience with cloud platforms such as AWS and Azure Data Lake for scalable storage solutions
Proficiency in Java, Python, and shell scripting (Bash or other Unix shells) for automation and system integration
Deep understanding of big data technologies including Hadoop, Apache Hive, Spark, and related ecosystems
Strong knowledge of ETL tools like Talend and Informatica for seamless data integration
Expertise in SQL development for Microsoft SQL Server and Oracle, including database design and optimization techniques
Familiarity with Looker or similar BI tools for creating interactive dashboards and reports
Experience working with RESTful APIs for system communication and data exchange (see the sketch after this list)
Knowledge of Linked Data principles to connect disparate datasets across platforms
Hands-on experience with Data Warehouse architecture design and implementation strategies
Strong analytical skills, with the ability to extract meaningful insights from complex datasets
Familiarity with Model Training processes within machine learning workflows
Experience with VBA scripting for automation within Excel or other Office applications is a plus
Strong understanding of Agile development practices to foster collaboration and iterative progress
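For the RESTful API skill above, a minimal sketch of pulling JSON from an HTTP endpoint and staging it as a Spark table; the URL, response shape, and table name are placeholders, not a real service, and a production pipeline would add auth, paging, and retries.

import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api_ingest").getOrCreate()

# Hypothetical endpoint returning a list of flat JSON objects.
resp = requests.get("https://api.example.com/v1/customers", timeout=30)
resp.raise_for_status()
records = resp.json()

# Convert to a DataFrame and stage it for downstream jobs.
df = spark.createDataFrame(records)
df.write.format("delta").mode("append").saveAsTable("staging.customers_api")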
Embrace this opportunity to leverage your technical prowess in a vibrant environment where innovation drives success. Your contributions will directly influence how we harness the power of data to shape strategic decisions!
Job Types: Full-time, Contract
Pay: From $80.00 per hour
Expected hours: 8 per week
Work Location: In person