Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills include Databricks, AWS, Apache Spark, Python, and PySpark. A Bachelor’s degree and 10+ years of experience are required, along with prior enterprise-level consulting experience.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
584
-
🗓️ - Date discovered
August 30, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Beaverton, OR
-
🧠 - Skills detailed
#Python #ETL (Extract, Transform, Load) #Data Modeling #Cloud #Apache Spark #AWS (Amazon Web Services) #Databricks #Data Processing #Datasets #Data Integrity #Scala #Automation #Security #Quality Assurance #Data Engineering #Database Management #Spark (Apache Spark) #PySpark #Database Design #Consulting #Data Governance
Role description
We are seeking a Senior Data Engineer to join our team. This role is responsible for designing, building, and maintaining scalable data systems that enable high-quality analytics and insights across the organization. The ideal candidate has strong expertise in Databricks, AWS, Apache Spark, Python, and PySpark, and can ensure seamless data flow and accessibility for enterprise-level applications.

Key Responsibilities
• Design, implement, and maintain database management systems, standards, and best practices.
• Establish data governance frameworks, including security policies, data maintenance plans, and capacity planning.
• Build and optimize ETL pipelines, data models, and scalable architectures to support analytics and reporting.
• Work with large, complex datasets from multiple sources, ensuring data integrity and quality.
• Develop and maintain logical and conceptual database designs, documenting specifications and data flows.
• Support the design and testing of business applications, reports, and user interfaces.
• Collaborate with cross-functional teams to translate business requirements into scalable data solutions.
• Conduct performance tuning, troubleshoot issues, and optimize database efficiency.
• Provide expertise in relational data modeling, including understanding of tables, fields, and relationships.
• Participate in quality assurance processes and develop test application code.

Required Qualifications
• Bachelor’s degree and 10+ years of relevant experience.
• 4–6 years of hands-on experience with:
  • Databricks
  • AWS
  • Apache Spark
  • Python / PySpark
• Strong knowledge of database design, ETL development, and large-scale data processing.
• Ability to translate complex business needs into technical solutions.
• Prior enterprise-level consulting or industry experience is a strong plus.
• Familiarity with contractor/ETW processes is beneficial.

About Brickred Systems
Brickred Systems is a global leader among next-generation technology, consulting, and business process services companies. We enable clients to navigate their digital transformation. Brickred Systems delivers a range of consulting services to clients across multiple industries around the world. Our practices employ highly skilled and experienced individuals with a client-centric passion for innovation and delivery excellence. With ISO 27001 and ISO 9001 certification and over a decade of experience managing the systems and workings of global enterprises, we harness the power of cognitive computing, hyper-automation, robotics, cloud, analytics, and emerging technologies to help our clients adapt to the digital world and succeed in it. Our always-on learning agenda drives our clients' continuous improvement by building and transferring digital skills, expertise, and ideas from our innovation ecosystem.