

Sr. Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer with a contract length of "Unknown," offering an hourly pay rate of $50.00-$57.69. Remote work is required. Key skills include SQL, AWS, Apache Spark, ETL tools, and data governance experience.
Country: United States
Currency: $ USD
Day rate: $456
Date discovered: July 18, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: Kansas City, MO
Skills detailed: #Deployment #Java #Data Engineering #dbt (data build tool) #Cloud #Kubernetes #Data Security #Quality Assurance #Data Pipeline #Data Processing #Python #AWS (Amazon Web Services) #Airflow #Apache Spark #Mathematics #Microsoft Power BI #Security #Docker #Spark (Apache Spark) #Kafka (Apache Kafka) #Redshift #Data Governance #Data Modeling #SQL (Structured Query Language) #Computer Science #Data Science #Scala #ML (Machine Learning) #Model Deployment #Apache Kafka #ETL (Extract, Transform, Load) #BI (Business Intelligence)
Role description
We're searching for a talented and experienced Senior Data Engineer to join our expanding data engineering team. In this role, you'll collaborate with data scientists, analysts, and various business units (marketing, sales, and application developers) to design, develop, and maintain scalable data systems and pipelines. This is a fantastic opportunity to power critical business insights in a fast-paced environment and make a significant impact on our data infrastructure.
Essential Duties
• Solve Complex Data Challenges: Leverage your extensive experience and expertise to develop innovative solutions for intricate data problems. You'll apply professional judgment to determine the best methods and techniques, even when navigating new territory.
• Design & Build Scalable Data Systems: Architect, develop, and maintain high-performance, scalable, and testable data pipelines and ETL (Extract, Transform, Load) processes (a minimal pipeline sketch follows this list).
• Drive Cross-Functional Collaboration: Partner closely with data science, analytics, product, marketing, and engineering teams to translate business needs into robust, scalable data solutions. You'll also network with senior internal and external stakeholders to enhance our overall data strategies.
• Exhibit Autonomy & Ownership: Work independently with minimal oversight, ensuring the technical integrity of all your solutions. You'll define methods and procedures for new assignments and contribute to coordinating data engineering activities.
• Optimize & Secure Data Infrastructure: Continuously improve our existing data platforms to guarantee system performance, reliability, and security. You'll also implement and advocate for best practices in data governance, security, and quality assurance.
• Lead & Mentor: Influence project planning and timelines, provide constructive feedback, and mentor junior engineers, fostering a high-performing and collaborative team environment. You'll stay ahead of industry trends and emerging technologies to continuously enhance our data engineering processes.
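To give candidates a concrete flavor of the pipeline work above, here is a minimal sketch of a PySpark batch ETL job. It is illustrative only: the S3 paths and column names (event_id, event_ts, event_type) are assumptions, not details from this posting.

```python
# Hypothetical PySpark batch job: read raw JSON events, clean them, and write
# partitioned Parquet. All paths and columns are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_etl").getOrCreate()

# Extract: raw JSON landed in an S3 prefix.
raw = spark.read.json("s3a://example-raw-bucket/events/")

# Transform: deduplicate, derive a partition column, drop malformed rows.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_type").isNotNull())
)

# Load: write back as Parquet, partitioned for downstream query pruning.
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3a://example-curated-bucket/events/"))
```

Writing date-partitioned Parquet lets downstream queries prune by day; a production job would also pick an overwrite strategy (e.g., dynamic partition overwrite) so reruns stay idempotent.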
Work Remotely: This is a fully remote position, requiring a reliable internet connection in a private setting.
Basic Qualifications
• You'll need a Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field; equivalent experience is also acceptable.
• Bring 5+ years of data engineering experience, with a proven track record of solving complex problems and scaling data pipelines.
• You should have strong SQL skills and experience with data processing frameworks such as Apache Spark or Apache Kafka.
• We're looking for expertise in cloud data platforms, specifically AWS.
• You'll need hands-on experience with ETL orchestration tools such as Airflow or dbt (see the DAG sketch after this list).
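As a rough illustration of the orchestration tooling above, here is a minimal Airflow DAG (assuming Airflow 2.4+ for the `schedule` argument). The DAG id and the extract/load callables are hypothetical placeholders, not part of this role's actual codebase.

```python
# Hypothetical minimal Airflow DAG: a daily extract -> load dependency chain.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source system")  # placeholder extract step

def load():
    print("write transformed rows to the warehouse")  # placeholder load step

with DAG(
    dag_id="daily_events_etl",  # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract must finish before load starts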
Preferred Qualifications
• Experience with containerization and orchestration tools like Docker and Kubernetes.
• Familiarity with machine learning workflows and model deployment pipelines.
• Skills in Power BI management and report building.
• Knowledge of data security best practices and frameworks.
• Proficiency in Python, Java, or Scala.
• A solid understanding of data warehousing concepts (e.g., Redshift); a brief loading sketch follows this list.
• Experience with data modeling, governance, and quality assurance.
• Strong analytical and problem-solving skills, along with the ability to work independently and exercise good judgment.
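For the warehousing item above, a hedged sketch of one common Redshift loading pattern: a bulk COPY from S3 run transactionally. The cluster DSN, IAM role, bucket, and table names are invented placeholders, not details from this posting.

```python
# Hypothetical Redshift load: stage one day of Parquet from S3 with COPY,
# deleting that day's rows first so reruns stay idempotent. All identifiers
# (DSN, IAM role, bucket, table) are placeholders.
import psycopg2

DSN = "host=example-cluster.us-east-1.redshift.amazonaws.com port=5439 dbname=analytics user=loader"  # placeholder; supply credentials securely
IAM_ROLE = "arn:aws:iam::123456789012:role/redshift-s3-read"  # placeholder

def load_daily_events(ds: str) -> None:
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        # Both statements commit together when the connection block exits.
        cur.execute("DELETE FROM analytics.events WHERE event_date = %s;", (ds,))
        cur.execute(
            f"""
            COPY analytics.events
            FROM 's3://example-data-lake/events/ds={ds}/'
            IAM_ROLE '{IAM_ROLE}'
            FORMAT AS PARQUET;
            """
        )

if __name__ == "__main__":
    load_daily_events("2025-07-18")
```

Delete-then-COPY inside a single transaction is one simple idempotency strategy; teams also use staging tables with an atomic swap when loads are large.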
Hourly: $50.00-$57.69/hour