

Tential Solutions
Big Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Engineer with a contract length of "unknown" and a pay rate of "$XX/hour." Required skills include Hadoop, Spark, Python, and experience in the Financial Services industry. A Bachelor's degree and five years of experience are essential.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: February 3, 2026
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: Rockville, MD
Skills detailed
#Kanban #Scrum #Storage #Java #Data Ingestion #Deployment #Quality Assurance #Agile #Data Architecture #ETL (Extract, Transform, Load) #ChatGPT #Spark (Apache Spark) #Trino #Data Analysis #Data Pipeline #Data Processing #Scala #System Testing #Data Science #Data Engineering #GitHub #Automation #Debugging #Data Quality #Python #Stories #Big Data #Data Integration #Programming #Hadoop #Automated Testing #Cloud #Computer Science #AI (Artificial Intelligence)
Role description
Job Description Summary
We are seeking a highly skilled and experienced Big Data Engineer to design, develop, and optimize large-scale data processing systems. In this role, you will work closely with cross-functional teams to architect data pipelines, implement data integration solutions, and ensure the performance, scalability, and reliability of big data platforms. The ideal candidate will have deep expertise in distributed systems, cloud platforms, and modern big data technologies such as Hadoop and Spark.
Responsibilities
• Design, develop, and maintain large-scale data processing pipelines using Big Data technologies (e.g., Hadoop, Spark, Python, Scala); a minimal pipeline sketch follows this list.
• Implement data ingestion, storage, transformation, and analysis solutions that are scalable, efficient, and reliable.
• Stay current with industry trends and emerging Big Data technologies to continuously improve the data architecture.
• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
• Optimize and enhance existing data pipelines for performance, scalability, and reliability.
• Develop automated testing frameworks and implement continuous testing for data quality assurance.
• Conduct unit, integration, and system testing to ensure the robustness and accuracy of data pipelines.
• Work with data scientists and analysts to support data-driven decision-making across the organization.
• Write and maintain automated unit, integration, and end-to-end tests.
• Monitor and troubleshoot data pipelines in production environments to identify and resolve issues.
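As a concrete illustration of the pipeline and testing responsibilities above, here is a minimal PySpark sketch; the input path, column names, and quality rule are hypothetical placeholders, not details taken from this role.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal batch pipeline sketch: ingest, transform, validate, write.
# The path "events.json" and all column names are hypothetical.
spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Ingestion: read raw events (assumed JSON with user_id, amount, ts).
raw = spark.read.json("events.json")

# Transformation: keep valid rows, then aggregate spend per user per day.
daily_spend = (
    raw.filter(F.col("amount") > 0)
       .withColumn("day", F.to_date("ts"))
       .groupBy("user_id", "day")
       .agg(F.sum("amount").alias("total_spend"))
)

# Data-quality gate: fail fast if any row is missing its key.
null_keys = daily_spend.filter(F.col("user_id").isNull()).count()
assert null_keys == 0, f"data quality failure: {null_keys} rows missing user_id"

# Storage: write partitioned output for downstream analysis.
daily_spend.write.mode("overwrite").partitionBy("day").parquet("daily_spend/")
```

In practice the inline assertion would live in an automated test suite (for example pytest), so the quality gate runs as part of continuous testing rather than inside the job itself.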
Education/Experience Requirements
Bachelor's degree in Computer Science, Information Systems, or a related discipline with at least five (5) years of related experience, or equivalent training and/or work experience; Master's degree and past Financial Services industry experience preferred.
Demonstrated technical expertise in object-oriented and database technologies/concepts that resulted in the deployment of enterprise-quality solutions.
Experience developing enterprise-quality solutions in an iterative or Agile environment.
Extensive knowledge of industry-leading software engineering approaches, including test automation, build automation, and configuration management frameworks.
Strong written and verbal technical communication skills.
Demonstrated ability to develop effective working relationships that improve the quality of work products.
Well organized, thorough, and able to handle competing priorities.
Ability to maintain focus and develop proficiency in new skills rapidly.
Ability to work in a fast-paced environment.
Experience with object-oriented programming languages such as Java, Scala, or Python.
Essential Technical Skills
• AI Tool Proficiency: Hands-on experience with AI development tools (GitHub Copilot, Q Developer, ChatGPT, Claude, etc.)
• Technical Background: Strong software development background with ability to contribute to technical discussions
• Agile Methodology: Extensive experience with Scrum, Kanban, and continuous improvement practices
Big Data Technologies
• Experience with Big Data technologies such as Hadoop, Spark, Hive, and Trino
• Evaluate understanding of common issues such as:
  • Data skew and strategies to mitigate it (a salting sketch follows this list).
  • Working with massive data volumes at petabyte scale.
  • Troubleshooting job failures due to resource limitations, bad data, and scalability challenges.
• Look for real-world debugging and mitigation stories.
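To make the data-skew point concrete, here is a minimal PySpark sketch of one common mitigation, salting the join key. The table names and salt factor are hypothetical; note also that on Spark 3.x, adaptive query execution (spark.sql.adaptive.skewJoin.enabled) handles many skewed joins automatically, so manual salting is mainly for cases AQE does not cover.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical example: "orders" is skewed on customer_id (a few hot
# customers dominate), so a plain join piles their rows onto a handful
# of tasks. Salting spreads each hot key across NUM_SALTS partitions
# at the cost of replicating the small side NUM_SALTS times.
NUM_SALTS = 16

spark = SparkSession.builder.appName("skew-salting-demo").getOrCreate()
orders = spark.read.parquet("orders/")        # large, skewed side
customers = spark.read.parquet("customers/")  # small side

# Large side: append a random salt (0..NUM_SALTS-1) to each join key.
orders_salted = orders.withColumn(
    "salted_key",
    F.concat_ws("_", F.col("customer_id").cast("string"),
                (F.rand() * NUM_SALTS).cast("int").cast("string")),
)

# Small side: replicate each row once per salt value so every salted
# key on the large side still finds its match.
salts = spark.range(NUM_SALTS).withColumnRenamed("id", "salt")
customers_salted = customers.crossJoin(salts).withColumn(
    "salted_key",
    F.concat_ws("_", F.col("customer_id").cast("string"),
                F.col("salt").cast("string")),
)

# The join now distributes the hot keys across NUM_SALTS partitions.
joined = orders_salted.join(customers_salted, "salted_key")
```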
AI Skills
• Prompt Engineering: Proficiency in crafting effective prompts for AI coding assistants and analysis tools
• AI Workflow Design: Experience redesigning development processes to leverage AI capabilities
• Data Analysis: Ability to interpret AI-generated insights and translate them into actionable team improvements
• Change Management: Experience leading teams through AI adoption and workflow transformation
#Dice






