

Big Data Developer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer with an unknown contract length and a listed day rate of $640 USD. Candidates should have 5+ years of experience in Scala, SQL, Apache Spark, and cloud-based platforms. Strong data modeling and problem-solving skills are essential.
Country: United States
Currency: $ USD
Day rate: 640
Date discovered: June 7, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: San Francisco, CA
Skills detailed: #DevOps #AWS (Amazon Web Services) #Airflow #Datasets #Data Quality #Data Architecture #BigQuery #SQL (Structured Query Language) #NiFi (Apache NiFi) #Teradata #Automation #Impala #Monitoring #Programming #Databases #Unit Testing #Security #Data Governance #Data Warehouse #Apache Spark #Data Processing #DataOps #Azure #Data Modeling #Big Data #Scala #Spark (Apache Spark) #Python #Version Control #Deployment #Compliance #Kafka (Apache Kafka) #NoSQL #Java #Redshift #Data Pipeline #Cloud
Role description
Key Responsibilities:
Develop, maintain, and support data pipelines and systems to meet business SLAs using Scala, Apache Spark, and cloud technologies (a minimal pipeline sketch follows this list).
Ensure code quality, performance, and maintainability by supporting version control and deployment practices.
Perform unit testing, support integration and performance testing, and ensure adequate test coverage for developed components.
Research emerging technologies, recommend solutions, and contribute to the continuous improvement of architecture and engineering practices.
Own data quality, integrity, and consistency for assigned datasets and ensure robust monitoring and alerting are in place.
Collaborate with cross-functional teams to design scalable data solutions and models.
Analyze large datasets to identify trends, gaps, and improvement opportunities.
Support DataOps best practices including orchestration (Airflow, Oozie), workflow automation, and real-time streaming (Kafka, NiFi).
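To make the responsibilities above concrete, here is a minimal sketch of the kind of Scala/Spark batch pipeline with a data-quality gate this role describes. It is illustrative only: the paths, dataset, and column names (events, event_id, user_id, event_ts) are hypothetical, and a production job would add configuration, logging, and integration with an orchestrator such as Airflow.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch of a daily batch pipeline; all names and paths are hypothetical.
object DailyEventsPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-events-pipeline")
      .getOrCreate()

    // Extract: read the raw event data.
    val raw = spark.read.parquet("s3://example-bucket/raw/events/")

    // Transform: deduplicate and derive a partition column.
    val cleaned = raw
      .dropDuplicates("event_id")
      .withColumn("event_date", to_date(col("event_ts")))

    // Data-quality gate: fail fast on null keys so the orchestrator
    // (e.g. Airflow) can alert and retry instead of shipping bad data.
    val nullKeys = cleaned.filter(col("user_id").isNull).count()
    require(nullKeys == 0L, s"Data quality check failed: $nullKeys rows with null user_id")

    // Load: write partitioned output for downstream consumers.
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/events/")

    spark.stop()
  }
}
```

Failing fast on a broken invariant lets the orchestration layer surface the problem and retry, rather than silently propagating bad records to downstream datasets.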
Required Qualifications:
5+ years of hands-on experience in Scala or Java development.
5+ years of experience with SQL and NoSQL databases, working with both structured and unstructured data.
5+ years of experience with Apache Spark and distributed data processing.
Strong experience with big data ecosystem tools such as Hive, Impala, Oozie, Airflow, Kafka, and NiFi.
Experience in Data Modeling and designing large-scale data architectures.
Proficiency with cloud-based or on-prem big data platforms such as Amazon Redshift, Google BigQuery, Azure Synapse Analytics (formerly Azure SQL Data Warehouse), Netezza, or Teradata.
Strong problem-solving skills and the ability to analyze complex data sets for insights and optimization.
Nice to Have:
Programming experience in Python.
Experience with data governance, security, and compliance in cloud environments.
Familiarity with CI/CD pipelines and DevOps tools.