

PRI Global
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are unspecified, and the position is remote. Key skills include Spark, SQL, and Hadoop. A Bachelor's degree in a quantitative discipline is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 11, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Arlington, VA
-
🧠 - Skills detailed
#SQL Queries #Spark SQL #Mathematics #PySpark #HDFS (Hadoop Distributed File System) #SQL (Structured Query Language) #SQL Server #Scala #Data Modeling #Python #ETL (Extract, Transform, Load) #Big Data #YARN (Yet Another Resource Negotiator) #Hadoop #Impala #Spark (Apache Spark) #Data Warehouse #Data Engineering #Database Design #Data Pipeline #Apache Spark
Role description
Must-Have Skills:
1. Spark
2. SQL
3. Hadoop
Day to Day:
• Coordinate with the PM and teammates to prioritize existing and upcoming tasks.
• Assist in creating, running, and optimizing Impala and Spark queries to extract data from the data warehouse.
• Convert business requirements from client stakeholders into technical specs.
• Build PySpark jobs from the technical specs for multiple projects in parallel (a minimal sketch follows this list).
• Monitor and troubleshoot job execution.
• Ingest data into SQL Server and transform it for custom analytics models.
• Generate reports and dashboards to be exported back to the client.
• Brainstorm and collaborate with stakeholders to resolve any blockers.
• Ensure timely completion of daily deliverables.
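For illustration only, a minimal sketch of the kind of daily PySpark job these bullets describe: it reads a slice of a warehouse table with Spark SQL, aggregates it, and loads the result into SQL Server over JDBC. Every table, column, host, and credential name below is a hypothetical placeholder, not a detail of this role.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-orders-summary")  # hypothetical job name
    .enableHiveSupport()              # read warehouse tables via the Hive metastore
    .getOrCreate()
)

# Extract one day's slice from a (hypothetical) warehouse table; filtering
# on the partition column keeps the scan cheap.
daily = spark.sql("""
    SELECT customer_id, order_id, amount, event_date
    FROM warehouse.orders
    WHERE event_date = '2026-03-10'
""")

# Transform: aggregate per customer for the downstream analytics model.
summary = (
    daily.groupBy("customer_id")
         .agg(F.count("order_id").alias("order_count"),
              F.sum("amount").alias("total_amount"))
)

# Load into SQL Server over JDBC (requires the Microsoft JDBC driver on the
# classpath; connection details here are placeholders).
(summary.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://example-host:1433;databaseName=analytics")
    .option("dbtable", "dbo.daily_customer_summary")
    .option("user", "etl_user")
    .option("password", "***")
    .mode("append")
    .save())

spark.stop()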
• Experience as a Data Engineer or in a similar role, with a strong understanding of data engineering concepts and methodologies.
• Strong knowledge of writing and optimizing SQL queries to retrieve, manipulate, and analyze data efficiently (see the short example after this list).
• Hands-on experience with big data technologies such as:
o Apache Spark (PySpark, Spark SQL, Spark Streaming)
o Hadoop ecosystem (HDFS/Ozone, Hive, YARN)
• Familiarity with ETL frameworks and the ability to design, implement, and maintain data pipelines.
• Understanding of data modeling concepts and database design to support scalable data solutions.
• Familiarity with Python.
• Ability to analyze and troubleshoot data issues and provide solutions with minimal supervision.
• Basic knowledge of testing and validating data to ensure accuracy and consistency in data pipelines.
• Excellent verbal and written communication skills, with the ability to articulate complex ideas clearly and concisely to both technical and non-technical stakeholders.
• Bachelor's degree in a quantitative discipline such as Engineering, Mathematics, Finance, Business, or a related field. Equivalent practical experience may also be considered.
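A small, hedged illustration of the SQL-optimization point above, reusing the spark session from the earlier sketch (table and column names remain hypothetical): on partitioned Impala/Spark warehouse tables, filtering on the partition column and projecting only the needed columns lets the engine prune partitions instead of scanning everything.

# Anti-pattern: scans every partition and every column.
all_rows = spark.sql("SELECT * FROM warehouse.orders")

# Better: the event_date predicate lets the engine prune partitions, and the
# projection means only two columns are read from disk.
window = spark.sql("""
    SELECT customer_id, amount
    FROM warehouse.orders
    WHERE event_date BETWEEN '2026-03-01' AND '2026-03-10'
""")

window.explain()  # the physical plan should show the pushed-down partition filter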






