

IntraEdge
DaaP Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a DaaP Data Engineer in New York City, requiring a Bachelor's in Computer Science and experience in Big Data, ETL, Python, SQL, and GCP. Contract length and pay rate are unspecified. Hybrid work model; local candidates only.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 5, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#Version Control #Documentation #Data Lifecycle #Databases #Data Pipeline #Linux #Shell Scripting #PySpark #Data Ingestion #GitHub #Data Engineering #BigQuery #ETL (Extract, Transform, Load) #GitLab #BitBucket #Jenkins #Dataflow #Monitoring #Data Accuracy #Git #Python #Data Science #Security #Cloud #Big Data #Unix #Logstash #SQL (Structured Query Language) #Compliance #Scripting #GCP (Google Cloud Platform) #Spark (Apache Spark) #Code Reviews #Programming #Computer Science
Role description
IntraEdge has an immediate need for a DaaP Data Engineer in New York City, New York.
Candidates must be local, able to commute to the NYC office on a hybrid schedule, and available to interview onsite.
• This position is on the Infrastructure Data & Analytics (Data Science Global Infrastructure) team.
Requested Skills:
Big Data, CI/CD, Data Analytics, ELK Stack (Elasticsearch, Logstash, Kibana)
ETL – Big Data / Data Warehousing, GCP, Git (GitHub, GitLab, BitBucket, SVN)
Manta, Python, SQL
Required Skills and Qualifications:
· Bachelor's degree in Computer Science, Engineering, or a related field.
· Proven experience as a Data Engineer or in a similar role.
· Strong proficiency in object-oriented programming using Python.
· Experience with ETL job design principles.
· Solid understanding of SQL and data modelling.
· Knowledge of Unix/Linux and shell scripting principles.
· Familiarity with Git and version control systems.
· Experience with Jenkins and CI/CD pipelines.
· Knowledge of software development best practices and design patterns.
· Excellent problem-solving skills and attention to detail.
· Strong communication and collaboration skills.
· Experience with GCP.
Job Details
Key Responsibilities:
· Design and develop solutions using tools such as Dataflow, Dataproc, and BigQuery.
· Extensive hands-on experience in object-oriented programming using Python and PySpark APIs.
· Experience building data pipelines for large volumes of data.
· Extract, transform, and load data from various sources, including databases, APIs, and flat files, using Python and PySpark (see the illustrative sketch after this list).
· Experience implementing and maintaining data ingestion processes.
· Hands-on experience writing basic to advanced optimized queries using SQL and BigQuery.
· Hands-on experience designing, implementing, and maintaining data transformation jobs using the most efficient tools and technologies.
· Ensure the performance, quality, and responsiveness of solutions through rigorous testing, validation, and monitoring of data accuracy and reliability.
· Participate in code reviews to maintain code quality.
· Work with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
· Adhere to security best practices for cloud environments and ensure compliance with regulatory standards.
· Manage and optimize the entire data lifecycle, from ingestion to archiving and deletion.
· Write shell scripts as needed.
· Utilize GitHub for source version control.
· Set up and maintain CI/CD pipelines.
· Create and maintain documentation for data solutions including design specifications and user guides.
· Troubleshoot, debug, and upgrade existing applications and ETL job chains.
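To illustrate the kind of pipeline these responsibilities describe, here is a minimal PySpark sketch that reads flat files from Cloud Storage, applies a simple transformation, and loads the result into BigQuery. All paths, project/dataset/table names, and column names are hypothetical placeholders rather than details from this posting, and the BigQuery write assumes the spark-bigquery connector is available on the cluster (for example, a Dataproc cluster).

```python
# Illustrative only: bucket, table, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daap-etl-sketch")
    # Assumes the spark-bigquery connector is on the classpath,
    # e.g. a Dataproc cluster with the connector enabled.
    .getOrCreate()
)

# Extract: read raw CSV files from a (hypothetical) GCS bucket.
raw = spark.read.csv(
    "gs://example-bucket/raw/events/*.csv",
    header=True,
    inferSchema=True,
)

# Transform: basic cleansing plus a daily distinct-user aggregation.
daily_counts = (
    raw.withColumn("event_date", F.to_date("event_timestamp"))
       .dropna(subset=["event_date", "user_id"])
       .groupBy("event_date")
       .agg(F.countDistinct("user_id").alias("active_users"))
)

# Load: write the result to a (hypothetical) BigQuery table.
(
    daily_counts.write.format("bigquery")
    .option("table", "example_project.analytics.daily_active_users")
    .option("temporaryGcsBucket", "example-bucket-tmp")
    .mode("overwrite")
    .save()
)
```

In practice, a job like this would typically be parameterized, scheduled, and wired into the CI/CD and monitoring workflows mentioned above.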





