Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a long-term contract in Jersey City, NJ. Requires 8+ years in data engineering, 3+ years with Snowflake and Python, expertise in SQL, and familiarity with Airflow. Hybrid work model.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
πŸ—“οΈ - Date discovered
September 11, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
πŸ“ - Location detailed
Jersey City, NJ
🧠 - Skills detailed
#Python #GIT #Data Warehouse #Data Pipeline #Data Ingestion #Snowflake #Jira #Airflow #Shell Scripting #Data Lineage #Data Strategy #Data Transformations #Data Engineering #Automated Testing #BI (Business Intelligence) #Spark (Apache Spark) #Data Governance #Agile #ETL (Extract, Transform, Load) #JavaScript #Data Processing #Strategy #Data Quality #Data Architecture #Metadata #Data Management #Scala #SQL (Structured Query Language) #Data Orchestration #Automation #Scripting #Project Management #Compliance
Role description
Hello Everyone,

Title: Senior Data Engineer
Location: Jersey City, NJ (Hybrid)
Type: Contract
Duration: Long-term

Job Summary

The company is seeking a Senior Data Engineer with expertise in designing, building, and managing scalable data architectures. You will work with tools like Snowflake, Python, SQL, and data orchestration platforms to develop high-performance solutions that support business intelligence and analytics, and you will collaborate with engineering and analytics teams to ensure seamless data flow, system efficiency, and robust data governance.

Key Responsibilities

• Design and implement enterprise-level data solutions using Snowflake
• Build and maintain data pipelines for structured and unstructured data
• Develop Python- and SQL-based data transformations for analytics and reporting
• Design and optimize data warehouses in Snowflake
• Implement scalable data ingestion and processing pipelines using Spark, Scala, and Python
• Use orchestration tools like Airflow or Automic to automate workflows (see the pipeline sketch after this description)
• Manage metadata, data lineage, and governance for data quality and compliance
• Utilize SQL analytical functions for advanced data processing (see the query sketch after this description)
• Develop automation scripts using shell scripting and JavaScript
• Apply best practices in CI/CD, automated testing, and performance optimization
• Use Git, Confluence, and Jira for code and project management
• Troubleshoot data issues and implement proactive solutions for system reliability
• Collaborate with cross-functional teams to align data strategy with business goals

Required Qualifications & Experience

• 8+ years of experience in data engineering and enterprise data solutions
• 3+ years of hands-on experience with Snowflake
• 3+ years of experience in Python development
• Strong expertise in SQL and Python for data processing and transformation
• Production experience with Spark, Scala, and Python
• Hands-on experience with data orchestration tools (e.g., Airflow, Automic)
• Solid knowledge of metadata management and data lineage
• Ability to thrive in a fast-paced, agile environment
• Excellent problem-solving and communication skills
• Strong collaborative and team-oriented mindset

Work Location & Expectations

• Hybrid model: a mix of remote and on-site work
• Expected to be on-site at the client location 3–4 consecutive days per month
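To make the orchestration responsibility concrete, here is a minimal sketch of an Airflow DAG that loads staged files into Snowflake. It assumes Airflow 2.4+ and the snowflake-connector-python package; the DAG id, environment variables, warehouse, stage, and table names are hypothetical placeholders, not details from this posting.

```python
# Minimal Airflow-to-Snowflake ingestion sketch (hypothetical names throughout).
import os
from datetime import datetime

import snowflake.connector
from airflow import DAG
from airflow.operators.python import PythonOperator


def load_to_snowflake() -> None:
    """Copy staged JSON files into a raw Snowflake table."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],   # hypothetical env vars
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",                  # hypothetical warehouse
        database="ANALYTICS",                      # hypothetical database/schema
        schema="RAW",
    )
    try:
        # Stage and table names below are placeholders for illustration only.
        conn.cursor().execute(
            "COPY INTO RAW_EVENTS FROM @EVENTS_STAGE "
            "FILE_FORMAT = (TYPE = 'JSON')"
        )
    finally:
        conn.close()


with DAG(
    dag_id="daily_events_ingest",  # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="load_to_snowflake",
        python_callable=load_to_snowflake,
    )
```

In practice a production pipeline would pull credentials from an Airflow connection rather than environment variables; the direct connector call above just keeps the sketch self-contained.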
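Likewise, a small illustration of the SQL analytical (window) functions the responsibilities mention: a Snowflake-style deduplication that keeps the latest row per key. The table and column names are hypothetical.

```python
# Hypothetical deduplication query using a window function; QUALIFY is
# Snowflake's filter on window-function results.
LATEST_EVENTS_SQL = """
SELECT *
FROM RAW_EVENTS
QUALIFY ROW_NUMBER() OVER (
    PARTITION BY event_id         -- one row per (placeholder) business key
    ORDER BY ingested_at DESC     -- keep only the most recent record
) = 1
"""
```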