PTR Global

Data Engineer -601/602

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer specializing in AWS, based in Jersey City, New Jersey, for a contract duration of over 6 months, offering $65-70/hr. Key skills include Python, Spark, SQL, and experience with data lakes and governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
🗓️ - Date
December 3, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#Data Engineering #Vault #Databricks #Data Storage #Data Quality #Data Pipeline #Data Science #Python #Hadoop #Data Access #Code Reviews #Data Lifecycle #JSON (JavaScript Object Notation) #Redshift #Spark (Apache Spark) #Data Vault #Complex Queries #Agile #Data Warehouse #MongoDB #Scala #NoSQL #DynamoDB #Data Modeling #Airflow #AWS (Amazon Web Services) #Physical Data Model #Security #SQL (Structured Query Language) #PySpark #Compliance #Data Analysis #Scripting #Oracle #Data Lake #AI (Artificial Intelligence) #Unix #Storage #Data Processing #Data Governance #Batch #Data Lakehouse #Databases #Cloud #Snowflake
Role description
Position: Data Engineer - AWS
Location: Jersey City, New Jersey
Duration: Contract
Job ID: 172543

Job Overview:
As a Data Engineer III specializing in Python, Spark, and data lake technologies within the Consumer and Community Bank, you will be a key member of an agile team. You will design and deliver reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable, and you will develop, test, and maintain essential data pipelines and architectures across diverse technical areas, supporting business functions in achieving organizational objectives.

Responsibilities:
• Review and ensure sufficient protection of enterprise data through controls.
• Make custom configuration changes in tools to meet business or customer requests.
• Update logical and physical data models to support new use cases.
• Use SQL frequently and understand NoSQL databases and their applications.
• Contribute to a team culture of diversity, opportunity, inclusion, and respect.
• Develop enterprise data models and maintain large-scale data processing pipelines.
• Lead code reviews and mentor team members.
• Drive data quality and ensure data accessibility for analysts and data scientists.
• Ensure compliance with data governance requirements and alignment with business goals.

Qualifications:
• Formal training or certification in data engineering concepts and 2+ years of applied experience.
• Experience across the data lifecycle, with advanced SQL skills and an understanding of NoSQL databases.
• Proficiency in statistical data analysis and in selecting appropriate tools and data patterns.
• Extensive experience building and maintaining data pipelines on AWS using Python and PySpark.
• Proficiency in Python and PySpark, including writing and executing complex queries.
• Proven experience in performance tuning to optimize job execution.
• Advanced proficiency in leveraging Gen AI models via APIs/SDKs.
• Expertise in cloud data lakehouse platforms such as AWS Data Lake, Databricks, or Hadoop.
• Proficiency in relational data stores (Postgres, Oracle, or similar) and NoSQL data stores (Cassandra, DynamoDB, or MongoDB).
• Advanced proficiency in cloud data warehouses such as Snowflake or AWS Redshift.
• Experience with scheduling/orchestration tools such as Airflow or AWS Step Functions.
• Proficiency in Unix scripting, data structures, and data serialization formats such as JSON, Avro, or Protobuf.
• Knowledge of big-data storage formats such as Parquet or Iceberg.
• Familiarity with data processing methodologies: batch, micro-batching, and streaming.
• Experience with data modeling techniques such as Dimensional, Data Vault, Kimball, or Inmon.
• Understanding of Agile methodology, TDD/BDD, and CI/CD tools.

Preferred Qualifications:
• Knowledge of data governance and security best practices.
• Experience conducting data analysis to support business insights.
• Strong expertise in Python and Spark.

About PTR Global:
PTR Global is a leading provider of information technology and workforce solutions. With over 5,000 professionals providing services across the U.S. and Canada, PTR Global has become one of the largest providers in its industry. For more information, visit www.ptrglobal.com.

At PTR Global, we understand the importance of your privacy and security. We NEVER ask job applicants to:
• Pay any fee to be considered for, submitted to, or selected for any opportunity.
• Purchase any product, service, or gift cards from us or for us as part of an application, interview, or selection process.
• Provide sensitive financial information such as credit card numbers or banking information.
Successfully placed or hired candidates are asked for banking details only after accepting an offer, during our official onboarding process as part of payroll setup.
Pay Range: $65-70/hr
The specific compensation for this position will be determined by several factors, including the scope, complexity, and location of the role; the cost of labor in the market; the skills, education, training, credentials, and experience of the candidate; and other conditions of employment. Our full-time consultants have access to benefits including medical, dental, vision, and 401(k) contributions, as well as PTO, sick leave, and other benefits mandated by the state or locality where you reside or work.

If you receive a suspicious message, email, or phone call claiming to be from PTR Global, do not respond or click any links. Instead, contact us directly at +1 214-740-2424. To report any concerns, please email us at legal@pinnacle1.com.