

Selby Jennings
Data Engineer (Contract)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Contract) in Boston for 12 months, paying a competitive rate. Key skills include SQL, Python, PostgreSQL, and financial data experience. Candidates must not require visa sponsorship and should be comfortable with data quality tooling.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
880
-
🗓️ - Date
April 21, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Boston, MA
-
🧠 - Skills detailed
#SQL (Structured Query Language) #Data Quality #Documentation #Strategy #SQL Queries #Datasets #FactSet #PostgreSQL #Snowflake #Data Pipeline #Spark (Apache Spark) #Apache Spark #SQL Server #Data Processing #Python #Data Engineering
Role description
NO C2C, NO 1099, NO LLC - DIRECT W2 CANDIDATES ONLY
WE CAN ONLY ENGAGE WITH CANDIDATES THAT DO NOT REQUIRE VISA SPONSORSHIP
About the Role
My client is looking for an experienced Data Engineer to join their team in Boston on an initial 12-month contract basis. This is a hands-on, delivery-focused role designed to carry a critical body of work through to completion. The successful candidate will be working close to the data every day: diagnosing issues, building and optimising pipelines, and delivering incremental progress within an existing codebase and data landscape.
This is not a strategy or architecture role. My client needs an executor: someone technically strong, comfortable in the detail, and capable of making consistent forward progress with limited oversight.
Key Responsibilities
• Write, optimise, and performance-tune complex SQL queries against large financial datasets
• Develop and maintain Python-based data pipelines and processing workflows
• Work with PostgreSQL, Parquet, and modern data engineering patterns on a daily basis
• Diagnose and resolve data quality issues, ensuring pipeline resiliency and reliability
• Apply orchestration best practices, including job partitioning to prevent full reruns on pipeline failures
• Handle financial and investment data including portfolios, reference data, positions, and analytics inputs
• Track tasks, manage your own workload, and keep progress moving in ambiguous or evolving situations
• Collaborate with internal stakeholders and operate independently within existing systems and documentation
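The orchestration responsibility above (partitioning jobs so a failure reruns only the affected slice, not the whole pipeline) can be sketched roughly as follows. This is an illustrative pattern only, not the client's actual stack: the checkpoint file, partition keys, and `process_partition` placeholder are all hypothetical.

```python
import json
import pathlib

STATE_FILE = pathlib.Path("pipeline_state.json")  # hypothetical checkpoint location


def load_state():
    """Return the set of partition keys already processed successfully."""
    if STATE_FILE.exists():
        return set(json.loads(STATE_FILE.read_text()))
    return set()


def save_state(done):
    STATE_FILE.write_text(json.dumps(sorted(done)))


def process_partition(partition):
    """Placeholder for the real work: load, transform, and write one date slice."""
    print(f"processing {partition}")


def run(partitions):
    done = load_state()
    for p in partitions:
        if p in done:
            continue  # succeeded on a previous run; skip rather than recompute
        try:
            process_partition(p)
            done.add(p)
            save_state(done)  # checkpoint after each partition
        except Exception:
            save_state(done)
            raise  # a rerun resumes from the first unfinished partition


if __name__ == "__main__":
    run(["2026-04-18", "2026-04-19", "2026-04-20"])
```

Because progress is checkpointed per partition, a mid-run failure only costs the partitions that had not yet completed; real orchestrators (Airflow, Dagster, etc.) provide this per-task state natively.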
Required Experience
• Strong, demonstrable SQL skills including performance tuning against large datasets, across PostgreSQL and/or SQL Server
• Solid Python experience (3+ years) for data processing and pipeline development
• Hands-on experience with PostgreSQL, Parquet, and medallion architecture
• Familiarity with Apache Spark and distributed data processing
• Exposure to Snowflake, including features such as time travel and partitioning strategies
• Proven experience working with financial or investment data: portfolios, reference data, positions, or analytics inputs
• Ability to operate independently within an existing codebase and data environment
• Data quality tooling experience, for example Great Expectations or equivalent
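The data quality tooling requirement above refers to frameworks such as Great Expectations, whose core idea is declarative checks run against a dataset before it moves downstream. A framework-free sketch of that idea, using an in-memory SQLite table in place of the real PostgreSQL database (the table, columns, and thresholds are hypothetical):

```python
import sqlite3

# Hypothetical positions table; in practice this would live in PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions (portfolio_id TEXT, quantity REAL, price REAL)")
conn.executemany(
    "INSERT INTO positions VALUES (?, ?, ?)",
    [("P1", 100.0, 52.3), ("P1", -20.0, 52.3), ("P2", 500.0, 101.1)],
)


def check_not_null(conn, table, column):
    """Fail if any row has a NULL in the given column."""
    n = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()[0]
    return {"check": f"{column} not null", "passed": n == 0, "failing_rows": n}


def check_positive(conn, table, column):
    """Fail if any value is zero or negative (e.g. prices must be > 0)."""
    n = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} <= 0"
    ).fetchone()[0]
    return {"check": f"{column} positive", "passed": n == 0, "failing_rows": n}


results = [
    check_not_null(conn, "positions", "portfolio_id"),
    check_positive(conn, "positions", "price"),
]
for r in results:
    print(r)
```

A quality gate like this would sit at the end of a pipeline stage and halt promotion of the dataset when any check fails; frameworks like Great Expectations add the same gating with a larger expectation library and reporting.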
Highly Desirable
• FactSet experience, particularly across multiple modules including Portfolio Analytics (PA)
• Experience handling delayed or late-arriving data in financial or insurance workflows
• Familiarity with IBOR data and related import/partitioning patterns
• Comfort working in-office in a collaborative team environment