

Links Technology Solutions
Solutions Consultant (Data Engineering)
Featured Role | Apply directly with Data Freelance Hub
This role is for a Solutions Consultant (Data Engineering) in Chicago, IL, offering $100-$120/hr for a long-term contract. Requires 5+ years of experience, strong Databricks and Python skills, and financial market background. Data Engineering Associate certification is mandatory.
Country
United States
Currency
$ USD
Day rate
960
Date
March 4, 2026
Duration
Unknown
Location
On-site
Contract
Unknown
Security
Unknown
Location detailed
Chicago, IL
Skills detailed
#Programming #Data Warehouse #Redshift #Cloud #Apache Spark #Linux #Spark (Apache Spark) #SQL (Structured Query Language) #Synapse #BigQuery #Python #Data Pipeline #AWS (Amazon Web Services) #Data Architecture #Scala #Data Quality #Data Engineering #Databricks #Consulting #Azure
Role description
Solutions Consultant (Data Engineering)
Location: Must be able to work ONSITE in Chicago, IL
Experience Level: Mid-Senior (5+ years combined)
Pay rate: $100/hr - $120/hr
Contract Duration: Long-term contract
Position Overview
Our client in the trading industry is seeking a Solutions Consultant with a strong technical foundation in Data Engineering and a proven track record in client-facing roles. You will be responsible for helping their customers design and implement modern data architectures, specifically focusing on the Databricks Lakehouse platform. This role requires a blend of hands-on coding, architectural guidance, and the ability to navigate the nuances of cloud data warehouses like BigQuery, Synapse, or Redshift while migrating or integrating with Databricks.
Key Responsibilities
Technical Implementation: Lead the hands-on development of data pipelines, leveraging your experience with at least one major Databricks project delivery.
Architecture Guidance: Design and implement the Medallion (Multi-hop) Architecture to ensure data quality and reliability from raw ingestion to business-ready layers.
Optimization: Apply advanced Spark tuning techniques (Partitioning, Z-Ordering, Caching) to improve job performance and reduce cloud costs.
Cross-Platform Integration: Assist clients in bridging the gap between traditional cloud data warehouses (Redshift, BigQuery, Synapse) and the Databricks Lakehouse.
Client Advocacy: Act as a technical consultant, translating complex data requirements into scalable Python or Scala-based solutions.
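For candidates unfamiliar with the Medallion (Multi-hop) pattern named in the responsibilities above, a minimal sketch of the bronze-to-silver-to-gold flow follows. This is a hypothetical, pure-Python illustration only: a real Databricks delivery would use Spark DataFrames and Delta tables, and the trade records and field names here are invented.

```python
# Minimal sketch of a Medallion (bronze -> silver -> gold) flow.
# Bronze: raw ingestion, kept as-is (including malformed rows).
RAW_TRADES = [
    {"symbol": "AAPL", "qty": "100", "price": "189.50"},
    {"symbol": "MSFT", "qty": "50", "price": "410.00"},
    {"symbol": None,   "qty": "10", "price": "bad"},  # fails validation below
]

def to_silver(bronze):
    """Silver layer: validate and standardize types, dropping bad records."""
    silver = []
    for row in bronze:
        try:
            if row["symbol"] is None:
                continue
            silver.append({
                "symbol": row["symbol"],
                "qty": int(row["qty"]),
                "price": float(row["price"]),
            })
        except (TypeError, ValueError):
            continue  # data-quality gate between bronze and silver
    return silver

def to_gold(silver):
    """Gold layer: business-ready aggregate (notional value per symbol)."""
    gold = {}
    for row in silver:
        gold[row["symbol"]] = gold.get(row["symbol"], 0.0) + row["qty"] * row["price"]
    return gold

if __name__ == "__main__":
    print(to_gold(to_silver(RAW_TRADES)))
```

Each hop narrows the data toward business use: bronze preserves everything for replay, silver enforces the data-quality contract, and gold serves consumers directly, which is the reliability progression the role asks you to design.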
Required Qualifications
Professional Experience: 3+ years of consulting experience paired with 2+ years of dedicated Data Engineering experience.
Platform Expertise: Hands-on development experience with Databricks (minimum 1 major project delivered).
Certifications: Completion of the Data Engineering Associate certification and all required prerequisite classes.
Technical Training: Must have successfully completed:
Apache Spark Programming with Databricks
Data Engineering with Databricks
Optimizing Apache Spark™ on Databricks
Programming Proficiency: Strong proficiency in Python is required; knowledge of Scala is a significant advantage.
SQL & Cloud Data Warehousing: Proven experience delivering SQL-based solutions and familiarity with Google BigQuery, Azure Synapse, or AWS Redshift.
Experience with application development and Databricks Apps
Comfort working in Linux environments
Background in financial markets, such as market making, trading, or related domains
Able to work independently, prioritize effectively, and proactively identify opportunities for improvement
Comfortable operating in a loosely defined scope, where you are expected not only to execute tasks but also to help define priorities and drive work forward
#IND1






