Mindlance
Lead BigData Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead BigData Engineer in the Banking/Financial industry, based in Charlotte, NC (hybrid). The 12+ month contract lists a day rate of $640 USD. Required skills include 5+ years in data engineering, Python, and PySpark. Financial crimes domain experience is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
640
-
🗓️ - Date
April 2, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Documentation #Data Modeling #Scala #Security #Migration #Big Data #Cloud #BigQuery #Datasets #Agile #Compliance #ETL (Extract, Transform, Load) #Data Engineering #Automation #Data Pipeline #Storage #Kafka (Apache Kafka) #Data Architecture #Batch #Data Processing #Data Governance #Monitoring #Snowflake #Metadata #Apache Spark #Dremio #Python #Data Quality #SQL (Structured Query Language) #GIT #Data Transformations #GCP (Google Cloud Platform) #Google Cloud Storage #Spark (Apache Spark) #Jira #Airflow #PySpark #Linux #Databricks #DevOps
Role description
Please find details for this position below:
Client: Banking/Financial Industry
Title: Lead BigData Engineer / Lead BigData Architect / Lead BigData Platform Engineer / Big Data Platform Engineer - 4 Openings
Location: Charlotte, NC – Hybrid Roles
Duration: 12+ months; extension or conversion based on performance
Job Details - 3 Openings:
Required Qualifications:
• 5+ years of Software Engineering experience
Role Overview:
• The Financial Crimes Technology team is evolving toward more in-house build capability and reducing dependency on legacy vendor tooling.
• This role supports data engineering and platform development for financial crimes use cases (AML, investigations, sanctions, fraud, KYC) by building scalable pipelines, improving data quality, and enabling analytics/reporting and downstream applications.
Key Responsibilities:
• Build and maintain batch and/or streaming data pipelines supporting financial crimes initiatives.
• Develop data transformations using Python + PySpark and optimize performance for large datasets.
• Apply a strong understanding of Apache Spark architecture (executors, partitions, shuffles, joins, caching) to improve performance (see the sketch after this list).
• Partner with business and technical stakeholders to translate requirements into data models, mappings, and curated datasets.
• Support ingestion from multiple sources (transactional systems, case management, reference data, etc.).
• Implement data quality checks, reconciliation, and controls to ensure auditability and reliability.
• Contribute to modernization efforts (legacy → in-house build) including migration planning and redesign.
• Create documentation for pipelines, logic, and operational runbooks.
• Work within Agile delivery (Jira), supporting sprint execution and delivery timelines.
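To make the Spark bullets above concrete, here is a minimal PySpark sketch of the kind of transformation and tuning work described; the table names, columns, paths, and partition count are invented for illustration and are not taken from the client's environment.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("fincrime_curation_sketch").getOrCreate()

# Hypothetical inputs: a large transaction feed and a small reference table.
txns = spark.read.parquet("/data/raw/transactions")   # large fact table
parties = spark.read.parquet("/data/ref/parties")     # small dimension table

# Broadcast the small side so the join avoids a full shuffle.
enriched = txns.join(broadcast(parties), on="party_id", how="left")

# Repartition on the downstream grouping key so the aggregation shuffle is balanced.
enriched = enriched.repartition(200, "party_id")

# Cache because the enriched frame feeds two writes below.
enriched.cache()

# Simple reconciliation control: a left join against a reference table with
# unique party_id values must preserve the input row count.
assert txns.count() == enriched.count(), "left join changed row count"

daily = (enriched
         .groupBy("party_id", F.to_date("txn_ts").alias("txn_date"))
         .agg(F.sum("amount").alias("daily_amount"),
              F.count("*").alias("daily_txn_count")))

daily.write.mode("overwrite").partitionBy("txn_date").parquet(
    "/data/curated/daily_party_activity")
enriched.write.mode("overwrite").parquet("/data/curated/enriched_transactions")
```

The broadcast join, keyed repartition, and cache map directly to the architecture concepts named above (joins, partitions/shuffles, caching); the assert stands in for the kind of reconciliation control the responsibilities call for.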
Required Skills:
• 5+ years of experience in data engineering / ETL / data platform development
• Strong hands-on development in:
• Python
• PySpark / Apache Spark
• Advanced SQL (a window-function sketch follows this list)
• Experience working with large-scale data sets and performance tuning.
• Strong understanding of data concepts: data modeling, lineage, metadata, governance
• Experience supporting regulated environments with emphasis on controls and audit readiness
• Strong communication skills (ability to work with both engineering + business partners)
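As one reading of the "Advanced SQL" expectation, a hedged sketch of a windowed velocity check of the sort used in transaction monitoring; the view, columns, and threshold are invented for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_window_sketch").getOrCreate()

# Hypothetical curated output from the earlier sketch, exposed as a temp view.
spark.read.parquet("/data/curated/enriched_transactions") \
     .createOrReplaceTempView("txns")

# Rolling 7-day transaction total per party, using a range-based window frame
# over the epoch-seconds timestamp (604800 seconds = 7 days).
flagged = spark.sql("""
    SELECT party_id,
           txn_ts,
           amount,
           SUM(amount) OVER (
               PARTITION BY party_id
               ORDER BY unix_timestamp(txn_ts)
               RANGE BETWEEN 604800 PRECEDING AND CURRENT ROW
           ) AS rolling_7d_amount
    FROM txns
""").where("rolling_7d_amount > 50000")  # hypothetical alerting threshold
```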
Preferred Skills (Nice to Have):
• Experience running PySpark workloads on Google Cloud Platform (GCP): Dataproc, BigQuery, Google Cloud Storage (GCS), etc.
• Experience with cloud native Big Data platforms
• Knowledge of data governance, security, and compliance practices
• Experience with CI/CD pipelines for data engineering workloads
• Orchestration: Airflow or similar scheduling tools (a minimal DAG sketch follows this list)
• Streaming: Kafka
• Lakehouse/Warehouse: Databricks / Snowflake / BigQuery
• CI/CD + DevOps: Git, pipelines, automation, release management
• Data governance/security: encryption, access controls, data masking, PII handling
• Prior Financial Crimes domain: AML / sanctions / fraud / investigations / KYC
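For the orchestration bullet, a minimal Airflow sketch (assuming Airflow 2.x); the DAG id, schedule, and script path are placeholders, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily pipeline: one spark-submit per run, keyed by the logical date.
with DAG(
    dag_id="fincrime_daily_curation",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older 2.x releases use schedule_interval
    catchup=False,
) as dag:
    curate = BashOperator(
        task_id="curate_transactions",
        bash_command=(
            "spark-submit --deploy-mode cluster "
            "/opt/jobs/curate_transactions.py {{ ds }}"
        ),
    )
```

Passing the logical date (`{{ ds }}`) into the job keeps runs idempotent and replayable, which matters for the audit-readiness requirement above.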
Domain Experience (Highly Valued):
• Experience supporting AML, Transaction Monitoring, Investigations, Sanctions screening, Fraud, or similar risk/compliance functions.
• Familiarity with regulatory expectations and strong documentation discipline.
Job Details - Big Data Platform Engineer - 1 Opening
Required Qualifications:
• 4+ years of Software Engineering experience
• Big Data Platform (Source, Conform, Curate; a layering sketch follows this list)
• Big Data development (Spark, Scala, Python, Autosys, Git, etc.)
• Tools used: Spark, Python, Scala, Linux shell, Autosys, etc.
• Homegrown tools (Flow Master, Edit Upload, etc.) - if prior WF experience
• Interfaces (Linux Command Line, Editors like PyCharm, Dremio, etc.)
• Soft Skills (Data processing experience, Big Data experience, Data Controls experience, working in JIRA Agile)
• Accountability (Lead and own work streams end to end)
• Fin Crime (domain expertise in preparing data for use cases like Transaction Monitoring, Customer Risk, Investigations, etc.)
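The "Source, Conform, Curate" phrasing describes a layered platform pattern (land data as-is, standardize it, then shape it per use case); a rough PySpark sketch of the idea, with invented paths and columns.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("layering_sketch").getOrCreate()

# Source layer: land the feed as-is, adding only load metadata.
source = (spark.read.option("header", True).csv("/landing/alerts.csv")
          .withColumn("load_ts", F.current_timestamp()))
source.write.mode("append").parquet("/data/source/alerts")

# Conform layer: standardize types, names, and duplicates across feeds.
conformed = (spark.read.parquet("/data/source/alerts")
             .withColumn("alert_dt", F.to_date("alert_date", "yyyy-MM-dd"))
             .withColumnRenamed("cust_no", "customer_id")
             .dropDuplicates(["alert_id"]))
conformed.write.mode("overwrite").parquet("/data/conform/alerts")

# Curate layer: shape conformed data for one use case (e.g. investigations).
curated = (spark.read.parquet("/data/conform/alerts")
           .where(F.col("status") == "OPEN")
           .select("alert_id", "customer_id", "alert_dt", "risk_score"))
curated.write.mode("overwrite").parquet("/data/curate/open_alerts")
```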
EEO:
Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of minority status, gender, disability, religion, LGBTQI status, age, or veteran status.