LanceSoft, Inc.

Big Data Consultant

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Consultant with a contract length of 12+ months, located in Alpharetta, GA & New York (3 days onsite/week). Requires 7+ years in data engineering, strong skills in Python, Spark, SQL, and UNIX Shell.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
Unknown
-
πŸ—“οΈ - Date
January 3, 2026
πŸ•’ - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
πŸ“„ - Contract
Unknown
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
Alpharetta, GA
-
🧠 - Skills detailed
#DevOps #Impala #Spark (Apache Spark) #Informatica #Python #Big Data #Automation #ETL (Extract, Transform, Load) #Informatica Cloud #Code Reviews #Migration #AWS (Amazon Web Services) #GitHub #Unix #Apache Spark #Data Integration #Cloud #Scripting #Snowflake #Batch #Data Engineering #SQL (Structured Query Language) #Azure #HDFS (Hadoop Distributed File System)
Role description
Role Name: Big Data Engineer
Duration: 12+ months
Location: Alpharetta, GA & New York (3 days onsite/week)

Position Description:
This position is for a Big Data Engineer on the Client Wealth Management Framework CoE team at Client's Alpharetta or New York offices. The CoE team is responsible for defining and governing the data platforms. We are looking for colleagues with a strong sense of ownership and the ability to drive solutions. The role is primarily responsible for automating existing processes and bringing new ideas and innovation. The candidate is expected to code, conduct code reviews, and test the framework as needed, along with participating in application architecture, design, and other phases of the automation effort. The ideal candidate will be a self-motivated team player committed to delivering on time and able to work with minimal supervision.

Responsibilities
• Design and develop a new automation framework for ETL processing
• Support the existing framework and act as the technical point of contact for all related teams
• Enhance the existing ETL automation framework per user requirements
• Performance-tune Spark and Snowflake ETL jobs
• Run POCs on new technologies and assess their suitability for cloud migration
• Optimize processes through automation and new utility development
• Collaborate on any issues and new features
• Support any batch issues
• Support application teams with any queries

Required Skills
• 7+ years of data engineering experience
• Strong UNIX Shell and Python scripting knowledge
• Strong in Spark
• Strong knowledge of SQL
• Hands-on knowledge of how HDFS/Hive/Impala/Spark work
• Strong logical reasoning capabilities
• Working knowledge of GitHub, DevOps, and CI/CD/enterprise code management tools
• Strong collaboration and communication skills
• Strong team-player skills with excellent written and verbal communication
• Ability to create and maintain a positive environment of shared success
• Ability to execute and prioritize tasks and resolve issues without aid from a direct manager or project sponsor
• Good to have: working experience with Snowflake and any data integration tool (e.g., Informatica Cloud)

Primary skills: Python, Big Data & Apache Spark

Desired skills
• Snowflake/Azure/AWS (any cloud)
• IDMC/any ETL tool