Business Data Analyst

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Business Data Analyst in Pittsburgh, PA; Cleveland; or Dallas, with a contract length of unspecified duration at a pay rate of $47/hr W2. Key skills include MySQL, Oracle PLSQL, Hadoop, Spark, and experience in the banking domain.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
376
-
πŸ—“οΈ - Date discovered
August 19, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
W2 Contractor
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Spark (Apache Spark) #Shell Scripting #Code Reviews #Data Quality #Tableau #Data Integration #Data Mart #BI (Business Intelligence) #Oracle #Data Lake #Data Architecture #MySQL #Data Ingestion #Teradata #Informatica #Unix #Data Warehouse #Hadoop #Agile #Scala #Scripting #Data Migration #Data Governance #Cloud #Data Modeling #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Migration #Data Analysis
Role description
We are working with our client on the requirement below; please review the job description and let us know whether your skills match.

Position: Business Data Analyst
Location: Pittsburgh, PA; Cleveland, OH; or Dallas, TX
Rate: $47/hr W2

Skills: MySQL, Oracle PL/SQL, excellent communication, Data Lake, Informatica, Teradata, and Epic

Responsibilities:
- Leads the development of the enterprise-wide data architecture.
- Supports teams with daily data content needs and proper data modeling.
- Designs and implements ETL processes for data marts, data lakes, and warehouses.
- Analyzes data usage and aligns it with project requirements.
- Contributes to data governance and enforces data standards.
- Captures and consolidates source data with proper lineage and change control, including cloud and on-premises sources.
- Publishes vetted data sources for BI tools such as Tableau, Crystal, and Epic.

Requirements:
- Strong experience in Hadoop, Spark, Scala, and Python.
- Good experience in end-to-end implementation of data warehouses, data lakes, and data marts.
- Strong knowledge of, and hands-on experience with, SQL and Unix shell scripting.
- Good understanding of data integration, data quality, and data architecture.
- Experience in relational modeling, dimensional modeling, and modeling of unstructured data.
- Good understanding of Agile software development frameworks.
- Experience in the banking domain.
- Strong communication and analytical skills.
- Ability to work in diverse, multi-stakeholder teams comprising business and technology groups.
- Experience with, and a desire to work in, a global delivery environment.

Additional responsibilities:
- Develop efficient data ingestion and data governance frameworks per specification.
- Improve the performance of existing Spark-based data ingestion and aggregation pipelines to meet SLAs.
- Work proactively and independently with global teams to address project requirements, raising issues and challenges with enough lead time to mitigate delivery risks.
- Plan production implementation activities, execute change requests, and resolve production issues.
- Plan and execute large data migrations and historical data rebuilds.
- Review and optimize code; review test cases.
- Demonstrate troubleshooting skill in resolving technical issues and bugs.
- Demonstrate ownership and initiative.
- Bring in best-practice solutions that fit the client's problem and environment.
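To illustrate the kind of ETL work the role describes (ingest raw records, apply a data-quality rule, aggregate into a reporting mart), here is a minimal sketch in Python. SQLite stands in for the warehouse, and all table and column names (`raw_txn`, `account`, `amount`) are hypothetical, not taken from the client's environment.

```python
import sqlite3

def build_daily_mart(rows):
    """Ingest raw transaction rows, filter bad records, aggregate per account."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_txn (account TEXT, amount REAL)")
    # Data-quality rule: reject records with a missing account or amount
    clean = [r for r in rows if r[0] and r[1] is not None]
    conn.executemany("INSERT INTO raw_txn VALUES (?, ?)", clean)
    # Aggregate into the mart: total amount per account
    return conn.execute(
        "SELECT account, SUM(amount) FROM raw_txn "
        "GROUP BY account ORDER BY account"
    ).fetchall()

rows = [("A1", 100.0), ("A1", 50.0), ("A2", 20.0), (None, 5.0)]
print(build_daily_mart(rows))  # → [('A1', 150.0), ('A2', 20.0)]
```

In a production pipeline of the kind this role maintains, the same ingest-filter-aggregate shape would typically be expressed with Spark DataFrames rather than SQLite.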