Garni Software, Inc

Data Performance & Optimization

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Performance & Optimization Specialist, offering a contract of unspecified length with a pay rate of $40.00 - $60.00 per hour. Key skills required include SQL, Python, ETL processes, and experience with AWS and big data technologies.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
December 4, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Remote
-
🧠 - Skills detailed
#Bash #Data Storage #Storage #Python #Database Schema #Spark (Apache Spark) #Data Warehouse #Agile #Schema Design #SQL (Structured Query Language) #Java #Data Processing #Oracle #Scala #Visualization #Data Pipeline #Indexing #Programming #Microsoft SQL Server #SQL Server #Tableau #Database Design #Hadoop #"ETL (Extract, Transform, Load)" #TensorFlow #Datasets #Unix #Data Analysis #Database Performance #Big Data #Microsoft SQL #Data Integration #Databases #MS SQL (Microsoft SQL Server) #Cloud #AWS (Amazon Web Services) #ML (Machine Learning) #Database Management
Role description
Overview

We are seeking a highly skilled Data Performance & Optimization Specialist to join our team. This role involves analyzing, optimizing, and maintaining our data infrastructure to ensure high performance, scalability, and reliability. The ideal candidate will have a strong background in data warehousing, database management, and advanced analytics, with experience across a range of data technologies and programming languages. This position offers an opportunity to work on cutting-edge data projects in a dynamic environment focused on innovation and continuous improvement.

Responsibilities

- Design, develop, and optimize data warehouses and databases to support business analytics and reporting needs.
- Develop and maintain ETL processes using SQL, Python, Bash (Unix shell), and other tools to ensure efficient data integration.
- Implement performance tuning strategies for databases such as Oracle, Microsoft SQL Server, and Hadoop-based systems.
- Build and enhance dashboards and visualizations in Tableau for data analysis and reporting.
- Use AWS cloud services to manage scalable data solutions and perform cloud-based data processing tasks.
- Develop machine learning models using Python, Spark, and related frameworks to extract insights from large datasets.
- Collaborate within Agile teams to deliver iterative improvements to data pipelines and analytics solutions.
- Work with Linked Data, Hadoop ecosystems, Spark clusters, and other big data technologies to process large-scale datasets efficiently.
- Design database schemas and improve database performance through proper indexing, partitioning, and schema design.
- Identify performance bottlenecks and implement solutions for continuous system improvement.

Qualifications

- Proven experience with SQL databases such as Oracle, Microsoft SQL Server, or similar systems.
- Strong proficiency in programming languages including Python, Java, and Bash (Unix shell), with experience in ETL development.
- Hands-on experience with Tableau for creating interactive dashboards and reports.
- Knowledge of AWS cloud services related to data storage and processing.
- Familiarity with big data tools such as Hadoop, Spark, Hive, or similar frameworks.
- Experience with machine learning techniques and frameworks such as scikit-learn or TensorFlow is a plus.
- Understanding of data warehouse architecture, database design principles, and linked data concepts.
- Strong analytical skills with the ability to interpret complex datasets for actionable insights.
- Experience working in Agile environments with cross-functional teams.

This role is ideal for candidates passionate about leveraging advanced data technologies to drive business performance through innovative solutions in a collaborative setting.

Pay: $40.00 - $60.00 per hour

Work Location: Remote