

TalentBurst, an Inc 5000 company
Data Analyst 3
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Analyst 3 in Santa Clara, CA, on a 4-month contract; the pay rate is undisclosed. Candidates must have a Bachelor’s degree, proficiency in Python and C#, and experience with SQL, ETL tools, and cloud platforms.
🌎 - Country
United States
💱 - Currency
Unknown
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 19, 2025
🕒 - Duration
3 to 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
1099 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Mountain View, CA 94043
-
🧠 - Skills detailed
#Database Design #Scala #Data Processing #Batch #dbt (data build tool) #Tableau #Computer Science #Kafka (Apache Kafka) #Documentation #Programming #Databases #Data Engineering #Airflow #AWS Lambda #Snowflake #Cloud #Apache Airflow #S3 (Amazon Simple Storage Service) #GIT #Data Access #BI (Business Intelligence) #.Net #Docker #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #SQL Server #Kubernetes #Data Ingestion #Spark (Apache Spark) #Version Control #Data Pipeline #Data Analysis #AWS (Amazon Web Services) #Data Integration #Looker #C# #Azure #Microsoft Power BI #Lambda (AWS Lambda) #Python
Role description
Job Description: Data Engineer
Interview Rounds: 1 round of interview
Location: Santa Clara, CA
Contract: 4 months+
We are seeking a skilled Data Application Engineer to design, build, and maintain data-driven applications and pipelines that enable seamless data integration, transformation, and delivery across systems. The ideal candidate will have a strong foundation in software engineering, database technologies, and cloud data platforms, with a focus on building scalable, robust, and efficient data applications.
Key Responsibilities:
Develop Data Applications: Build and maintain data-centric applications, tools, and APIs to enable real-time and batch data processing.
Data Integration: Design and implement data ingestion pipelines, integrating data from various sources such as databases, APIs, and file systems.
Data Transformation: Create reusable ETL/ELT pipelines to process and transform raw data into consumable formats using tools like Snowflake, dbt, or Python (see the illustrative sketch after this list).
Collaboration: Work closely with analysts and stakeholders to understand requirements and translate them into scalable solutions.
Documentation: Maintain comprehensive documentation for data applications, workflows, and processes.
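For illustration only (not part of the original posting): a minimal Python sketch of the kind of reusable extract-transform-load step described under Data Transformation above. All table names, column names, and connection URLs are hypothetical.

```python
# Illustrative sketch only. Table names, column names, and connection URLs
# below are hypothetical and would come from the actual source/target systems.
import pandas as pd
from sqlalchemy import create_engine


def run_orders_etl(source_url: str, target_url: str) -> None:
    source = create_engine(source_url)  # e.g. a SQL Server source (hypothetical)
    target = create_engine(target_url)  # e.g. a warehouse such as Snowflake (hypothetical)

    # Extract: pull raw rows from the operational database.
    raw = pd.read_sql(
        "SELECT order_id, customer_id, amount, created_at FROM raw_orders",
        source,
    )

    # Transform: enforce types, drop unusable rows, derive a reporting column.
    raw["created_at"] = pd.to_datetime(raw["created_at"], errors="coerce")
    clean = raw.dropna(subset=["order_id", "created_at"]).copy()
    clean["order_date"] = clean["created_at"].dt.date

    # Load: write the consumable table that analysts and BI tools query.
    clean.to_sql("fct_orders_daily", target, if_exists="replace", index=False)
```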
Required Skills and Qualifications:
Education: Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
Programming: Proficiency in Python, C#, and ASP.NET (Core).
Databases: Strong understanding of SQL and database design, and experience with relational databases (e.g., Snowflake, SQL Server).
Data Tools: Hands-on experience with ETL/ELT tools and frameworks such as Apache Airflow (dbt is nice to have); see the illustrative orchestration sketch after this list.
Cloud Platforms: Familiarity with cloud platforms such as AWS, Azure, or Google Cloud, and their data services (e.g., S3, AWS Lambda).
Data Pipelines: Experience with real-time data processing tools (e.g., Kafka, Spark) and batch data processing.
APIs: Experience designing and integrating RESTful APIs for data access and application communication.
Version Control: Knowledge of version control systems like Git for code management.
Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues.
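For illustration only (again, not part of the original posting): a minimal Apache Airflow sketch of how an ingest step and a transform step might be orchestrated, as referenced under Data Tools above. The DAG id, schedule, and task bodies are hypothetical placeholders; in practice the transform task might shell out to dbt or call a Python job like the one sketched after the responsibilities list.

```python
# Illustrative sketch only. The DAG id, schedule, and task bodies are
# hypothetical placeholders, not details taken from this posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_orders() -> None:
    # Placeholder: pull raw data from a source database, API, or file drop
    # into a staging area.
    print("ingesting raw orders")


def transform_orders() -> None:
    # Placeholder: run the ETL/ELT transformation (for example a dbt run or
    # a Python transformation job).
    print("building consumable tables from staged data")


with DAG(
    dag_id="orders_pipeline",          # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ style scheduling argument
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_orders", python_callable=ingest_orders)
    transform = PythonOperator(task_id="transform_orders", python_callable=transform_orders)

    ingest >> transform  # ingestion must finish before transformation starts
```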
Preferred Skills:
Knowledge of containerization tools like Docker and orchestration platforms like Kubernetes.
Experience with BI tools like Tableau, Power BI, or Looker.
Soft Skills:
Excellent communication and collaboration skills to work effectively in cross-functional teams.
Ability to prioritize tasks and manage projects in a fast-paced environment.
Strong attention to detail and commitment to delivering high-quality results.
#TB_EN #ZR






