Data Engineer/Data Analyst - W2 Position

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer/Data Analyst position on a W2 contract of unspecified duration, based in San Ramon/Oakland/San Francisco, CA with two days on-site. Key skills include Python, cloud platforms (AWS, Azure, Google Cloud), and ETL processes; 2-5 years of relevant experience is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 25, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid (2 days on-site)
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
San Ramon / Oakland / San Francisco, CA
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #ADF (Azure Data Factory) #Amazon Redshift #Apache Airflow #Data Modeling #Azure Synapse Analytics #PySpark #BigQuery #NumPy #Azure Data Factory #AWS Glue #Data Storage #Storage #Compliance #S3 (Amazon Simple Storage Service) #Data Analysis #Data Integration #Azure #Consulting #Data Accuracy #Data Quality #Big Data #Data Wrangling #Spark (Apache Spark) #PostgreSQL #Data Manipulation #Database Management #Data Security #Python #Synapse #AWS (Amazon Web Services) #Automation #MySQL #Cloud #AWS Lambda #Libraries #Monitoring #Dataflow #Datasets #Consul #Security #Data Warehouse #Documentation #Lambda (AWS Lambda) #NoSQL #Pandas #Databases #Data Pipeline #AWS S3 (Amazon Simple Storage Service) #ML (Machine Learning) #Data Science #Scala #Schema Design #Computer Science #Airflow #GitHub #Data Integrity #Data Engineering #Redshift #GIT #Version Control
Role description

Hello All,

Greetings from Rootshell Inc.

Rootshell Enterprise Technologies Inc. is a recognized provider of professional IT consulting services in the US. We are actively seeking a Data Engineer/Data Analyst for one of our clients. Please share your resume along with your current location and full contact info.

Role: Data Engineer/Data Analyst

Locations: San Ramon / Oakland / San Francisco, CA - 2 days on-site

W2 only

Job Summary:

Position Overview: We are looking for a talented Data Engineer with expertise in Python and cloud data technologies. The ideal candidate will have experience designing, building, and maintaining scalable data pipelines, working with large datasets, and integrating data from various cloud platforms (AWS, Azure, Google Cloud). The candidate will also be responsible for ensuring data integrity, optimizing performance, and keeping data available for analytics and machine learning.
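
As a concrete illustration of the cloud integration work described above, the sketch below pulls a CSV from Amazon S3 into a pandas DataFrame. This is a minimal sketch, not a prescribed approach for this role; it assumes AWS credentials are already configured, and the bucket and object key are hypothetical placeholders.

```python
# Minimal sketch: load a CSV from S3 into pandas for downstream processing.
# Assumes AWS credentials are configured (e.g., via environment variables);
# the bucket name and object key below are hypothetical examples.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-data-bucket", Key="raw/events.csv")

# StreamingBody -> bytes -> DataFrame
df = pd.read_csv(io.BytesIO(obj["Body"].read()))
print(df.shape, list(df.columns))
```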

Responsibilities:

   • Design and Build Data Pipelines: Develop, optimize, and maintain scalable data pipelines to handle large datasets using Python and cloud services.

   • Data Integration: Integrate data from various cloud platforms (AWS, Google Cloud, Azure) and on-premises systems into centralized data warehouses and lakes.

   • Data Transformation: Utilize Python for data wrangling, cleaning, and transformation tasks, preparing data for analysis and reporting.

   • Cloud Data Technologies: Work with cloud-native tools and services (e.g., AWS Lambda, Google Cloud Functions, Azure Data Factory) to manage, process, and store data efficiently.

   • Automation & Monitoring: Build automated workflows for ETL processes and data synchronization between multiple sources and destinations (a minimal Airflow sketch follows this list).

   • Database Management: Work with both structured and unstructured data storage solutions such as Amazon Redshift, Google BigQuery, Azure Synapse Analytics, and NoSQL databases.

   • Data Quality and Integrity: Ensure data accuracy, consistency, and integrity through validation, testing, and implementing error handling processes.

   • Collaboration with Teams: Work with data scientists, analysts, and business teams to understand data requirements and provide necessary datasets for analytics and reporting.

   • Documentation and Reporting: Maintain clear documentation of data models, workflows, and processes, ensuring transparency and reproducibility.
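
To make the pipeline and automation responsibilities above more concrete, here is a minimal sketch of a daily extract-and-transform workflow in Apache Airflow 2.x. The file paths, column names, and DAG id are hypothetical placeholders; a production pipeline would read from real sources (e.g., S3 or a database) and load into a warehouse.

```python
# Minimal sketch of a daily extract -> transform pipeline in Apache Airflow 2.x.
# Paths, column names, and the dag_id are hypothetical placeholders.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

RAW_PATH = "/tmp/raw_orders.csv"        # stand-in for a real source (S3, API, DB)
STAGED_PATH = "/tmp/staged_orders.csv"
CLEAN_PATH = "/tmp/clean_orders.csv"    # stand-in for a warehouse load


def extract() -> None:
    # In practice this step would pull from S3, an API, or a source database.
    pd.read_csv(RAW_PATH).to_csv(STAGED_PATH, index=False)


def transform() -> None:
    df = pd.read_csv(STAGED_PATH)
    df = df.drop_duplicates().dropna(subset=["order_id"])  # basic data-quality checks
    df["amount"] = df["amount"].astype(float)              # enforce consistent types
    df.to_csv(CLEAN_PATH, index=False)


with DAG(
    dag_id="orders_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run transform only after extract succeeds
```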

Required Skills and Qualifications:

   • Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.

   • 2-5 years of experience as a Data Engineer, Data Analyst, or in a similar role.

   • Proficiency in Python: Experience with libraries such as Pandas, NumPy, PySpark, and others for data manipulation and transformation (see the PySpark sketch after this list).

   • Experience with Cloud Data Platforms: Experience working with at least one cloud platform (AWS, Google Cloud, or Azure). Familiarity with services like AWS S3, AWS Lambda, Google Cloud Functions, Azure Data Factory, and others.

   • Database Experience: Knowledge of relational databases (e.g., PostgreSQL, MySQL) and big data storage solutions like Redshift, BigQuery, or Azure Synapse.

   • ETL Processes: Strong understanding of ETL pipelines and experience with tools like Apache Airflow, Google Dataflow, AWS Glue, or custom Python-based solutions.

   • Data Warehousing: Familiarity with data warehouse architecture, schema design, and data modeling techniques.

   • Version Control: Experience with version control tools like Git and GitHub.

   • Data Security: Knowledge of data security practices, including encryption, access control, and compliance.

   • Collaboration: Strong communication skills to collaborate with cross-functional teams.
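
As a small illustration of the Python data-manipulation skills listed above, the sketch below cleans and aggregates a dataset with PySpark. It is only a sketch: the input path and column names are hypothetical, and it assumes a local Spark installation.

```python
# Minimal PySpark sketch: clean and aggregate a dataset.
# The input path and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("wrangling-sketch").getOrCreate()

orders = spark.read.csv("/tmp/clean_orders.csv", header=True, inferSchema=True)

daily_totals = (
    orders
    .dropna(subset=["order_id"])                 # drop incomplete records
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_amount"),   # revenue per day
        F.count("order_id").alias("order_count"),
    )
    .orderBy("order_date")
)

daily_totals.show(5)
spark.stop()
```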

With regards,

Naveen | Talent Acquisition

Rootshell Enterprise Technologies Inc.

Naveen@rootshellinc.com | www.rootshellinc.com