Intracruit Solutions

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills required include Azure Cloud expertise, Python, PySpark, Snowflake, and CI/CD platforms. A Master's degree or relevant certifications are mandatory, along with a minimum of 4 years of experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
December 2, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Dallas-Fort Worth Metroplex
🧠 - Skills detailed
#DevOps #GCP (Google Cloud Platform) #Scala #Spark SQL #Docker #Data Storage #Azure Cloud #Azure Data Factory #Angular #Azure #PySpark #Data Engineering #Datasets #JavaScript #API (Application Programming Interface) #Spark (Apache Spark) #jQuery #Kubernetes #Django #Python #SQL Queries #Storage #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #Airflow #Flask #React #Computer Science #Redshift #Snowpark #Data Pipeline #ADF (Azure Data Factory) #SQL (Structured Query Language) #Snowflake #Data Processing #Programming #HTML (Hypertext Markup Language) #Cloud
Role description
W2 ROLE - NO THIRD PARTY OR C2C.

Data Engineer (Remote) mandatory items:
• 4 years minimum of direct experience
• Master's degree, or all the proper certifications if no advanced degree
• Expert knowledge of Azure Cloud
• Python and PySpark
• Data breach remediation knowledge
• Snowflake
• Docker
• CI/CD platforms

Data Engineer skills (must have):
• Bachelor's or master's degree in Computer Science, Engineering, or a related field.
• Strong programming skills in languages such as Python, PySpark, and SQL.
• Experience building and optimizing ETL workflows using tools/technologies such as Spark, Snowflake, Airflow, and/or Azure Data Factory, Glue, Redshift, etc.
• Ability to craft and optimize complex SQL queries and stored procedures for data transformation, aggregation, and analysis.
• Develop and maintain data models, ensuring scalability and optimal performance.
• Utilize Snowpark for data processing within the Snowflake platform.
• Integrate Snowflake for efficient data storage and retrieval.
• Exposure to API integrations to facilitate data workflows.
• Experience implementing CI/CD pipelines through DevOps platforms.
• Good experience with cloud infrastructure such as Azure, AWS, or GCP.

Good to have:
• Docker, Kubernetes, etc.
• Exposure to HTML, CSS, JavaScript/jQuery, Node.js, Angular/React.
• Experience in API development with Flask/Django is a bonus.

Responsibilities:
• Collaborate with software engineers, business stakeholders, and/or domain experts to translate business requirements into product features, tools, and projects.
• Develop, implement, and deploy ETL solutions.
• Preprocess and analyze large datasets to identify patterns, trends, and insights.
• Evaluate, validate, and optimize data models to ensure efficiency and generalizability.
• Monitor and maintain the performance of data pipelines and data models in production environments, identifying opportunities for improvement and updating as needed.
• Document development processes, results, and lessons learned to facilitate knowledge sharing and continuous improvement.