

Data Analyst / Engineer (W2 Contract - Remote Role)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Mid-level Data Engineer (W2 Contract, Remote) with 3 to 5 years of data engineering experience, proficiency in Python, SQL, and cloud platforms (AWS, GCP, Azure), and familiarity with ETL processes and Agile methodologies.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 3, 2025
Project duration: Unknown
Location type: Unknown
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: United States
Skills detailed
#Spark (Apache Spark) #Jira #Luigi #Airflow #Pandas #AWS (Amazon Web Services) #Kafka (Apache Kafka) #S3 (Amazon Simple Storage Service) #Cloud #ML (Machine Learning) #Redshift #Logging #Data Modeling #Storage #Data Pipeline #Data Lake #Data Integrity #ETL (Extract, Transform, Load) #Snowflake #Version Control #Databases #Data Transformations #BigQuery #Data Analysis #SQL (Structured Query Language) #Agile #GIT #Python #Schema Design #Scala #Computer Science #Data Engineering #Code Reviews #Data Architecture #Azure #Data Science #Apache Spark #GCP (Google Cloud Platform) #PySpark
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Kani Solutions, is seeking the following. Apply via Dice today!
Job Title: Mid-level Data Engineer (W2 Contract)
Location: [Onsite / Remote / Hybrid] United States
Duration: (W2 only)
Work Authorization: Must be authorized to work in the U.S. without sponsorship
Job Summary:
We are seeking a Mid-level Data Engineer to support data pipeline development, transformation, and integration initiatives within our growing data ecosystem. You will work closely with data architects, analysts, and business stakeholders to build and optimize scalable, reliable data solutions on modern cloud platforms.
Responsibilities:
• Design, build, and maintain robust data pipelines and ETL/ELT workflows using tools such as Apache Spark, PySpark, or SQL-based frameworks
• Ingest and process structured and unstructured data from various sources (APIs, databases, files, cloud storage)
• Develop and optimize queries, data transformations, and aggregations for analytics use cases
• Ensure data integrity, quality, and consistency across multiple environments
• Collaborate with data analysts, data scientists, and application developers to support reporting, ML models, and APIs
• Monitor pipeline performance, troubleshoot failures, and implement logging and alerting
• Work within Agile teams and contribute to sprint planning, story writing, and code reviews
Required Skills:
• 3 to 5 years of experience in data engineering or ETL development
• Proficiency in Python, SQL, and data transformation frameworks (e.g., PySpark, Pandas)
• Experience with cloud platforms such as AWS, Google Cloud Platform, or Azure (e.g., S3, BigQuery, Redshift, Azure Data Lake)
• Familiarity with data warehousing concepts and tools (e.g., Snowflake, BigQuery, Redshift)
• Experience with workflow orchestration tools like Airflow, Prefect, or Luigi
• Knowledge of data modeling, schema design, and performance tuning
• Solid understanding of version control systems (Git) and CI/CD practices
Preferred Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field
• Familiarity with Kafka, Pub/Sub, or real-time data streaming technologies
• Experience working in Agile environments with tools like JIRA and Confluence
Contract Details: W2 only. No C2C or third-party submissions.