

VForce Infotech
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a contract lasting more than 6 months, paying $60,000 - $80,000 per year. Key skills required include cloud platforms (AWS, Azure), big data technologies (Hadoop, Spark), and proficiency in Python and Java.
Country: United States
Currency: $ USD
Day rate: 363
Date: November 6, 2025
Duration: More than 6 months
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Edison, NJ 08837
Skills detailed: #Looker #Shell Scripting #Python #Data Warehouse #Scala #SQL Server #VBA (Visual Basic for Applications) #Cloud #Big Data #SQL (Structured Query Language) #Azure #Database Management #Oracle #Database Design #Bash #Data Engineering #ETL (Extract, Transform, Load) #Data Pipeline #Informatica #Unix #Data Access #Apache Hive #Data Storage #Scripting #BI (Business Intelligence) #Microsoft SQL #Spark (Apache Spark) #Talend #Data Lake #Storage #Hadoop #Data Integration #Datasets #Database Schema #Agile #AWS (Amazon Web Services) #Schema Design #Java #Data Processing #Linux #Automation #MS SQL (Microsoft SQL Server) #Microsoft SQL Server #API (Application Programming Interface) #Programming #Databases #ML (Machine Learning) #HDFS (Hadoop Distributed File System)
Role description
<Job Overview>
Ignite your passion for data and innovation as a Data Engineer! In this dynamic role, you will be at the forefront of designing, developing, and maintaining robust data pipelines and architectures that empower data-driven decision-making across the organization. Your expertise will enable seamless integration of diverse data sources, optimize data storage solutions, and facilitate insightful analytics. Join us to leverage cutting-edge technologies and contribute to transformative projects that shape our future.
<Responsibilities>
Design, build, and maintain scalable data pipelines using ETL (Extract, Transform, Load) processes to ensure efficient data flow across systems (see the pipeline sketch after this list).
Develop and optimize data models and database schemas for data warehouses and lakes, including platforms such as Azure Data Lake and Hadoop-based environments.
Implement and manage big data solutions utilizing tools like Apache Hive, Spark, and Hadoop to process large volumes of structured and unstructured data.
Collaborate with cross-functional teams to gather requirements, translate business needs into technical specifications, and deliver innovative data solutions.
Integrate diverse data sources, including relational databases such as Microsoft SQL Server and Oracle as well as linked data sources, to create comprehensive datasets for analysis.
Develop APIs using RESTful principles to enable secure and efficient data access for internal applications and external partners (see the endpoint sketch after this list).
Utilize programming and scripting languages such as Python, Java, Bash (Unix shell scripting), and VBA to automate workflows, perform analysis, and enhance system functionality.
Support model training activities by preparing datasets, validating results, and deploying machine learning models into production environments.
Conduct thorough analysis to identify patterns, trends, and insights that inform strategic decisions.
Ensure adherence to Agile methodologies by participating in sprint planning, stand-ups, reviews, and continuous improvement initiatives.
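The pipeline and big-data items above can be implemented on many stacks; as a minimal sketch (assuming PySpark with a Hive metastore available, and treating all paths, columns, and table layouts as hypothetical placeholders), an ETL job that lands raw files as partitioned Parquet might look like this:

```python
# Minimal ETL sketch with PySpark. The source path, columns, and output
# location are hypothetical placeholders, not specifics from this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl-sketch")
    .enableHiveSupport()  # assumes a Hive metastore is configured
    .getOrCreate()
)

# Extract: read raw CSVs landed in a data-lake folder.
raw = spark.read.csv("/data/landing/orders/", header=True, inferSchema=True)

# Transform: standardize types, drop rows missing the key, stamp a load date.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Load: write partitioned Parquet that Hive/Spark SQL can query downstream.
clean.write.mode("append").partitionBy("load_date").parquet("/data/warehouse/orders/")
```

Partitioning by load date keeps incremental loads cheap and lets downstream Hive or Spark SQL queries prune partitions instead of scanning the full table.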
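For the RESTful data-access item, a minimal read-only endpoint sketch follows; Flask and SQLite are illustrative stand-ins chosen for brevity (the posting names no framework), and a production service would target SQL Server or Oracle and add authentication:

```python
# Read-only REST endpoint sketch. Flask + SQLite are illustrative choices;
# the database file, table, and route are hypothetical.
import sqlite3
from flask import Flask, abort, jsonify

app = Flask(__name__)

def get_conn():
    conn = sqlite3.connect("example.db")  # placeholder local database
    conn.row_factory = sqlite3.Row        # rows become dict-like
    return conn

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    conn = get_conn()
    try:
        row = conn.execute(
            "SELECT order_id, customer, total FROM orders WHERE order_id = ?",
            (order_id,),  # parameterized to avoid SQL injection
        ).fetchone()
    finally:
        conn.close()
    if row is None:
        abort(404)
    return jsonify(dict(row))

if __name__ == "__main__":
    app.run(port=5000)
```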
<Skills>
Extensive experience with cloud platforms such as AWS and Azure, including Azure Data Lake for scalable storage solutions.
Proficiency in programming languages including Python and Java for data processing and automation tasks.
Strong knowledge of big data technologies like Hadoop ecosystem components (HDFS, Hive) and Spark for large-scale data processing.
Hands-on experience with ETL tools such as Informatica and Talend for efficient data integration workflows.
Expertise in SQL-based systems including Microsoft SQL Server, Oracle database design, and SQL query optimization (see the query sketch after this list).
Familiarity with linked data concepts for connecting disparate datasets across platforms.
Experience in developing dashboards or reports using Looker or similar BI tools to visualize insights effectively.
Knowledge of RESTful API development for secure data sharing between applications.
Ability to design robust Data Warehouse architectures that support analytics initiatives.
Skills in database modeling, schema design, and performance tuning for optimal storage efficiency.
Experience with shell scripting (Bash), Unix/Linux environments for automation tasks.
Strong analysis skills to interpret complex datasets accurately.
Familiarity with Agile development practices to foster collaborative project execution.
Join us as a Data Engineer where your technical prowess fuels innovation! Bring your expertise in cloud computing, big data processing, database management, and analytics to craft powerful solutions that drive business success, all while working in a vibrant environment committed to growth and excellence!
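As a hedged sketch of the SQL Server access mentioned in the skills list (the driver string, server, schema, and column names are all assumptions), pulling an aggregated dataset into pandas for analysis might look like this:

```python
# Sketch: query Microsoft SQL Server into a pandas DataFrame via pyodbc.
# Connection details, schema, and columns are hypothetical placeholders.
import pandas as pd
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-server;DATABASE=your-db;"
    "Trusted_Connection=yes;"  # swap for UID/PWD under SQL auth
)

# Push aggregation into the database instead of fetching whole tables;
# the parameter placeholder keeps the query safe and plan-cache friendly.
query = """
    SELECT customer_id, SUM(total) AS revenue
    FROM dbo.orders
    WHERE order_date >= ?
    GROUP BY customer_id
"""
df = pd.read_sql(query, conn, params=["2025-01-01"])
print(df.head())
conn.close()
```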
Job Types: Full-time, Contract
Pay: $60,000.00 - $80,000.00 per year
Benefits:
Dental insurance
Health insurance
Work Location: In person