World Services LLC of Virginia

Data Engineer - Visualization

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer – Visualization: a full-time, hybrid position in Washington, DC lasting more than 6 months, with a salary of $160,000 - $170,000 per year. It requires 15+ years of data engineering experience, proficiency in SQL and Python, and familiarity with cloud-native architectures.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
772
-
🗓️ - Date
December 2, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Washington, DC 20032
-
🧠 - Skills detailed
#ML (Machine Learning) #DevOps #Consulting #Scala #Data Governance #Databricks #Git #AI (Artificial Intelligence) #Azure #Version Control #Data Engineering #Big Data #Spark (Apache Spark) #Data Quality #Python #ACID (Atomicity, Consistency, Isolation, Durability) #Documentation #GitHub #SQL Queries #Storage #AWS (Amazon Web Services) #Batch #Jupyter #Hadoop #Qlik #ETL (Extract, Transform, Load) #Airflow #Visualization #AWS Glue #Computer Science #Security #Compliance #Kafka (Apache Kafka) #Apache Spark #Delta Lake #Data Pipeline #Collibra #Data Science #SQL (Structured Query Language) #GitLab #Automation #Metadata #Deployment #Cloud
Role description
Job Title: Data Engineer – Visualization
Location: Hybrid, with up to 4 days on-site at the federal client site in Washington, DC (SE)
Clearance: Ability to obtain Public Trust clearance (U.S. citizenship required)
Position Type: Full-Time
Salary: $160,000 - $170,000

Position Overview
World Services LLC, an IT engineering, solutions, and management consulting firm, is seeking an experienced Data Engineer with strong data visualization experience to support a federal agency on an enterprise-wide digital CI/CD initiative. The Data Engineer will design, build, and maintain secure, high-performance data pipelines that integrate diverse systems into the agency's Integrated Data Environment. This role emphasizes expertise in ETL orchestration, cloud-native architectures, and metadata governance to enable enterprise-scale analytics and AI. The position offers the opportunity to work with modern platforms such as Databricks, Apache Spark, Enlighten BDP, and Collibra, directly contributing to the agency's digital modernization, compliance readiness, and mission support.

Key Responsibilities

Data Pipeline Development
- Build and orchestrate ETL/ELT workflows using Apache Spark, Airflow, Kafka, and AWS Glue.
- Develop pipelines in Databricks to process both batch and streaming data.
- Translate mission requirements into optimized SQL queries and reusable data structures.
- Identify data gaps, design test cases, and perform data quality assessments to ensure reliable pipelines.

Architecture & Governance
- Implement lakehouse architecture using Enlighten BDP and Delta Lake for secure, ACID-compliant storage.
- Manage metadata, data tagging, and lineage through Collibra to ensure governance and interoperability.
- Support the identification and maintenance of authoritative and trusted data sources across the enterprise.

Cloud & DevOps Integration
- Develop and maintain CI/CD pipelines using GitHub or GitLab for automated deployment.
- Integrate and secure APIs for scalable data exchange across platforms.
- Support A&A compliance activities, including eMASS security assessments, POA&M tracking, and adherence to FedRAMP and FISMA standards.

Collaboration & Support
- Partner with Data Scientists and Automation/AI Engineers to operationalize AI/ML models within production pipelines.
- Leverage Jupyter Notebooks and Anaconda for prototyping and collaborative development.
- Provide technical documentation, data dictionaries, and reproducible workflows.
- Contribute to stakeholder deliverables including reports, dashboards, and strategic briefings, with visualization support using Qlik where applicable.

Required Qualifications
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- Minimum of 15 years of experience in data engineering, ETL/ELT processes, or big data environments.
- Proficiency with SQL, Python, and Spark-based frameworks.
- Experience with Databricks, Hadoop, Kafka, Airflow, or AWS Glue.
- Familiarity with Git-based version control and CI/CD practices.

Preferred Qualifications
- Certifications such as AWS Certified Data Engineer – Associate, AWS Certified Data Analytics, or Microsoft Certified: Azure Data Engineer Associate.
- Knowledge of Collibra and Enlighten BDP environments.
- Experience supporting federal data governance, security compliance, and ATO readiness.

Pay: $160,000.00 - $170,000.00 per year

Benefits:
- 401(k)
- Dental insurance
- Flexible spending account
- Health insurance
- Paid time off
- Vision insurance

Work Location: Hybrid remote in Washington, DC 20032