BeaconFire Inc.

Data Architect

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Architect in New York, NY, on a long-term contract. Requires 12+ years in data technology, 5+ years as a Data Engineer, and expertise in SQL, Oracle, Snowflake, Python, and cloud-based data warehousing.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
November 19, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
New York, NY
-
🧠 - Skills detailed
#Data Management #Scala #Data Quality #Data Processing #Batch #dbt (data build tool) #Computer Science #Kafka (Apache Kafka) #Programming #Databases #Java #AI (Artificial Intelligence) #Data Engineering #Oracle #Snowflake #Cloud #BI (Business Intelligence) #Monitoring #Databricks #ETL (Extract, Transform, Load) #Leadership #SQL (Structured Query Language) #SQL Server #Data Modeling #Data Architecture #Data Ingestion #Qlik #Microsoft Power BI #MS SQL (Microsoft SQL Server) #Python
Role description
Hi, I hope you are doing well! We have an opportunity for a Data Architect with one of our clients in New York, NY. Please see the job details below and let me know whether you would be interested in this role. If so, please send me a copy of your resume, your contact details, your availability, and a good time to connect with you.

Title: Data Architect
Location: New York, New York - On-site
Terms: Long-Term Contract

Job Details:

Primary Skills: SQL, Oracle, Snowflake
• 12+ years of experience in data technology
• At least 5 years as a Data Engineer with hands-on experience in cloud environments
• 8+ years of Python programming focused on data processing and distributed systems
• 8+ years working with relational databases, dimensional modeling, and dbt
• 8+ years designing and administering cloud-based data warehousing solutions (e.g., Databricks)
• 8+ years' experience with Kafka or other streaming platforms
• Exposure to AI-based advanced techniques and tools
• Strong understanding of database fundamentals, including data modeling, advanced SQL development and optimization, ELT/ETL processes, and dbt
• Experience with Java, MS SQL Server, Druid, Qlik/GoldenGate CDC, and Power BI is a plus

Responsibilities:
• Architect streaming data ingestion and integration with downstream systems
• Implement an AI-driven controller to orchestrate tens of millions of streams and micro-batches
• Design AI-powered onboarding of new data sources
• Develop an AI-powered compute engine and data-serving semantic layer
• Deliver scalable cloud data services and APIs with sub-second response times over petabytes of data
• Develop a unified alerting and monitoring framework supporting streaming transformations and compute across thousands of institutional clients and hundreds of external data sources
• Build a self-service data management and operations platform
• Implement a data quality monitoring framework

Qualifications:
• Bachelor's degree in Computer Science or a related field; advanced degree preferred
• 12+ years of experience in data technology
• At least 5 years as a Data Engineer with hands-on experience in cloud environments
• 8+ years of Python programming focused on data processing and distributed systems
• 8+ years working with relational databases, SQL, dimensional modeling, and dbt
• 8+ years designing and administering cloud-based data warehousing solutions (e.g., Snowflake, Databricks)
• 8+ years' experience with Kafka or other streaming platforms
• Exposure to AI-based advanced techniques and tools
• Strong understanding of database fundamentals, including data modeling, advanced SQL development and optimization, ELT/ETL processes, and dbt
• Experience with Java, Oracle, MS SQL Server, Druid, Qlik/GoldenGate CDC, and Power BI is a plus
• Strong leadership abilities and excellent communication skills

Thanks,
Amit Jha
Senior Recruiter at BeaconFire Inc.
Email: amitj@beaconfireinc.com