

Data Architect – Python
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Architect – Python in Dearborn, MI, on a long-term project with a pay rate of $60-$70. Key skills include cloud data solutions, ETL, BI, GCP data lakehouse solutions, and PySpark API processing.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
🗓️ - Date discovered
July 1, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Dearborn, MI
-
🧠 - Skills detailed
#Data Science #GCP (Google Cloud Platform) #Database Administration #Data Warehouse #Unix #Data Pipeline #ML (Machine Learning) #PySpark #BI (Business Intelligence) #Groovy #Data Lake #Cloud #Spark (Apache Spark) #REST API #REST (Representational State Transfer) #Monitoring #AI (Artificial Intelligence) #Data Architecture #Datasets #PostgreSQL #Bash #Visualization #Python #Databases #ETL (Extract, Transform, Load) #API (Application Programming Interface)
Role description
Position: Data Architect – Python
Location: Dearborn, MI – Hybrid opportunity
Duration: Long-term project
Pay rate: $60-$70 with all benefits
About Kyyba:
Founded in 1998 and headquartered in Farmington Hills, MI, Kyyba has a global presence delivering high-quality resources and top-notch recruiting services, enabling businesses to effectively respond to organizational changes and technological advances.
At Kyyba, the overall well-being of our employees and their families is important to us. We are proud of our work culture, which embodies our core values of value, passion, excellence, empowerment, and happiness, creating a vibrant and productive atmosphere. We empower our employees with the resources, incentives, and flexibility they need to support a healthy, balanced, and fulfilling career, backed by valuable benefits, a balanced compensation structure, and career development.
Position Description:
• Design data solutions in the cloud or on premises, using the latest data services, products, technology, and industry best practices
• Experience migrating legacy data environments with a focus on performance and reliability
• Data Architecture contributions include assessing and understanding data sources, data models and schemas, and data workflows
• Ability to assess, understand, and design ETL jobs, data pipelines, and workflows
• BI and Data Visualization work includes assessing, understanding, and designing reports, creating dynamic dashboards, and setting up data pipelines in support of dashboards and reports
• Data Science focus on designing machine learning and AI applications and MLOps pipelines
• Addressing technical inquiries concerning customization, integration, enterprise architecture, and general features/functionality of data products
• Experience in crafting data lakehouse solutions in GCP, including relational & vector databases, data warehouses, data lakes, and distributed data systems
• Must have PySpark API processing knowledge utilizing resilient distributed datasets (RDDs) and DataFrames (an illustrative sketch follows this list)
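To make the PySpark expectation concrete, here is a minimal, hedged sketch of DataFrame and RDD processing of the kind the role describes; the bucket paths, column names, and aggregation are hypothetical illustrations, not details taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative only: paths and column names below are hypothetical.
spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# DataFrame API: read raw events, filter, and aggregate for a BI dashboard feed.
events = spark.read.parquet("gs://example-bucket/raw/events/")  # hypothetical GCS path
daily_counts = (
    events
    .filter(F.col("event_type").isNotNull())
    .groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
    .count()
)
daily_counts.write.mode("overwrite").parquet("gs://example-bucket/curated/daily_counts/")

# RDD API: the same data handled as resilient distributed datasets (RDDs)
# for lower-level, record-at-a-time transforms.
totals = (
    events.rdd
    .map(lambda row: (row["event_type"], 1))
    .reduceByKey(lambda a, b: a + b)
)
print(totals.take(10))

spark.stop()
```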
Skills Preferred:
• Ability to write Bash, Python, and Groovy scripts to help configure and administer tools
• Experience installing applications on VMs, monitoring performance, and tailing logs on Unix
• PostgreSQL database administration skills are preferred
• Python experience, including developing REST APIs (see the sketch after this list)
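As a hedged illustration of the preferred Python REST API and PostgreSQL skills, the sketch below uses FastAPI and psycopg2; the framework choice, endpoint, table, columns, and connection string are assumptions made for illustration, not requirements stated in the posting.

```python
# Minimal sketch, assuming FastAPI and a PostgreSQL backend; the DSN,
# table, and columns are hypothetical placeholders.
from fastapi import FastAPI, HTTPException
import psycopg2
from psycopg2.extras import RealDictCursor

app = FastAPI()
DSN = "postgresql://user:password@localhost:5432/exampledb"  # hypothetical connection string

@app.get("/pipeline-runs/{run_id}")
def get_pipeline_run(run_id: int):
    """Return status metadata for one data pipeline run from a hypothetical table."""
    with psycopg2.connect(DSN) as conn:
        with conn.cursor(cursor_factory=RealDictCursor) as cur:
            cur.execute(
                "SELECT id, name, status, updated_at FROM pipeline_runs WHERE id = %s",
                (run_id,),
            )
            row = cur.fetchone()
    if row is None:
        raise HTTPException(status_code=404, detail="pipeline run not found")
    return row
```

Run locally with, for example, `uvicorn main:app --reload` (assuming the file is saved as main.py).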