Enterprise · AI Design · Siemens · 0 to 1
How I took an open-ended AI brief, partnered across teams, and designed a phased intelligence roadmap for a complex enterprise platform serving 8 user types across thousands of facilities.
Company
Siemens
Platform
Climatix IC 2.0
Role
Solo Product Designer
Status
Live in pilot
THE BRIEF
The PM came to me with an open brief. Siemens wanted AI integrated into Climatix IC and its ecosystem products. No defined scope. No specific feature. Just a direction and a question: what should this look like?
At the same time, I was brought in to collaborate with a partner team that had submitted two patents on predictive analysis for smart infrastructure. Their models could detect a potential equipment failure 1 to 2 hours before it happened. The technical capability existed. The design challenge was figuring out where to start and why.
I did not open Figma for three weeks.
THE PROBLEM
Climatix IC served 8 distinct user types across thousands of facilities. But the interface treated them all the same. One monolithic system. One dashboard. Access permissions masquerading as role-based design.
The Service Manager: responsible for the health of all plants across a tenant portfolio. Spent the majority of their time manually checking every device, every alarm, one by one.
The Service Engineer: responsible for diagnosing and fixing issues on specific plants. Relied on colleagues and training manuals because the platform gave data without context and alerts without guidance.
With 123 active alarms on any given day, manual checking was not a workflow. It was a full-time job inside their actual job.
THE DISCOVERY
I collaborated with the partner team to explore where their predictive models could create value inside Climatix IC. Their capability was precise: detect a potential equipment failure 1 to 2 hours before it happens. That is not a notification feature. That is a decision support window.
I researched AI applications across enterprise platforms, industrial monitoring tools, and operational intelligence products. From three weeks of exploration I built directional concepts to initiate a stakeholder conversation about priorities, possibilities, and feasibility.
If AI can absorb 30 to 40% of that manual monitoring work, Service Managers stop reacting to failures and start preventing them.
That became the hypothesis the entire AI roadmap was built around.
THE PRIORITISATION
After stakeholder discussions and feasibility conversations, we had a long list of what AI could do. We shipped none of it. Instead we asked: what does a Service Manager look at first every morning? What can they not afford to miss?
Surface the three most critical actionable insights at the top of the dashboard. Not 123 alarms ranked by severity. Three specific recommendations in plain language with a clear next action. A Service Manager who sees three clear actions can move immediately.
Surface pattern intelligence across the portfolio as a frequency versus impact matrix. Not what is alarming right now. What keeps alarming, and how critical is it when it does. Refrigerant leaks sitting top right tells a Service Manager this is a systemic problem requiring a portfolio-wide response, not just a single technician dispatch.
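The frequency-versus-impact logic can be sketched in a few lines. This is a minimal illustration, not the platform's implementation: the alarm schema, scores, and quadrant thresholds here are assumptions for the example.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical alarm record; field names are illustrative, not Climatix IC's schema.
@dataclass
class Alarm:
    fault_type: str
    impact: int  # criticality score, e.g. 1 (minor) to 5 (severe)

def frequency_impact_matrix(alarms):
    """Bucket each fault type by how often it fires (frequency)
    and how critical it is on average (impact)."""
    counts = Counter(a.fault_type for a in alarms)
    impact_sum = Counter()
    for a in alarms:
        impact_sum[a.fault_type] += a.impact
    matrix = {}
    for fault, freq in counts.items():
        avg_impact = impact_sum[fault] / freq
        # Top-right quadrant = frequent AND critical = systemic problem,
        # not a single technician dispatch. Thresholds chosen for the demo.
        quadrant = ("high" if freq >= 3 else "low",
                    "high" if avg_impact >= 3 else "low")
        matrix[fault] = {"frequency": freq,
                         "impact": round(avg_impact, 1),
                         "quadrant": quadrant}
    return matrix

alarms = [Alarm("refrigerant_leak", 5), Alarm("refrigerant_leak", 4),
          Alarm("refrigerant_leak", 5), Alarm("fan_vibration", 2)]
print(frequency_impact_matrix(alarms)["refrigerant_leak"]["quadrant"])
# → ('high', 'high'): systemic, portfolio-wide response
```

A fault landing in the ("high", "high") quadrant is the signal that a portfolio-wide response is warranted.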

THE INTELLIGENCE LAYER
Before this existed, diagnosing a compressor power spike required opening multiple tabs, cross-referencing reports, manually comparing historical data, and relying on experience to piece together a diagnosis. Average time: 45 minutes. Accuracy: dependent on the engineer's experience level.
Three columns: anomaly detected, possible cause, recommended action. The engineer does not need to know why the compressor spiked. The platform tells them: sudden valve closure causing load surge. They do not need to figure out what to do. The platform tells them: inspect and recalibrate EEV valve motor.
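Conceptually, the three-column panel is a mapping from detected anomaly to cause and action. A toy sketch, assuming a simple lookup with an escalation fallback (the fault codes and entries are examples drawn from the case above, not real Siemens data):

```python
# Illustrative diagnosis table: anomaly -> possible cause -> recommended action.
DIAGNOSIS_TABLE = {
    "compressor_power_spike": {
        "possible_cause": "Sudden valve closure causing load surge",
        "recommended_action": "Inspect and recalibrate EEV valve motor",
    },
}

def diagnose(anomaly: str) -> dict:
    entry = DIAGNOSIS_TABLE.get(anomaly)
    if entry is None:
        # Unknown pattern: keep the engineer in the loop instead of guessing.
        return {"anomaly": anomaly, "possible_cause": "unknown",
                "recommended_action": "escalate to engineer"}
    return {"anomaly": anomaly, **entry}

print(diagnose("compressor_power_spike")["recommended_action"])
# → Inspect and recalibrate EEV valve motor
```

The point of the design is exactly this shape: the engineer receives the action, not the raw telemetry that produced it.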

Three layers of intelligence in one panel. Past Similar Events shows historical pattern: same fault occurred 3 times in 6 months, average time to failure 3.5 days after spike. Impact quantifies consequence: 12% increased energy usage during anomaly window. AI Tip goes beyond the fix: consider predictive EEV control algorithm to avoid future overshoot events.
Together these three layers answer every question a Service Engineer has: what is happening, how serious is it, and how do I make sure it does not happen again.

THE ROADMAP
Predictive intelligence surfaces what needs attention. But what happens when a Service Engineer does not understand what the platform is telling them? The training manual still had the answer. Aeris was designed to change that.
I spent one week researching AI interaction patterns across enterprise tools, developer products, and emerging agentic systems. The output was a phased roadmap for Aeris, Climatix IC's AI assistant.
You cannot give users more AI capability before they trust the AI they already have. Each phase had to earn the next one.
Phase 01
RAG-based document assistant. Engineers ask Aeris anything about the platform, a device, an error code, or a setup process and get plain language guidance with step-by-step instructions. The knowledge the platform assumed engineers had is now available on demand.
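The retrieval step of a RAG assistant can be sketched as follows. This is a minimal stand-in: a production system would use embedding similarity over a vector store, while keyword overlap substitutes here, and the document chunks are invented examples.

```python
# Minimal sketch of RAG retrieval over pre-chunked documentation.
# Chunks are hypothetical; real manuals would be split and embedded.
DOC_CHUNKS = [
    "Error E42 sensor fault. Check wiring on terminal X3, then restart.",
    "EEV setup: set the superheat target before enabling modulation.",
]

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Rank chunks by word overlap with the question (embeddings in production)."""
    q_words = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(question: str) -> str:
    context = retrieve(question, DOC_CHUNKS)
    # In a full pipeline, question + context are sent to the LLM, which
    # turns the passage into plain-language, step-by-step guidance.
    return context[0]

print(answer("what does error E42 mean"))
```

The grounding step is what lets the assistant answer from the platform's own documentation rather than from the model's general knowledge.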
Phase 02
Contextual intelligence surfaced directly inside components. Pattern recognition, predicted failures, anomaly detection with possible causes and recommended actions. The platform does not wait for the engineer to come looking.
Phase 03
Inline conversational assistance without leaving the workflow. Engineers comment directly on a component, ask for an explanation, request a fix, or get contextual guidance from within the app UI. The assistant becomes part of the working surface.
Phase 04
Plan-first agentic execution. Aeris proposes a course of action, the engineer reviews and approves, and the platform executes. Think Cursor or Claude Code but for Climatix IC's operational environment. The engineer stays in control. The platform does the work.
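The plan-first pattern reduces to three moves: propose, approve, execute. A hedged sketch, with hard-coded steps standing in for what Aeris would generate:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    steps: list[str]
    approved: bool = False  # nothing runs until the engineer flips this

def propose_plan(goal: str) -> Plan:
    # Aeris would generate these steps; hard-coded here for illustration.
    return Plan(steps=[f"Diagnose: {goal}",
                       "Recalibrate EEV valve motor",
                       "Verify power draw returns to baseline"])

def execute(plan: Plan) -> list[str]:
    if not plan.approved:
        # The gate that keeps the engineer in control.
        raise PermissionError("Engineer approval required before execution")
    return [f"done: {s}" for s in plan.steps]

plan = propose_plan("compressor power spike")
plan.approved = True  # engineer reviews and approves
print(execute(plan)[0])
# → done: Diagnose: compressor power spike
```

The approval gate is the whole design: the platform does the work, but only along a path the engineer has already read and accepted.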
DESIGNING AERIS
Aeris sits in the global header. Persistent across every screen. An AI assistant buried in a menu gets used once and forgotten. An assistant in the navigation layer becomes part of the workflow.
When triggered, Aeris opens as a right-side panel overlay. The platform stays visible behind it. The engineer does not lose their context to ask a question. They ask, get an answer, and return to the exact screen they were on.
Three suggested prompts on the opening screen: "Show me critical alarms", "Why is this alarm triggered?", "Suggest troubleshooting steps." These are the three most common questions a Service Engineer has when they open the alarms dashboard. Aeris does not make the engineer figure out what to ask.
Every word Aeris says was designed to communicate confidence. Specific, direct, actionable guidance based on platform data and the RAG knowledge base. The disclaimer at the bottom, "AI-generated content may be incorrect," is not a legal footnote. It is a design decision that keeps the engineer in the decision loop without undermining Aeris's authority.

AERIS IN ACTION
What previously required 45 minutes of manual investigation across multiple tabs now happens inside a single conversational panel. This is a real interaction sequence showing Aeris moving from question to diagnosis to recommended action.
Turn 01

Turn 02

Turn 03

Turn 04

REFLECTION
01
On a previous project, Frank Ross came to me with the wrong solution, and I could push back with data. Climatix IC came to me with no solution at all. The hardest part of this project was not the design. It was deciding what to design and in what order.
02
The partner team had two patents on predictive failure detection. But an engineer who cannot interpret what the platform is telling them cannot act on a prediction. Trust has to be earned before capability can be expanded. That sequencing was not a technical constraint. It was a design principle.
03
In a consumer app a cautious AI feels safe. In an operational environment where equipment failure costs thousands per hour, a cautious AI is useless. Finding the line between confident enough to be trusted and honest enough to keep the engineer in the loop was the most nuanced design problem in this project.
04
The goal was never to make AI visible. It was to make the engineer's job easier. When Aeris works well, the engineer does not think about the AI. They think about the alarm they diagnosed in 2 minutes instead of 45. The AI disappears into the outcome. That is the standard I held every design decision to.