Enterprise · AI Design · Siemens · 0 to 1

Designing AI That Gives Attention Back

How I took an open-ended AI brief, partnered across teams, and designed a phased intelligence roadmap for a complex enterprise platform serving 8 user types across thousands of facilities.

Company

Siemens

Platform

Climatix IC 2.0

Role

Solo Product Designer

Status

Live in pilot

THE BRIEF

We want AI. Figure out what that means.

The PM came to me with an open brief. Siemens wanted AI integrated into Climatix IC and its ecosystem products. No defined scope. No specific feature. Just a direction and a question: what should this look like?

At the same time, I was brought in to collaborate with a partner team that had filed two patents on predictive analysis for smart infrastructure. Their models could detect a potential equipment failure 1 to 2 hours before it happened. The technical capability existed. The design challenge was figuring out where to start and why.

I did not open Figma for three weeks.

THE PROBLEM

The platform had all the data. None of the intelligence.

Climatix IC served eight distinct user types across thousands of facilities. But the interface treated them all the same. One monolithic system. One dashboard. Access permissions masquerading as role-based design.

Service Manager

Responsible for health of all plants across a tenant portfolio. Spent the majority of their time manually checking every device, every alarm, one by one.

Service Engineer

Responsible for diagnosing and fixing issues on specific plants. Relied on colleagues and training manuals because the platform gave data without context and alerts without guidance.

With 123 active alarms on any given day, manual checking was not a workflow. It was a full-time job inside their actual job.

THE DISCOVERY

Three weeks before a single screen.

I collaborated with the partner team to explore where their predictive models could create value inside Climatix IC. Their capability was precise: detect a potential equipment failure 1 to 2 hours before it happens. That is not a notification feature. That is a decision support window.

I researched AI applications across enterprise platforms, industrial monitoring tools, and operational intelligence products. From three weeks of exploration I built directional concepts to initiate a stakeholder conversation about priorities, possibilities, and feasibility.

If AI can absorb 30 to 40% of that manual monitoring work, Service Managers stop reacting to failures and start preventing them.

That became the hypothesis the entire AI roadmap was built around.

THE PRIORITISATION

From infinite AI possibilities to two use cases that mattered.

After stakeholder discussions and feasibility conversations, we had a long list of what AI could do. We shipped none of it. Instead we asked: what does a Service Manager look at first every morning? What can they not afford to miss?

AI Recommended Actions

Surface the three most critical actionable insights at the top of the dashboard. Not 123 alarms ranked by severity. Three specific recommendations in plain language with a clear next action. A Service Manager who sees three clear actions can move immediately.
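The selection logic can be sketched roughly. Assuming each alarm carries a severity score and an actionability flag (hypothetical fields, not the platform's real schema), a minimal top-three pick might look like:

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    device: str
    message: str     # plain-language description
    severity: int    # 1 (low) to 5 (critical), an illustrative scale
    actionable: bool # does the alarm map to a known next action?

def recommended_actions(alarms, limit=3):
    """Surface the few alarms worth acting on first: actionable ones, most severe first."""
    candidates = [a for a in alarms if a.actionable]
    candidates.sort(key=lambda a: a.severity, reverse=True)
    return candidates[:limit]

alarms = [
    Alarm("AHU-1", "Filter pressure drop rising", 4, True),
    Alarm("Chiller-2", "Refrigerant leak suspected", 5, True),
    Alarm("Pump-7", "Intermittent comms fault", 2, False),
    Alarm("Boiler-3", "Burner lockout", 5, True),
    Alarm("VAV-12", "Supply temp sensor drift", 3, True),
]
top = recommended_actions(alarms)
# Three clear actions surface; the non-actionable noise stays below the fold.
```

The real ranking would draw on the partner team's predictive models; the point of the sketch is the shape of the output, three items with a clear next step, not 123 ranked rows.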

Most Frequent Alarm Causes

Surface pattern intelligence across the portfolio as a frequency versus impact matrix. Not what is alarming right now. What keeps alarming, and how critical is it when it does. Refrigerant leaks sitting top right tells a Service Manager this is a systemic problem requiring a portfolio-wide response, not just a single technician dispatch.
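Conceptually, the matrix places each alarm cause by how often it recurs and how critical it is when it does. A minimal sketch of that bucketing, with illustrative data and threshold values that are assumptions, not the platform's:

```python
from collections import Counter

# (cause, criticality 1-5) per alarm event -- illustrative data
events = [
    ("Refrigerant leak", 5), ("Refrigerant leak", 5), ("Refrigerant leak", 4),
    ("Filter clog", 2), ("Filter clog", 2), ("Filter clog", 3), ("Filter clog", 2),
    ("Sensor drift", 1),
]

def frequency_impact(events, freq_threshold=3, impact_threshold=3.0):
    """Place each alarm cause in a quadrant of the frequency-vs-impact matrix."""
    freq = Counter(cause for cause, _ in events)
    crits = {}
    for cause, crit in events:
        crits.setdefault(cause, []).append(crit)
    matrix = {}
    for cause, count in freq.items():
        avg_impact = sum(crits[cause]) / len(crits[cause])
        high_freq = count >= freq_threshold
        high_impact = avg_impact >= impact_threshold
        if high_freq and high_impact:
            matrix[cause] = "systemic: portfolio-wide response"
        elif high_impact:
            matrix[cause] = "acute: dispatch a technician"
        elif high_freq:
            matrix[cause] = "chronic: schedule preventive work"
        else:
            matrix[cause] = "monitor"
    return matrix

matrix = frequency_impact(events)
# Refrigerant leaks land top right: frequent and critical, so systemic.
```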

AI Recommended Actions and Most Frequent Alarm Causes surfaced on the home dashboard. Portfolio-level intelligence in a single view.

THE INTELLIGENCE LAYER

From 45 minutes of investigation to under 2 minutes.

Before this existed, diagnosing a compressor power spike required opening multiple tabs, cross-referencing reports, manually comparing historical data, and relying on experience to piece together a diagnosis. Average time: 45 minutes. Accuracy: dependent on the engineer's experience level.

Predictions tab

Three columns: anomaly detected, possible cause, recommended action. The engineer does not need to know why the compressor spiked. The platform tells them: sudden valve closure causing load surge. They do not need to figure out what to do. The platform tells them: inspect and recalibrate EEV valve motor.

The full diagnostic workflow compressed into a scannable three-column table. No training manual. No colleague. No 45 minutes of manual investigation.
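The row shape behind that table can be sketched as a simple lookup. The rules mapping here is hypothetical; in the platform, the cause and action come from the partner team's predictive models:

```python
from typing import NamedTuple

class PredictionRow(NamedTuple):
    anomaly: str
    possible_cause: str
    recommended_action: str

# Hypothetical rules table keyed by detected anomaly.
RULES = {
    "compressor_power_spike": PredictionRow(
        anomaly="Compressor power spike detected",
        possible_cause="Sudden valve closure causing load surge",
        recommended_action="Inspect and recalibrate EEV valve motor",
    ),
}

def predictions_row(anomaly_key):
    """Resolve a detected anomaly into the three columns the engineer scans."""
    return RULES[anomaly_key]

row = predictions_row("compressor_power_spike")
```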

Insights tab

Three layers of intelligence in one panel. Past Similar Events shows the historical pattern: the same fault occurred 3 times in 6 months, with an average time to failure of 3.5 days after the spike. Impact quantifies the consequence: 12% increased energy usage during the anomaly window. AI Tip goes beyond the fix: consider a predictive EEV control algorithm to avoid future overshoot events.

Together these three layers answer every question a Service Engineer has: what is happening, how serious is it, and how do I make sure it does not happen again.

Pattern memory, consequence framing, and prescriptive recommendation in a single right panel. The platform does the cognitive work so the engineer can do the human work.

THE ROADMAP

From static flows to agentic intelligence: Aeris.

Predictive intelligence surfaces what needs attention. But what happens when a Service Engineer does not understand what the platform is telling them? The answer still lived in the training manual. Aeris was designed to change that.

I spent one week researching AI interaction patterns across enterprise tools, developer products, and emerging agentic systems. The output was a phased roadmap for Aeris, Climatix IC's AI assistant.

You cannot give users more AI capability before they trust the AI they already have. Each phase had to earn the next one.

Phase 01

Understand

RAG-based document assistant. Engineers ask Aeris anything about the platform, a device, an error code, or a setup process and get plain language guidance with step-by-step instructions. The knowledge the platform assumed engineers had is now available on demand.

Live in pilot
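The Phase 01 pattern pairs retrieval over the platform's documentation with a language model. A toy sketch of the retrieval half, using keyword overlap as a stand-in for real embeddings, with made-up manual snippets:

```python
import re

def tokens(text):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, snippets, k=2):
    """Rank manual snippets by keyword overlap with the question."""
    q = tokens(query)
    return sorted(snippets, key=lambda s: len(q & tokens(s)), reverse=True)[:k]

manual = [
    "Error E42: sensor supply voltage out of range. Check the 24V supply wiring.",
    "EEV valve setup: set the stepper range before commissioning.",
    "Alarm history export: use the reports tab and select a date range.",
]
grounding = retrieve("What does error E42 voltage mean?", manual)
# The top snippets are passed to the model as grounding context for the answer.
```

A production RAG pipeline would chunk the manuals, embed them, and retrieve by vector similarity; the design-relevant part is that the answer is grounded in platform documentation rather than generated from nothing.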

Phase 02

Anticipate

Contextual intelligence surfaced directly inside components. Pattern recognition, predicted failures, anomaly detection with possible causes and recommended actions. The platform does not wait for the engineer to come looking.

Partially in closed testing

Phase 03

Interact

Inline conversational assistance without leaving the workflow. Engineers comment directly on a component, ask for an explanation, request a fix, or get contextual guidance from within the app UI. The assistant becomes part of the working surface.

Designed, in pipeline

Phase 04

Execute

Plan-first agentic execution. Aeris proposes a course of action, the engineer reviews and approves, and the platform executes. Think Cursor or Claude Code but for Climatix IC's operational environment. The engineer stays in control. The platform does the work.

Designed, in pipeline
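The Phase 04 control flow, propose, review, approve, execute, can be sketched in a few lines. The plan contents and step runner here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    goal: str
    steps: list
    approved: bool = False  # nothing runs until the engineer flips this

def propose(goal):
    """Aeris drafts the steps; proposing has no side effects."""
    return Plan(goal, steps=["Run diagnostics on filter", "Schedule maintenance window"])

def execute(plan, run_step):
    """Execution is gated on explicit engineer approval."""
    if not plan.approved:
        raise PermissionError("Engineer approval required before execution")
    return [run_step(step) for step in plan.steps]

plan = propose("Clear pressure-drop alarm on AHU-1")
# Engineer reviews the drafted steps, then explicitly approves:
plan.approved = True
results = execute(plan, run_step=lambda step: f"done: {step}")
```

The gate is the design decision: the assistant can draft anything, but the platform refuses to act until the human has reviewed the plan.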

DESIGNING AERIS

The assistant that knows where you are.

Placement: always one click away

Aeris sits in the global header. Persistent across every screen. An AI assistant buried in a menu gets used once and forgotten. An assistant in the navigation layer becomes part of the workflow.

Interaction model: a panel, not a page

When triggered, Aeris opens as a right-side panel overlay. The platform stays visible behind it. The engineer does not lose their context to ask a question. They ask, get an answer, and return to the exact screen they were on.

Contextual prompts: meeting engineers where their attention is

Three suggested prompts on the opening screen: "Show me critical alarms", "Why is this alarm triggered?", "Suggest troubleshooting steps." These are the three most common questions a Service Engineer has when they open the alarms dashboard. Aeris does not make the engineer figure out what to ask.

Personality: confident, not cautious

Every word Aeris says was designed to communicate confidence. Specific, direct, actionable guidance based on platform data and the RAG knowledge base. The disclaimer at the bottom, "AI-generated content may be incorrect," is not a legal footnote. It is a design decision that keeps the engineer in the decision loop without undermining Aeris's authority.

Aeris opens as a persistent right panel from the global header. Contextual prompts meet the engineer at their most common questions without making them think about what to ask.

AERIS IN ACTION

A complete diagnostic conversation in four turns.

What previously required 45 minutes of manual investigation across multiple tabs now happens inside a single conversational panel. This is a real interaction sequence showing Aeris moving from question to diagnosis to recommended action.

Turn 01

Engineer asks "Why is this alarm triggered?" Aeris surfaces the three most recent alarms for disambiguation. Context-aware, not generic.

Turn 02

Engineer selects Pressure Drop. Aeris returns structured context: alarm type, location, event state, three ranked possible causes, and three action options.

Turn 03

Engineer taps Run Diagnostics. Aeris runs the system check in the background. The platform does the investigative work that previously required manual cross-referencing.

Turn 04

Diagnostics complete. Blockage found in filter. Aeris surfaces a specific recommendation with two clear paths: Schedule Maintenance or See Troubleshooting Steps.

REFLECTION

What designing AI for high-stakes environments actually taught me.

01

An open brief is harder than a wrong brief

On an earlier project, Frank Ross came to me with the wrong solution, and I could push back with data. Climatix IC came to me with no solution at all. The hardest part of this project was not the design. It was deciding what to design, and in what order.

02

AI capability without comprehension is just faster confusion

The partner team had two patents on predictive failure detection. But an engineer who cannot interpret what the platform is telling them cannot act on a prediction. Trust has to be earned before capability can be expanded. That sequencing was not a technical constraint. It was a design principle.

03

Confidence is a design decision

In a consumer app a cautious AI feels safe. In an operational environment where equipment failure costs thousands per hour, a cautious AI is useless. Finding the line between confident enough to be trusted and honest enough to keep the engineer in the loop was the most nuanced design problem in this project.

04

The best AI design is invisible infrastructure

The goal was never to make AI visible. It was to make the engineer's job easier. When Aeris works well, the engineer does not think about the AI. They think about the alarm they diagnosed in 2 minutes instead of 45. The AI disappears into the outcome. That is the standard I held every design decision to.