Phygrid – Analytics Dashboard 2.0 (AI Narratives)



(About)
Client:
PHYGRID
Industry:
SaaS Platform
Role:
UX/UI Designer, User Researcher
Duration:
3 weeks
(Process)
(01)
Overview
As both designer and researcher, I developed a new model for decision-oriented dashboards in which structure and storytelling worked hand in hand. Each column represented a part of the customer journey, from activity and ticket flow to service efficiency and customer satisfaction, while each row defined a level of perspective, from the high-level overview down to staff-level detail. This let users explore patterns both vertically (system-wide trends) and horizontally (within one KPI stream), with AI-generated insights summarizing the most critical changes.

(02)
Challenge
Previous dashboards often failed at what they were meant to do: communicate insights at a glance. They were data-heavy, visually inconsistent, and cognitively demanding, so users spent more time interpreting charts than acting on them. The new design had to:
- Provide a clear story flow, not just numbers
- Adapt to different decision-making contexts
- Integrate AI summarization that highlighted anomalies, alerts, and recommendations
- Support custom thresholds and KPIs for each tenant
- Offer quick export and reporting (PDF, Excel, JSON) for cross-team use
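To make the per-tenant requirement concrete, the custom thresholds, export options, and AI toggle could be modeled as a small configuration object. The sketch below is illustrative only; all names (`KpiThreshold`, `TenantDashboardConfig`, the field names) are my assumptions, not Phygrid's actual schema.

```typescript
// Hypothetical per-tenant dashboard config (all names are illustrative).
interface KpiThreshold {
  kpi: string;        // e.g. "avgWaitTimeMin"
  warnAbove: number;  // value that triggers a warning highlight
  alertAbove: number; // value that triggers an alert and an AI callout
}

interface TenantDashboardConfig {
  tenantId: string;
  thresholds: KpiThreshold[];
  exportFormats: Array<"pdf" | "xlsx" | "json">;
  aiExplanations: boolean; // users could toggle AI explanations on/off
}

// Classify a measured KPI value against a tenant's threshold.
function classify(value: number, t: KpiThreshold): "ok" | "warn" | "alert" {
  if (value > t.alertAbove) return "alert";
  if (value > t.warnAbove) return "warn";
  return "ok";
}

const config: TenantDashboardConfig = {
  tenantId: "tenant-042",
  thresholds: [{ kpi: "avgWaitTimeMin", warnAbove: 8, alertAbove: 12 }],
  exportFormats: ["pdf", "xlsx", "json"],
  aiExplanations: true,
};

console.log(classify(10, config.thresholds[0])); // "warn"
```

Keeping thresholds as data, rather than hard-coded rules, is what lets each tenant tune when a KPI flips from "fine" to "needs attention".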

(03)
Process
I started by mapping how tenants interacted with analytics today: when, why, and what type of data triggered action. From there, I created a narrative grid with seven perspectives.

Rows (user journeys):
- Core Performance — the customer journey (total tickets, waiting time, serving time, satisfaction)
- System Efficiency — how well the system manages load (footfall, transfers, queue performance, customer feedback)
- Ticket & Staff Performance — detailed ticket flow and operator performance (ticket type, ticket status, staff performance, service completion)

Columns (insight themes):
- Activity & Volume — traffic and demand signals
- Ticket Flow & Progress — how items move through the system
- Service Efficiency — execution quality and throughput
- Customer Experience & Results — outcomes and sentiment

AI summaries were placed at the end of each column, offering contextual micro-insights (e.g., “Wait time increased by 12% — consider redistributing staff during peak hours”). Users could toggle AI explanations, customize KPI thresholds, and export summarized reports.
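One way to picture the narrative grid above is as a tiny data model: three row perspectives crossed with four column themes, with an AI micro-insight closing each column. This is a minimal sketch under my own naming assumptions, not Phygrid's implementation; `summarize` simply mimics the "Wait time increased by 12%" style of insight.

```typescript
// Illustrative 3x4 narrative-grid model (names are assumptions).
const rows = [
  "Core Performance",
  "System Efficiency",
  "Ticket & Staff Performance",
] as const;

const columns = [
  "Activity & Volume",
  "Ticket Flow & Progress",
  "Service Efficiency",
  "Customer Experience & Results",
] as const;

type Row = (typeof rows)[number];
type Column = (typeof columns)[number];

interface GridCell {
  row: Row;
  column: Column;
  kpis: string[]; // metrics rendered in this cell
}

interface ColumnSummary {
  column: Column;
  insight: string; // AI-generated micro-insight shown at the column's end
}

// Turn a percentage change into a column-closing micro-insight.
function summarize(column: Column, metric: string, deltaPct: number): ColumnSummary {
  const direction = deltaPct >= 0 ? "increased" : "decreased";
  return { column, insight: `${metric} ${direction} by ${Math.abs(deltaPct)}%` };
}

console.log(summarize("Service Efficiency", "Wait time", 12).insight);
// "Wait time increased by 12%"
```

Modeling rows and columns as fixed axes is what makes the vertical/horizontal reading described above navigable: every cell has exactly one journey-level and one theme coordinate.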

(04)
Result & Conclusion
The prototype received strong internal feedback for its clarity, logic, and adaptability. Stakeholders noted how the column–row storytelling made performance tracking intuitive, especially for teams without a data-analysis background. AI narratives added a “human layer” to analytics, bridging the gap between dashboards and actionable decision-making. Although the project never went live, it established a new visual and structural direction for Phygrid’s analytics ecosystem and became a blueprint for future tenant dashboards emphasizing narrative, AI, and clarity.

Download full report

