LearningPulse
Operationalizing trust in AI-assisted educator workflows.
At LearningPulse (NSF-funded), I partnered with engineering to translate LLM + spaCy analysis outputs into clear, testable UI states and UI-ready data structures, so insights rendered consistently and stayed explainable. I also owned iteration on the website + demo experience, helping drive 6 NJ district pilot signups, 3 conference invitations (NJ + Alaska), and an 84.6% demo completion rate.
Case Study Context
Role: UX Developer / Product Analyst
Industry: GenAI EdTech
Timeline: 5 months (Sept. 2025 - Jan. 2026)
Tools: Vue, TypeScript, GenAI APIs
The Challenge
Educators need AI insights that feel clear, predictable, and explainable, especially when inputs are messy and AI outputs can vary significantly.
What is LearningPulse?
LearningPulse is a GenAI-assisted EdTech product funded by the NSF that helps educators review qualitative student work such as documents and writing samples. The platform generates structured insights to support reflection and instructional decision-making.
Shipping at early-stage velocity meant working directly in Vue and TypeScript, using selective high-fidelity design only where needed to establish patterns and ensure consistency across the platform.
Constraints

Early-stage product velocity
Shipped directly in Vue/TypeScript with selective hi-fi design (not every screen)

Unstructured inputs
Inputs are inconsistent in format, completeness, and quality

Variable AI outputs
AI outputs can vary in length and structure → UI needs stable rendering rules

First-run experience clarity
Must reduce "what do I do next?" confusion for new users

Edge case handling
Must handle missing docs, partial uploads, and no results without breaking trust
Reference Implementation
One hi-fi screen (light + dark) used as reference implementation
Defining the Data → UI Contract
AI output → renderable UI

Unstructured inputs
Student work documents uploaded by educators (varied format and quality)

UI-ready insight structure (sketched in TypeScript below)
sections[] (ordered)
section_title
summary (plain language)
evidence[] with document_id
status (complete / partial / no_signal)
notes_for_ui (optional)

Consistent UI modules
Predictable cards/blocks in the interface so teachers can scan and drill down
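To make this contract concrete, here is a minimal TypeScript sketch of the payload shape. Field names mirror the structure listed above; the interface names (InsightSection, InsightPayload) and exact types are illustrative assumptions, not the production schema.

// Illustrative sketch of the UI-ready insight contract; names and types
// mirror the field list above but are assumptions, not production code.
type SectionStatus = "complete" | "partial" | "no_signal";

interface EvidenceRef {
  document_id: string; // ties each claim back to a specific uploaded document
}

interface InsightSection {
  section_title: string;
  summary: string;          // plain language, scannable by educators
  status: SectionStatus;    // drives which UI module renders
  evidence: EvidenceRef[];  // may be empty when status is "no_signal"
  notes_for_ui?: string;    // optional rendering hints
}

interface InsightPayload {
  sections: InsightSection[]; // ordered; the UI renders top to bottom
}

Keeping status a closed union means the front end can guarantee a rendering rule for every value the analysis can emit.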

What I owned
Partnered with engineering to translate LLM + spaCy outputs into clear, testable UI states and UI-ready data structures
Defined how insight sections should be structured so the front end can reliably parse and render
Clarified empty/error/partial states so educators always had an understandable next step

Why it matters
Prevents UI ambiguity when outputs change
Makes insights scannable and explainable
Enables consistent iteration without redesigning every screen
Structure in practice
How the data contract translates to UI components
Data Structure (Input)
section:
  title: "Writing Quality"
  summary: "Strong evidence..."
  status: "complete"
  evidence: [doc_1, doc_3]

UI Component (Output)
Writing Quality [Complete]
Strong evidence...
Doc 1 · Doc 3
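As a hypothetical sketch of that mapping, the component below renders one InsightSection as a card using Vue 3's render API; the InsightCard name, class names, and import path are assumptions, not the production code.

// Hypothetical card component: markup is a pure function of the contract,
// so variable AI output cannot produce unexpected layouts.
import { defineComponent, h, type PropType } from "vue";
import type { InsightSection } from "./insight-contract"; // the sketch above; path illustrative

export const InsightCard = defineComponent({
  props: {
    section: { type: Object as PropType<InsightSection>, required: true },
  },
  setup(props) {
    return () =>
      h("article", { class: "insight-card" }, [
        h("h3", props.section.section_title),            // "Writing Quality"
        h("span", { class: `badge--${props.section.status}` },
          props.section.status),                         // "complete"
        h("p", props.section.summary),                   // "Strong evidence..."
        ...props.section.evidence.map((ref) =>
          h("span", { class: "doc-chip" }, `Doc ${ref.document_id}`)),
      ]);
  },
});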
Edge state clarity
Every possible state has a clear UI representation

Complete
All evidence found, full summary generated

Partial
Some evidence found, limited summary available

No signal
No relevant evidence detected in documents

Decision enabled
Educators can trust the interface to handle variable AI outputs gracefully, reducing
cognitive load and enabling faster pattern recognition across student work
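One way to keep these states honest in code is an exhaustive status-to-copy map. The sketch below is an assumption about implementation; the wording is drawn from the state descriptions above.

// Hypothetical status-to-copy map; wording taken from the states above.
type SectionStatus = "complete" | "partial" | "no_signal";

const statusCopy: Record<SectionStatus, { label: string; hint: string }> = {
  complete:  { label: "Complete",  hint: "All evidence found, full summary generated" },
  partial:   { label: "Partial",   hint: "Some evidence found, limited summary available" },
  no_signal: { label: "No signal", hint: "No relevant evidence detected in documents" },
};

// Record<SectionStatus, ...> is exhaustive by construction: adding a new
// status to the union will not compile until its UI copy is defined.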
Designing for Resilience: State-Driven UI
Why This Matters:
I treated edge cases as first-class UX requirements because AI workflows rarely follow the "happy path."
The goal was that teachers always understand what happened, what it means, and what to do next.
Design Principles & Implementation Notes

State Management
Vue composables track upload status, analysis progress, and error states (see the sketch after this list)

Messaging Strategy
All copy emphasizes user value and next steps, never technical jargon

Component Library
Reusable EmptyState, ProgressIndicator, and ErrorBoundary components

Testing Coverage
UI states defined as testable contracts so each edge state could be verified

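A minimal sketch of the state-management composable mentioned above, assuming the six states referenced in the outcome; the useAnalysis name, state names, and fail helper are hypothetical.

// Hypothetical composable tracking upload status, analysis progress,
// and error state in one place; all names are illustrative.
import { ref, computed } from "vue";

type AnalysisState =
  | "empty" | "uploading" | "analyzing"
  | "complete" | "partial" | "error";

export function useAnalysis() {
  const state = ref<AnalysisState>("empty");      // first-run landing state
  const progress = ref(0);                        // 0-100, feeds ProgressIndicator
  const errorMessage = ref<string | null>(null);  // plain-language, next-step copy

  const isBusy = computed(
    () => state.value === "uploading" || state.value === "analyzing",
  );

  function fail(message: string) {
    state.value = "error";
    errorMessage.value = message;
  }

  return { state, progress, errorMessage, isBusy, fail };
}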
Outcome
By designing for all six states from the start, we reduced "what do I do next?"
confusion and built trust through transparency. Teachers could see exactly where
they were in the process and what to expect.
Information Hierarchy
From "Next Step" → Insight Review
New Analysis (Step-based flow)

Design Process Note
Most screens were designed as mid-fi system blueprints and implemented directly in code to move quickly; one hi-fi screen (light/dark) served as the visual reference.
Reusable UI Patterns
Reusable UI patterns to keep the experience consistent across screens
Key Design Decisions

Step-based Navigation
Breaking the workflow into clear steps reduces cognitive load for first-time users and makes the "next action" obvious at every stage.

Educator-Friendly Language
Avoided technical jargon and data terminology, using terms like "themes," "strengths," and "areas for growth" that resonate with teaching practice.

Progressive Disclosure
Insights are presented as scannable summaries first, with detailed evidence and supporting data available through drill-down interactions (see the sketch after this list).

Consistent UI Patterns
Established reusable components and interaction patterns that work across all screens, ensuring predictability and reducing implementation time.
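As a sketch of the progressive-disclosure decision above, drill-down state can live in one tiny composable so every card behaves identically; the useDrillDown name is hypothetical.

// Hypothetical drill-down state: summary renders by default,
// evidence and supporting data appear only on request.
import { ref } from "vue";

export function useDrillDown() {
  const expanded = ref(false); // scannable summary first
  const toggle = () => { expanded.value = !expanded.value; };
  return { expanded, toggle };
}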
Market Validation: Website + Demo Narrative
Outcomes from website + demo funnel work

6
New NJ District Pilot Signups
From improved website positioning

3
Conference Invitations
NJ + Alaska education conferences

84.6%
Demo Completion Rate
11 of 13 completed full demo flow
Metrics reflect outcomes from the website + demo funnel work; I contributed to and helped drive these results as part of the team.
What I Did

Owned website + demo experience iteration
Iterated on the website and demo experience in Webflow, refining the product narrative to better communicate value and use cases to prospective pilot districts.

Improved clarity of positioning and flow
Refined messaging and navigation flow so prospects could quickly understand "what LearningPulse does" and how it supports their instructional goals.

Supported pilot momentum and word-of-mouth interest
Made the demo experience easier to follow and more compelling, contributing to pilot signups and conference invitations through clearer value communication.
Before / After Messaging
Narrative direction examples (final copy varied by audience)
Before
Generic, tech-first positioning that didn't communicate clear educational value
After
"Evidence-based insights for student growth"
Outcome-focused messaging that resonates with educators' instructional goals
Impact: Clearer positioning helped prospects understand product value faster, reducing friction in the signup and demo flow.
Outcomes & Reflection
Learnings from data-to-decision products in regulated + GenAI contexts

What Worked

System-level thinking
Stable output structure → predictable UI rendering

Edge-state clarity
Improved trust in AI-assisted workflows

Reusable patterns
Sped up iteration in a startup environment

What I'd Improve Next

Product instrumentation
Lightweight tracking aligned with value signals, not click counting

First-run onboarding
Intentional landing state + quick role-based guidance

Self-serve support
Help center patterns to reduce support bottlenecks

What I Can Demo Live (Sanitized)
Interactive examples and documentation available during interview

Output Schema → Rendering
How structured output enables predictable UI states
No student content shown

Password Reset Flow Spec
Complete requirements + UI state documentation
Production-ready specs

AI Workflow State Matrix
Edge case handling for GenAI workflows
Sanitized examples only

Webflow Page Iteration
Before/after narrative structure improvements
Sanitized content versions

Approach to Portfolio Work
All examples shown are truthful representations of completed work. Metrics and outcomes reflect actual project results without embellishment. Student content and proprietary information have been sanitized or replaced with representative examples.
Verification
References available upon request
Evidence
GitHub + live demos during interview
