Project Detail

Deeds

Wellbeing journal app with AI insights

Challenge: Reliable UX around AI latency and failures

React Native · Next.js API · PostgreSQL · Hugging Face

Deeds app preview
Demo

Short walkthrough of writing an entry and receiving AI-generated wellbeing feedback.

Problem

Most journaling apps simply store text without providing meaningful feedback. While modern AI models can analyze text and generate insights, integrating them into a product introduces challenges such as latency, inconsistency, and failure handling.

What I Built

I built a journaling product that turns raw entries into structured, visual insight:

  • Daily journaling interface with activity tags
  • Weekly calendar view for tracking entries
  • AI-generated wellbeing scores across multiple life areas
  • Dynamic visual feedback through companion characters
  • Personalized summaries based on journal content
Technical Highlight

Trustworthy UX around async LLM analysis

The core challenge was designing a reliable user experience around LLM-based analysis. The system depends on an external AI pipeline: user input is processed asynchronously, and the results drive UI updates. The design goal was that users always see consistent, trustworthy feedback despite the variability of AI systems.

I implemented a network-dependent processing flow:

  • Journal entries are analyzed before being persisted
  • The UI explicitly handles loading and error states
  • No fallback or fake data is shown, to avoid misleading users
  • An LLM + RAG pipeline performs the text analysis
  • Structured scores are extracted from unstructured input
  • Results map directly to UI components
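
A minimal sketch of this analysis-first flow, assuming illustrative names (`saveEntry`, `analyze`, `persist`) and an invented three-field score shape rather than the app's actual API:

```typescript
type Scores = { mood: number; energy: number; social: number };

type SaveResult =
  | { status: "saved"; scores: Scores }
  | { status: "error"; message: string };

// Analysis-first persistence: the entry is stored only after the
// analysis step succeeds. On failure nothing is written, and the
// caller gets an explicit error state instead of fake fallback data.
async function saveEntry(
  text: string,
  analyze: (text: string) => Promise<Scores>,   // external LLM call
  persist: (text: string, scores: Scores) => Promise<void>,
): Promise<SaveResult> {
  try {
    const scores = await analyze(text);
    await persist(text, scores); // only reached when analysis succeeded
    return { status: "saved", scores };
  } catch (err) {
    return {
      status: "error",
      message: err instanceof Error ? err.message : String(err),
    };
  }
}
```

Keeping persistence behind the analysis call is what makes the "no fake data" guarantee cheap to enforce: there is never a stored entry without real scores attached.
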
Architecture & Decisions

End-to-end flow: user entry → Next.js API → LLM analysis (Hugging Face) → structured output → UI update, with local journal storage on the client and PostgreSQL via Prisma for persistence.
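
The "structured output" step in this flow can be sketched as defensive parsing of the model's reply; the JSON shape, field names, and 0–10 score range here are assumptions for illustration, not the app's actual contract:

```typescript
type WellbeingScores = { mood: number; energy: number; social: number };

// Hypothetical score extraction: the LLM is prompted to reply with
// JSON, and anything malformed or out of range is rejected outright
// rather than replaced with fabricated values.
function parseScores(raw: string): WellbeingScores | null {
  try {
    const data = JSON.parse(raw);
    const keys = ["mood", "energy", "social"] as const;
    for (const k of keys) {
      const v = (data as Record<string, unknown>)[k];
      if (typeof v !== "number" || v < 0 || v > 10) return null; // reject, don't guess
    }
    return { mood: data.mood, energy: data.energy, social: data.social };
  } catch {
    return null; // not valid JSON → explicit failure, no fallback data
  }
}
```

A `null` result here is what ultimately surfaces as a visible error state in the UI.
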

Client

React Native app with local journal storage.
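
A minimal sketch of the on-device journal store; an in-memory `Map` stands in for React Native's AsyncStorage so the example is self-contained, and the entry shape is illustrative:

```typescript
type JournalEntry = { id: string; date: string; text: string; tags: string[] };

// Local journal store. In the app this would sit on top of a
// key-value store such as AsyncStorage; entries are serialized to
// JSON strings, matching how such stores persist values.
class JournalStore {
  private store = new Map<string, string>();

  save(entry: JournalEntry): void {
    this.store.set(entry.id, JSON.stringify(entry));
  }

  load(id: string): JournalEntry | null {
    const raw = this.store.get(id);
    return raw ? (JSON.parse(raw) as JournalEntry) : null;
  }

  // e.g. backing the weekly calendar view
  byDate(date: string): JournalEntry[] {
    return [...this.store.values()]
      .map((raw) => JSON.parse(raw) as JournalEntry)
      .filter((entry) => entry.date === date);
  }
}
```
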

Server

Next.js API handling analysis and business logic.

Data & external

PostgreSQL via Prisma for persistence; a Hugging Face-hosted LLM performs the journal analysis.

Key Engineering Decisions

Analysis-first persistence

Prioritized data consistency; entries are not stored until analysis completes successfully.

Explicit error handling

Replaced silent failures with visible error states, trading polish in edge cases for user trust.
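
One way to make these states explicit is a discriminated union, so "analyzing" and "failed" are first-class UI states rather than silent fallbacks; the state and event names here are hypothetical:

```typescript
// Hypothetical client-side states for an entry under analysis.
type EntryState =
  | { kind: "idle" }
  | { kind: "analyzing" }
  | { kind: "done"; scores: Record<string, number> }
  | { kind: "failed"; message: string };

type EntryEvent =
  | { type: "submit" }
  | { type: "success"; scores: Record<string, number> }
  | { type: "failure"; message: string };

function reduce(state: EntryState, event: EntryEvent): EntryState {
  switch (event.type) {
    case "submit":
      return { kind: "analyzing" };
    case "success":
      // Only a real result moves the UI to "done"; no fabricated scores.
      return state.kind === "analyzing" ? { kind: "done", scores: event.scores } : state;
    case "failure":
      // Failures surface as an explicit, renderable state, never hidden.
      return state.kind === "analyzing" ? { kind: "failed", message: event.message } : state;
  }
}
```

Because every state has a `kind`, the rendering layer is forced to handle all four cases; there is no code path where a failure is silently ignored.
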

Local storage vs server-side analysis

Kept the journal on-device for fast capture while running AI work on the server, adding a network dependency.

Trade-offs & Next Steps

Current Limitations

  • Saving entries requires network connectivity.
  • No offline-first support for AI processing.
  • Analysis latency adds a noticeable delay before an entry is saved.

Planned Improvements

  • Add offline draft mode with background sync.
  • Improve retrieval with embedding-based similarity.
  • Add explainability for AI-generated scores.
  • Improve retry handling and timeout UX.
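
As a rough sketch of what the retry-and-timeout improvement could look like (not current behavior; names and defaults are illustrative):

```typescript
// Bound a single analysis call by a timeout, cleaning up the timer
// so it doesn't linger after the race settles.
async function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("timeout")), ms);
  });
  try {
    return await Promise.race([p, timeout]);
  } finally {
    clearTimeout(timer);
  }
}

// Retry the call a few times before surfacing an explicit error,
// consistent with the "no silent failures" approach above.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  timeoutMs = 5000,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await withTimeout(fn(), timeoutMs);
    } catch (err) {
      lastError = err; // try again, or fall through to the visible error state
    }
  }
  throw lastError;
}
```
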