Case Study: LINE PM Assistant — AI-Powered Project Communication Intelligence Platform
Project Code: line-pm-assistant
Category: AI Agent / Communication Intelligence / Developer Tooling
Status: Active Development (MVP Sprint)
Industry: Software Development / Project Management Tooling
1. Project Name
LINE PM Assistant — An AI-powered communication intelligence platform that ingests LINE messaging conversations, integrates with GitHub work items, and uses Retrieval-Augmented Generation (RAG) with the Claude API to surface project context, generate reply drafts, and produce structured timeline summaries — transforming transient chat into a persistent, searchable project memory.
2. Core Technology Stack
| Layer | Technology |
|---|---|
| Backend Framework | Django 5.1, Python 3.12 |
| API Layer | Django REST Framework 3.15 |
| Database | PostgreSQL + pgvector 0.3.6 (vector similarity search) |
| AI Engine | Anthropic Claude API (claude-sonnet-4-6) |
| Embedding / RAG | pgvector (semantic chunking & retrieval) |
| Chat Integration | LINE Messaging API (Webhook ingestion) |
| Code Integration | GitHub API (work item sync) |
| Frontend | LIFF (LINE Front-end Framework) for in-chat UI |
| Server | Gunicorn |
| Hosting | Docker (djangoDev container) |
Custom Apps Built:
- projects — Project, Contact, ProjectMembership
- communication — Conversation, MessageEvent, OutboundDraft
- context_memory — ContextDocument, ContextChunk, ContextRetrievalLog (pgvector RAG)
- workitems — WorkItem, WorkItemSourceLink, ExternalLink, SyncLog
- integrations — GitHub sync services
- dashboard — LIFF UI, summary & timeline APIs
- core — claude_service.py (Claude API wrapper)
Key API Endpoints:
- POST /api/line/webhook — LINE OA webhook ingestion
- POST /api/context/retrieve — RAG context retrieval
- GET /api/dashboard/summary?project_id=<id> — AI-generated project summary
- GET /api/dashboard/timeline?project_id=<id>&q=<keyword> — Semantic timeline query
- POST /api/line/draft/approve — Human approval + send for AI-generated draft replies
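For orientation, here is a minimal client-side sketch of calling the context retrieval endpoint listed above. The path comes from this case study; the payload field names (`project_id`, `query`, `top_k`), the bearer-token auth scheme, and the response shape are illustrative assumptions, not a confirmed API contract.

```python
# Hypothetical client call to the RAG retrieval endpoint.
# Payload fields, auth scheme, and response shape are assumptions.
import requests

BASE_URL = "https://example.com"  # placeholder host

resp = requests.post(
    f"{BASE_URL}/api/context/retrieve",
    json={
        "project_id": 42,                       # assumed parameter name
        "query": "payment gateway scope change",
        "top_k": 8,                             # assumed: number of chunks to return
    },
    headers={"Authorization": "Bearer <token>"},  # assumed auth scheme
    timeout=10,
)
resp.raise_for_status()
for chunk in resp.json().get("chunks", []):       # assumed response shape
    print(chunk.get("score"), chunk.get("text", "")[:80])
```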
3. The Challenge (The Problem)
Software development teams in Taiwan — and globally — increasingly conduct project communication on LINE, a platform not designed for structured project management. This creates three acute problems:
- Ephemeral communication: Critical project decisions, client requirements changes, and technical agreements made in LINE group chats are buried within days of their creation. When a dispute arises, "what did we actually agree?" requires hours of manual scroll-archaeology. There is no searchable record.
- Context switching overhead: A project manager must simultaneously maintain LINE conversations, GitHub issues, and email threads — three separate contexts with no structured linkage. Synthesising a coherent project status requires manually aggregating from all three, a task that typically takes 30–60 minutes per weekly update.
- Response quality inconsistency: Non-technical project coordinators managing technical discussions on behalf of clients frequently produce replies that are technically imprecise or miss important implications buried in earlier conversation context. There is no tooling to help a coordinator give a high-quality, context-aware response without deep technical background.
4. The Solution (The Implementation)
Feature 1: LINE Webhook Ingestion & Persistent Message Store
All messages from integrated LINE Official Accounts and group chats are ingested via webhook into the communication app. MessageEvent records store the full message content, sender identity (via Contact), conversation context, and timestamp — creating a permanent, queryable record of all project communication. No message is lost.
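A minimal sketch of what the ingestion view could look like, assuming Django and the standard LINE webhook signature scheme (HMAC-SHA256 of the raw request body with the channel secret, base64-encoded, compared against the `X-Line-Signature` header). The `MessageEvent` model name comes from this case study; its fields and the `LINE_CHANNEL_SECRET` setting are illustrative assumptions.

```python
# Sketch of the LINE webhook ingestion view; field names are assumed.
import base64
import hashlib
import hmac
import json

from django.conf import settings
from django.http import HttpResponse, HttpResponseForbidden
from django.views.decorators.csrf import csrf_exempt

from communication.models import MessageEvent  # app named in this case study; fields assumed


@csrf_exempt
def line_webhook(request):
    # Verify the request really came from LINE before persisting anything.
    signature = request.headers.get("X-Line-Signature", "")
    expected = base64.b64encode(
        hmac.new(settings.LINE_CHANNEL_SECRET.encode(), request.body, hashlib.sha256).digest()
    ).decode()
    if not hmac.compare_digest(signature, expected):
        return HttpResponseForbidden("invalid signature")

    payload = json.loads(request.body)
    for event in payload.get("events", []):
        if event.get("type") != "message":
            continue
        # Persist the raw event; the actual MessageEvent fields may differ.
        MessageEvent.objects.create(
            line_message_id=event["message"].get("id"),
            text=event["message"].get("text", ""),
            source_id=event["source"].get("groupId") or event["source"].get("userId"),
            occurred_at_ms=event.get("timestamp"),
            raw_event=event,
        )
    return HttpResponse(status=200)
```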
Feature 2: pgvector RAG Context Memory
The context_memory app implements a full RAG pipeline:
1. Incoming messages and linked documents are chunked semantically.
2. Each chunk is embedded and stored as a vector in PostgreSQL via the pgvector extension.
3. When a query or draft request is initiated, the system performs cosine similarity search to retrieve the most contextually relevant chunks.
4. Retrieved context is assembled into a structured prompt for the Claude API.
This architecture allows the AI to answer questions about project history that span weeks or months of conversation — well beyond the context window of any single API call.
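As a concrete illustration of steps 2–3, the retrieval query could look like the sketch below, assuming the pgvector-python Django integration (`VectorField`, `CosineDistance`). The `ContextChunk` model name comes from this case study; its fields, the 1536-dimension size, and the `embed()` helper are assumptions, since the embedding model is not specified here.

```python
# Sketch of pgvector-backed retrieval; model fields and embed() are assumed.
from django.db import models
from pgvector.django import VectorField, CosineDistance


class ContextChunk(models.Model):
    document = models.ForeignKey("context_memory.ContextDocument", on_delete=models.CASCADE)
    text = models.TextField()
    embedding = VectorField(dimensions=1536)  # assumed embedding size


def retrieve_context(query_text: str, top_k: int = 8):
    query_vec = embed(query_text)  # hypothetical embedding helper
    return (
        ContextChunk.objects
        .annotate(distance=CosineDistance("embedding", query_vec))
        .order_by("distance")[:top_k]  # smaller cosine distance = more relevant
    )
```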
Feature 3: Claude-Powered Reply Draft Generation
When a project coordinator receives a LINE message requiring a response, they can trigger the draft generation pipeline: the system retrieves relevant conversation history and linked GitHub work items via RAG, constructs a context-rich prompt, calls claude-sonnet-4-6, and returns a draft reply. The draft is held in OutboundDraft with status=pending — a human coordinator reviews and approves before sending via POST /api/line/draft/approve. This human-in-the-loop design ensures AI output quality without removing human accountability.
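A minimal sketch of the draft-generation call, assuming the official `anthropic` Python SDK. The model string is the one stated in this case study; the prompt structure and helper signature are illustrative, and the returned text would be stored on an `OutboundDraft` with `status=pending` rather than sent directly, per the approval flow described above.

```python
# Sketch of the Claude draft-generation step; prompt layout is an assumption.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def generate_draft(incoming_message: str, context_chunks: list[str]) -> str:
    context_block = "\n---\n".join(context_chunks)
    response = client.messages.create(
        model="claude-sonnet-4-6",  # model name as stated in this document
        max_tokens=1024,
        system="You draft concise, technically accurate LINE replies for a project coordinator.",
        messages=[{
            "role": "user",
            "content": (
                f"Relevant project history:\n{context_block}\n\n"
                f"Incoming message:\n{incoming_message}\n\n"
                "Draft a reply for human review."
            ),
        }],
    )
    return response.content[0].text
```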
Feature 4: GitHub Work Item Synchronisation
The integrations app syncs GitHub Issues and Pull Requests into the workitems app as local WorkItem records, creating bidirectional links between code-level work and conversation-level discussion. A WorkItemSourceLink connects every work item to the LINE messages that originated or discussed it — enabling the system to answer "which LINE conversations led to this GitHub issue?" and vice versa.
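A minimal sketch of the pull side of that sync, assuming the public GitHub REST endpoint `GET /repos/{owner}/{repo}/issues`. The `WorkItem` model name comes from this case study; its fields and the `update_or_create` keys are illustrative assumptions.

```python
# Sketch of a one-way GitHub issue pull into local WorkItem records.
import requests

from workitems.models import WorkItem  # app named in this case study; fields assumed


def sync_issues(owner: str, repo: str, token: str):
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/issues",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        params={"state": "all", "per_page": 100},
        timeout=15,
    )
    resp.raise_for_status()
    for issue in resp.json():
        if "pull_request" in issue:
            continue  # this endpoint also returns PRs; handle them separately
        WorkItem.objects.update_or_create(
            external_id=issue["number"],
            source="github",
            defaults={
                "title": issue["title"],
                "state": issue["state"],
                "url": issue["html_url"],
            },
        )
```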
Feature 5: LIFF Timeline & Summary Dashboard
The dashboard app exposes a LIFF (LINE Front-end Framework) interface accessible inside LINE itself — no context switch required. The timeline view supports semantic keyword search across all project history. The summary endpoint calls Claude to generate a structured project status briefing from recent activity, reducing a 30–60 minute weekly summary task to seconds.
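A minimal sketch of what the summary endpoint could look like on the server side, assuming Django REST Framework. Only the endpoint path, the `MessageEvent` model, and the `core/claude_service.py` module name come from this case study; the `summarize()` helper, the relation names, and the queryset fields are assumptions.

```python
# Sketch of the /api/dashboard/summary view; relations and helper are assumed.
from rest_framework.views import APIView
from rest_framework.response import Response

from communication.models import MessageEvent  # app named in this case study
from core import claude_service                # module named in this case study


class ProjectSummaryView(APIView):
    def get(self, request):
        project_id = request.query_params.get("project_id")
        recent = (
            MessageEvent.objects
            .filter(conversation__project_id=project_id)  # assumed relation
            .order_by("-created_at")[:200]                # assumed field
        )
        briefing = claude_service.summarize(recent)       # hypothetical helper
        return Response({"project_id": project_id, "summary": briefing})
```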
5. Business Impact (The Result)
- Communication permanence: Every LINE project message becomes a searchable, retrievable record. The "we discussed this three weeks ago but can't find it" failure mode is eliminated.
- Project summary time: AI-generated project summaries from the /api/dashboard/summary endpoint reduce a manual 30–60 minute compilation to a sub-10-second automated synthesis.
- Draft quality uplift: Context-aware Claude-generated reply drafts give non-technical coordinators a high-quality starting point, reducing the rate of technically imprecise or context-incomplete responses to clients.
- Compliance-friendly audit trail: Every message, every AI draft, and every approval action is stored with timestamps and user identity — providing a defensible record in case of project dispute.
- [Needs Manual Input]: Number of projects managed, average conversation volume per project, measured time savings from AI summary generation.
6. AI / Innovation Factor
This project is the most AI-native of the portfolio — AI is not a feature but the core mechanism:
- pgvector Retrieval-Augmented Generation (RAG): The system implements production-grade RAG using PostgreSQL as the vector store (via the pgvector extension), avoiding the operational overhead of a dedicated vector database while achieving semantic similarity retrieval at project scale.
- Claude API Integration (claude-sonnet-4-6): The core/claude_service.py module wraps Anthropic's Claude API with context assembly logic, token budget management, and structured output parsing. The model is the current frontier: Anthropic's Sonnet 4.6, the same model used to generate this case study.
- Human-in-the-Loop Architecture: The OutboundDraft approval workflow is a deliberate architectural choice — AI generates, human approves. This pattern matches the responsible AI deployment standard that enterprise clients increasingly require for client-facing communications.
- LIFF Integration: The dashboard is embedded inside LINE itself via LIFF — meeting users where they already are rather than requiring adoption of a separate tool, the most common barrier to project management software adoption.
- Agentic Potential: The architecture (ingestion → RAG → Claude → structured output → human approval → action) is the canonical design for an AI agent pipeline. Future iterations can expand from "draft replies" to "create GitHub issues from chat" or "flag client sentiment changes" without architectural change.
Document generated: 2026-05-03 | Maintained by Tom Lai / You Er Ta Mu She Ji
