Is the LLM/MCP approach feasible?
✅ YES - This is an IDEAL use case for an LLM-driven architecture with MCP servers.
Your vision is not only doable; it's a better architectural choice than a traditional API-driven system for your specific needs. Here's why this approach excels:
- Natural Language is Your Primary Interface: Your example queries ("Is Jane Smith registered?", "How many attended last year?") are exactly what LLMs excel at interpreting
- Structured Data + Unstructured Queries: The underlying data is highly structured (contacts, events, ratings), but your queries are conversational - LLMs bridge this gap perfectly
- Scale: For events with ~300 attendees and ~20-30 speakers, this architecture handles it effortlessly
- Contextual Intelligence: LLMs provide rich, context-aware responses ("Jane attended 2023 and 2024 as an attendee")
┌─────────────────────────────────────────────────────────────┐
│ USER INTERFACE LAYER │
│ │
│ Phase 1: Claude.ai / Claude Code (MCP-enabled) │
│ Phase 2: Web App with Chat Interface → Claude API │
│ │
└───────────────────────────┬─────────────────────────────────┘
│ Natural Language Queries
▼
┌─────────────────────────────────────────────────────────────┐
│ LLM ORCHESTRATION │
│ (Claude Sonnet/Opus) │
│ │
│ • Interprets natural language queries │
│ • Calls appropriate MCP server tools │
│ • Composes intelligent, context-aware responses │
│ • Handles multi-step workflows with confirmations │
│ │
└───────────────────────────┬─────────────────────────────────┘
│ MCP Protocol (Tool Calls)
▼
┌─────────────────────────────────────────────────────────────┐
│ MCP SERVER LAYER │
│ │
│ ┌─────────────────┐ ┌──────────────┐ ┌────────────────┐ │
│ │ Contact/DB MCP │ │ Email MCP │ │ Import MCP │ │
│ │ │ │ │ │ │ │
│ │ • Contact CRUD │ │ • Send emails│ │ • Parse CSVs │ │
│ │ • Event CRUD │ │ • Preview │ │ • Validate data│ │
│ │ • Participation │ │ recipients │ │ • Bulk inserts │ │
│ │ • Speaker │ │ • Templates │ │ • Quicket fmt │ │
│ │ sessions │ │ • Email log │ │ • Sessionize │ │
│ │ • Ratings │ │ │ │ format │ │
│ │ • Analytics │ │ │ │ • Rating files │ │
│ │ queries │ │ │ │ │ │
│ └─────────────────┘ └──────────────┘ └────────────────┘ │
│ │
└───────────────────────────┬─────────────────────────────────┘
│ SQL / SMTP
▼
┌─────────────────────────────────────────────────────────────┐
│ DATA & SERVICES LAYER │
│ │
│ ┌──────────────────┐ ┌──────────────────────────┐ │
│ │ PostgreSQL │ │ Email Service │ │
│ │ │ │ (SMTP/SendGrid/Brevo) │ │
│ │ • contacts │ │ │ │
│ │ • events │ └──────────────────────────┘ │
│ │ • participation │ │
│ │ • sessions │ │
│ │ • ratings │ │
│ │ • email_log │ │
│ └──────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Core Tables:
- contacts: Unified person records (attendees + speakers)
- id, email, name, phone, company, contact_type, created_date
- events: Event definitions
- id, name, event_date, location, description, year
- participation: Who registered/attended which events
- id, contact_id, event_id, registration_date, checked_in, attendance_status
- speaker_sessions: Speaker presentations
- id, event_id, contact_id (speaker), topic, session_title, track
- speaker_ratings: Session ratings
- id, session_id, topic_rating, presentation_rating, overall_score, comments
- email_log: Communication history
- id, sent_date, recipient_count, subject, template_used, sent_by
Why PostgreSQL:
- Excellent relational data modeling (contacts ↔ events ↔ participation)
- Strong analytics capabilities (aggregations, window functions)
- JSON support for flexible data (Quicket/Sessionize import metadata)
- Rock-solid for your data volume (~300-500 contacts)
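The core tables above can be sketched in DDL. The snippet below uses sqlite3 so it runs anywhere without a server; the production schema would target PostgreSQL (IDENTITY keys, JSONB for import metadata), and the column types here are illustrative:

```python
import sqlite3

# In-memory demo of the proposed core schema. PostgreSQL would use
# GENERATED ... AS IDENTITY keys and JSONB for Quicket/Sessionize metadata.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contacts (
    id INTEGER PRIMARY KEY,
    email TEXT UNIQUE NOT NULL,
    name TEXT NOT NULL,
    phone TEXT,
    company TEXT,
    contact_type TEXT,          -- 'attendee', 'speaker', or both
    created_date TEXT
);
CREATE TABLE events (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    event_date TEXT,
    location TEXT,
    year INTEGER
);
CREATE TABLE participation (
    id INTEGER PRIMARY KEY,
    contact_id INTEGER REFERENCES contacts(id),
    event_id INTEGER REFERENCES events(id),
    registration_date TEXT,
    checked_in INTEGER DEFAULT 0,
    attendance_status TEXT
);
CREATE INDEX idx_contacts_email ON contacts(email);
""")

# Smoke test: register one contact for one event and join back.
conn.execute("INSERT INTO contacts (email, name) VALUES ('jane.smith@example.com', 'Jane Smith')")
conn.execute("INSERT INTO events (name, year) VALUES ('Conference 2025', 2025)")
conn.execute("INSERT INTO participation (contact_id, event_id) VALUES (1, 1)")
row = conn.execute("""
    SELECT c.name, e.name FROM participation p
    JOIN contacts c ON c.id = p.contact_id
    JOIN events e ON e.id = p.event_id
""").fetchone()
print(row)  # ('Jane Smith', 'Conference 2025')
```

The `speaker_sessions`, `speaker_ratings`, and `email_log` tables follow the same pattern.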
Contact/Database MCP Server: the primary data interface. Provides tools for:
Contact Management:
- `search_contacts(query)` - Find contacts by name/email/company
- `get_contact(id)` - Get full contact details + history
- `create_contact(...)` - Add new contact
- `update_contact(...)` - Modify contact info
- `merge_contacts(...)` - Resolve duplicates
Event Management:
- `list_events(year?)` - Get events
- `get_event(id)` - Event details + statistics
- `create_event(...)` - New event
Participation Tracking:
- `record_registration(contact_id, event_id, ...)`
- `record_attendance(contact_id, event_id)`
- `get_participation_history(contact_id)`
Speaker Management:
- `record_session(event_id, speaker_id, topic, ...)`
- `record_rating(session_id, ratings, ...)`
- `get_speaker_history(contact_id)`
Analytics Queries:
- `get_attendance_stats(event_id)` - Attendance vs registration
- `get_speaker_rankings(event_id, limit)` - Top speakers
- `get_retention_analysis(year1, year2)` - Attendee retention
- `get_first_time_attendees(event_id)` - New attendee count
- `get_topic_popularity()` - Topic trends
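As a sketch of what `get_retention_analysis` could compute, here is the core calculation over in-memory data. A real implementation would express this as a SQL self-join on the `participation` table; the dict-of-sets input is an assumption for illustration:

```python
def get_retention_analysis(attendance, year1, year2):
    """Share of year1 attendees who returned in year2.

    `attendance` maps year -> set of contact ids; in production this
    would be a self-join on participation grouped by event year.
    """
    cohort = attendance.get(year1, set())
    returned = cohort & attendance.get(year2, set())
    rate = len(returned) / len(cohort) if cohort else 0.0
    return {"cohort_size": len(cohort), "returned": len(returned), "retention_rate": rate}

# Toy data: contacts 1-5 attended 2023; 1, 2, 3 came back in 2024.
stats = get_retention_analysis({2023: {1, 2, 3, 4, 5}, 2024: {1, 2, 3, 6}}, 2023, 2024)
print(stats)  # retention_rate == 0.6
```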
Email MCP Server: handles all communications.
Tools:
- `preview_recipients(filter)` - Show who will receive an email
  - Example: `filter="speakers_from_year:2024"`
- `send_email(recipient_filter, subject, body, template?)` - Send to a group
- `send_individual_email(contact_id, subject, body)` - Send to one contact
- `get_email_templates()` - List available templates
- `get_email_history(event_id?, contact_id?)` - Past emails
Features:
- Template support (CFP announcement, event reminder, thank you)
- Preview-before-send workflow (safety)
- Email history logging
- Unsubscribe handling
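The preview-before-send safeguard can be enforced server-side rather than trusted to the LLM. A minimal sketch (the function names match the tool list above, but the token mechanism is an assumed design, not part of any MCP specification):

```python
import uuid

# send_email only proceeds with a token from an earlier preview_recipients
# call, so a bulk send can never skip the preview step.
_preview_cache = {}

def preview_recipients(recipients):
    token = str(uuid.uuid4())
    _preview_cache[token] = list(recipients)
    return {"token": token, "count": len(recipients), "recipients": list(recipients)}

def send_email(token, subject, body):
    recipients = _preview_cache.pop(token, None)  # one-shot: token is consumed
    if recipients is None:
        raise ValueError("no matching preview - call preview_recipients first")
    # A real server would hand off to SMTP/SendGrid here and log the send.
    return {"sent": len(recipients), "subject": subject}

preview = preview_recipients(["a@example.com", "b@example.com"])
result = send_email(preview["token"], "CFP 2025", "Call for papers is open!")
print(result)  # {'sent': 2, 'subject': 'CFP 2025'}
```

Consuming the token on send also makes accidental double-sends fail loudly.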
Import MCP Server: batch data ingestion.
Tools:
- `import_quicket_registrations(file_path)` - Parse Quicket CSV → contacts + participation
- `import_quicket_checkins(file_path)` - Update attendance status
- `import_sessionize_speakers(file_path)` - Parse Sessionize JSON → speakers + sessions
- `import_speaker_ratings(file_path)` - Internal rating system → ratings table
- `preview_import(file_path, import_type)` - Show what would be imported
- `validate_import_data(file_path, import_type)` - Check for errors/conflicts
Features:
- Data validation (required fields, email format)
- Duplicate detection (match on email)
- Conflict resolution strategies (update vs skip vs merge)
- Import summary reporting
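Duplicate detection by email match might look like the sketch below; the CSV columns are illustrative, since actual Quicket export headers may differ:

```python
import csv
import io

def normalize_email(email):
    # Emails are case-insensitive for matching purposes.
    return email.strip().lower()

def detect_duplicates(csv_text, existing_emails):
    """Split imported rows into new contacts vs. matches on existing emails."""
    existing = {normalize_email(e) for e in existing_emails}
    new_rows, duplicates = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        bucket = duplicates if normalize_email(row["email"]) in existing else new_rows
        bucket.append(row)
    return new_rows, duplicates

# Hypothetical two-row export; 'Jane.Smith@...' matches an existing contact
# despite the different casing, 'bob@...' is new.
quicket_csv = "name,email\nJane Smith,Jane.Smith@example.com\nBob Lee,bob@example.com\n"
new_rows, dupes = detect_duplicates(quicket_csv, ["jane.smith@example.com"])
print(len(new_rows), len(dupes))  # 1 1
```

Fuzzy name matching would layer on top of this for contacts whose email changed between years.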
User: "Is Jane Smith registered for the upcoming conference?"
Flow:
- LLM interprets query → needs contact lookup + event lookup
- Calls `search_contacts(query="Jane Smith")`
- Calls `list_events()` to find the "upcoming" event
- Calls `get_participation_history(jane_id)`
- LLM composes response:
Response: "Yes, Jane Smith (jane.smith@example.com) registered on December 15, 2024 for the 2025 conference. She previously attended the 2023 and 2024 conferences as an attendee."
User: "Send an email to all 2024 speakers announcing the call for papers for 2025."
Flow:
- LLM interprets → needs speaker list + email send
- Calls `preview_recipients(filter="speakers_from_year:2024")`
- LLM responds: "I've identified 23 speakers from 2024. Would you like to review the list before sending?"
- User: "Yes, show me"
- LLM displays the list with names/emails
- User: "Looks good, send it"
- Calls `send_email(filter="speakers_from_year:2024", subject="...", body="...", template="cfp_announcement")`
- LLM confirms: "Email sent to 23 speakers. Logged in email history."
User: "How many people attended last year's event compared to 2023?"
Flow:
- LLM interprets → needs attendance comparison
- Calls `get_attendance_stats(event_id=2024_event)`
- Calls `get_attendance_stats(event_id=2023_event)`
- LLM calculates percentage change
- LLM responds:
Response: "2024 had 287 attendees (92% of 312 registrations). 2023 had 245 attendees (88% of 278 registrations). That's a 17% increase in attendance and 12% increase in registrations year-over-year."
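The arithmetic behind that response is simple enough to sketch and verify:

```python
def pct_change(new, old):
    """Year-over-year change, rounded to whole percent."""
    return round((new - old) / old * 100)

# Figures from the example response above.
attendance_change = pct_change(287, 245)      # attendees, 2024 vs 2023
registration_change = pct_change(312, 278)    # registrations, 2024 vs 2023
attendance_rate_2024 = round(287 / 312 * 100) # show-up rate in 2024

print(attendance_change, registration_change, attendance_rate_2024)  # 17 12 92
```

In practice this step runs inside the LLM's response composition, but keeping a `compare_events` tool in the analytics MCP would make the numbers auditable.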
User: "Import this Quicket registration file" (attaches CSV)
Flow:
- LLM calls `validate_import_data(file_path, type="quicket_registration")`
- Import MCP validates the file: 145 rows, 3 potential duplicates
- LLM reports: "Found 145 registrations. 3 appear to be existing contacts. How should I handle duplicates?"
- User: "Update existing records"
- LLM calls `import_quicket_registrations(file_path, duplicate_strategy="update")`
- Import MCP processes the file: 142 new contacts, 3 updated, 145 participation records
- LLM confirms: "Import complete. Added 142 new contacts, updated 3 existing, created 145 registration records for Event 2025."
- No need to remember SQL queries or API endpoints
- Ask questions the way you think about them
- LLM handles ambiguity ("last year" vs "2024", "Jane" vs "Jane Smith")
- LLM enriches responses with relevant history
- "Jane Smith attended 2023 and 2024" without being asked
- Proactive insights ("That's a 17% increase")
- Preview-before-send for emails
- Validation before data imports
- LLM confirms destructive operations
- Add new MCP servers easily (reporting, social media, surveys)
- LLM automatically learns new tools
- No frontend changes needed for new capabilities
- Same MCP servers work with Claude.ai, Claude Code, or web app
- Gradual migration: start with Claude.ai, add web UI later
- MCP protocol is vendor-neutral
Issue: LLM conversations are stateless between turns
Solution:
- MCP servers maintain short-term state (preview cache)
- LLM conversation history provides context
- For Phase 2 web app: session management
Issue: Quicket/Sessionize exports may have duplicates, format variations
Solution:
- Import MCP validates data before insertion
- Fuzzy matching on email/name for duplicates
- LLM asks user to resolve conflicts
- Import preview mode (dry-run)
Issue: Some queries require complex SQL (retention cohorts, trend analysis)
Solution:
- Pre-build common analytics queries in MCP server
- LLM composes SQL for ad-hoc queries (with safeguards)
- Gradually expand pre-built analytics based on usage
Issue: Need rich email templates (HTML, variables)
Solution:
- Store templates in database or files
- Email MCP supports variable substitution
- LLM can customize templates on-the-fly
- Phase 2: Template editor in web app
Issue: Every query goes through Claude API (costs)
Solution:
- For simple lookups, MCP servers can cache results
- Pre-compute common analytics (daily rollups)
- Use Haiku for simple queries, Sonnet for complex
- Your volume (~300 contacts, occasional queries) = minimal cost
Timeline: This is a planning document - no estimates provided
Deliverables:
- PostgreSQL database with schema
- Contact/Database MCP server (Python or TypeScript)
- Email MCP server
- Import MCP server
- MCP server configuration for Claude.ai/Claude Code
- Sample data for testing
Usage:
- Use Claude.ai desktop app or Claude Code CLI
- MCP servers run locally or on cloud VM
- All interactions via chat
Advantages:
- Fastest to build (no frontend needed)
- Validate the approach with real usage
- Iterate on MCP tool design based on feedback
Deliverables:
- Web application (React/Next.js or similar)
- Chat UI component
- Backend API that calls Claude API with MCP servers
- User authentication
- Email template management UI
- Data visualization dashboards
Usage:
- Browser-based chat interface
- Same MCP servers from Phase 1
- Enhanced email composer
- Visual analytics
Advantages:
- Shareable with co-organizers
- Better email template management
- Visual dashboards for quick insights
- User access controls
Potential Additions:
- Automated Quicket/Sessionize API integration
- Scheduled reports (weekly attendance summaries)
- SMS notifications
- Survey integration (post-event feedback)
- Social media analytics (mentions, engagement)
- Multi-event comparison dashboards
- Normalize core entities (contacts, events separate)
- Use junction tables for many-to-many (participation, sessions)
- Index frequently queried fields (email, event_date)
- Store import metadata as JSONB (flexibility)
- Atomic tools: Each tool does ONE thing well
- Composable: LLM chains tools together for complex workflows
- Validated: Tools validate inputs before DB operations
- Descriptive: Clear tool names and parameter descriptions
- Idempotent where possible: Safe to retry
- Email MCP requires confirmation for bulk sends (>10 recipients)
- Import MCP logs all operations (audit trail)
- Database MCP uses parameterized queries (SQL injection prevention)
- Contact data includes GDPR fields (consent, opt-out)
- Email unsubscribe support
- Store only necessary contact data
- Email preference management (opt-out)
- GDPR compliance (data export, deletion)
- Audit log for all email sends
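Two of the safeguards above, input validation and parameterized queries, combine naturally in a single tool. Here is a sketch using sqlite3 as a runnable stand-in for PostgreSQL; the same placeholder discipline applies with a Postgres driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")
conn.execute("INSERT INTO contacts (email, name) VALUES ('jane.smith@example.com', 'Jane Smith')")

def search_contacts(query):
    """Atomic, validated lookup: rejects bad input, then binds the user's
    text as a parameter so it can never be spliced into the SQL itself."""
    if not query or len(query) > 200:
        raise ValueError("query must be 1-200 characters")
    pattern = f"%{query}%"
    return conn.execute(
        "SELECT id, name, email FROM contacts WHERE name LIKE ? OR email LIKE ?",
        (pattern, pattern),
    ).fetchall()

print(search_contacts("Jane"))                       # the matching contact
print(search_contacts("'; DROP TABLE contacts;--"))  # [] - treated as literal text
```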
- ❌ Custom admin UI with forms for every operation
- ❌ Pre-defined report templates (inflexible)
- ❌ Complex filters/search interfaces
- ❌ Email campaign builder UI
- ❌ Analytics dashboard with fixed visualizations
- ❌ User manual/training
- ✅ Natural language for everything
- ✅ Ad-hoc queries without new code
- ✅ Context-aware responses
- ✅ Flexible email workflows via conversation
- ✅ No training needed (just ask questions)
- ✅ Easy to extend (add MCP tools)
Your vision is not only feasible - it's architecturally superior for your use case.
The LLM/MCP approach gives you:
- Flexibility: Ask any question, not just pre-built reports
- Speed: No frontend development for Phase 1
- Intelligence: Context-aware, conversational responses
- Safety: Preview/confirmation workflows
- Extensibility: Add capabilities without UI changes
- Future-proof: Same backend works with any frontend
Recommended Next Step: Start with Phase 1 (Claude.ai/Claude Code + MCP servers). This lets you validate the approach, refine the data model, and understand your real usage patterns before investing in a web UI.
The database schema and MCP servers you build for Phase 1 will work unchanged in Phase 2 - you're not throwing away any work.