n8n Workflow Import Guide

Quick Start

This guide explains how to import the chatmate_workflow.json file into n8n and configure it properly.

Prerequisites

Before importing, ensure you have:

  • n8n running (Docker or self-hosted)
  • HuggingFace API key
  • Groq API key
  • Supabase project with vector extension enabled
  • PDF file accessible to n8n
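
If you are starting n8n with Docker, a minimal sketch that also mounts a local data directory so the PDF is visible inside the container (the ./data directory and the container name n8n are assumptions to adapt):

# Run n8n and mount ./data into the container as /data
docker run -d --name n8n -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  -v "$(pwd)/data:/data" \
  docker.n8n.io/n8nio/n8n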

Import Steps

1. Import the Workflow

Method 1: Via UI

  1. Open n8n in your browser (usually http://localhost:5678)
  2. Click Workflows in the left sidebar
  3. Click Import from File button (top right)
  4. Select chatmate_workflow.json
  5. Click Import

Method 2: Via File System

# Copy to n8n workflows directory
cp chatmate_workflow.json ~/.n8n/workflows/
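
Note that n8n stores workflows in its internal database, so a file copied into ~/.n8n/workflows may not appear in the UI on its own. If you have shell access to the n8n instance, the CLI import registers the workflow directly:

# Import the workflow into n8n's database
n8n import:workflow --input=chatmate_workflow.json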

2. Configure Credentials

After importing, you’ll need to add three credentials:

A) HuggingFace API Credential

  1. Click on HuggingFace Embeddings node
  2. Click Create New Credential
  3. Select HuggingFace API
  4. Enter your API token from huggingface.co/settings/tokens
  5. Save as “HuggingFace API”
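
To confirm the token is valid before wiring it into n8n, you can query HuggingFace's whoami endpoint (HF_TOKEN below is a placeholder for your token):

# A valid token returns your account details instead of an auth error
curl -s -H "Authorization: Bearer $HF_TOKEN" https://huggingface.co/api/whoami-v2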

B) Supabase API Credential

  1. Click on Supabase Vector Store - Insert node
  2. Click Create New Credential
  3. Select Supabase API
  4. Enter:
    • Host: Your Supabase project URL (e.g., https://xxxxx.supabase.co)
    • Service Role Key: Your Supabase service_role secret key (found under Project Settings → API), not the anon/public key
  5. Save as “Supabase API”

Reuse Credential

The same Supabase credential will be used for both Insert and Retrieve nodes.
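
A quick way to confirm the URL/key pair works is to hit the project's REST endpoint (SUPABASE_URL and SUPABASE_KEY are placeholders for your values):

# A valid key returns the API's OpenAPI description; an invalid one returns an error
curl -s "$SUPABASE_URL/rest/v1/" \
  -H "apikey: $SUPABASE_KEY" \
  -H "Authorization: Bearer $SUPABASE_KEY"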

C) Groq API Credential

  1. Click on Groq Chat Model node
  2. Click Create New Credential
  3. Select Groq API
  4. Enter your API key from console.groq.com
  5. Save as “Groq API”
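
You can sanity-check the key by listing the models your account can access (GROQ_API_KEY is a placeholder):

# A valid key returns a JSON list that includes the llama-3.1 models
curl -s https://api.groq.com/openai/v1/models \
  -H "Authorization: Bearer $GROQ_API_KEY"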

3. Update File Path

  1. Click on Read PDF File node
  2. Update the File Path parameter to match your setup:
    • Default: /data/attention_is_all_you_need.pdf
    • Change to your actual PDF location
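
If n8n runs in Docker, remember the path must be valid inside the container, not on your host. A quick check (container name n8n assumed):

# The node reads the path as seen from inside the container
docker exec n8n ls -l /data/attention_is_all_you_need.pdf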

4. Prepare Supabase Database

Run this SQL in your Supabase SQL Editor:

-- Enable the pgvector extension
create extension if not exists vector;
 
-- Create the documents table
create table documents (
  id bigserial primary key,
  content text,
  metadata jsonb,
  embedding vector(768)
);
 
-- Create an index for faster similarity search
create index on documents 
using ivfflat (embedding vector_cosine_ops)
with (lists = 100);
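
n8n's Supabase Vector Store node performs similarity search through a Postgres function, match_documents by default, following the standard Supabase/LangChain template. If retrieval later fails with a missing-function error, create it as well; a sketch adapted to the 768-dimension table above, runnable in the SQL Editor or via psql (SUPABASE_DB_URL is a placeholder for your connection string):

# Create the match_documents function used by the retrieve operation
psql "$SUPABASE_DB_URL" <<'SQL'
create or replace function match_documents (
  query_embedding vector(768),
  match_count int default null,
  filter jsonb default '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;
SQL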

Usage

Flow 1: Document Ingestion (Run Once)

  1. Open the workflow
  2. Click Execute Workflow button
  3. Wait for all nodes to complete (green checkmarks)
  4. Verify in Supabase: SELECT COUNT(*) FROM documents;
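
From the command line, you can also spot-check that the chunks landed (again assuming your connection string is in SUPABASE_DB_URL):

# Count the rows and preview the first few chunks
psql "$SUPABASE_DB_URL" -c "select count(*) from documents;"
psql "$SUPABASE_DB_URL" -c "select id, left(content, 80) from documents limit 3;"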

First Time Setup

You only need to run Flow 1 once to load the PDF into the vector database.

Flow 2: Ask Questions (Interactive)

  1. After Flow 1 completes, click on Chat Trigger node
  2. Click Test Workflow or use the chat interface
  3. Type your question, for example:
    • “What is the Transformer model?”
    • “How does multi-head attention work?”
    • “What are the main contributions of this paper?”
  4. The AI Agent will retrieve relevant sections and generate an answer
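
The Chat Trigger also exposes a webhook, so you can test Flow 2 without the UI. A sketch, assuming the webhook URL shown on the Chat Trigger node (the sessionId value is arbitrary and keeps follow-up questions in the same memory window):

# POST a question to the chat webhook
curl -s -X POST http://localhost:5678/webhook/<chat-trigger-id>/chat \
  -H "Content-Type: application/json" \
  -d '{"action": "sendMessage", "sessionId": "test-1", "chatInput": "What is the Transformer model?"}'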

Workflow Structure

Flow 1: Document Ingestion

Manual Trigger 
  → Read PDF File 
  → Extract From PDF 
  → HuggingFace Embeddings 
  → Supabase Vector Store (Insert)

Flow 2: Conversational Retrieval

Chat Trigger 
  → AI Agent
      ├─ Groq Chat Model (LLM)
      ├─ Supabase Vector Store (Retrieve Tool)
      └─ Window Buffer Memory

Node Configuration Summary

Node                      Key Settings
Read PDF File             Path: /data/attention_is_all_you_need.pdf
Extract From PDF          Chunk Size: 1000, Overlap: 200
HuggingFace Embeddings    Model: distilbert-base-nli-mean-tokens
Supabase Insert           Table: documents, Embedding dim: 768
Groq Chat Model           Model: llama-3.1-70b-versatile, Temp: 0.3
Supabase Retrieve         Top K: 4
Window Buffer Memory      Window Size: 5

Troubleshooting

Issue: “Credential not found”

Solution: Manually create each credential as described in Step 2 above.

Issue: “File not found”

Solution:

  • Verify the Docker volume mount: docker inspect n8n --format '{{ json .Mounts }}'
  • Update file path in Read PDF File node
  • Ensure file has read permissions: chmod 644 attention_is_all_you_need.pdf

Issue: “Table ‘documents’ does not exist”

Solution: Run the SQL script from Step 4 in your Supabase SQL Editor.

Issue: “Dimension mismatch”

Solution:

  • Ensure your Supabase table uses vector(768)

  • Drop and recreate table if needed:

    DROP TABLE IF EXISTS documents;
    -- Then run the create table script again

Issue: “No response from AI Agent”

Solution:

  • Check that all three connections to AI Agent are properly linked:
    • Groq Chat Model (ai_languageModel)
    • Supabase Vector Store Retrieve (ai_tool)
    • Window Buffer Memory (ai_memory)
  • Verify credentials are valid and have sufficient quota

Customization Options

Change LLM Model

In Groq Chat Model node, you can change the model:

  • llama-3.1-70b-versatile (default - best quality)
  • mixtral-8x7b-32768 (good balance)
  • llama-3.1-8b-instant (fastest)

Adjust Retrieval

In Supabase Vector Store - Retrieve node:

  • Top K: Number of chunks to retrieve (default: 4)
    • Increase for more context (5-7)
    • Decrease for faster responses (2-3)

Modify Chunk Size

In Extract From PDF node:

  • Chunk Size: Characters per chunk (default: 1000)
  • Chunk Overlap: Overlap between chunks (default: 200)

Update System Message

In AI Agent node, customize the system message to:

  • Change the assistant’s personality
  • Add specific instructions
  • Modify response format
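
For example, a grounding-focused system message might read: “You are a research assistant for the ‘Attention Is All You Need’ paper. Answer only from the retrieved excerpts, and say so when the excerpts do not contain the answer.”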

Testing Checklist

  • Workflow imported successfully
  • All three credentials configured
  • PDF file path updated
  • Supabase table created
  • Flow 1 executed successfully
  • Documents inserted into Supabase
  • Flow 2 responds to test questions
  • Answers are grounded in paper content
  • Conversation memory works (follow-up questions)

Next Steps

  1. Test with various questions - See Testing Scenarios
  2. Export for submission - Workflows → Download (or use the CLI sketch below)
  3. Create demo video - Record a 1-minute demonstration
  4. Write report - Document your configuration and challenges
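
If you prefer the command line for the export, the n8n CLI can write the workflow to a file (the --id value is your workflow's ID, visible in its URL):

# Export a single workflow by ID
n8n export:workflow --id=<workflow-id> --output=chatmate_workflow.json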

Support

If you encounter issues not covered here, refer to the troubleshooting section in the main solution guide.