# n8n Workflow Import Guide

## Quick Start

This guide explains how to import the `chatmate_workflow.json` file into n8n and configure it properly.

## Prerequisites
Before importing, ensure you have:
- n8n running (Docker or self-hosted)
- HuggingFace API key
- Groq API key
- Supabase project with vector extension enabled
- PDF file accessible to n8n
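If n8n is not running yet, the sketch below shows one way to start it with Docker; the `/data` mount is an assumption chosen to match the default PDF path used later in this guide:

```bash
# Start n8n with a persistent config volume and a /data mount for the PDF.
# Adjust the host paths to your setup.
docker run -d --name n8n -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  -v "$(pwd)":/data \
  docker.n8n.io/n8nio/n8n
```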
## Import Steps

### 1. Import the Workflow
**Method 1: Via UI**

- Open n8n in your browser (usually http://localhost:5678)
- Click **Workflows** in the left sidebar
- Click the **Import from File** button (top right)
- Select `chatmate_workflow.json`
- Click **Import**
**Method 2: Via File System**

```bash
# Copy to the n8n workflows directory
cp chatmate_workflow.json ~/.n8n/workflows/
```
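Alternatively, the n8n CLI can import the file directly; a minimal sketch, assuming the `n8n` binary is reachable (for Docker, prefix with `docker exec n8n` and use a path visible inside the container):

```bash
# Import the workflow via the n8n CLI
n8n import:workflow --input=chatmate_workflow.json
```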
### 2. Configure Credentials

After importing, you’ll need to add three credentials:
#### A) HuggingFace API Credential

- Click on the **HuggingFace Embeddings** node
- Click **Create New Credential**
- Select **HuggingFace API**
- Enter your API token from huggingface.co/settings/tokens
- Save as “HuggingFace API”
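To sanity-check the token before pasting it into n8n, you can call the HuggingFace whoami endpoint (a quick sketch; assumes `curl` and the token in `$HF_TOKEN`):

```bash
# A valid token returns your account details as JSON; an invalid one returns 401
curl -s -H "Authorization: Bearer $HF_TOKEN" https://huggingface.co/api/whoami-v2
```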
#### B) Supabase API Credential

- Click on the **Supabase Vector Store - Insert** node
- Click **Create New Credential**
- Select **Supabase API**
- Enter:
  - Host: your Supabase project URL (e.g., https://xxxxx.supabase.co)
  - Service Role Key: your Supabase anon/public key
- Save as “Supabase API”
> **Reuse Credential:** The same Supabase credential will be used for both the Insert and Retrieve nodes.
#### C) Groq API Credential

- Click on the **Groq Chat Model** node
- Click **Create New Credential**
- Select **Groq API**
- Enter your API key from console.groq.com
- Save as “Groq API”
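As with the HuggingFace token, a quick shell check can confirm the key works (sketch; assumes the key is in `$GROQ_API_KEY`):

```bash
# Groq exposes an OpenAI-compatible API; listing models verifies the key
curl -s https://api.groq.com/openai/v1/models \
  -H "Authorization: Bearer $GROQ_API_KEY"
```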
### 3. Update File Path
- Click on the **Read PDF File** node
- Update the File Path parameter to match your setup:
  - Default: `/data/attention_is_all_you_need.pdf`
  - Change this to your actual PDF location
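If n8n runs in Docker, you can confirm the path resolves inside the container before executing the workflow (sketch; the container name `n8n` is an assumption):

```bash
# The file must be visible at this path from inside the container
docker exec n8n ls -l /data/attention_is_all_you_need.pdf
```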
### 4. Prepare Supabase Database
Run this SQL in your Supabase SQL Editor:
```sql
-- Enable the pgvector extension
create extension if not exists vector;

-- Create the documents table
create table documents (
  id bigserial primary key,
  content text,
  metadata jsonb,
  embedding vector(768)
);

-- Create an index for faster similarity search
create index on documents
  using ivfflat (embedding vector_cosine_ops)
  with (lists = 100);
```
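Depending on your n8n version, the Supabase Vector Store node may also expect a `match_documents` RPC function for similarity search. The sketch below is adapted from the standard LangChain/Supabase template, with the dimension matched to the 768-dim table above; skip it if retrieval already works without it:

```sql
-- Similarity-search helper following the standard LangChain/Supabase template
-- (assumption: your node version queries via this RPC)
create or replace function match_documents (
  query_embedding vector(768),
  match_count int default null,
  filter jsonb default '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
begin
  return query
  select
    documents.id,
    documents.content,
    documents.metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where documents.metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;
```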
## Usage

### Flow 1: Document Ingestion (Run Once)
- Open the workflow
- Click the **Execute Workflow** button
- Wait for all nodes to complete (green checkmarks)
- Verify in Supabase:

```sql
SELECT COUNT(*) FROM documents;
```
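To eyeball what was stored rather than just count it, a quick peek query helps (sketch):

```sql
-- Show the first few chunks and their metadata
select id, left(content, 80) as preview, metadata
from documents
limit 5;
```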
> **First Time Setup:** You only need to run Flow 1 once to load the PDF into the vector database.
### Flow 2: Ask Questions (Interactive)
- After Flow 1 completes, click on the **Chat Trigger** node
- Click **Test Workflow** or use the chat interface
- Type your question, for example:
  - “What is the Transformer model?”
  - “How does multi-head attention work?”
  - “What are the main contributions of this paper?”
- The AI Agent will retrieve relevant sections and generate an answer
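You can also exercise the flow from the shell. The call below is hypothetical: the webhook URL and JSON field names vary by n8n version, so copy the real URL from the Chat Trigger node before trying it:

```bash
# Hypothetical direct call to the Chat Trigger webhook; replace the URL
# with the one shown on your Chat Trigger node
curl -s -X POST "http://localhost:5678/webhook/YOUR-WEBHOOK-ID/chat" \
  -H "Content-Type: application/json" \
  -d '{"sessionId": "test-1", "chatInput": "What is the Transformer model?"}'
```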
## Workflow Structure
### Flow 1: Document Ingestion

```
Manual Trigger
  → Read PDF File
  → Extract From PDF
  → HuggingFace Embeddings
  → Supabase Vector Store (Insert)
```
### Flow 2: Conversational Retrieval

```
Chat Trigger
  → AI Agent
     ├─ Groq Chat Model (LLM)
     ├─ Supabase Vector Store (Retrieve Tool)
     └─ Window Buffer Memory
```
## Node Configuration Summary
| Node | Key Settings |
|---|---|
| Read PDF File | Path: /data/attention_is_all_you_need.pdf |
| Extract From PDF | Chunk Size: 1000, Overlap: 200 |
| HuggingFace Embeddings | Model: distilbert-base-nli-mean-tokens |
| Supabase Insert | Table: documents, Embedding dim: 768 |
| Groq Chat Model | Model: llama-3.1-70b-versatile, Temp: 0.3 |
| Supabase Retrieve | Top K: 4 |
| Window Buffer Memory | Window Size: 5 |
## Troubleshooting
### Issue: “Credential not found”

**Solution:** Manually create each credential as described in Step 2 above.
### Issue: “File not found”

**Solution:**
- Verify the Docker volume mount: `docker inspect n8n | grep Mounts`
- Update the file path in the Read PDF File node
- Ensure the file has read permissions: `chmod 644 attention_is_all_you_need.pdf`
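If the PDF simply is not inside the container, copying it in is often the quickest fix (sketch; the container name `n8n` is an assumption):

```bash
# Copy the PDF into the running container's /data directory
docker cp attention_is_all_you_need.pdf n8n:/data/
```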
### Issue: “Table ‘documents’ does not exist”

**Solution:** Run the SQL script from Step 4 in your Supabase SQL Editor.
### Issue: “Dimension mismatch”

**Solution:**
- Ensure your Supabase table uses `vector(768)`
- Drop and recreate the table if needed:

```sql
DROP TABLE IF EXISTS documents;
-- Then run the create table script again
```
### Issue: “No response from AI Agent”

**Solution:**
- Check that all three connections to the AI Agent are properly linked:
  - Groq Chat Model (`ai_languageModel`)
  - Supabase Vector Store Retrieve (`ai_tool`)
  - Window Buffer Memory (`ai_memory`)
- Verify credentials are valid and have sufficient quota
## Customization Options
### Change LLM Model

In the Groq Chat Model node, you can change the model:
- `llama-3.1-70b-versatile` (default; best quality)
- `mixtral-8x7b-32768` (good balance)
- `llama-3.1-8b-instant` (fastest)
### Adjust Retrieval

In the Supabase Vector Store - Retrieve node:
- Top K: number of chunks to retrieve (default: 4)
  - Increase for more context (5-7)
  - Decrease for faster responses (2-3)
### Modify Chunk Size

In the Extract From PDF node:
- Chunk Size: Characters per chunk (default: 1000)
- Chunk Overlap: Overlap between chunks (default: 200)
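Note that re-running Flow 1 after changing these settings inserts a second copy of every chunk alongside the old ones; clearing the table first avoids duplicates (sketch):

```sql
-- Remove previously ingested chunks before re-running Flow 1
delete from documents;
```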
### Update System Message

In the AI Agent node, customize the system message to:
- Change the assistant’s personality
- Add specific instructions
- Modify response format
## Testing Checklist
- Workflow imported successfully
- All three credentials configured
- PDF file path updated
- Supabase table created
- Flow 1 executed successfully
- Documents inserted into Supabase
- Flow 2 responds to test questions
- Answers are grounded in paper content
- Conversation memory works (follow-up questions)
## Next Steps
- Test with various questions (see the Testing Scenarios document)
- Export for submission: Workflows → Download
- Create a demo video: record a 1-minute demonstration
- Write a report: document your configuration and challenges
## Support
If you encounter issues not covered here, refer to the troubleshooting section in the main solution guide.