# User Guide
This guide explains how to create and manage Knowledge Bases and connect them to CX Assistants.
## Creating a Knowledge Base
### Step 1: Navigate to Knowledge Bases
- Log in to the Deepdesk Admin interface
- In the left sidebar, navigate to Knowledge Assist > Knowledge Bases
- Click Add knowledge base
### Step 2: Configure Basic Settings
| Field | Description |
|---|---|
| Code | Unique identifier (lowercase letters, numbers, dashes). Cannot be changed after creation. |
| Name | Display name for the knowledge base |
| Description | Optional description of the knowledge base contents |
The code is used to reference the knowledge base in Assistant configurations and API calls. Choose a meaningful, descriptive code like `support-articles` or `product-docs`.
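The code format can be checked before creation. Below is a minimal sketch of a validator that mirrors the stated rules (lowercase letters, numbers, dashes); the Admin interface's actual validation may be stricter:

```python
import re

# Mirrors the stated rules: lowercase letters, numbers, and dashes,
# with dashes only between segments. The platform may enforce
# additional constraints (e.g. length limits) not documented here.
CODE_PATTERN = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_kb_code(code: str) -> bool:
    """Return True if `code` looks like a valid knowledge base code."""
    return bool(CODE_PATTERN.match(code))
```

Since the code cannot be changed after creation, validating it up front avoids having to delete and recreate a knowledge base.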
### Step 3: Add Content Sources
#### URLs
Add URLs to web pages that should be crawled. Enter one URL per line:
```
https://help.example.com/articles
https://docs.example.com/faq
https://example.com/support/getting-started
```
The crawler will:
- Extract text content from each page
- Follow internal links to discover related pages
- Index the content for search
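As a rough illustration of the link-following behavior, a link can be treated as internal when it resolves to the same host as the page it appears on. This is one plausible definition; the crawler's actual scoping rules are not documented here:

```python
from urllib.parse import urljoin, urlparse

def is_internal_link(base_url: str, href: str) -> bool:
    """Treat a link as internal when it resolves to the same host as
    the page it was found on. This is an assumption for illustration,
    not the crawler's documented behavior."""
    resolved = urljoin(base_url, href)  # handles relative hrefs
    return urlparse(resolved).netloc == urlparse(base_url).netloc
```

This is why starting URLs matter: pages that link out to other hosts will not pull that external content into the index.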
#### Files
Add URLs to files (PDFs, documents) that should be processed:
```
https://example.com/files/user-manual.pdf
https://example.com/files/policy-guide.pdf
```
URLs must be publicly accessible. Private or internal URLs are not allowed for security reasons.
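If you want to pre-check URLs before adding them, a sketch like the following flags IP-literal hosts in private, loopback, or link-local ranges. It is only a rough pre-flight check: hostnames that resolve to private addresses would require a DNS lookup to catch, and the platform's own validation may differ:

```python
import ipaddress
from urllib.parse import urlparse

def is_private_host(url: str) -> bool:
    """Flag URLs whose host is an IP literal outside the globally
    routable ranges. Hostnames are not resolved, so this catches only
    the obvious cases; the platform's validation may be stricter."""
    host = urlparse(url).hostname or ""
    try:
        return not ipaddress.ip_address(host).is_global
    except ValueError:
        # Not an IP literal; can't tell without resolving DNS.
        return False
```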
### Step 4: Save and Crawl
- Click Save to create the knowledge base
- From the knowledge base list, select the knowledge base
- Use the Launch Knowledge Base web crawler action to start indexing
## Managing Crawl Jobs
### Starting a Crawl
From the Knowledge Base admin:
- Select one or more knowledge bases
- Choose Launch Knowledge Base web crawler from the actions dropdown
- Click Go
The crawl job will be queued and processed asynchronously.
### Monitoring Crawl Status
Navigate to Knowledge Assist > Crawl Jobs to view all crawl jobs:
| Status | Description |
|---|---|
| Created | Job created but not yet queued |
| Queued | Job sent to the crawler service |
| Started | Crawling in progress |
| Completed | Crawling finished successfully |
| Failed | Crawling encountered an error |
You will receive an email notification when crawl jobs complete or fail.
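Because jobs run asynchronously, a script driving crawls (via whatever API access you have) would typically poll until a terminal status is reached. The sketch below assumes only a callable that returns one of the statuses from the table; no specific endpoint is implied:

```python
import time

# Terminal statuses from the crawl job table above.
TERMINAL_STATUSES = {"Completed", "Failed"}

def wait_for_crawl(fetch_status, poll_interval=0.0, max_polls=100):
    """Poll a status-returning callable until the job reaches a
    terminal state. `fetch_status` stands in for whatever API or
    admin check you use; this guide documents no specific endpoint."""
    for _ in range(max_polls):
        status = fetch_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_interval)
    raise TimeoutError("crawl job did not finish within the polling budget")
```

In practice the email notification makes polling unnecessary for manual workflows; a loop like this is only useful in automation.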
### Re-crawling Content
To update the knowledge base with fresh content:
- Navigate to the knowledge base
- Launch a new crawl job
- Wait for completion
Only one crawl job can be active per knowledge base at a time. Wait for the current job to complete before starting a new one.
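The one-active-job rule can be expressed as a simple guard. The status names come from the table above; treating Created, Queued, and Started as "active" is an assumption consistent with the lifecycle described:

```python
# Non-terminal statuses, assumed to count as "active" for the
# one-job-per-knowledge-base rule described above.
ACTIVE_STATUSES = {"Created", "Queued", "Started"}

def can_start_crawl(existing_job_statuses):
    """Return True only when no job for this knowledge base is
    still active."""
    return not any(s in ACTIVE_STATUSES for s in existing_job_statuses)
```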
## Connecting to an Assistant
### Step 1: Navigate to the Assistant
- Go to Assistants > Assistants
- Click on the Assistant you want to configure
### Step 2: Set the Knowledge Base
In the Assistant configuration:
- Find the Knowledge base code field
- Enter the code of your knowledge base (e.g., `support-articles`)
- Click Save
When this field is set, the Assistant will automatically:
- Have access to the `retrieve_knowledge` tool
- Receive instructions to use the knowledge base for answering questions
### Step 3: Test the Assistant
Evaluate the Assistant with a test question to verify it retrieves knowledge correctly:
- Use the Playground or API to send a question
- Check that the response references information from the knowledge base
- Verify source URLs are included (if configured)
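An automated version of this check might look like the following. The `answer` and `sources` keys are assumptions about the response shape, not a documented schema; adapt them to whatever your Playground or API actually returns:

```python
from urllib.parse import urlparse

def looks_grounded(response: dict, kb_host: str) -> bool:
    """Check a hypothetically shaped Assistant response for grounding:
    a non-empty answer plus at least one source URL on the knowledge
    base's host. The `answer`/`sources` keys are assumptions."""
    sources = response.get("sources", [])
    return bool(response.get("answer")) and any(
        urlparse(u).netloc == kb_host for u in sources
    )
```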
## Managing Documents
### Viewing Documents
Navigate to Knowledge Assist > Documents to see all indexed documents:
- External ID: Unique identifier from the source
- Title: Document title
- URL: Source URL
- Last Crawled At: When the document was last updated
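For tooling that works with the document listing, these fields map naturally onto a small local model. This is a convenience structure for your own scripts, not the platform's stored schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Document:
    """Local model of the fields shown in the Documents listing.
    The actual stored schema is not documented here."""
    external_id: str
    title: str
    url: str
    last_crawled_at: Optional[datetime] = None
```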
### Manual Document Management
While most documents are created through crawling, you can also:
- Edit document content (title, description, body)
- Delete documents that should be removed from search
Changes are automatically synced to the search index.
## Best Practices
### Content Organization
- Use separate knowledge bases for different content types or domains
- Keep URLs organized by topic or section
- Update regularly to keep content fresh
### URL Selection
- Start with high-level pages that link to related content
- Avoid duplicate content (same content at different URLs)
- Prefer stable URLs that won't change frequently
### Testing
- Test with real questions users might ask
- Verify accuracy of retrieved information
- Check for gaps where knowledge is missing
## Troubleshooting
### Crawl Job Failed
- Check the crawl job details for error messages
- Verify URLs are accessible and return valid content
- Ensure URLs are not private/internal addresses
- Try crawling a single URL to isolate the issue
### No Results Returned
- Verify the knowledge base has been crawled successfully
- Check that documents exist in the knowledge base
- Test with different search queries
- Verify the Assistant's `knowledge_base_code` matches the knowledge base
### Incorrect Results
- Review document content for accuracy
- Consider adding more specific content
- Re-crawl to update stale content
- Adjust the Assistant's instructions for better query handling
### Crawl Not Starting
- Check if another crawl is already in progress
- Verify you have the required permissions
- Ensure Knowledge Assist is enabled for your account
## Related Documentation
- Overview: Conceptual overview of Knowledge Assist
- Developer Guide: API reference and technical details
- Assistants User Guide: Configuring Assistants