
PAM AI Knowledge Sources

This article explains how to add, map, scrape, and manage website knowledge for your AI agent.

Written by Marcel Dordan
Updated over 2 weeks ago

Part 1 — Adding a Knowledge Source: Step-by-Step

The Knowledge Base is what PAM reads to answer your partners' questions. You can add Documents, Text, or Website sources. This guide walks through the full Website flow — the most powerful option, and the one with the most steps.

Step 1 — Open the Knowledge Source modal

Navigate to PAM AI → Knowledge and click the New Knowledge button in the top-right corner of the Knowledge Base table.

PAM AI → Knowledge — click "New Knowledge" to begin

Step 2 — Name your source and choose Website

Give your knowledge source a clear, descriptive name (e.g., "Euler Help Center" or "Partner Program Guide"). Then select Website as the Knowledge Type.

1. Enter a name 2. Select Website as the knowledge type

Step 3 — Add a URL

Type or paste your URL into the URL link field and click Save link. The URL appears as a card below the field, defaulting to Single page mode — PAM will only read that one page, with no discovery of other links.

URL saved in Single page mode (Map site is OFF by default)

Single page mode is a low-cost operation. Only enable Map site when you genuinely need to discover multiple pages from a URL.

Step 4 — Enable Map site to discover links

If you want PAM to discover all the pages available under a URL, toggle Map site to ON on the URL card. The card turns blue and a hint confirms that links will be discovered.

A Discover links button then appears. Click it to start the mapping process.

1. Toggle Map site ON 2. Click Discover links to start

Step 5 — Review Map results and select links

Once mapping completes, a table of discovered links appears. Each row shows the page title, URL path, a Collection toggle, and a Scrape action.

Map results — 1. Summary bar 2. Collection toggle 3. Scrape button

What is Collection?

The Collection toggle (Yes / No) tells PAM whether a link is a parent/index page that contains links to sub-pages, or an individual content page.

  • Yes — the page is a section index, mostly links to other articles. The Scrape button is enabled → click it to fetch child pages.

  • No — the page is a standalone article with actual content. The Scrape button is disabled — the page is indexed as-is.

Check the links you want PAM to index. Use Select all or pick individually. Only checked links are saved.

Step 6 — Try scraping if Map results are incomplete

If the pages you need aren't appearing in the map results, a hint appears at the bottom of the list:

"Not finding what you need?" — click Try scraping to run a second discovery method

Click Try scraping. PAM runs a different discovery method — it loads the page and extracts all visible hyperlinks, often returning results the map missed.

Once complete, a tab switcher appears at the top of the results panel, letting you switch between Map results and Scrape results. Each tab has its own independent selection.

1. Tab switcher 2. Collection = Yes (parent row) 3. Indented child pages discovered by scraping

Selections are independent per tab. When you save, only the links selected in the currently active tab are saved. To include links from both methods, save them as two separate knowledge sources.

Step 7 — Save your selection

When you're happy with your selection, click Save selected. The button shows the count of selected links (e.g., "Save 5 selected"). Saved sources are added to your Knowledge Base as Active by default.

Managing your Knowledge Base

After saving, your sources appear in the PAM AI → Knowledge list. Each entry has a Status toggle and a Category label.

Status (Active/Inactive) and Category columns in the Knowledge Base

Status values:

  • Active — PAM can read this source and use it in answers

  • Inactive — PAM ignores this source (useful for pausing outdated content without deleting it)

Category values:

  • Collection — an index page that groups child pages under it

  • Website — an individual content page indexed directly

Part 2 — Understanding Map vs. Scrape

Map and Scrape are the two methods PAM uses to discover links from a URL. They work differently, return different results, and are best used in combination rather than as alternatives.

Map — Structure-based discovery

Map reads the site's sitemap or link graph to enumerate all declared pages under a given URL. It does not load the full content of each page — it only reads the structural declaration of what exists.

  • Best for: Well-structured sites with a public sitemap (help centers, docs portals, marketing sites)

  • What you get: A clean, comprehensive list of URLs reflecting the site's official hierarchy

  • What it may miss: Pages not in the sitemap — dynamically generated pages, recently published articles, or intentionally excluded pages
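Conceptually, sitemap-based discovery amounts to enumerating the URLs a site declares, without fetching any page content. The sketch below is purely illustrative — PAM's internals are not public, and the URLs and `map_urls` function are hypothetical — but it shows the general idea using only the Python standard library:

```python
# Illustrative sketch of Map-style discovery (NOT PAM's actual code):
# read a sitemap and list every declared URL under a base path.
import xml.etree.ElementTree as ET

# Standard sitemap XML namespace (sitemaps.org protocol).
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def map_urls(sitemap_xml: str, base: str) -> list[str]:
    """Return every <loc> URL in the sitemap that falls under `base`."""
    root = ET.fromstring(sitemap_xml)
    locs = [el.text.strip() for el in root.iter(f"{SITEMAP_NS}loc")]
    return [url for url in locs if url.startswith(base)]

# Hypothetical sitemap with two help-center pages and one unrelated page.
sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://help.example.com/articles/getting-started</loc></url>
  <url><loc>https://help.example.com/articles/billing</loc></url>
  <url><loc>https://example.com/pricing</loc></url>
</urlset>"""

# Only the two URLs under the help-center base path are returned;
# /pricing sits outside it and is filtered out.
print(map_urls(sitemap, "https://help.example.com/"))
```

Note that the function never loads the pages themselves — which is exactly why Map is fast, and why it cannot see pages the sitemap omits.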

Scrape — Content-based discovery

Scrape loads the page and extracts all hyperlinks found in the rendered HTML — navigation menus, inline links, footer links. It reads what is actually visible on the page at scrape time.

  • Best for: Sites without a public sitemap, or when you need to fill gaps left by Map

  • What you get: Links as they appear on the live page — often includes pages the map missed

  • What it may miss: Pages only reachable via search, filters, or deep pagination

  • Side effect: May include navigation/footer links that aren't content pages — review results before saving
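The side effect above follows directly from how link extraction works: every anchor tag in the HTML is collected, whether it sits in the main content or in site chrome. This minimal sketch (hypothetical page, stdlib only — not PAM's actual scraper) makes that visible:

```python
# Illustrative sketch of Scrape-style discovery (NOT PAM's actual code):
# parse a page's HTML and collect the href of every <a> tag found.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects every hyperlink target encountered in the HTML."""
    def __init__(self):
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

# Hypothetical page: one real content link plus typical nav/footer noise.
html = """
<nav><a href="/">Home</a></nav>
<main><a href="/articles/new-feature">New feature guide</a></main>
<footer><a href="/privacy">Privacy</a></footer>
"""

parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/', '/articles/new-feature', '/privacy']
```

All three links come back, including the navigation and footer entries — hence the recommendation to review scrape results before saving.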

How to use them together

The recommended workflow is:

  1. Start with Map — it's faster and gives you the broad structure

  2. Select the pages you want

  3. If you notice gaps, use Try scraping at the bottom of the results panel

  4. Compare Map results and Scrape results using the tab switcher

  5. Save from whichever tab best covers your needs

Depth limits when scraping Collections

When you scrape a Collection from within the results table, its child pages appear indented below. You can mark a child as a Collection and scrape it again. PAM supports up to 3 levels of depth:

  • Level 0 — root links returned by Map or Scrape. Scrape is available if the row is marked as a Collection.

  • Level 1 — children discovered from a Level 0 Collection. Scrape is available if the row is marked as a Collection.

  • Level 2 — children discovered from a Level 1 Collection. Scrape is not available — max depth reached.

At Level 2, the Scrape button is replaced with a Max depth indicator. To access deeper pages, add that specific URL as a separate knowledge source.
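The depth rule can be modeled in a few lines. The site graph, URLs, and `discover` function below are hypothetical — this sketch only illustrates the three-level cutoff, not PAM's actual crawler:

```python
# Illustrative sketch of the 3-level depth limit (NOT PAM's actual code).
MAX_DEPTH = 2  # levels 0, 1, and 2 -> three levels in total

# Hypothetical collection -> children graph.
site = {
    "/docs": ["/docs/setup", "/docs/api"],
    "/docs/api": ["/docs/api/auth"],
    "/docs/api/auth": ["/docs/api/auth/tokens"],  # level 3: unreachable
}

def discover(url: str, level: int = 0) -> list[tuple[str, int]]:
    """Return (url, level) pairs, stopping once max depth is reached."""
    found = [(url, level)]
    if level >= MAX_DEPTH:
        # Mirrors the UI: the Scrape button is replaced by "Max depth",
        # so this page's own children are never fetched.
        return found
    for child in site.get(url, []):
        found.extend(discover(child, level + 1))
    return found

for url, level in discover("/docs"):
    print("  " * level + url)
```

In this model, `/docs/api/auth` still appears (it is a Level 2 child), but its own child `/docs/api/auth/tokens` does not — to reach it, you would add that URL as a separate knowledge source, exactly as the article advises.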

Quick reference

  • Method — Map: reads the sitemap / link graph. Scrape: loads the page and extracts live links.

  • Speed — Map: faster. Scrape: slightly slower.

  • Best for — Map: well-structured sites, broad discovery. Scrape: filling gaps, dynamic or unsitemapped sites.

  • May miss — Map: pages excluded from the sitemap. Scrape: pages behind search or deep pagination.

  • Result volume — Map: high (full site structure). Scrape: medium (links visible on a specific page).

Troubleshooting

  • Map returned no results or very few links

The site may not have a public sitemap, or your URL is too specific (a leaf page rather than a section root). Try scraping instead, or use a higher-level URL such as the section index.

  • Scrape returned irrelevant links (navigation, footer)

This is expected — scrape extracts all visible links. Use the Selected tab to review picks before saving, and deselect anything that isn't relevant content.

  • A specific page isn't in either Map or Scrape results

Add that exact URL as a separate knowledge source without enabling Map site. It will be indexed as a single page directly.

  • I need content from both Map and Scrape

Save your Map selections first, then create a second knowledge source from the same URL and use Scrape. Each becomes an independent, separately manageable entry in your Knowledge Base.

Still Have Questions?

Reach out to our support team and we’ll get back to you as soon as possible!
