14 changes: 3 additions & 11 deletions src/content/docs/creating-custom-feeds.mdx
@@ -11,14 +11,6 @@ When auto-sourcing isn't enough, you can write your own configuration files to c

**Prerequisites:** You should be familiar with the [Getting Started](/getting-started) guide before diving into custom configurations.

<Aside type="note" title="Release note">
This guide tracks the current documentation tree and may describe features that have not yet shipped in the
latest released `html2rss` gem. If you want the newest integrated behavior, prefer running
[`html2rss-web`](/web-application/getting-started) via Docker. The web application ships as a rolling
release and usually reflects the latest development state of the gem first. See [Versioning and
releases](/web-application/reference/versioning-and-releases/) for details.
</Aside>

<Aside type="tip" title="Use this guide when you need more control">
Start with included feeds first. If your site is not covered, try [automatic feed
generation](/web-application/how-to/use-automatic-feed-generation/) next. Reach for a custom config when you
@@ -48,7 +40,7 @@ When auto-sourcing isn't enough, you can write your own configuration files to c
3. **Validate the config** with `html2rss validate your-config.yml`
4. **Render the feed** with `html2rss feed your-config.yml`
5. **Add it to `html2rss-web`** so you can use it through your normal instance
6. **Escalate to `browserless`** if the content is rendered by JavaScript
6. **Escalate request strategy when needed**: use a browser-based rendering strategy only when troubleshooting requires it

This order keeps iteration fast and makes it easier to see whether the problem is the page structure, your
selectors, or the fetch strategy.
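
The workflow above can be sketched with a minimal config you would pass to `html2rss validate` and then `html2rss feed`. The URL and selectors here are placeholders, not a real site; the overall `channel`/`selectors` shape follows the gem's documented config format:

```yaml
# Minimal custom feed config sketch (placeholder URL and selectors).
channel:
  url: "https://example.com/blog"     # the page to scrape (placeholder)
selectors:
  items:
    selector: ".post-card"            # hypothetical selector for one entry
  title:
    selector: "h2"                    # title text inside each item
  link:
    selector: "a"
    extractor: "href"                 # take the link's href, not its text
```

Validate first (`html2rss validate your-config.yml`), then render (`html2rss feed your-config.yml`); only after both succeed is it worth wiring the config into `html2rss-web`.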
@@ -210,7 +202,7 @@ there.
- **No items found?** Check your selectors with browser tools (F12) - the `items.selector` might not match the page structure
- **Invalid YAML?** Use spaces, not tabs, and ensure proper indentation
- **Website not loading?** Check the URL and try accessing it in your browser
- **Missing content?** Some websites load content with JavaScript - you may need to use the `browserless` strategy
- **Missing content?** Try a browser-based rendering strategy during troubleshooting
- **Wrong data extracted?** Verify your selectors are pointing to the right elements

**Need more help?** See our [comprehensive troubleshooting guide](/troubleshooting/troubleshooting) or ask in [GitHub Discussions](https://github.com/orgs/html2rss/discussions).
@@ -234,5 +226,5 @@

- **[Browse existing configs](https://github.com/html2rss/html2rss-configs/tree/master/lib/html2rss/configs)** - See real examples
- **[Join discussions](https://github.com/orgs/html2rss/discussions)** - Connect with other users
- **[Learn about strategies](/ruby-gem/reference/strategy/)** - Decide when to use `browserless`
- **[Learn about strategies](/ruby-gem/reference/strategy/)** - Decide when to use static vs JavaScript/browser-based extraction
- **[Learn advanced features](/ruby-gem/how-to/advanced-features/)** - Take your configs to the next level
2 changes: 2 additions & 0 deletions src/content/docs/getting-started.mdx
@@ -34,6 +34,8 @@ If you are working directly with the gem instead of `html2rss-web`, start with:

<Code code={`html2rss auto https://example.com/blog`} lang="bash" />

For strategy behavior and manual overrides, see the [Strategy reference](/ruby-gem/reference/strategy).

If the target site is unusually redirect-heavy or needs extra follow-up requests, the CLI also supports:

<Code code={`html2rss auto https://example.com/blog --max-redirects 10 --max-requests 5`} lang="bash" />
2 changes: 1 addition & 1 deletion src/content/docs/index.mdx
@@ -43,7 +43,7 @@ Most people should start with the web application:

1. **[Creating Custom Feeds](/creating-custom-feeds)**: write and test your own configs
2. **[Selectors Reference](/ruby-gem/reference/selectors/)**: learn the matching rules
3. **[Strategy Reference](/ruby-gem/reference/strategy/)**: decide when `browserless` is justified
3. **[Strategy Reference](/ruby-gem/reference/strategy/)**: choose the right extraction strategy for static vs JavaScript-heavy pages

### I'm building or integrating

2 changes: 1 addition & 1 deletion src/content/docs/ruby-gem/how-to/advanced-features.mdx
@@ -16,7 +16,7 @@ html2rss uses parallel processing in auto-source discovery. This happens automat
1. **Use appropriate selectors:** More specific selectors reduce processing time
2. **Limit items when possible:** Use CSS selectors that target only the content you need
3. **Cache responses:** The web application caches responses automatically
4. **Choose the right strategy:** Use `faraday` for static content, `browserless` only when JavaScript is required
4. **Choose the right strategy:** Use static HTTP fetching for simple pages, and move to a JavaScript/browser-based extraction strategy when rendering or anti-bot handling is required

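Points 1, 2, and 4 above come together in a single config: a specific item selector limits what gets processed, and the top-level `strategy` key keeps fetching on the fast static path. This is an illustrative sketch with a placeholder URL and a hypothetical selector:

```yaml
strategy: faraday                     # static HTTP fetch for a static page
channel:
  url: "https://example.com/news"     # placeholder
selectors:
  items:
    selector: "main .article-card"    # specific selector narrows processing
  title:
    selector: "h3"
  link:
    selector: "a"
    extractor: "href"
```
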
## Memory Optimization

3 changes: 2 additions & 1 deletion src/content/docs/ruby-gem/how-to/custom-http-requests.mdx
Expand Up @@ -11,7 +11,7 @@ Keep this structure in mind:

- `headers` stays top-level
- `strategy` stays top-level
- request-specific controls such as budgets and Browserless options live under `request`
- request-specific controls such as budgets and strategy-specific options live under `request`

## When You Need Custom Headers

@@ -74,6 +74,7 @@ Request budgets are configured under `request`, not as top-level keys:
- `request.max_redirects` limits redirect hops
- `request.max_requests` limits the total request budget for the feed build
- `request.browserless.*` is reserved for Browserless-only behavior such as preload actions
- `request.botasaurus.*` is reserved for Botasaurus-only behavior such as navigation mode and retries

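Put together, the split described above looks roughly like this. The values are illustrative only; in practice you would set either the `browserless` or the `botasaurus` sub-key, matching the strategy you actually use:

```yaml
headers:
  User-Agent: "Mozilla/5.0 (compatible; example)"  # top-level, shared by all strategies
strategy: browserless                              # top-level, never under request
request:
  max_redirects: 5        # limit on redirect hops
  max_requests: 6         # total request budget for the feed build
  browserless: {}         # reserved for Browserless-only options (e.g. preload actions)
  botasaurus: {}          # reserved for Botasaurus-only options (e.g. navigation mode)
```
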
## Common Use Cases

16 changes: 9 additions & 7 deletions src/content/docs/ruby-gem/how-to/handling-dynamic-content.mdx
@@ -1,15 +1,17 @@
---
title: Handling Dynamic Content
description: "Learn how to handle JavaScript-heavy websites and dynamic content with html2rss. Use browserless strategy for sites that load content dynamically."
description: "Learn how to handle JavaScript-heavy websites and dynamic content with html2rss using browser-based extraction strategies."
---

import { Code } from "@astrojs/starlight/components";

Some websites load their content dynamically using JavaScript. The default `html2rss` strategy might not see this content.
Some websites load their content dynamically using JavaScript. Static fetch paths may not see this content reliably.

## Solution

Use the [`browserless` strategy](/ruby-gem/reference/strategy) to render JavaScript-heavy websites with a headless browser.
Use a [browser-based extraction strategy](/ruby-gem/reference/strategy) when JavaScript-heavy pages do not work with default static fetching.

`browserless` is the common choice for this workflow; `botasaurus` is an alternative browser-based strategy for setups that run a Botasaurus scrape API.

Keep the strategy at the top level and put request-specific options under `request`:

@@ -36,9 +38,9 @@ Keep the strategy at the top level and put request-specific options under `reque
lang="yaml"
/>

## When to Use Browserless
## When to Use Browser-Based Extraction

The `browserless` strategy is necessary when:
A browser-based extraction strategy is necessary when:

- **Content loads after page load** - JavaScript fetches data from APIs
- **Single Page Applications (SPAs)** - React, Vue, Angular apps
@@ -100,13 +102,13 @@ These preload steps can be combined in a single config when a site needs several

## Performance Considerations

The `browserless` strategy is slower than the default `faraday` strategy because it:
Browser-based extraction is slower than default static HTTP fetching because it:

- Launches a headless Chrome browser
- Renders the full page with JavaScript
- Takes more memory and CPU resources

**Use `faraday` for static content** and only switch to `browserless` when necessary.
**Use static HTTP fetching for static content** and switch to browser-based extraction when needed. See the [Strategy Reference](/ruby-gem/reference/strategy) for concrete transports, defaults, and environment requirements.

## Related Topics

38 changes: 33 additions & 5 deletions src/content/docs/ruby-gem/reference/cli-reference.mdx
@@ -22,15 +22,17 @@ Automatically discovers items from a page and prints the generated RSS feed to s
<Code
code={`
html2rss auto https://example.com/articles ; \
html2rss auto https://example.com/app --strategy browserless ; \
html2rss auto https://example.com/app --strategy browserless --max-redirects 5 --max-requests 6 ; \
BOTASAURUS_SCRAPER_URL="http://localhost:4010" html2rss auto https://example.com/protected --strategy botasaurus ; \
html2rss auto https://example.com/articles --items_selector ".post-card"
`}
lang="bash"
/>

Command: `html2rss auto URL`

The default is `--strategy auto`, which tries `faraday`, then `botasaurus`, then `browserless`.

#### URL Surface Guidance For `auto`

`auto` works best when the input URL already exposes a server-rendered list of entries.
@@ -49,23 +51,29 @@ When possible, pass a direct listing/update URL instead of a top-level homepage

#### Failure Outcomes You Should Expect

When no extractable items are found, `auto` now classifies likely causes instead of only returning a generic message:
When no extractable items are found, `auto` classifies likely causes instead of only returning a generic message:

- `blocked surface likely (anti-bot or interstitial)`:
- retry with `--strategy browserless`
- try a more specific public listing URL
- `app-shell surface detected`:
- retry with `--strategy browserless`
- switch to a direct listing/update URL
- `unsupported extraction surface for auto mode`:
- switch to listing/changelog/category URLs
- use explicit selectors in a feed config

Known anti-bot interstitial responses (for example Cloudflare challenge pages) are surfaced explicitly as blocked-surface errors.

If all fallback tiers run but still extract zero items, html2rss raises:

- `No RSS feed items extracted after auto fallback ...`

If failures continue after URL/surface fixes, retry with an explicit browser-based override (`--strategy browserless`), or `--strategy botasaurus` when `BOTASAURUS_SCRAPER_URL` is configured.

Start by changing the input URL to a direct listing/update page, then move to explicit selectors if needed.

#### Browserless Setup And Diagnostics (CLI)

`browserless` is opt-in for CLI usage.
`browserless` is an explicit override for CLI usage.

<Code
code={`
@@ -97,6 +105,24 @@ If you see `Browserless connection failed`, check:

For custom Browserless endpoints, `BROWSERLESS_IO_API_TOKEN` is required.

#### Botasaurus Environment Requirement (CLI)

`botasaurus` is an explicit override for CLI usage and requires `BOTASAURUS_SCRAPER_URL`:

<Code
code={`
BOTASAURUS_SCRAPER_URL="http://localhost:4010" \
html2rss auto https://example.com/updates --strategy botasaurus
`}
lang="bash"
/>

If you see a Botasaurus configuration error, check:

- `BOTASAURUS_SCRAPER_URL` is set
- `BOTASAURUS_SCRAPER_URL` is a valid URL
- the Botasaurus scrape API is reachable from the shell environment running `html2rss`

### Feed

Loads a YAML config, builds the feed, and prints the RSS XML to stdout.
@@ -105,7 +131,9 @@ Loads a YAML config, builds the feed, and prints the RSS XML to stdout.
code={`
html2rss feed single.yml ; \
html2rss feed feeds.yml my-first-feed ; \
html2rss feed single.yml --strategy auto ; \
html2rss feed single.yml --strategy browserless ; \
BOTASAURUS_SCRAPER_URL="http://localhost:4010" html2rss feed single.yml --strategy botasaurus ; \
html2rss feed single.yml --max-redirects 5 --max-requests 6 ; \
html2rss feed single.yml --params id:42 foo:bar
`}
69 changes: 65 additions & 4 deletions src/content/docs/ruby-gem/reference/strategy.mdx
@@ -1,18 +1,26 @@
---
title: Strategy
description: "Learn about different strategies for fetching website content with html2rss. Choose between faraday and browserless strategies for optimal performance."
description: "Learn how html2rss chooses request strategies by default with auto fallback, and when to override with faraday, botasaurus, or browserless."
---

import { Code } from "@astrojs/starlight/components";

The `strategy` key defines how `html2rss` fetches a website's content.

- **`faraday`** (default): Makes a direct HTTP request. It is fast but does not execute JavaScript.
- **`auto`** (default): Tries concrete strategies in order: `faraday` -> `botasaurus` -> `browserless`.
- **`faraday`**: Makes a direct HTTP request. It is fast but does not execute JavaScript.
- **`browserless`**: Renders the website in a headless Chrome browser, which is necessary for JavaScript-heavy sites.
- **`botasaurus`**: Delegates fetching to a Botasaurus scrape API. This is opt-in and requires `BOTASAURUS_SCRAPER_URL`.

`strategy` is a top-level config key. Request-specific controls live under `request`.

Use `faraday` first for direct newsroom/listing/changelog pages. Prefer `browserless` when the target is client-rendered, protected by anti-bot checks, or otherwise requires JavaScript to expose article links.
`auto` falls back to the next strategy when the current attempt errors or extracts zero items. Use explicit `--strategy ...` only when you need to force a specific transport for troubleshooting or reproducibility.

## `auto` (default)

The default strategy chain is:

`faraday` -> `botasaurus` -> `browserless`

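Because `auto` is the default, spelling it out in a config is optional; a sketch of a config that relies on the fallback chain (placeholder URL, mirroring the `auto_source` usage shown later on this page):

```yaml
strategy: auto                          # optional; auto is already the default
channel:
  url: "https://example.com/articles"   # placeholder listing page
auto_source: {}
```

With this config, a static `faraday` fetch is attempted first, and the browser-based strategies are only reached if earlier attempts error or extract zero items.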
## `browserless`

@@ -62,11 +70,12 @@ Set the `strategy` at the top level of your feed configuration and put request c

Use this split consistently:

- `strategy`: selects `faraday` or `browserless`
- `strategy`: selects `auto`, `faraday`, `browserless`, or `botasaurus`
- `headers`: top-level headers shared by all strategies
- `request.max_redirects`: redirect limit for the request session
- `request.max_requests`: total request budget for the whole feed build
- `request.browserless.*`: Browserless-only options
- `request.botasaurus.*`: Botasaurus-only options

Example:

@@ -153,6 +162,58 @@ Check these first:

For custom Browserless websocket endpoints, `BROWSERLESS_IO_API_TOKEN` is mandatory. The local default endpoint (`ws://127.0.0.1:3000`) can use the default local token `6R0W53R135510`.

## `botasaurus`

`botasaurus` delegates page fetching to a Botasaurus scrape API endpoint. This strategy is an explicit opt-in and requires:

- `strategy: botasaurus`
- `BOTASAURUS_SCRAPER_URL` set to your Botasaurus scrape API base URL (for example `http://localhost:4010`)

### Configuration

<Code
code={`
strategy: botasaurus
request:
max_redirects: 5
max_requests: 6
botasaurus:
navigation_mode: auto
max_retries: 2
headless: false
channel:
url: "https://example.com/protected-listing"
auto_source: {}
`}
lang="yml"
/>

Supported `request.botasaurus` options:

- `navigation_mode` (`auto`, `get`, `google_get`, `google_get_bypass`)
- `max_retries` (`0..3`)
- `wait_for_selector`
- `wait_timeout_seconds`
- `block_images`
- `block_images_and_css`
- `wait_for_complete_page_load`
- `headless`
- `proxy`
- `user_agent`
- `window_size` (two integers, for example `[1920, 1080]`)
- `lang`

### Command-Line Usage

<Code
code={`
BOTASAURUS_SCRAPER_URL="http://localhost:4010" \
html2rss auto https://example.com/updates --strategy botasaurus ; \
html2rss feed my_config.yml --strategy botasaurus
`}
lang="sh"
/>

---

For detailed documentation on the Ruby API, see the [official YARD documentation](https://www.rubydoc.info/gems/html2rss).
14 changes: 9 additions & 5 deletions src/content/docs/troubleshooting/troubleshooting.mdx
@@ -39,25 +39,27 @@ If your feed is empty, check the following:
- **URL:** Ensure the `url` in your configuration is correct and accessible.
- **`items.selector`:** Verify that the `items.selector` matches the elements on the page.
- **Website Changes:** Websites change their HTML structure frequently. Your selectors may be outdated.
- **JavaScript Content:** If the content is loaded via JavaScript, use the `browserless` strategy instead of `faraday`.
- **JavaScript Content:** If the content is loaded via JavaScript, use a browser-based rendering strategy.
- **Authentication:** Some sites require authentication — check if you need to add headers or use a different strategy.

### `No scrapers found` Failure Taxonomy (`auto`)

`auto` classifies no-scraper failures with actionable hints:

- **Blocked surface likely (anti-bot or interstitial):**
- retry with `--strategy browserless`
- try a more specific public listing URL
- **App-shell surface detected:**
- retry with `--strategy browserless`
- target a direct listing/update page instead of homepage/shell entrypoint
- **Unsupported extraction surface for auto mode:**
- switch to listing/changelog/category URLs
- or use explicit selectors in YAML config

Known anti-bot interstitial patterns (for example Cloudflare challenge pages) are surfaced as blocked-surface errors instead of silent empty extraction results.

When all auto fallback tiers complete but still extract zero items, html2rss raises `No RSS feed items extracted after auto fallback ...`.

If failures continue after URL/surface fixes, retry with an explicit browser-based override (`--strategy browserless`), or `--strategy botasaurus` when `BOTASAURUS_SCRAPER_URL` is configured.

### Browserless Connection / Setup Failures

If you receive `Browserless connection failed (...)`:
@@ -91,7 +93,9 @@ For custom websocket endpoints, `BROWSERLESS_IO_API_TOKEN` is required.
Common configuration-related errors:

- **`UnsupportedResponseContentType`:** The website returned content that html2rss can't parse (not HTML or JSON).
- **`UnsupportedStrategy`:** The specified strategy is not available. Use `faraday` or `browserless`.
- **`UnsupportedStrategy`:** The specified strategy is not available. Use `auto`, `faraday`, `browserless`, or `botasaurus`.
- **`BOTASAURUS_SCRAPER_URL is required for strategy=botasaurus.`:** Set `BOTASAURUS_SCRAPER_URL` to your Botasaurus scrape API base URL when using `--strategy botasaurus`.
- **`BOTASAURUS_SCRAPER_URL is invalid`:** Fix the URL format and retry.
- **`Configuration must include at least 'selectors' or 'auto_source'`:** You need to specify either manual selectors or enable auto-source.
- **`stylesheet.type invalid`:** Only `text/css` and `text/xsl` are supported for stylesheets.

@@ -101,7 +105,7 @@ If parts of your items (e.g., title, link) are missing, check the following:

- **Selector:** Ensure the selector for the missing part is correct and relative to the `items.selector`.
- **Extractor:** Verify that you are using the correct `extractor` (e.g., `text`, `href`, `attribute`).
- **Dynamic Content:** `faraday` does not render JavaScript. If content loads dynamically, run with `--strategy browserless` (with the Browserless service available) so the page can be rendered before extraction.
- **Dynamic Content:** `faraday` does not render JavaScript. If content loads dynamically, run with `--strategy browserless` (with Browserless available) or `--strategy botasaurus` (with `BOTASAURUS_SCRAPER_URL` configured) so the page can be rendered before extraction.

### Date/Time Parsing Errors
