What is a page tracker?
A page tracker is a utility that lets developers detect and monitor the content of any web page. Use cases range from verifying that a deployed web application loads only the intended content throughout its lifecycle to tracking changes in arbitrary web content when the application lacks native tracking capabilities. In the event of a change, whether caused by a broken deployment or a legitimate content modification, the tracker promptly notifies the user.
Currently, Secutils.dev doesn't support tracking content for web pages protected by application firewalls (WAF) or any form of CAPTCHA. If you require tracking content for such pages, please comment on #secutils/34 to discuss your use case.
On this page, you can find guides on creating and using page trackers.
The Content extractor script is essentially a Playwright scenario that allows you to extract almost anything from the web page as long as it doesn't exceed 1MB in size. For instance, you can include text, links, images, or even JSON.
Create a page tracker
In this guide, you'll create a simple page tracker for the top post on Hacker News:
![Empty page trackers grid](../../img/docs/guides/web_scraping/create_step1_empty.png)
Navigate to Web Scraping → Page trackers and click Track page.
![Page tracker configuration form](../../img/docs/guides/web_scraping/create_step2_form.png)
Configure the tracker and click Save.
| Name | |
| Frequency | |
| Content extractor | |

The tracker appears in the grid.
![Updating the page tracker](../../img/docs/guides/web_scraping/create_step4_update.png)
Expand the tracker and click Update to fetch content.

After a few seconds the tracker fetches and renders the top post as a clickable markdown link.
The content includes only the title of the post. However, as noted at the beginning of this guide, the content extractor script allows you to return almost anything, even the entire HTML of the post.
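The exact extractor script used in this guide isn't shown above, but a minimal sketch might look like the following. Note that the CSS selector is an assumption based on Hacker News's current markup, not something confirmed by this guide:

```javascript
export async function execute(page) {
  // Navigate to Hacker News and wait for the page to load.
  await page.goto('https://news.ycombinator.com');

  // NOTE: the selector is an assumption based on the current Hacker News markup.
  const topPost = page.locator('.athing .titleline > a').first();
  const title = await topPost.textContent();
  const href = await topPost.getAttribute('href');

  // Return the top post as a clickable markdown link.
  return `[${title}](${href})`;
}
```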
Detect changes with a page tracker
In this guide, you'll create a page tracker and test it with changing content:
![Empty page trackers grid](../../img/docs/guides/web_scraping/detect_step1_empty.png)
Navigate to Web Scraping → Page trackers and click Track page.
![Page tracker configuration form](../../img/docs/guides/web_scraping/detect_step2_form.png)
Configure the tracker with an hourly frequency and click Save.
| Name | |
| Frequency | |
| Content extractor | |

The tracker appears in the grid with bell and timer icons, indicating it is configured for regular checks with notifications.
Expand the tracker's row and click the Update button to take the first snapshot of the web page content. After a few seconds, the tracker fetches the current Berlin time and renders nicely formatted markdown with a link to a world clock website:
Berlin time is 01:02:03
With this configuration, the tracker will check the content of the web page every hour and notify you if any changes are detected.
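The exact script from this guide isn't shown; a minimal sketch that produces similar output is below. The world clock link target is an assumption, not necessarily the one used in the screenshots:

```javascript
export async function execute() {
  // Format the current time in the Europe/Berlin time zone as HH:MM:SS.
  const berlinTime = new Intl.DateTimeFormat('en-GB', {
    timeZone: 'Europe/Berlin',
    hour: '2-digit',
    minute: '2-digit',
    second: '2-digit',
    hourCycle: 'h23',
  }).format(new Date());

  // NOTE: the link target is an assumption for illustration purposes.
  return `[Berlin time](https://www.timeanddate.com/worldclock/germany/berlin) is **${berlinTime}**`;
}
```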
Track web page resources
You can also use the page tracker utility to detect and track resources of any web page. This functionality falls under the category of synthetic monitoring tools and helps ensure that the deployed application loads only the intended web resources (JavaScript and CSS) during its lifetime. If any unintended changes occur, which could result from a broken deployment or malicious activity, the tracker will promptly notify developers or IT personnel about the detected anomalies.
Additionally, security researchers who focus on discovering potential vulnerabilities in third-party web applications can use page trackers to be notified when the application's resources change. This allows them to identify if the application has been upgraded, providing an opportunity to re-examine it and potentially discover new vulnerabilities.
Extracting all page resources isn't as straightforward as it might seem, so it's recommended to use the utilities provided by Secutils.dev, as demonstrated in the examples in the following sections. The utilities return CSS and JS resource descriptors with the following interfaces:
```typescript
/**
 * Describes an external or inline resource.
 */
interface WebPageResource {
  /**
   * Resource type, either 'script' or 'stylesheet'.
   */
  type: 'script' | 'stylesheet';
  /**
   * The URL the resource is loaded from, if it's an external resource.
   */
  url?: string;
  /**
   * Resource content descriptor (size and digest), if available.
   */
  content: WebPageResourceContent;
}

/**
 * Describes resource content.
 */
interface WebPageResourceContent {
  /**
   * Resource content data: either the raw content or a hash such as the
   * Trend Micro Locality Sensitive Hash (TLSH) or a simple SHA-1 digest.
   */
  data: { raw: string } | { tlsh: string } | { sha1: string };
  /**
   * Resource content size, in bytes.
   */
  size: number;
}
```
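Given these interfaces, a content extractor can post-process the descriptors however it likes. As an illustrative sketch (this helper is not part of the built-in Secutils.dev utilities), a script could summarize a resource list into aggregate counts and a total size:

```javascript
// Illustrative helper (not part of the built-in utilities): summarize
// a list of WebPageResource descriptors into counts and a total size.
function summarizeResources(resources) {
  const summary = { scripts: 0, stylesheets: 0, inline: 0, totalBytes: 0 };
  for (const resource of resources) {
    if (resource.type === 'script') {
      summary.scripts += 1;
    } else {
      summary.stylesheets += 1;
    }
    // Resources without a URL are inline (embedded directly in the page).
    if (!resource.url) {
      summary.inline += 1;
    }
    summary.totalBytes += resource.content.size;
  }
  return summary;
}
```

Returning such a summary from `execute` would track only aggregate changes rather than individual resources.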
In this guide, you'll create a simple page tracker to track resources of the Hacker News:
![Empty page trackers grid](../../img/docs/guides/web_scraping/resources_step1_empty.png)
Navigate to Web Scraping → Page trackers and click Track page.
![Page tracker configuration form](../../img/docs/guides/web_scraping/resources_step2_form.png)
Configure the tracker with a resource-tracking content extractor script and click Save.
| Name | |
| Content extractor | |
![Newly created page tracker in the grid](../../img/docs/guides/web_scraping/resources_step3_created.png)
Expand the tracker row and click Update to fetch the page resources.
![Fetched resources in the resources grid](../../img/docs/guides/web_scraping/resources_step4_result.png)
Once the tracker has fetched the resources, they appear in the resources grid.
It's hard to believe, but as of the time of writing, Hacker News continues to rely on just a single script and stylesheet!
Filter web page resources
In this guide, you will create a page tracker for the GitHub home page and learn how to track only specific resources:
![Empty page trackers grid](../../img/docs/guides/web_scraping/filter_step1_empty.png)
Navigate to Web Scraping → Page trackers and click Track page.
![Page tracker configuration form](../../img/docs/guides/web_scraping/filter_step2_form.png)
Configure a tracker for the GitHub home page and click Save.
| Name | |
| Content extractor | |
![Newly created page tracker in the grid](../../img/docs/guides/web_scraping/filter_step3_created.png)
Expand the tracker row and click Update to fetch the page resources.
![Fetched resources in the resources grid](../../img/docs/guides/web_scraping/filter_step4_result.png)
Once the tracker has fetched the resources, they appear in the resources grid.
You'll notice that there are nearly 100 resources used for the GitHub home page! For large and complex pages like this one, it's recommended to create multiple separate trackers, e.g. one per logical functionality domain, to avoid overwhelming the developer with too many resources and, consequently, too many changes to track. Let's say we're only interested in "vendored" resources.
To filter out all resources that are not "vendored", edit the tracker and update the Content extractor script:
```javascript
export async function execute(page, { previousContent }) {
  // Load built-in utilities for tracking resources.
  const { resources: utils } = await import(`data:text/javascript,${encodeURIComponent(
    await (await fetch('https://secutils.dev/retrack/utilities.js')).text()
  )}`);

  // Start tracking resources.
  utils.startTracking(page);

  // Navigate to the target page.
  await page.goto('https://github.com');
  await page.waitForTimeout(1000);

  // Stop tracking and return resources.
  const allResources = await utils.stopTracking(page);

  // Filter out all resources that are not "vendored".
  const resources = {
    scripts: allResources.scripts.filter((resource) => resource.url?.includes('vendors')),
    styles: allResources.styles.filter((resource) => resource.url?.includes('vendors')),
  };

  // Format resources as a table,
  // showing diff status if previous content is available.
  return utils.formatAsTable(
    previousContent
      ? utils.setDiffStatus(previousContent.original.source, resources)
      : resources
  );
}
```
Save the tracker and click the Update button to re-fetch web page resources. Once the tracker has re-fetched resources, only about half of the previously extracted resources will appear in the resources grid.
Detect changes in web page resources
In this guide, you will create several webhook responders that emulate JavaScript files and a simple HTML page, then set up a page tracker to detect changes in the resources loaded by that page across revisions.
![Empty responders grid](../../img/docs/guides/web_scraping/detect_resources_step1_responders_empty.png)
Navigate to Webhooks → Responders and click Create responder.
![Responder configuration form](../../img/docs/guides/web_scraping/detect_resources_step2_no_changes_form.png)
Create a JavaScript responder that will remain unchanged across revisions and click Save.
| Name | |
| Path | |
| Headers | |
| Body | |
![Responder configuration form](../../img/docs/guides/web_scraping/detect_resources_step3_changed_form.png)
Create a JavaScript responder that will change across revisions and click Save.
| Name | |
| Path | |
| Headers | |
| Body | |
![Responder configuration form](../../img/docs/guides/web_scraping/detect_resources_step4_removed_form.png)
Create a JavaScript responder that will be removed across revisions and click Save.
| Name | |
| Path | |
| Headers | |
| Body | |
![Responder configuration form](../../img/docs/guides/web_scraping/detect_resources_step5_added_form.png)
Create a JavaScript responder that will be added in a new revision and click Save.
| Name | |
| Path | |
| Headers | |
| Body | |
![Responder configuration form](../../img/docs/guides/web_scraping/detect_resources_step6_html_form.png)
Create a responder that serves a simple HTML page referencing the first three scripts (except added.js) and click Save.
| Name | |
| Path | |
| Headers | |
| Body | |
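The responder bodies aren't shown above; as a hedged sketch, the body of the track-me.html responder might look like the following. The relative script paths are assumptions and must match the paths of the three script responders created earlier:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Track me</title>
    <!-- NOTE: paths are assumptions; they must match your responder paths. -->
    <script src="./no-changes.js" defer></script>
    <script src="./changed.js" defer></script>
    <script src="./removed.js" defer></script>
  </head>
  <body>Hello World</body>
</html>
```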

All five responders appear in the grid with their unique URLs.
Navigate to Web Scraping → Page trackers and click Track page.
Configure a tracker for the track-me.html responder and click Save.
| Name | |
| Content extractor | |
Expand the tracker row and click Update to take the first snapshot of the web page resources.

The initial resources appear in the grid: three scripts with no diff status yet.
![Resource diff statuses in the grid](../../img/docs/guides/web_scraping/detect_resources_step12_diff.png)
Edit the responders (replace removed.js with added.js in track-me.html and update the body of changed.js), then click Update again to see the diff statuses: Added, Changed, and Removed.
You can configure the tracker with a schedule (e.g. Daily) and enable Notifications so that Secutils.dev automatically checks for resource changes and alerts you when they occur.
Annex: Content extractor script examples
In this section, you can find examples of content extractor scripts that extract various content from web pages. Essentially, the script defines a function with the following signature:
```typescript
/**
 * Content extractor script that extracts content from a web page.
 * @param page - The Playwright `Page` object representing the web page.
 * See more details at https://playwright.dev/docs/api/class-page.
 * @param context.previousContent - The content extracted during
 * the previous execution, if available.
 * @returns {Promise<unknown>} - The extracted content to be tracked.
 */
export async function execute(
  page: Page,
  context: { previousContent?: { original: unknown } }
): Promise<unknown>;
```
Track markdown-style content
The script can return any valid markdown-style content that Secutils.dev will happily render in preview mode.
```javascript
export async function execute() {
  return `
## Text
### h3 Heading
#### h4 Heading
**This is bold text**
*This is italic text*
~~Strikethrough~~

## Lists
* Item 1
* Item 2
  * Item 2a

## Code
\`\`\` js
const foo = (bar) => {
  return bar++;
};
console.log(foo(5));
\`\`\`

## Tables
| Option   | Description   |
| -------- | ------------- |
| Option#1 | Description#1 |
| Option#2 | Description#2 |

## Links
[Link Text](https://secutils.dev)

## Emojis
:wink: :cry: :laughing: :yum:
`;
}
```
Track API response
You can also use a page tracker to track API responses (until a dedicated API tracker utility is released). For instance, you can track the response of the JSONPlaceholder API:
Ensure that the web page from which you're making a fetch request allows cross-origin requests. Otherwise, you'll get an error.
```javascript
export async function execute() {
  const { url, method, headers, body } = {
    url: 'https://jsonplaceholder.typicode.com/posts',
    method: 'POST',
    headers: { 'Content-Type': 'application/json; charset=UTF-8' },
    body: JSON.stringify({ title: 'foo', body: 'bar', userId: 1 }),
  };

  const response = await fetch(url, { method, headers, body });
  return {
    status: response.status,
    headers: Object.fromEntries(response.headers.entries()),
    body: (await response.text()) ?? '',
  };
}
```
Use previous content
In the content extractor script, you can use the context.previousContent.original property to access the content extracted during the previous execution:
```javascript
export async function execute(page, { previousContent }) {
  // Update the counter based on the previous content.
  return (previousContent?.original ?? 0) + 1;
}
```
Use external content extractor script
Sometimes, your content extractor script can become large and complicated, making it hard to edit in the Secutils.dev UI. In such cases, you can develop and deploy the script separately in any development environment you prefer. Once the script is deployed, you can simply use its URL as the script content:
```javascript
// This code assumes your script exports a function named `execute`.
https://secutils-dev.github.io/secutils-sandbox/content-extractor-scripts/markdown-table.js
```
You can find more examples of content extractor scripts at the Secutils.dev Sandbox repository.
Annex: Custom cron schedules
Custom cron schedules are available only for Pro subscription users.
In this section, you can learn more about the supported cron expression syntax used to configure custom tracking schedules. A cron expression is a string consisting of six or seven subexpressions that describe individual details of the schedule. These subexpressions, separated by white space, can contain any of the allowed values with various combinations of the allowed characters for that subexpression:
| Subexpression | Mandatory | Allowed values | Allowed special characters |
|---|---|---|---|
| Seconds | Yes | 0-59 | `*` `/` `,` `-` |
| Minutes | Yes | 0-59 | `*` `/` `,` `-` |
| Hours | Yes | 0-23 | `*` `/` `,` `-` |
| Day of month | Yes | 1-31 | `*` `/` `,` `-` `?` |
| Month | Yes | 1-12 or JAN-DEC | `*` `/` `,` `-` |
| Day of week | Yes | 1-7 or SUN-SAT | `*` `/` `,` `-` `?` |
| Year | No | 1970-2099 | `*` `/` `,` `-` |
Following the described cron syntax, you can create almost any schedule you want as long as the interval between two consecutive checks is longer than 10 minutes. Below are some examples of supported cron expressions:
| Expression | Meaning |
|---|---|
| `0 0 12 * * ?` | Run at 12:00 (noon) every day |
| `0 15 10 ? * *` | Run at 10:15 every day |
| `0 15 10 * * ?` | Run at 10:15 every day |
| `0 15 10 * * ? *` | Run at 10:15 every day |
| `0 15 10 * * ? 2025` | Run at 10:15 every day during the year 2025 |
| `0 0/10 14 * * ?` | Run every 10 minutes from 14:00 to 14:59, every day |
| `0 10,44 14 ? 3 WED` | Run at 14:10 and at 14:44 every Wednesday in March |
| `0 15 10 ? * MON-FRI` | Run at 10:15 from Monday to Friday |
| `0 11 15 8 10 ?` | Run every October 8 at 15:11 |
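To build intuition for the list (`,`), range (`-`), and step (`/`) characters, here is a small illustrative helper (not part of Secutils.dev and deliberately simplified: it handles only numeric subexpressions, not named months or weekdays) that expands a single subexpression into the concrete values it matches:

```javascript
function expandCronField(field, min, max) {
  // '*' (any value) and '?' (no specific value) match the entire range.
  if (field === '*' || field === '?') {
    return Array.from({ length: max - min + 1 }, (_, i) => min + i);
  }
  const values = new Set();
  // A field can be a comma-separated list of single values, ranges, and steps.
  for (const part of field.split(',')) {
    const [rangePart, stepPart] = part.split('/');
    const step = stepPart ? Number(stepPart) : 1;
    let start;
    let end;
    if (rangePart === '*') {
      [start, end] = [min, max];
    } else if (rangePart.includes('-')) {
      [start, end] = rangePart.split('-').map(Number);
    } else {
      start = Number(rangePart);
      // A bare value with a step (e.g. '0/10') matches up to the field maximum.
      end = stepPart ? max : start;
    }
    for (let value = start; value <= end; value += step) {
      values.add(value);
    }
  }
  return [...values].sort((a, b) => a - b);
}

console.log(expandCronField('0/10', 0, 59)); // → [ 0, 10, 20, 30, 40, 50 ]
console.log(expandCronField('10,44', 0, 59)); // → [ 10, 44 ]
```

For example, the minutes field of `0 0/10 14 * * ?` expands to 0, 10, 20, 30, 40, and 50, which is why that expression fires six times within the 14:00 hour.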
To assist you in creating custom cron schedules, Secutils.dev lists five upcoming scheduled times for the specified schedule: