# Secutils.dev Documentation

> Documentation and guides for Secutils.dev - an open-source, versatile toolbox for security-minded engineers. Full Secutils.dev documentation (all guides) in one markdown document for LLM and offline use.

## Digital Certificates ➔ Certificate templates

# What is a digital certificate?

A digital certificate, also known as an SSL/TLS certificate or public key certificate, is a digital document that verifies the identity of a website, server, or other digital entity, and enables secure communication between two parties by encrypting data sent over the internet. It contains information about the identity of the certificate holder, such as their name and public key, and is issued by a trusted third-party Certificate Authority (CA).

There are different types of digital certificates that can be generated with various parameters. Certificates can be password-protected, can be bundled with their keys, can rely on different cryptographic algorithms, and eventually expire. Considering these factors, it can be challenging to develop and test web applications that rely on digital certificates. On this page, you can find guides on creating digital certificate templates with parameters that match your specific needs.

## Generate a key pair for an HTTPS server

In this guide you'll create a template for generating a private key and self-signed certificate for a Node.js HTTPS server:

1. Navigate to **Digital Certificates → Certificate templates** and click **Create template**.
2. Fill in the **General** section of the template form: name, key algorithm, key size, and signature algorithm.
3. Scroll down to **Extensions** and configure the certificate type, key usage, and extended key usage.
4. Scroll down to **Distinguished Name (DN)** and set the common name to `localhost`. Click **Save** when done.
5. The template appears in the grid.
6. Click the template's **Generate** button, choose the format and passphrase, and click **Generate** to download the certificate bundle:
   - **Format**: PKCS#12
   - **Passphrase**: `pass`

Use the downloaded `https-server.pfx` file to configure a Node.js HTTPS server:

```js title="index.js"
(async function main() {
  const https = await import('node:https');
  const fs = await import('node:fs');

  const httpsOptions = {
    // highlight-start
    // The name of the certificate bundle and the passphrase that was set in the generation dialog
    pfx: fs.readFileSync('https-server.pfx'),
    passphrase: 'pass'
    // highlight-end
  };

  https.createServer(httpsOptions, (req, res) => {
    res.writeHead(200);
    res.end('Hello World\n');
  }).listen(8000);

  console.log(`Listening on https://localhost:8000`);
})();
```

Run the server and query it with **cURL** or a similar HTTP client:

```bash title="Example commands"
# Start server
$ node index.js
Listening on https://localhost:8000

# Query the server with cURL
$ curl -kv https://localhost:8000
*   Trying 127.0.0.1:8000...
...
* Server certificate:
*  subject: CN=localhost; C=US; ST=California; L=San Francisco; O=CA Issuer, Inc
*  ...
*  issuer: CN=localhost; C=US; ST=California; L=San Francisco; O=CA Issuer, Inc
*  SSL certificate verify result: self-signed certificate (18), continuing anyway.
...
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.88.1
> ...
< HTTP/1.1 200 OK
< ....
<
Hello World
```

## Export a private key as a JSON Web Key (JWK)

In this guide, you will generate a private key in PKCS#8 format and then export it to a JSON Web Key (JWK) using a custom responder and the browser's built-in Web Crypto API:

1. Navigate to **Digital Certificates → Certificate templates** and click **Create template**.
2. Fill in the **General** section with ECDSA key parameters: name, key algorithm, curve, and signature algorithm. Click **Save** when done.
3. The template appears in the grid.
4. Click the template's **Generate** button, choose the **PKCS#8 (private key only)** format, and click **Generate** to download the private key as `jwk.p8`.
5. Navigate to **Webhooks → Responders**, click **Create responder**, and configure a responder that serves an HTML page with the Web Crypto API to convert PKCS#8 keys to JWK. Click **Save**.
6. The responder appears in the grid with its unique URL.
7. Click the responder URL, upload the `jwk.p8` file, and observe the JSON Web Key (JWK) derived from your ECDSA key.

## Import a certificate template from a string

In this guide you'll import a certificate template by pasting PEM-encoded certificate content:

1. Navigate to **Digital Certificates → Certificate templates** and click **Import template**.
2. Paste one or more PEM-encoded certificates into the **PEM content** field and click **Parse certificates**.
3. Review the parsed certificates. Expand each certificate to view its details. Select which certificates to import and set template names. Click **Import** to create the certificate templates.
4. The imported templates appear in the grid.

You can also import certificates by selecting the **File** source and uploading a `.pem`, `.crt`, `.cer`, or `.cert` file.

## Import a certificate template from URL

In this guide you'll import a certificate template by extracting the TLS certificate chain from a website:

1. Navigate to **Digital Certificates → Certificate templates** and click **Import template**.
2. Select **URL** as the source, enter an HTTPS URL (e.g., `https://test.example.com`), and click the **Fetch** button.
3. The fetched PEM content appears in the text area. Click **Parse certificates** to parse the certificates.
4. Review the fetched certificates. Expand each certificate to view its details. Select which certificates to import and set template names. Click **Import** to create the certificate templates.
5. The imported templates appear in the grid.

## Share a certificate template

This guide will walk you through sharing a certificate template publicly, allowing anyone on the internet to view it:

1. Navigate to **Digital Certificates → Certificate templates**, pick the template you'd like to share, and click **Share**.
2. Toggle the **Share template** switch to the on position, then click the **Copy link** button to copy a unique shared template link to your clipboard.
3. To stop sharing the template, switch the **Share template** toggle to the off position.

---

## Digital Certificates ➔ Private keys

# What is a private key?

A private key is a sensitive piece of cryptographic information used in asymmetric encryption systems such as RSA or ECC (Elliptic Curve Cryptography). In these systems, a pair of keys is used: a public key and a private key. The private key is kept secret and is known only to the owner. It's used to decrypt data that has been encrypted with its corresponding public key. Additionally, the private key is used to sign digital messages, ensuring that they came from the owner of the private key and have not been tampered with. On this page, you can find guides on creating private keys with parameters that match your specific needs.
## Generate an RSA private key

In this guide, you'll create the simplest possible RSA key and verify its validity with the OpenSSL command-line tool:

1. Navigate to **Digital Certificates → Private keys** and click **Create private key**.
2. Fill in the **General** section of the private key form: name, key algorithm, and key size.
3. Scroll down to **Security** and set encryption to **None**. Click **Save** when done.
4. The RSA key appears in the grid.
5. Click the key's **Export** button, choose the format, and click **Export** to download the key as `RSA.pem`:
   - **Format**: PEM
   - **Encryption**: None

Use the OpenSSL command-line tool to view the key's content and verify its validity:

```bash title="View the RSA key's content"
$ openssl rsa -in ~/Downloads/RSA.pem | openssl pkey -inform PEM -text -noout
writing RSA key
Private-Key: (2048 bit, 2 primes)
modulus:
    00:c4:96:a7:80:e4:45:19:47:3f:55:48:0e:eb:da:
    ...
publicExponent: 65537 (0x10001)
privateExponent:
    2d:c0:94:3e:4a:a2:0c:46:89:26:5b:6d:61:95:cd:
    ...
prime1:
    00:f9:9f:52:03:48:2d:bf:a7:c1:9a:e5:68:51:7d:
    ...
prime2:
    00:c9:9c:75:f6:ab:49:4a:6b:85:6b:61:cc:04:20:
    ...
exponent1:
    00:be:75:85:49:e3:c4:a4:3b:07:49:7c:48:40:05:
    ...
exponent2:
    00:94:db:de:49:8b:fc:e8:62:ed:36:f5:15:92:f2:
    ...
coefficient:
    27:bf:26:e8:31:41:0c:2f:88:c7:5e:2d:af:46:c4:
    ...
```

## Generate an ECDSA elliptic curve private key

In this guide, you'll generate an ECDSA elliptic curve private key protected by a passphrase:

1. Navigate to **Digital Certificates → Private keys** and click **Create private key**.
2. Fill in the **General** section with ECDSA key parameters: name, key algorithm, and curve name.
3. Scroll down to **Security** and set a passphrase to protect the key. Click **Save** when done.
4. The ECDSA key appears in the grid.
5. Click the key's **Export** button, choose the format, enter the current and export passphrases, and click **Export** to download the key as `ECC.p8`:
   - **Format**: PKCS#8
   - **Current passphrase**: `pass`
   - **Export passphrase**: `pass-export`
   - **Repeat export passphrase**: `pass-export`

Use the OpenSSL command-line tool to view the key's content and verify its validity:

```bash title="View the ECDSA key's content"
$ openssl pkcs8 -inform DER -in ~/Downloads/ECC.p8 -passin pass:pass-export | \
    openssl pkey -inform PEM -text -noout
Private-Key: (384 bit)
priv:
    8c:30:d7:b2:df:7c:9d:75:cb:a0:ec:93:53:ea:91:
    ...
pub:
    04:f8:94:f2:28:f7:be:e7:75:ff:8d:3a:0d:c9:d3:
    ...
ASN1 OID: secp384r1
NIST CURVE: P-384
```

---

## API Keys

Secutils.dev supports **API keys** for programmatic access to the REST API.
API keys are ideal for CI/CD pipelines, automation scripts, and AI agents that need to interact with Secutils.dev without a browser session.

## Key features

- **Opaque tokens** - each key is a random token prefixed with `su_ak_` for easy identification
- **Optional expiration** - keys can be set to expire on a specific date, or never
- **One-time display** - the plaintext token is shown only at creation and regeneration; it cannot be retrieved afterward
- **Independent of sessions** - API keys work without cookies or browser login

## Managing API keys

Navigate to **Settings → Security** and click **Manage API keys** to open the API keys management panel.

1. Navigate to **Settings → Security** and click **Manage API keys**.
2. The API keys panel opens in its empty state. Click **Create API key** to create your first key.
3. Enter a **Name** for the key and optionally set an **Expires** date. Click **Save** to generate the key.
4. The token is displayed once. Copy it now - it cannot be retrieved again after you dismiss this message.
5. The API keys list shows your keys with their expiration and usage information. Use the actions menu to **Edit**, **Regenerate**, or **Delete** a key.

## Using API keys

Include the API key in the `Authorization` header of your HTTP requests:

```bash
curl -H "Authorization: Bearer su_ak_your_token_here" \
  https://secutils.dev/api/user/api_keys
```

API keys grant access to all user-facing API endpoints. They **cannot** be used to manage other API keys (the server returns 403 for API-key-management endpoints when authenticated with an API key).

## Key actions

### Rename

Use the **Edit** action to change a key's name. The name is for your reference only and does not affect the key's functionality.

### Regenerate

The **Regenerate** action creates a new token and immediately invalidates the old one. You can optionally set a new expiration date during regeneration. This is the only way to change expiration after creation.

:::warning
Regenerating a key is irreversible. Any application using the old token will immediately lose access.
:::

### Delete

The **Delete** action permanently removes the key. This cannot be undone.

## Expiration

- Keys created without an expiration date are valid indefinitely
- Expired keys remain visible in the list with a **red** expiration indicator
- Expired keys cannot be used for authentication - the server rejects them
- To extend an expired key, use **Regenerate** and set a new expiration date

## Limits

- Up to **30** API keys per user (configurable via `security.max_user_api_keys`)
- Key names must be unique and at most **128** characters
- Tokens are approximately **70** characters long (`su_ak_` prefix + 64 hex characters)

---

## Deno Sandbox Runtime

Webhook [responder scripts](/docs/guides/webhooks#annex-responder-script-examples) and API tracker [extractor](/docs/guides/web_scraping/api#annex-extractor-script) / [configurator](/docs/guides/web_scraping/api#annex-configurator-script) scripts run inside a **restricted [Deno](https://deno.com/) runtime**.
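For orientation, here is a minimal responder-style script of the kind that runs in this sandbox. In the real runtime, `context` and `Deno.core` are injected globals; this standalone sketch stubs them (an assumption made only so the snippet can run anywhere):

```javascript
// Stubs standing in for the sandbox-injected globals. In the real sandbox,
// `Deno.core.decode` and `context.body` are provided - do not define them yourself.
const Deno = { core: { decode: (bytes) => new TextDecoder().decode(bytes) } };
const context = { body: [104, 105] }; // UTF-8 bytes of "hi"

// The responder script itself: echo the request body back as JSON.
const result = (() => {
  const text = context.body.length > 0
    ? Deno.core.decode(new Uint8Array(context.body))
    : '';
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: { echoed: text },
  };
})();

console.log(JSON.stringify(result.body)); // {"echoed":"hi"}
```

Inside the sandbox, only the IIFE part is needed - the runtime supplies the globals and sends the returned object as the HTTP response.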
The sandbox intentionally limits what code can do:

- **Limited network access** - `fetch`, `XMLHttpRequest`, and standard network APIs are unavailable. Responder scripts can make outbound HTTP requests through [`Deno.core.ops.op_proxy_request()`](#op_proxy_request) with built-in SSRF protection.
- **No file-system access** - reading or writing files is not possible.
- **No `btoa` / `atob`** - the standard Base64 globals are not exposed (see [Base64 encoding](#base64-encoding) below for a workaround).

What **is** available:

- All standard JavaScript built-ins (`JSON`, `Math`, `Date`, `Map`, `Set`, `Array`, `TextEncoder`, `TextDecoder`, `Promise`, etc.).
- The `Deno.core` utility namespace documented on this page.
- The global `context` variable injected by Secutils.dev with request/response data (see the individual guide pages for its shape).

:::tip Full API list
The complete list of `Deno.core` members exposed in the sandbox is published at [**demo.webhooks.dev.secutils.dev/deno-apis**](https://demo.webhooks.dev.secutils.dev/deno-apis). This page covers only the subset that is most useful when writing scripts.
:::

## Encoding and decoding

These are the most commonly used APIs. Every script that builds or reads a binary body will use at least `encode` and `decode`.

### `Deno.core.encode(text)`

Encodes a JavaScript string into a `Uint8Array` using **UTF-8**:

```javascript
const bytes = Deno.core.encode('Hello, world!');
// Uint8Array(13) [72, 101, 108, 108, 111, 44, 32, 119, 111, 114, 108, 100, 33]
```

Use it whenever a script needs to explicitly encode a string to binary:

```javascript
(() => {
  const bytes = Deno.core.encode('raw binary payload');
  return { body: bytes };
})();
```

:::tip
Scripts can also return `body` as a plain string, object, or array - the runtime auto-converts them. See [Body auto-conversion](#body-auto-conversion) below.
:::

### `Deno.core.decode(buffer)`

Decodes a `Uint8Array` back into a JavaScript string using **UTF-8**:

```javascript
const text = Deno.core.decode(new Uint8Array([72, 101, 108, 108, 111]));
// 'Hello'
```

Typical use - parsing an incoming JSON body in a responder or extractor:

```javascript
const body = context.body.length > 0
  ? JSON.parse(Deno.core.decode(new Uint8Array(context.body)))
  : {};
```

### `Deno.core.encodeBinaryString(buffer)`

Converts a `Uint8Array` into a **binary string** where each byte maps to a single Latin-1 character. This is the same encoding that `String.fromCharCode` would produce byte-by-byte, but much faster for large buffers:

```javascript
const bin = Deno.core.encodeBinaryString(
  new Uint8Array([0x48, 0x65, 0x6c, 0x6c, 0x6f])
);
// 'Hello'
```

This function is particularly useful as a building block for [Base64 encoding](#base64-encoding).

## Base64 encoding

The sandbox does not expose the standard `btoa` and `atob` globals. You can implement Base64 encoding and decoding with pure JavaScript and the APIs above.

### Encode (btoa replacement)

```javascript
function toBase64(input) {
  const CHARS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
  const bytes = typeof input === 'string' ? Deno.core.encode(input) : input;
  let result = '';
  for (let i = 0; i < bytes.length; i += 3) {
    const a = bytes[i];
    const b = i + 1 < bytes.length ? bytes[i + 1] : 0;
    const c = i + 2 < bytes.length ? bytes[i + 2] : 0;
    result +=
      CHARS[a >> 2] +
      CHARS[((a & 3) << 4) | (b >> 4)] +
      (i + 1 < bytes.length ? CHARS[((b & 15) << 2) | (c >> 6)] : '=') +
      (i + 2 < bytes.length ? CHARS[c & 63] : '=');
  }
  return result;
}
```

### Decode (atob replacement)

```javascript
function fromBase64(base64) {
  const CHARS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
  const stripped = base64.replace(/=+$/, '');
  const bytes = [];
  for (let i = 0; i < stripped.length; i += 4) {
    const a = CHARS.indexOf(stripped[i]);
    const b = CHARS.indexOf(stripped[i + 1]);
    const c = CHARS.indexOf(stripped[i + 2]);
    const d = CHARS.indexOf(stripped[i + 3]);
    bytes.push((a << 2) | (b >> 4));
    if (c >= 0) bytes.push(((b & 15) << 4) | (c >> 2));
    if (d >= 0) bytes.push(((c & 3) << 6) | d);
  }
  return new Uint8Array(bytes);
}
```

### Example: return a Base64-encoded value

```javascript
(() => {
  const payload = JSON.stringify({ user: 'alice', role: 'admin' });
  const encoded = toBase64(payload);
  return { body: encoded };
})();
```

## String utilities

### `Deno.core.byteLength(str)`

Returns the **UTF-8 byte length** of a string without allocating a `Uint8Array`. This is handy when you need to set a `Content-Length` header:

```javascript
(() => {
  const body = JSON.stringify({ status: 'ok' });
  return {
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': String(Deno.core.byteLength(body)),
    },
    body,
  };
})();
```

## Debugging

### `Deno.core.print(msg, isErr)`

Prints a string to the runtime's output stream. Pass `true` as the second argument to write to stderr instead of stdout:

```javascript
Deno.core.print('Debug: processing request\n', false);
Deno.core.print('Warning: missing field\n', true);
```

:::caution
Output from `Deno.core.print` is written to the server log, not returned to the HTTP client. Use it only for temporary debugging.
:::

## Deep copy

### `Deno.core.structuredClone(value)`

Creates a deep copy of a value using the [structured clone algorithm](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm), the same mechanism behind the global `structuredClone` in modern runtimes:

```javascript
const original = { nested: { count: 1 }, items: [1, 2, 3] };
const copy = Deno.core.structuredClone(original);
copy.nested.count = 99;
// original.nested.count is still 1
```

## Type checking

`Deno.core` exposes a set of `is*` predicates that mirror Node.js's [`util.types`](https://nodejs.org/api/util.html#utiltypes). They are useful when your script handles input whose shape is not known in advance.

| Predicate | Returns `true` for |
|---|---|
| `Deno.core.isDate(v)` | `Date` instances |
| `Deno.core.isRegExp(v)` | `RegExp` instances |
| `Deno.core.isMap(v)` | `Map` instances |
| `Deno.core.isSet(v)` | `Set` instances |
| `Deno.core.isTypedArray(v)` | Any typed array (`Uint8Array`, `Float64Array`, etc.) |
| `Deno.core.isArrayBuffer(v)` | `ArrayBuffer` instances |
| `Deno.core.isArrayBufferView(v)` | Any `ArrayBuffer` view (typed arrays and `DataView`) |
| `Deno.core.isPromise(v)` | `Promise` instances |
| `Deno.core.isNativeError(v)` | Native `Error` instances (including subtypes) |

```javascript
if (Deno.core.isTypedArray(context.body)) {
  Deno.core.print('Body is already a typed array\n', false);
}
```

## Serialization (advanced) {#serialization}

For advanced use cases that need compact binary encoding beyond JSON, `Deno.core` exposes V8's built-in serialization format:

### `Deno.core.serialize(value)`

Serializes a JavaScript value into a `Uint8Array` using V8's internal format. This preserves types that `JSON.stringify` cannot, such as `Map`, `Set`, `Date`, `RegExp`, `ArrayBuffer`, and typed arrays:

```javascript
const data = { created: new Date(), tags: new Set(['a', 'b']) };
const packed = Deno.core.serialize(data);
```

### `Deno.core.deserialize(buffer)`

Restores the value from a buffer previously produced by `Deno.core.serialize`:

```javascript
const restored = Deno.core.deserialize(packed);
// restored.created is a Date, restored.tags is a Set
```

:::warning
The serialization format is specific to V8 and is **not** portable across engines or language runtimes. Prefer JSON for data that will be consumed outside of the Deno sandbox.
:::

## Proxy requests {#op_proxy_request}

### `Deno.core.ops.op_proxy_request(request)`

Sends an HTTP request to an upstream server and returns the response. This enables responder scripts to act as a **MITM proxy** - forwarding requests to a real backend, inspecting or transforming responses, and returning them to the client. See [Proxy requests to an upstream service](/docs/guides/webhooks#proxy-requests-to-an-upstream-service-mitm) for usage examples.

The `request` argument has the following interface:

```typescript
interface ProxyRequest {
  // Target URL (required). Must be http:// or https://.
  url: string;
  // HTTP method. Defaults to "GET".
  method?: string;
  // Optional HTTP headers to send with the request.
  headers?: Record<string, string>;
  // Optional binary request body as an array of bytes.
  body?: number[];
  // Skip TLS certificate validation. Defaults to false.
  // Useful for proxying to upstream servers with self-signed certificates.
  insecure?: boolean;
  // Per-request timeout in milliseconds.
  // Clamped to the server-configured maximum (default: 30,000 ms).
  // If omitted, the server default timeout applies.
  timeout?: number;
  // Automatically decompress the response body based on the Content-Encoding
  // header (gzip, deflate, br, zstd). Defaults to true.
  decompress?: boolean;
}
```

The returned value has the following interface:

```typescript
interface ProxyResponse {
  // HTTP status code from the upstream server.
  statusCode: number;
  // HTTP response headers from the upstream server.
  headers: Record<string, string>;
  // Response body as an array of bytes (decompressed by default).
  body: number[];
}
```

Basic example:

```javascript
(async () => {
  const resp = await Deno.core.ops.op_proxy_request({
    url: 'https://api.example.com/data',
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: Deno.core.encode(JSON.stringify({ key: 'value' })),
  });
  // resp.statusCode, resp.headers, resp.body are available
  return resp;
})()
```

#### Transparent decompression

When the upstream server returns a compressed response (e.g. `Content-Encoding: gzip`), `op_proxy_request` **automatically decompresses** the body before returning it to the script. Supported encodings: `gzip`, `x-gzip`, `deflate`, `br` (Brotli), and `zstd` (Zstandard).

After decompression, the original transport headers are preserved under renamed keys so you can still see what the upstream sent:

| Original header | Renamed to |
|---|---|
| `content-encoding` | `x-original-content-encoding` |
| `content-length` | `x-original-content-length` |

This means scripts can always work with the decompressed body directly (e.g. `JSON.parse(Deno.core.decode(new Uint8Array(resp.body)))`) regardless of whether the upstream compresses its responses.

To **opt out** of automatic decompression (e.g. to forward compressed bytes as-is), pass `decompress: false`:

```javascript
const resp = await Deno.core.ops.op_proxy_request({
  url: 'https://upstream/api',
  decompress: false, // body stays compressed, content-encoding header is preserved
});
```

**Error handling.** When the upstream request fails, the op throws a JavaScript error with a descriptive message. Scripts can catch these errors and return a custom response:

```javascript
(async () => {
  try {
    return await Deno.core.ops.op_proxy_request({ url: 'https://upstream/api' });
  } catch (e) {
    return {
      statusCode: 502,
      headers: { 'Content-Type': 'text/plain' },
      body: `Proxy error: ${e.message}`,
    };
  }
})()
```

Possible error messages include:

| Error | Meaning |
|---|---|
| `Invalid URL '…': …` | The `url` field could not be parsed. |
| `URL not allowed (non-public address): …` | SSRF protection blocked the URL because it resolves to a private/internal IP address. |
| `Invalid HTTP method: '…'` | The `method` value is not a valid HTTP method token. |
| `Invalid header name: '…'` | A key in `headers` is not a valid HTTP header name. |
| `Invalid header value for '…'` | A value in `headers` contains invalid characters. |
| `Upstream request timed out: …` | The upstream server did not respond within the configured timeout. |
| `Failed to connect to upstream: …` | Could not establish a TCP connection to the upstream server. |
| `Upstream request failed: …` | A general upstream request failure (DNS, TLS, etc.). |
| `Upstream response body too large: …` | The response body exceeded the configured size limit. |
| `Failed to decompress gzip/deflate/brotli/zstd …` | The response body claimed a content-encoding but the data was invalid. |

:::caution Security
By default, `op_proxy_request` only allows requests to **publicly routable** IP addresses. URLs that resolve to private or loopback addresses (e.g. `127.0.0.1`, `10.x.x.x`, `192.168.x.x`) are rejected to prevent [Server-Side Request Forgery (SSRF)](https://owasp.org/www-community/attacks/Server-Side_Request_Forgery). This restriction can be relaxed per subscription tier for local development scenarios.
:::

:::info Limits
Each responder has a maximum number of concurrent proxy requests it can handle simultaneously (configurable per subscription tier). When the limit is reached, additional requests receive a `429 Too Many Requests` response with a `Retry-After` header. The upstream response body size is also limited (default: 10 MB) to prevent memory exhaustion. Every proxy request has a timeout (default: 30 s) that can be lowered per-request via the `timeout` field but cannot exceed the server-configured maximum. All proxy requests use **HTTP/1.1** - HTTP/2 is not supported.
:::

:::tip Response tracking
When using `op_proxy_request`, you can store the upstream response alongside the tracked request by returning `trackResponse: true` in your script result. This is especially useful for debugging proxy issues. See [Track responses](/docs/guides/webhooks#track-responses) for details.
:::

## Body auto-conversion {#body-auto-conversion}

Responder scripts can return `body` as several types - not just `Uint8Array`. The runtime automatically converts the value before sending the response:

| Script returns | Conversion | Result |
|---|---|---|
| `Uint8Array` or `ArrayBuffer` | None (pass-through) | Raw bytes |
| `string` | UTF-8 encode | UTF-8 bytes of the string |
| Plain object (e.g. `{ key: "value" }`) | `JSON.stringify` + UTF-8 encode | UTF-8 bytes of the JSON |
| Array of non-numbers (e.g. `[{ a: 1 }]`) | `JSON.stringify` + UTF-8 encode | UTF-8 bytes of the JSON array |
| Array of numbers (e.g. `[72, 101]`) | `new Uint8Array(arr)` | Raw bytes (backward compatible) |
| `number` or `boolean` | `JSON.stringify` + UTF-8 encode | UTF-8 bytes of `"42"` or `"true"` |
| `null` or `undefined` | Skipped | Uses the responder's default body |

Examples:

```javascript
// Return a JSON object directly - no Deno.core.encode needed.
(() => ({
  headers: { 'Content-Type': 'application/json' },
  body: { status: 'ok', count: 42 },
}))();
```

```javascript
// Return a plain text string.
(() => ({ body: 'Hello, world!' }))();
```

```javascript
// Modify a proxied JSON response and return the object.
(async () => {
  const resp = await Deno.core.ops.op_proxy_request({
    url: 'https://api.example.com/data',
  });
  const data = JSON.parse(Deno.core.decode(new Uint8Array(resp.body)));
  data._proxied = true;
  return { ...resp, body: data };
})()
```

---

## Export & Import Data

Secutils.dev allows you to export and import your data for backup, migration between accounts, or configuration management. The export/import feature supports all entity types: scripts, secrets, responders (with history), certificate templates, private keys, content security policies, trackers (page and API, with history), and user settings.

## Getting started

Navigate to **Settings → Account** to access the export and import functionality.

1. Open the account menu, click **Settings**, then select the **Account** tab. You'll see buttons to **Export data** and **Import data** in the **Data** section.
2. Click **Export data** to open the export modal. Select the items you want to include in the export and click **Export** to download a `.secutils.json` file.
3. Click **Import data** to open the import modal. Upload a previously exported `.secutils.json` file, choose an import mode, and follow the guided steps to import your data.

## Export

The export feature lets you selectively choose which entities to include in the export file. You can export:

- **Scripts** - responder and tracker scripts
- **Secrets** - secret names and, optionally, passphrase-encrypted secret values
- **Responders** - webhook responders, optionally including their request history
- **Certificate templates** - X.509 certificate generation templates
- **Private keys** - cryptographic private keys
- **CSP** - content security policy configurations
- **Page trackers** - web page change trackers, optionally including revision history
- **API trackers** - API endpoint trackers, optionally including revision history
- **Settings** - user preferences such as UI theme and sidebar state

When exporting secrets, you can optionally include their values by enabling **Include secret values** and providing a passphrase (minimum 8 characters). The values are encrypted using AES-256-GCM with an Argon2id-derived key, so the export file is safe to store - but you must remember the passphrase to import the values later.

The export produces a JSON file with the `.secutils.json` extension that contains all selected entities and their configuration.

## Import

When importing data, you can choose between two modes:

### Merge mode (default)

Merge adds items from the import file to your existing data. Your current data is preserved - nothing is deleted. If an imported item has the same name as an existing item, you can resolve the conflict by:

- **Rename** - import the item with a ` (Copy N)` suffix appended to its name
- **Overwrite** - replace the existing item with the imported one
- **Skip** - keep the existing item and skip the import

### Apply mode

Apply treats the import file as the **desired state** and synchronizes your data to match it. Items not in the file may be removed (with your explicit confirmation).
This mode is useful for: - **Configuration-as-code** - maintain your Secutils.dev configuration in a repository and apply it - **Environment synchronization** - keep multiple accounts in sync - **Drift detection** - compare your current state against a known-good configuration :::warning Apply mode can delete existing data. Always review the preview carefully before confirming an apply operation. ::: ## Export file format The export file uses JSON format with a version field for forward compatibility: ```json { "version": 1, "exportedAt": 1740000000, "data": { "scripts": [...], "secrets": [...], "responders": [...], "certificateTemplates": [...], "privateKeys": [...], "contentSecurityPolicies": [...], "pageTrackers": [...], "apiTrackers": [...], "settings": { "common.uiTheme": "dark", ... } } } ``` Only the entity categories you selected during export will be present in the `data` object. ## Limitations - Maximum import file size: **10 MB** - Entity counts are subject to your subscription tier limits - Secret values are only included if you explicitly opt in and provide a passphrase during export - Tracker scheduling state (next run time, last run time) is not exported - previously scheduled trackers will be rescheduled after import --- ## User Secrets Secutils.dev allows you to securely store sensitive values - API keys, tokens, passwords, private keys, and other credentials - as **user secrets**. Secret values are encrypted at rest and are never returned to the browser after being saved. You can reference them in responder scripts, tracker extractor scripts, and responder static body/headers without ever exposing them in your code. ## Managing secrets Navigate to **Workspace → Secrets** in the sidebar to manage your secrets. 
You can: - **Add** a new secret with a name and value - **Update** an existing secret's value (the old value is replaced) - **Delete** a secret you no longer need - **Upload from file** - useful for private keys, JSON credentials, or other file-based secrets Navigate to Workspace → Secrets in the sidebar. Click Add secret to create your first secret., alt: 'Navigate to Workspace → Secrets and click Add secret.', }, { img: '../../img/docs/guides/secrets/secrets_step2_form.png', caption: <>Enter a Name and Value for the secret and click Create. You can also load the value from a file using the file picker., alt: 'Fill in the secret name and value and click Create.', }, { img: '../../img/docs/guides/secrets/secrets_step3_created.png', caption: <>The secret appears in the table. Note that the value is write-only - it cannot be retrieved after saving., alt: 'The newly created secret appears in the secrets table.', }, { img: '../../img/docs/guides/secrets/secrets_step4_list.png', caption: <>You can add more secrets as needed. Use the Edit action to replace a value, or Delete to remove a secret., alt: 'The secrets table showing multiple secrets with edit and delete actions.', }, ]} /> ### Naming rules - Must start with a letter (a-z, A-Z) - Can contain letters, digits, underscores (`_`), and hyphens (`-`) - Maximum 128 characters ### Limits - Up to **100** secrets per user (configurable per subscription tier) - Maximum secret value size: **10 KB** ### Write-only values Once saved, a secret's value **cannot be retrieved** - it can only be replaced or deleted. This follows the industry best practice used by GitHub, AWS, Vercel, and similar services. ## Exposing secrets to responders and trackers By default, **no secrets are exposed** to any responder or tracker. You must explicitly choose which secrets to make available using the **Secrets** section in the advanced settings of each responder or page tracker. 
Three access modes are available: | Mode | Description | |--------------------------|--------------------------------------------------------------------------------------| | **No secrets** (default) | No secrets are decrypted or injected. | | **All secrets** | All of your secrets are decrypted and made available. | | **Selected secrets** | Only the secrets you pick from a multi-select list are decrypted and made available. | When a secret is deleted, it is automatically removed from any **Selected** lists in responders and trackers to prevent dangling references. :::note Automatic syncing When you create, update, or delete a secret, Secutils.dev automatically re-syncs the decrypted values to all page trackers whose secrets access is set to **All** or **Selected**. Responders resolve secrets at request time and do not require syncing. ::: ## Using secrets in responder scripts In responder scripts, secrets are available through the `context.secrets` object: ```javascript (async () => { const apiKey = context.secrets.MY_API_KEY; const resp = await Deno.core.ops.op_proxy_request({ url: 'https://api.example.com/data', headers: { 'Authorization': `Bearer ${apiKey}` }, }); const data = JSON.parse(Deno.core.decode(new Uint8Array(resp.body))); return { headers: { 'Content-Type': 'application/json' }, body: data }; })(); ``` ## Using secrets in responder static body and headers For responders that don't use scripts, you can reference secrets directly in the **body** and **header values** using the `${secrets.KEY}` template syntax: | Field | Example value | Resolved value | |--------------|--------------------------------------------|----------------------------| | Header value | `Bearer ${secrets.API_TOKEN}` | `Bearer sk-abc123…` | | Body | `{"apiKey": "${secrets.THIRD_PARTY_KEY}"}` | `{"apiKey": "real-value"}` | The server resolves these templates at request time. If a referenced secret doesn't exist, the template is left as-is. 
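The documented resolution rules can be sketched with a few lines of JavaScript - an approximation for illustration, not the actual server-side implementation:

```javascript
// Minimal sketch of the `${secrets.KEY}` template resolution described above.
// Known keys are substituted; unknown references are left as-is.
function resolveSecretTemplates(template, secrets) {
  // The key pattern mirrors the secret naming rules: a letter followed by
  // letters, digits, underscores, or hyphens.
  return template.replace(/\$\{secrets\.([a-zA-Z][a-zA-Z0-9_-]*)\}/g, (match, key) =>
    Object.prototype.hasOwnProperty.call(secrets, key) ? secrets[key] : match
  );
}

const secrets = { API_TOKEN: 'sk-abc123' };
resolveSecretTemplates('Bearer ${secrets.API_TOKEN}', secrets); // → 'Bearer sk-abc123'
resolveSecretTemplates('${secrets.MISSING}', secrets);          // → '${secrets.MISSING}' (left as-is)
```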
## Using secrets in tracker scripts In **page tracker** extractor scripts, secrets are available through the second `context` parameter: ```javascript export async function execute(page, context) { const token = context.params.secrets.AUTH_TOKEN; await page.setExtraHTTPHeaders({ 'Authorization': `Bearer ${token}` }); await page.goto('https://protected.example.com/dashboard'); return await page.content(); } ``` In **API tracker** scripts (both extractor and configurator), secrets are available through the global `context` object. API tracker scripts use the IIFE format: ```javascript (() => { const secret = context.params?.secrets?.AUTH_TOKEN ?? ""; const response = context.responses?.[0]; const body = response?.body ? Deno.core.decode(new Uint8Array(response.body)) : "{}"; return { body: Deno.core.encode( JSON.stringify({ ...JSON.parse(body), secret }) ) }; })(); ``` ## Access patterns summary | Surface | Syntax | Example | |------------------------------------|------------------------------|-------------------------------------| | Responder scripts | `context.secrets.KEY` | `context.secrets.MY_API_KEY` | | Responder static body/headers | `${secrets.KEY}` | `Bearer ${secrets.API_TOKEN}` | | Page tracker extractor scripts | `context.params.secrets.KEY` | `context.params.secrets.AUTH_TOKEN` | | API tracker extractor/configurator | `context.params.secrets.KEY` | `context.params.secrets.AUTH_TOKEN` | The difference in tracker scripts (`context.params.secrets` vs `context.secrets`) exists because tracker scripts run in the Retrack web scraper service and receive secrets through the existing parameters mechanism. --- ## Tags As the number of items in your workspace grows, it becomes harder to find the ones you need. **Tags** let you label any item - responders, trackers, CSP policies, certificate templates, private keys, scripts, and secrets - with one or more keywords so you can quickly filter and group related items across all tools. 
## Managing tags Navigate to **Workspace → Tags** in the sidebar to manage your tag library. You can: - **Add** a new tag with a name and color - **Edit** an existing tag's name or color - **Delete** a tag - it will be removed from all items that use it Navigate to Workspace → Tags in the sidebar. Click Add tag to create your first tag., alt: 'Navigate to Workspace → Tags and click Add tag.', }, { img: '../../img/docs/guides/tags/tags_step2_form.png', caption: <>Enter a Name and choose a Color for the tag, then click Create., alt: 'Fill in the tag name and color and click Create.', }, { img: '../../img/docs/guides/tags/tags_step3_list.png', caption: <>The tag library showing multiple tags. Use the Edit action to rename or recolor, or Delete to remove a tag., alt: 'The tags table showing multiple tags with edit and delete actions.', }, ]} /> ### Naming rules - Tag names are normalized to **lowercase** and trimmed of leading/trailing whitespace - Maximum **50** characters - Names must be unique per user ### Limits - Up to **50** tags per user - Up to **20** tags per item ### Available colors Tags support custom **hex colors**. When creating or editing a tag, use the **color picker** to choose from predefined swatches or enter any hex color value (e.g., `#54B399`). ## Assigning tags to items Every item edit form (responder, tracker, policy, certificate template, private key, script, or secret) includes a **Tags** field in the General section. Select existing tags from the dropdown, or **type a new name and press Enter** to create a tag inline. Open any item's edit form and use the Tags field to select or create tags. Tags are saved when you save the item., alt: 'Assigning tags to a responder in the edit flyout.', }, ]} /> ## Filtering by tags Each item list page (responders, trackers, policies, etc.) has a **Tags** filter button next to the search bar. Click it to select one or more tags. Items that have **at least one** of the selected tags will be shown (OR logic). 
The responders list showing all items, each with their assigned tags displayed as colored badges., alt: 'Unfiltered responder list with tag badges.', }, { img: '../../img/docs/guides/tags/tags_step6_filtered.png', caption: <>After selecting the production tag in the filter, only matching responders are shown., alt: 'Responder list filtered to show only production-tagged items.', }, ]} /> ## Global tag scope The workspace header includes a **Scope** button that applies a tag filter across **all** pages simultaneously. When a global scope is active, every item list only shows items tagged with **all** of the selected scope tags (AND logic). This effectively creates lightweight workspaces - select "production" in the scope to see only production items everywhere. Your global scope selection is automatically saved as a user setting and persists across page refreshes and sessions. The global scope works together with per-page tag filters: | Filter level | Logic | Example | |----------------------------|-----------------------------------------------|-------------------------------------------------------------| | **Global scope** (header) | AND - item must have all scoped tags | Scope = "production" → only items with the "production" tag | | **Page filter** (per list) | OR - item must have at least one selected tag | Filter = "api", "webhook" → items with either tag | Both filters stack: an item must pass the global scope first, then the page-level filter. ## Tags in export/import Tags are included when you export your data. The export file contains: - The **tag definitions** (name, color) in a top-level `tags` array - Each item's **assigned tags** in its `tags` field When importing, tags are recreated automatically. If a tag with the same name already exists, the existing tag is reused. 
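Conceptually, the stacking of the two filter levels can be sketched in a few lines of JavaScript - a simplified illustration, not the actual Secutils.dev UI code:

```javascript
// Illustrative sketch of how the global scope (AND) and the page-level
// tag filter (OR) stack. Tag names below are hypothetical.
function isVisible(itemTags, scopeTags, filterTags) {
  // Global scope: the item must carry every scoped tag (AND logic).
  const inScope = scopeTags.every((tag) => itemTags.includes(tag));
  // Page filter: the item must carry at least one selected tag (OR logic),
  // unless no page filter is active.
  const matchesFilter =
    filterTags.length === 0 || filterTags.some((tag) => itemTags.includes(tag));
  return inScope && matchesFilter;
}

const item = ['production', 'api'];
isVisible(item, ['production'], []);            // true - passes scope, no page filter
isVisible(item, ['production'], ['webhook']);   // false - fails the page filter
isVisible(item, ['production', 'staging'], []); // false - missing the "staging" scope tag
```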
--- ## User Scripts Secutils.dev lets you save reusable JavaScript snippets as **user scripts** and import them directly into responder scripts, API tracker extractor/configurator scripts, and page tracker extractor scripts. Instead of duplicating the same logic in every responder or tracker, you write it once in **Workspace → Scripts** and import it wherever you need it. ## Script types Each script has a **type** that determines which contexts it can be imported in: | Type | Label | Can be imported in | |--------------------|------------------|------------------------------------------| | `responder` | Responder | Responder scripts | | `api_extractor` | API Extractor | API tracker data extractor scripts | | `api_configurator` | API Configurator | API tracker request configurator scripts | | `page_extractor` | Page Extractor | Page tracker content extractor scripts | | `universal` | Universal | Any context | When you open the import modal in an editor, only scripts compatible with that context are shown. ## Managing scripts Navigate to **Workspace → Scripts** in the sidebar to create and manage your scripts. Navigate to Workspace → Scripts in the sidebar. Click Add script to create your first script., alt: 'Navigate to Workspace → Scripts and click Add script.', }, { img: '../../img/docs/guides/user_scripts/scripts_step2_form.png', caption: <>Enter a Name, choose a Type, and write or paste the script content. Click Create when ready., alt: 'Fill in the script name, type, and content and click Create.', }, { img: '../../img/docs/guides/user_scripts/scripts_step3_created.png', caption: 'The script appears in the table with its type badge and last-updated timestamp.', alt: 'The newly created script appears in the scripts table.', }, { img: '../../img/docs/guides/user_scripts/scripts_step4_list.png', caption: <>You can add more scripts as needed. 
Use the Edit action to update a script's content, Duplicate to create a copy, or Delete to remove it., alt: 'The scripts table showing multiple scripts with edit, duplicate, and delete actions.', }, ]} /> ### Naming rules - Must start with a letter (a–z, A–Z) - Can contain letters, digits, underscores (`_`), and hyphens (`-`) - Maximum 128 characters ### Limits - Up to **100** scripts per user (configurable per subscription tier) - Maximum script content size: **50 KB** ## Importing a script into a responder When you create or edit a responder with **Advanced mode** enabled, the **Script** editor exposes an **Import** action in its toolbar. Clicking it opens a modal listing all scripts whose type is `responder` or `universal`. In the responder form, enable Advanced mode. The Script editor appears. Click the Import action (the download icon in the editor toolbar) to open the script picker., alt: 'Responder form with Advanced mode enabled showing the Script editor.', }, { img: '../../img/docs/guides/user_scripts/scripts_step6_import_modal.png', caption: <>Select the script you want to import and click Import. Only scripts compatible with the responder context are listed., alt: 'Import modal showing the list of available responder scripts.', }, { img: '../../img/docs/guides/user_scripts/scripts_step7_imported.png', caption: 'The selected script content is inserted into the editor, replacing any existing content.', alt: 'Responder script editor after importing the predefined script.', }, ]} /> :::tip Importing a script copies its content into the editor at the time of import. Subsequent edits to the original script in **Workspace → Scripts** do not automatically update responders or trackers that have already imported it. ::: ## Importing a script into an API tracker API tracker extractor and configurator editors each expose the same **Import** action. 
The modal shows only `api_extractor`, `api_configurator`, or `universal` scripts, depending on which editor triggered the import. In the API tracker form, enable Advanced mode. The Data extractor and Request configurator editors appear. Click the Import action in the desired editor to open the script picker., alt: 'API tracker form with Advanced mode enabled showing the Data extractor script editor.', }, { img: '../../img/docs/guides/user_scripts/scripts_api_tracker_step2_import_modal.png', caption: <>Select the compatible script and click Import. Only scripts of the matching type (api_extractor, api_configurator, or universal) are listed., alt: 'Import modal showing the list of compatible API tracker scripts.', }, { img: '../../img/docs/guides/user_scripts/scripts_api_tracker_step3_imported.png', caption: 'The script content is inserted into the editor.', alt: 'API tracker Data extractor editor after importing the predefined script.', }, ]} /> ## Importing a script into a page tracker The page tracker **Content extractor** editor works the same way. The import modal shows only `page_extractor` or `universal` scripts. In the page tracker form, the Content extractor editor is always visible. Click the Import action to open the script picker., alt: 'Page tracker form showing the Content extractor script editor.', }, { img: '../../img/docs/guides/user_scripts/scripts_page_tracker_step2_import_modal.png', caption: <>Select the compatible script and click Import. Only page_extractor and universal scripts are listed., alt: 'Import modal showing the list of compatible page tracker scripts.', }, { img: '../../img/docs/guides/user_scripts/scripts_page_tracker_step3_imported.png', caption: 'The script content is inserted into the Content extractor editor.', alt: 'Page tracker Content extractor editor after importing the predefined script.', }, ]} /> --- ## Web Scraping ➔ API trackers # What is an API tracker? 
An API tracker is a utility that empowers developers to detect and monitor changes in the responses of any HTTP API endpoint. Whether you need to ensure that a deployed REST API continues to return the expected data or you want to be notified when an upstream API changes its response format, the API tracker has you covered. When a change is detected, the tracker promptly notifies the user. Unlike [page trackers](/docs/guides/web_scraping/page), which use a full headless browser to extract content from web pages, API trackers send direct HTTP requests to API endpoints. This makes them lighter, faster, and ideal for monitoring JSON APIs, webhooks, and other HTTP services. On this page, you can find guides on creating and using API trackers. ## Create an API tracker In this guide, you'll create a simple API tracker to monitor a public JSON API endpoint: Navigate to Web Scraping → API trackers and click Track API., alt: 'Navigate to Web Scraping → API trackers and click Track API.', }, { img: '../../img/docs/guides/web_scraping/api_create_step2_form.png', caption: <>Configure the tracker and click Save. , alt: 'Configure the API tracker with name, URL, method, and frequency.', }, { img: '../../img/docs/guides/web_scraping/api_create_step3_created.png', caption: 'The tracker appears in the grid.', }, { img: '../../img/docs/guides/web_scraping/api_create_step4_update.png', caption: <>Expand the tracker and click Update to fetch the API response., alt: 'Expand the tracker and click Update to fetch content.', }, { img: '../../img/docs/guides/web_scraping/api_create_step5_result.png', caption: 'After a few seconds the tracker fetches the API response and displays the formatted JSON.', }, ]} /> When no data extractor script is provided, the API tracker returns the raw response body. You can add a custom extractor to parse, filter, or transform the response before it is stored. 
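As an illustration of such a transformation, the sketch below keeps only two fields of a JSON body. The `id` and `status` field names are hypothetical; in a real tracker this logic would live inside the IIFE wrapper with `Deno.core.encode`/`Deno.core.decode`, as shown in the data extractor annex:

```javascript
// Illustrative transformation an extractor might apply to a raw JSON body
// before it is stored. The `id` and `status` field names are hypothetical.
function pickFields(rawJsonBody) {
  const { id, status } = JSON.parse(rawJsonBody);
  // Drop everything else (e.g. noisy debug fields) so that only meaningful
  // changes trigger a new revision.
  return JSON.stringify({ id, status });
}

pickFields('{"id": 1, "status": "ok", "debug": "verbose-trace"}');
// → '{"id":1,"status":"ok"}'
```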
## Create an API tracker with a POST request In this guide, you'll create an API tracker that sends a POST request with a JSON body: Navigate to Web Scraping → API trackers and click Track API., alt: 'Navigate to Web Scraping → API trackers and click Track API.', }, { img: '../../img/docs/guides/web_scraping/api_post_step2_form.png', caption: <>Configure the tracker with a POST method and JSON body and click Save. , alt: 'Configure the API tracker with a POST method and JSON body.', }, { img: '../../img/docs/guides/web_scraping/api_post_step3_created.png', caption: 'The tracker appears in the grid showing the POST method.', }, ]} /> ## Debug an API tracker Before saving a tracker you can use the built-in **Debug** mode to run the full pipeline and inspect every stage - from request configuration to response extraction. This is useful for verifying that your configurator and extractor scripts work correctly, that the right headers are sent, and that the response is what you expect. Open the tracker form, fill in the URL, and click the Debug button in the footer. The debug modal shows the pipeline stages as horizontal steps. Click the Configurator step to see the Result tab - the requests or responses produced by the configurator script., alt: 'Debug modal showing the Configurator step with the Result tab selected.', }, { img: '../../img/docs/guides/web_scraping/api_debug_step2_configurator_params.png', caption: <>Switch to the Params tab to inspect the parameters passed to the configurator script, including any decrypted secrets., alt: 'Debug modal showing the Configurator step with the Params tab selected.', }, { img: '../../img/docs/guides/web_scraping/api_debug_step3_request_response_body.png', caption: <>Click the Request step. 
The Response Body tab shows the formatted response received from the API endpoint., alt: 'Debug modal showing the Request step with the Response Body tab selected.', }, { img: '../../img/docs/guides/web_scraping/api_debug_step4_request_response_headers.png', caption: <>Switch to the Response Headers tab to inspect the HTTP headers returned by the server., alt: 'Debug modal showing the Request step with the Response Headers tab selected.', }, { img: '../../img/docs/guides/web_scraping/api_debug_step5_request_request_headers.png', caption: <>Switch to the Request Headers tab to verify the headers that were sent with the outgoing request (including any added by the configurator)., alt: 'Debug modal showing the Request step with the Request Headers tab selected.', }, { img: '../../img/docs/guides/web_scraping/api_debug_step6_request_request_body.png', caption: <>Switch to the Request Body tab to see the body payload that was sent with the request., alt: 'Debug modal showing the Request step with the Request Body tab selected.', }, { img: '../../img/docs/guides/web_scraping/api_debug_step7_extractor_result.png', caption: <>Click the Extractor step. 
The Result tab shows the value returned by the extractor script after processing the response., alt: 'Debug modal showing the Extractor step with the Result tab selected.', }, { img: '../../img/docs/guides/web_scraping/api_debug_step8_extractor_params.png', caption: <>Switch to the Params tab to inspect the parameters passed to the extractor script., alt: 'Debug modal showing the Extractor step with the Params tab selected.', }, { img: '../../img/docs/guides/web_scraping/api_debug_step9_result.png', caption: <>Click the Result step to see the final output that would be stored as a tracker revision, along with the total pipeline execution time., alt: 'Debug modal showing the final Result step with the pipeline output.', }, ]} /> :::tip The Debug button is available whenever a URL is filled in - you don't need to save the tracker first. If a pipeline stage fails, it will be highlighted in red and the error message will be shown in the detail panel. ::: ## View execution logs Every time an API tracker runs - whether manually or on a schedule - Secutils.dev records an execution log entry. You can use these logs to understand when the tracker ran, how long it took, whether it succeeded or failed, and what happened during each phase of execution. To view the execution logs, expand the tracker row in the list and switch to the **Logs** view using the view mode toggle: | ![Secutils.dev - API tracker execution logs](/img/docs/guides/web_scraping/api_logs_step1_view.png) | |-----------------------------------------------------------------------------------------------------| The log table shows the same columns as for [page trackers](/docs/guides/web_scraping/page#view-execution-logs): status, start time, duration, type, retry information, revision size, and error details. Each row can be expanded to reveal the execution phases timeline. The **Health** column in the tracker list provides a quick at-a-glance summary of recent execution results. 
To clear all execution logs for a tracker, click the **Clear logs** button (cross icon) while in the Logs view mode. ## Annex: Data extractor script {#annex-extractor-script} The data extractor script is a JavaScript **IIFE** (Immediately Invoked Function Expression) that runs in an isolated Deno sandbox. It processes the raw HTTP responses from the API. For a complete reference of the `Deno.core` utilities available inside the sandbox (encoding, Base64, type checking, and more), see [Deno Sandbox Runtime](/docs/guides/platform/deno_runtime). The global `context` object has the following shape: ```typescript interface ExtractorContext { /** Tags associated with the tracker. */ tags: string[]; /** Content extracted during the previous execution, if available. */ previousContent?: { original: unknown }; /** Raw HTTP responses from the API requests. */ responses?: Array<{ status: number; headers: Record<string, string>; body: number[]; }>; /** Optional parameters including user secrets. */ params?: { secrets?: Record<string, string> }; } ``` The script must be wrapped in `(() => { ... })();` and return an object with an optional `body` field (a `Uint8Array`). Use `Deno.core.encode()` and `Deno.core.decode()` to convert between strings and byte arrays: ```javascript (() => { const responses = context.responses ?? []; if (responses.length === 0) { return { body: Deno.core.encode("No responses") }; } const last = responses[responses.length - 1]; const body = last.body ? Deno.core.decode(new Uint8Array(last.body)) : "{}"; return { body: Deno.core.encode( JSON.stringify(JSON.parse(body), null, 2) ) }; })(); ``` :::caution Sandbox restrictions Scripts run in a strictly isolated Deno sandbox. **`fetch`, `XMLHttpRequest`, and other network APIs are not available.** Only `Deno.core` utilities (such as `encode`, `decode`) and JavaScript built-ins are accessible. See [Deno Sandbox Runtime](/docs/guides/platform/deno_runtime) for the full reference of available APIs.
::: :::tip Using secrets in scripts To make secrets available to your extractor or configurator script, open the tracker's edit form, enable **Advanced mode**, and set the **Secrets → Access mode** to **All secrets** or **Selected secrets**. The decrypted secrets will then be available as `context.params.secrets`. Manage your secrets in **Workspace → Secrets**. Example: `const token = context.params.secrets.MY_TOKEN;` ::: ## Annex: Request configurator script {#annex-configurator-script} The request configurator is an advanced **IIFE** script that allows you to dynamically modify API requests before they are sent. This is useful for adding dynamic authentication tokens, timestamps, or other request parameters. It runs in the same isolated Deno sandbox as the extractor. The global `context` object has the following shape: ```typescript interface ConfiguratorRequest { url: string; method?: string; headers?: Record<string, string>; body?: number[]; mediaType?: string; acceptStatuses?: number[]; acceptInvalidCertificates?: boolean; } interface ConfiguratorResponse { status: number; headers: Record<string, string>; body: number[]; } interface ConfiguratorContext { /** Tags associated with the tracker. */ tags: string[]; /** Content extracted during the previous execution, if available. */ previousContent?: { original: unknown }; /** The requests to be sent, which can be modified. */ requests: ConfiguratorRequest[]; /** Optional parameters including user secrets. */ params?: { secrets?: Record<string, string> }; } ``` The script must return either `{ requests: [...] }` to modify the outgoing requests, or `{ responses: [...] }` to skip the real HTTP call and provide mock responses directly. :::warning Response body must be valid JSON When returning `{ responses: [...] }` without a data extractor, each response `body` must contain **valid JSON bytes** (e.g. `Deno.core.encode(JSON.stringify(...))`), because the tracker stores the result as a JSON value.
Plain text like `Deno.core.encode("Hello World")` will fail with "Cannot deserialize API response". Either wrap the value with `JSON.stringify()` or add a data extractor script to process the raw bytes. ::: For example, to inject a secret as a Bearer token: ```javascript (() => { const req = context.requests[0]; return { requests: [{ ...req, headers: { ...req.headers, Authorization: "Bearer " + (context.params?.secrets?.API_KEY ?? "") } }] }; })(); ``` ## Annex: Custom cron schedules :::caution NOTE Custom cron schedules are available only for [**Pro** subscription](https://secutils.dev/pricing) users. ::: The cron expression syntax for API tracker schedules is identical to the one used by [page trackers](/docs/guides/web_scraping/page#annex-custom-cron-schedules). Refer to the page tracker documentation for details on supported cron expressions and examples. --- ## Web Scraping ➔ Page trackers # What is a page tracker? A page tracker is a utility that empowers developers to detect and monitor the content of any web page. Use cases range from ensuring that the deployed web application loads only the intended content throughout its lifecycle to tracking changes in arbitrary web content when the application lacks native tracking capabilities. In the event of a change, whether it's caused by a broken deployment or a legitimate content modification, the tracker promptly notifies the user. :::caution NOTE Currently, Secutils.dev doesn't support tracking content for web pages protected by web application firewalls (WAFs) or any form of CAPTCHA. If you require tracking content for such pages, please comment on [#secutils/34](https://github.com/secutils-dev/secutils/issues/34) to discuss your use case. ::: On this page, you can find guides on creating and using page trackers.
:::note The `Content extractor` script is essentially a [Playwright scenario](https://playwright.dev/docs/writing-tests) that allows you to extract almost anything from the web page as long as it doesn't exceed **1MB** in size. For instance, you can include text, links, images, or even JSON. ::: ## Create a page tracker In this guide, you'll create a simple page tracker for the top post on [Hacker News](https://news.ycombinator.com/): Navigate to Web Scraping → Page trackers and click Track page., alt: 'Navigate to Web Scraping → Page trackers and click Track page.', }, { img: '../../img/docs/guides/web_scraping/create_step2_form.png', caption: <>Configure the tracker and click Save. , alt: 'Configure the tracker name, frequency, and content extractor script.', }, { img: '../../img/docs/guides/web_scraping/create_step3_created.png', caption: 'The tracker appears in the grid.', }, { img: '../../img/docs/guides/web_scraping/create_step4_update.png', caption: <>Expand the tracker and click Update to fetch content., alt: 'Expand the tracker and click Update to fetch content.', }, { img: '../../img/docs/guides/web_scraping/create_step5_result.png', caption: 'After a few seconds the tracker fetches and renders the top post as a clickable markdown link.', }, ]} /> The content includes only the title of the post. However, as noted at the beginning of this guide, the content extractor script allows you to return almost anything, even the entire HTML of the post. ## Detect changes with a page tracker In this guide, you'll create a page tracker and test it with changing content: Navigate to Web Scraping → Page trackers and click Track page., alt: 'Navigate to Web Scraping → Page trackers and click Track page.', }, { img: '../../img/docs/guides/web_scraping/detect_step2_form.png', caption: <>Configure the tracker with an hourly frequency and click Save. 
, alt: 'Configure the tracker with an hourly frequency and a content extractor for the Berlin world clock.', }, { img: '../../img/docs/guides/web_scraping/detect_step3_created.png', caption: 'The tracker appears in the grid with bell and timer icons, indicating it is configured for regular checks with notifications.', }, ]} /> Expand the tracker's row and click the **Update** button to make the first snapshot of the web page content. After a few seconds, the tracker will fetch the current Berlin time and render a nice markdown with a link to a world clock website: :::note EXAMPLE Berlin time is [**01:02:03**](https://www.timeanddate.com/worldclock/germany/berlin) ::: With this configuration, the tracker will check the content of the web page every hour and notify you if any changes are detected. ## Record a content extractor script Instead of writing a content extractor script from scratch, you can **record browser interactions** using standard Playwright tools and import the recording into Secutils.dev. This is a great way to quickly get started, especially for scenarios that involve navigating through several pages or filling out forms. ### Step 1: Record your interactions Use any of these methods to record a Playwright scenario: **Option A: Playwright codegen (CLI)** ```bash npx playwright codegen https://your-target-site.com ``` This opens a browser window and the Playwright Inspector. Interact with the site as you normally would - the Inspector generates JavaScript code in real time. When done, click the **copy** button in the Inspector to copy the generated script. :::tip Use `--target=javascript-library` to generate a plain script (without the test framework wrapper), which produces slightly cleaner output for import. ::: **Option B: Chrome DevTools Recorder** 1. Open Chrome DevTools (**F12**). 2. Go to the **Recorder** panel. 3. Click **Create recording**, perform your interactions, then stop recording. 4. Click **Export recording** → **JSON**. 
**Option C: Browser extensions** Several Chrome extensions can record Playwright scripts directly: - [Playwright Chrome Recorder](https://chromewebstore.google.com/detail/playwright-chrome-recorde/bfnbgoehgplaehdceponclakmhlgjlpd) ### Step 2: Import into Secutils.dev 1. Open the page tracker form (create new or edit existing). 2. **Right-click** the `Content extractor` editor. 3. Select **Import: Playwright recording** or **Import: Chrome DevTools recording** from the context menu. 4. Paste the recorded script (or JSON export) into the dialog and click **Import**. The script is automatically transformed into the Secutils.dev extractor format - all browser setup/teardown boilerplate is stripped and the code is wrapped in the `execute(page)` function. ### Step 3: Add a return statement The imported script contains a `TODO` comment reminding you to add a `return` statement. This determines what content the tracker will store and monitor for changes. For example: ```javascript export async function execute(page) { await page.goto('https://example.com/'); await page.getByRole('link', { name: 'Pricing' }).click(); // Return the content you want to track: return await page.locator('.pricing-table').textContent(); } ``` ### Step 4: Debug and save Use the **Debug** button to run the script and verify the extracted content before saving the tracker. ## Track web page resources You can also use the page tracker utility to detect and track resources of any web page. This functionality falls under the category of [synthetic monitoring](https://en.wikipedia.org/wiki/Synthetic_monitoring) tools and helps ensure that the deployed application loads only the intended web resources (JavaScript and CSS) during its lifetime. If any unintended changes occur, which could result from a broken deployment or malicious activity, the tracker will promptly notify developers or IT personnel about the detected anomalies. 
Additionally, security researchers who focus on discovering potential vulnerabilities in third-party web applications can use page trackers to be notified when the application's resources change. This allows them to identify if the application has been upgraded, providing an opportunity to re-examine it and potentially discover new vulnerabilities.

:::note EXAMPLE
Extracting all page resources isn't as straightforward as it might seem, so it's recommended to use the utilities provided by Secutils.dev, as demonstrated in the examples in the following sections. Utilities return CSS and JS resource descriptors with the following interfaces:

```typescript
/**
 * Describes an external or inline resource.
 */
interface WebPageResource {
  /**
   * Resource type, either 'script' or 'stylesheet'.
   */
  type: 'script' | 'stylesheet';
  /**
   * The URL the resource is loaded from.
   */
  url?: string;
  /**
   * Resource content descriptor (size and digest), if available.
   */
  content: WebPageResourceContent;
}

/**
 * Describes resource content.
 */
interface WebPageResourceContent {
  /**
   * Resource content data: either the raw content itself or a hash of it, such as a Trend Micro
   * Locality Sensitive Hash (TLSH) or a simple SHA-1 digest.
   */
  data: { raw: string } | { tlsh: string } | { sha1: string };
  /**
   * Size of the resource content, in bytes.
   */
  size: number;
}
```
:::

In this guide, you'll create a simple page tracker to track resources of [Hacker News](https://news.ycombinator.com/): Navigate to Web Scraping → Page trackers and click Track page., alt: 'Navigate to Web Scraping → Page trackers and click Track page.', }, { img: '../../img/docs/guides/web_scraping/resources_step2_form.png', caption: <>Configure the tracker with a resource-tracking content extractor script and click Save.
, alt: 'Configure the tracker with a resource-tracking content extractor script.', }, { img: '../../img/docs/guides/web_scraping/resources_step3_created.png', caption: <>Expand the tracker row and click Update to fetch the page resources., alt: 'Expand the tracker row and click Update to fetch the page resources.', }, { img: '../../img/docs/guides/web_scraping/resources_step4_result.png', caption: <>Once the tracker has fetched the resources, they appear in the resources grid., alt: 'Once the tracker has fetched resources, they appear in the resources grid.', }, ]} /> It's hard to believe, but as of the time of writing, Hacker News continues to rely on just a single script and stylesheet! ## Filter web page resources In this guide, you will create a page tracker for the GitHub home page and learn how to track only specific resources: Navigate to Web Scraping → Page trackers and click Track page., alt: 'Navigate to Web Scraping → Page trackers and click Track page.', }, { img: '../../img/docs/guides/web_scraping/filter_step2_form.png', caption: <>Configure a tracker for the GitHub home page and click Save. , alt: 'Create a tracker for the GitHub home page with the resource-tracking script.', }, { img: '../../img/docs/guides/web_scraping/filter_step3_created.png', caption: <>Expand the tracker row and click Update to fetch the page resources., alt: 'Expand the tracker row and click Update to fetch the page resources.', }, { img: '../../img/docs/guides/web_scraping/filter_step4_result.png', caption: <>Once the tracker has fetched the resources, they appear in the resources grid., alt: 'Once the tracker has fetched resources, they appear in the resources grid.', }, ]} /> You'll notice that there are nearly 100 resources used for the GitHub home page! In the case of large and complex pages like this one, it's recommended to have multiple separate trackers, e.g. 
one per logical functionality domain, to avoid overwhelming the developer with too many resources and, consequently, too many changes to track. Let's say we're only interested in "vendored" resources. To filter out all resources that are not "vendored", edit the tracker and update the **Content extractor** script:

```javascript
export async function execute(page, { previousContent }) {
  // Load built-in utilities for tracking resources.
  const { resources: utils } = await import(`data:text/javascript,${encodeURIComponent(
    await (await fetch('https://secutils.dev/retrack/utilities.js')).text()
  )}`);

  // Start tracking resources.
  utils.startTracking(page);

  // Navigate to the target page.
  await page.goto('https://github.com');
  await page.waitForTimeout(1000);

  // Stop tracking and return resources.
  const allResources = await utils.stopTracking(page);

  // Filter out all resources that are not "vendored".
  const resources = {
    scripts: allResources.scripts.filter((resource) => resource.url?.includes('vendors')),
    styles: allResources.styles.filter((resource) => resource.url?.includes('vendors')),
  };

  // Format resources as a table,
  // showing diff status if previous content is available.
  return utils.formatAsTable(
    previousContent ? utils.setDiffStatus(previousContent.original.source, resources) : resources
  );
}
```

Save the tracker and click the **Update** button to re-fetch web page resources. Once the tracker has re-fetched resources, only about half of the previously extracted resources will appear in the resources grid.

## Detect changes in web page resources

In this guide, you will create several webhook responders that emulate JavaScript files and a simple HTML page, then set up a page tracker to detect changes in the resources loaded by that page across revisions.
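Conceptually, the diff statuses produced in this guide (Added, Changed, Removed) come from comparing resource URLs and content digests across two revisions. The sketch below only illustrates that idea, using hypothetical `url`/`digest` fields; it is not the implementation behind the built-in `setDiffStatus` utility:

```javascript
// Illustrative diff between two revisions of a resource list. Each resource
// is matched by URL: a matching URL with a different content digest is
// "Changed", a URL present only in the new revision is "Added", and a URL
// present only in the old one is "Removed".
function diffResources(previous, current) {
  const prevByUrl = new Map(previous.map((r) => [r.url, r]));
  const currUrls = new Set(current.map((r) => r.url));

  const diff = current.map((resource) => {
    const prev = prevByUrl.get(resource.url);
    if (!prev) return { ...resource, status: 'Added' };
    return prev.digest === resource.digest
      ? { ...resource, status: undefined }
      : { ...resource, status: 'Changed' };
  });

  for (const resource of previous) {
    if (!currUrls.has(resource.url)) diff.push({ ...resource, status: 'Removed' });
  }
  return diff;
}

// Revisions mirroring the responders created in this guide.
const previous = [
  { url: '/no-changes.js', digest: 'aaa' },
  { url: '/changed.js', digest: 'bbb' },
  { url: '/removed.js', digest: 'ccc' },
];
const current = [
  { url: '/no-changes.js', digest: 'aaa' },
  { url: '/changed.js', digest: 'ddd' },
  { url: '/added.js', digest: 'eee' },
];

const statuses = Object.fromEntries(
  diffResources(previous, current).map((r) => [r.url, r.status])
);
// statuses: { '/no-changes.js': undefined, '/changed.js': 'Changed',
//             '/added.js': 'Added', '/removed.js': 'Removed' }
```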
Navigate to Webhooks → Responders and click Create responder., alt: 'Navigate to Webhooks → Responders and click Create responder.', }, { img: '../../img/docs/guides/web_scraping/detect_resources_step2_no_changes_form.png', caption: <>Create a JavaScript responder that will remain unchanged across revisions and click Save. , alt: 'Create the no-changes.js responder.', }, { img: '../../img/docs/guides/web_scraping/detect_resources_step3_changed_form.png', caption: <>Create a JavaScript responder that will change across revisions and click Save. , alt: 'Create the changed.js responder.', }, { img: '../../img/docs/guides/web_scraping/detect_resources_step4_removed_form.png', caption: <>Create a JavaScript responder that will be removed across revisions and click Save. , alt: 'Create the removed.js responder.', }, { img: '../../img/docs/guides/web_scraping/detect_resources_step5_added_form.png', caption: <>Create a JavaScript responder that will be added in a new revision and click Save. , alt: 'Create the added.js responder.', }, { img: '../../img/docs/guides/web_scraping/detect_resources_step6_html_form.png', caption: <>Create a responder that serves a simple HTML page referencing the first three scripts (except added.js) and click Save. , alt: 'Create the track-me.html responder serving an HTML page with three script tags.', }, { img: '../../img/docs/guides/web_scraping/detect_resources_step7_responders_created.png', caption: 'All five responders appear in the grid with their unique URLs.', alt: 'All five responders appear in the grid.', }, { img: '../../img/docs/guides/web_scraping/detect_resources_step8_trackers_empty.png', caption: <>Navigate to Web Scraping → Page trackers and click Track page., alt: 'Navigate to Web Scraping → Page trackers and click Track page.', }, { img: '../../img/docs/guides/web_scraping/detect_resources_step9_tracker_form.png', caption: <>Configure a tracker for the track-me.html responder and click Save. 
, alt: 'Configure a tracker for the track-me.html responder with the resource-tracking script.', }, { img: '../../img/docs/guides/web_scraping/detect_resources_step10_tracker_created.png', caption: <>Expand the tracker row and click Update to make the first snapshot of the web page resources., alt: 'Expand the tracker row and click Update to make the first snapshot.', }, { img: '../../img/docs/guides/web_scraping/detect_resources_step11_initial.png', caption: 'The initial resources appear in the grid - three scripts with no diff status yet.', alt: 'The initial resources appear in the grid with no diff status.', }, { img: '../../img/docs/guides/web_scraping/detect_resources_step12_diff.png', caption: <>After editing the responders - replace removed.js with added.js in track-me.html and update the body of changed.js - click Update again to see the diff statuses: Added, Changed, and Removed., alt: 'After modifying the responders and clicking Update, the resources grid shows diff statuses.', }, ]} /> :::tip TIP You can configure the tracker with a schedule (e.g. **Daily**) and enable **Notifications** so that Secutils.dev automatically checks for resource changes and alerts you when they occur. ::: ## Debug a page tracker Before saving a tracker you can use the built-in **Debug** mode to run the full extraction pipeline and inspect every detail - from the Playwright scenario execution to the final extracted result. This is useful for verifying that your content extractor script works correctly and produces the expected output. Open the tracker form and click the Debug button in the footer. The debug modal shows the pipeline stages as horizontal steps. 
Click the Extractor step to inspect the Params tab - this shows the parameters passed to the extractor script, including any decrypted secrets., alt: 'Debug modal showing the Extractor step with the Params tab selected.', }, { img: '../../img/docs/guides/web_scraping/page_debug_step2_extractor_logs.png', caption: <>Switch to the Logs tab to see log messages collected during the Playwright scenario execution, including console.log calls from the extractor script and browser-side console output. This is helpful for diagnosing navigation issues or slow page loads., alt: 'Debug modal showing the Extractor step with the Logs tab selected.', }, { img: '../../img/docs/guides/web_scraping/page_debug_step3_result.png', caption: <>Click the Result step to see the final output that would be stored as a tracker revision, along with the total pipeline execution time., alt: 'Debug modal showing the final Result step with the pipeline output.', }, ]} /> ### Automatic visual trace (Screenshots) In debug mode, a screenshot is automatically captured after every significant Playwright action (`goto`, `click`, `fill`, `type`, `press`, `check`, `uncheck`, `selectOption`). These auto-trace screenshots create a visual storyboard of the script execution without any changes to the extractor script. You can also call `page.screenshot()` manually in the script - in debug mode the image is captured in-memory rather than written to disk. All captured screenshots appear in the **Screenshots** tab in the Extractor step. Click any screenshot to view it in fullscreen. Screenshot labels (e.g., "after goto: https://example.com") describe the action that triggered the capture. :::note Screenshots are subject to a per-run size limit (5 MB by default). When the limit is reached, subsequent screenshots are silently skipped - the extraction still succeeds. Auto-trace screenshots use viewport-only captures to conserve the size budget. 
::: :::tip The Debug button is available whenever a content extractor script is present - you don't need to save the tracker first. If the extraction fails, the Extractor step will be highlighted in red and the error message will be shown in the detail panel. ::: ## Browser engine By default, page trackers use **Chromium** to run content extractor scripts. If the target page blocks automated Chromium browsers (e.g. via bot detection or browser fingerprinting), you can switch to **Camoufox** - a Firefox-based engine with enhanced anti-fingerprinting capabilities. The browser engine setting is available in **Advanced mode**. Toggle the **Advanced mode** switch in the tracker form header to reveal the **Browser engine** selector in the **Scripts** section. Enable Advanced mode and select Camoufox from the Browser engine dropdown in the Scripts section., alt: 'Tracker form with Advanced mode enabled showing the Browser engine selector set to Camoufox.', }, { img: '../../img/docs/guides/web_scraping/engine_step2_debug.png', caption: <>When debugging, the Extractor step shows the engine used for the run as a badge next to the duration., alt: 'Debug modal showing the Extractor step with the camoufox engine badge.', }, ]} /> :::note The selected engine applies to both scheduled checks and debug runs. Camoufox may be slower than Chromium for some pages due to the additional anti-fingerprinting measures. Use the **Debug** mode to compare extraction results and performance before choosing an engine. ::: ## View execution logs Every time a page tracker runs - whether manually or on a schedule - Secutils.dev records an execution log entry. You can use these logs to understand when the tracker ran, how long it took, whether it succeeded or failed, and what happened during each phase of execution. 
To view the execution logs, expand the tracker row in the list and switch to the **Logs** view using the view mode toggle:

| ![Secutils.dev - Page tracker execution logs](/img/docs/guides/web_scraping/page_logs_step1_view.png) |
|------------------------------------------------------------------------------------------------------|

The log table shows:

- **Status** - whether the execution succeeded or failed
- **Started** - when the execution began
- **Duration** - how long the execution took
- **Type** - whether it was a manual or scheduled run
- **Retry** - the retry attempt number, if applicable
- **Revision size** - the size of the stored revision data
- **Error** - the error message, if the execution failed

Each row can be expanded to reveal the **execution phases** - a detailed timeline of the steps the tracker performed (e.g. fetching data, extracting content, comparing with the previous revision, persisting the result). The **Health** column in the tracker list provides a quick at-a-glance summary of recent execution results, showing colored dots for the last several runs (green for success, red for failure). To clear all execution logs for a tracker, click the **Clear logs** button (cross icon) while in the Logs view mode.

## Annex: Content extractor script examples

In this section, you can find examples of content extractor scripts that extract various content from web pages. Essentially, the script defines a function with the following signature:

```typescript
/**
 * Content extractor script that extracts content from a web page.
 * @param page - The Playwright Page object representing the web page.
 * See more details at https://playwright.dev/docs/api/class-page.
 * @param context.previousContent - The content extracted during
 * the previous execution, if available.
 * @param context.params.secrets - User secrets (key-value pairs).
 * Available when secrets access is enabled in the tracker settings.
 * @returns {Promise<unknown>} - The extracted content to be tracked.
 */
export async function execute(
  page: Page,
  context: {
    previousContent?: { original: unknown };
    params?: { secrets?: Record<string, string> };
  }
)
```

:::tip Using secrets in extractor scripts
To make secrets available to your extractor script, open the tracker's edit form and set the **Secrets → Access mode** to **All secrets** or **Selected secrets**. The decrypted secrets will then be available as `context.params.secrets`. Manage your secrets in **Workspace → Secrets**. Example: `const token = context.params.secrets.MY_TOKEN;`
:::

### Track markdown-style content

The script can return any [**valid markdown-style content**](https://eui.elastic.co/#/editors-syntax/markdown-format#kitchen-sink) that Secutils.dev will happily render in preview mode.

```javascript
export async function execute() {
  return `
## Text

### h3 Heading
#### h4 Heading

**This is bold text**
*This is italic text*
~~Strikethrough~~

## Lists

* Item 1
* Item 2
  * Item 2a

## Code

\`\`\` js
const foo = (bar) => {
  return bar++;
};
console.log(foo(5));
\`\`\`

## Tables

| Option   | Description   |
| -------- | ------------- |
| Option#1 | Description#1 |
| Option#2 | Description#2 |

## Links

[Link Text](https://secutils.dev)

## Emojis

:wink: :cry: :laughing: :yum:
`;
}
```

### Track API response

You can use a page tracker to track API responses as well (until the dedicated [`API tracker` utility](https://github.com/secutils-dev/secutils/issues/32) is released). For instance, you can track the response of the [JSONPlaceholder](https://jsonplaceholder.typicode.com/) API:

:::caution NOTE
Ensure that the web page from which you're making a fetch request allows cross-origin requests. Otherwise, you'll get an error.
:::

```javascript
export async function execute() {
  const { url, method, headers, body } = {
    url: 'https://jsonplaceholder.typicode.com/posts',
    method: 'POST',
    headers: { 'Content-Type': 'application/json; charset=UTF-8' },
    body: JSON.stringify({ title: 'foo', body: 'bar', userId: 1 }),
  };

  const response = await fetch(url, { method, headers, body });
  return {
    status: response.status,
    headers: Object.fromEntries(response.headers.entries()),
    body: await response.text(),
  };
}
```

### Use previous content

In the content extractor script, you can use the `context.previousContent.original` property to access the content extracted during the previous execution:

```javascript
export async function execute(page, { previousContent }) {
  // Update counter based on the previous content.
  return (previousContent?.original ?? 0) + 1;
}
```

### Use external content extractor script

Sometimes, your content extractor script can become large and complicated, making it hard to edit in the Secutils.dev UI. In such cases, you can develop and deploy the script separately in any development environment you prefer. Once the script is deployed, you can use its URL as the script content:

```javascript
// This code assumes your script exports a function named `execute`.
https://secutils-dev.github.io/secutils-sandbox/content-extractor-scripts/markdown-table.js
```

You can find more examples of content extractor scripts at the [Secutils.dev Sandbox](https://github.com/secutils-dev/secutils-sandbox/tree/main/content-extractor-scripts) repository.

## Annex: Custom cron schedules

:::caution NOTE
Custom cron schedules are available only for [**Pro** subscription](https://secutils.dev/pricing) users.
:::

In this section, you can learn more about the supported cron expression syntax used to configure custom tracking schedules. A cron expression is a string consisting of six or seven subexpressions that describe individual details of the schedule.
These subexpressions, separated by white space, can contain any of the allowed values with various combinations of the allowed characters for that subexpression:

| Subexpression  | Mandatory | Allowed values  | Allowed special characters |
|----------------|-----------|-----------------|----------------------------|
| `Seconds`      | Yes       | 0-59            | * / , -                    |
| `Minutes`      | Yes       | 0-59            | * / , -                    |
| `Hours`        | Yes       | 0-23            | * / , -                    |
| `Day of month` | Yes       | 1-31            | * / , - ?                  |
| `Month`        | Yes       | 1-12 or JAN-DEC | * / , -                    |
| `Day of week`  | Yes       | 1-7 or SUN-SAT  | * / , - ?                  |
| `Year`         | No        | 1970-2099       | * / , -                    |

Following the described cron syntax, you can create almost any schedule you want as long as the interval between two consecutive checks is **longer than 10 minutes**. Below are some examples of supported cron expressions:

| Expression            | Meaning                                             |
|-----------------------|-----------------------------------------------------|
| `0 0 12 * * ?`        | Run at 12:00 (noon) every day                       |
| `0 15 10 ? * *`       | Run at 10:15 every day                              |
| `0 15 10 * * ?`       | Run at 10:15 every day                              |
| `0 15 10 * * ? *`     | Run at 10:15 every day                              |
| `0 15 10 * * ? 2025`  | Run at 10:15 every day during the year 2025         |
| `0 0/10 14 * * ?`     | Run every 10 minutes from 14:00 to 14:59, every day |
| `0 10,44 14 ? 3 WED`  | Run at 14:10 and at 14:44 every Wednesday in March  |
| `0 15 10 ? * MON-FRI` | Run at 10:15 from Monday to Friday                  |
| `0 11 15 8 10 ?`      | Run every October 8 at 15:11                        |

To assist you in creating custom cron schedules, Secutils.dev lists five upcoming scheduled times for the specified schedule:

| ![Secutils.dev UI - Custom schedule](/img/docs/guides/web_scraping/custom_schedule.png) |
|-----------------------------------------------------------------------------------------|

---

## Web Security ➔ CSP

# What is a Content Security Policy?
Content Security Policy (CSP) is an added layer of security that helps to detect and mitigate certain types of attacks, including Cross-Site Scripting (XSS) and data injection attacks. These attacks are used for everything from data theft, to site defacement, to malware distribution. Generally, to enable CSP, you need to configure your web server to return the **Content-Security-Policy** HTTP header or HTML meta tag. For more details, refer to [MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP) and [OWASP](https://owasp.org/www-community/controls/Content_Security_Policy). On this page, you can find guides on creating Content Security Policies that match your specific needs. ## Create a Content Security Policy In this guide you'll create a simple Content Security Policy template that allows you to generate policies that are ready to be applied to any web application: Navigate to Web Security → CSP and click Create policy., alt: 'Navigate to Web Security → CSP and click Create policy.' }, { img: '../../img/docs/guides/csp/create_step2_form.png', caption: <>Enter the policy name, configure directives, and click Save to save the policy. , alt: 'Enter the policy name, configure directives, and click Save to save the policy.' }, { img: '../../img/docs/guides/csp/create_step3_created.png', caption: 'The new policy appears in the grid.' }, { img: '../../img/docs/guides/csp/create_step4_copy.png', caption: <>Use the Copy context menu button to get different policy representations., alt: 'Use the Copy context menu button to get different policy representations.' }, ]} /> ## Import a Content Security Policy from URL In this guide you'll import a Content Security Policy from an external URL: Navigate to Web Security → CSP and click Import policy., alt: 'Navigate to Web Security → CSP and click Import policy.' 
}, { img: '../../img/docs/guides/csp/import_url_step2_modal.png', caption: <>Pick URL tab, enter the policy name, target URL, select the policy source, and click Import. Policy nameGoogle CSP URLhttps://google.com Policy sourceHTTP header (report only) , alt: 'Pick URL tab, enter the policy name, target URL, select the policy source, and click Import.' }, { img: '../../img/docs/guides/csp/import_url_step3_created.png', caption: 'The new policy appears in the grid.' }, ]} /> ## Import a Content Security Policy from a string In this guide you'll import a Content Security Policy from a string (serialized policy text): Navigate to Web Security → CSP and click Import policy., alt: 'Navigate to Web Security → CSP and click Import policy.' }, { img: '../../img/docs/guides/csp/import_string_step2_modal.png', caption: <>Pick Serialized policy tab, enter the policy name, policy string, and click Import. Policy nameCustom CSP Serialized policydefault-src 'self' api.secutils.dev; style-src 'self' fonts.googleapis.com , alt: 'Pick Serialized policy tab, enter the policy name, policy string, and click Import.' }, { img: '../../img/docs/guides/csp/import_string_step3_created.png', caption: 'The new policy appears in the grid.' }, ]} />

## Share a Content Security Policy

This guide will walk you through sharing a Content Security Policy template publicly, allowing anyone on the internet to view it:

1. Navigate to [Web Security → CSP](https://secutils.dev/ws/web_security__csp) and pick the policy you'd like to share
2. Click the policy's **Share policy** button and toggle the **Share policy** switch to the **on** position
3. Once the policy is shared, the dialog will show a **Copy link** button
4. Click the **Copy link** button to copy a unique shared policy link to your clipboard
5. To stop sharing the policy, click the **Share policy** button again, and switch the **Share policy** toggle to the **off** position.
Navigate to Web Security → CSP, pick the policy you'd like to share, and click Share., alt: 'Navigate to Web Security → CSP, pick the policy you\'d like to share, and click Share.' }, { img: '../../img/docs/guides/csp/share_step2_copy_link.png', caption: <>Toggle the Share policy switch to on position, and then click the Copy link button to copy a unique shared policy link to your clipboard., alt: 'Toggle the Share policy switch to on position, and then click the Copy link button to copy a unique shared policy link to your clipboard.' }, { img: '../../img/docs/guides/csp/share_step3_unshare.png', caption: <>To stop sharing the policy, click the Share policy button again, and switch the Share policy toggle to the off position., alt: 'To stop sharing the policy, click the Share policy button again, and switch the Share policy toggle to the off position.' }, ]} /> ## Test a Content Security Policy In this guide, you will create a Content Security Policy and test it using a custom HTML responder: Navigate to Webhooks → Responders, click Create responder, and configure it with a simple HTML page that uses eval(). Click Save. NameCSP Test Path/csp-test MethodGET Body{` Evaluate CSP `} , alt: 'Create a CSP Test responder with an HTML page that uses eval().' }, { img: '../../img/docs/guides/csp/test_step2_responder_created.png', caption: 'The responder appears in the grid with its unique URL.', }, { img: '../../img/docs/guides/csp/test_step2b_eval_page.png', caption: 'Click the URL to open the test page and verify that the Eval button works without restrictions.', }, { img: '../../img/docs/guides/csp/test_step3_policy_form.png', caption: <>Navigate to Web Security → CSP, click Create policy, and configure it to forbid eval(). Click Save. NameCSP Test Script source (script-src){'\'self\', \'unsafe-inline\''} , alt: 'Create a CSP Test policy with script-src set to self and unsafe-inline to forbid eval().' 
}, { img: '../../img/docs/guides/csp/test_step4_policy_created.png', caption: 'The policy appears in the grid.', }, { img: '../../img/docs/guides/csp/test_step5_copy_meta_tag.png', caption: <>Use the Copy context menu button, switch Policy source to HTML meta tag, and copy the generated {''} tag., alt: 'Copy the policy as an HTML meta tag.', }, { img: '../../img/docs/guides/csp/test_step6_responder_meta_tag.png', caption: <>Navigate back to Webhooks → Responders, edit the CSP Test responder, and paste the {''} tag inside the {''} of the body. Save and navigate to the responder's URL again - this time, clicking Eval does nothing and a CSP error appears in the browser console. NameCSP Test Path/csp-test Body (updated){` Evaluate CSP `} , alt: 'Edit the CSP Test responder to add the Content-Security-Policy meta tag to the HTML head.', }, ]} /> ## Report Content Security Policy violations In this guide, you will create a Content Security Policy and collect its violation reports using a custom tracking responder: Navigate to Webhooks → Responders, click Create responder, enable Advanced mode, and configure a responder to collect CSP violation reports. Click Save. , alt: 'Create a CSP Reporting responder with POST method and tracking enabled.', }, { img: '../../img/docs/guides/csp/report_step2_reporting_created.png', caption: <>The reporting responder appears in the grid. Copy its URL - you will use it as the report-uri value in the next step., }, { img: '../../img/docs/guides/csp/report_step3_policy_form.png', caption: <>Navigate to Web Security → CSP, click Create policy, and configure it with the report-uri directive pointing to the reporting responder URL. Click Save. 
, alt: 'Create a CSP Reporting policy with script-src and report-uri directives.', }, { img: '../../img/docs/guides/csp/report_step4_policy_created.png', caption: 'The policy appears in the grid.', }, { img: '../../img/docs/guides/csp/report_step5_copy_header.png', caption: <>Use the Copy context menu button to view the policy as an HTTP header (enforcing). The generated header includes the report-uri directive with the reporting responder URL., alt: 'Copy the policy as an HTTP header (enforcing) with report-uri.', }, { img: '../../img/docs/guides/csp/report_step6_eval_form.png', caption: <>Navigate back to Webhooks → Responders, click Create responder, and configure a responder that serves an HTML page with eval(). Set its Content-Security-Policy response header to include the policy with the report-uri directive. Click Save. , alt: 'Create a CSP Eval Test responder with a CSP header using report-uri and an HTML page that uses eval().', }, { img: '../../img/docs/guides/csp/report_step7_eval_created.png', caption: <>Both responders appear in the grid. The CSP Eval Test responder has a unique URL., }, { img: '../../img/docs/guides/csp/report_step8_eval_blocked.png', caption: <>Click the CSP Eval Test responder URL to open the test page. Click Eval - nothing happens because the Content Security Policy blocks eval(). The browser automatically sends a violation report to the report-uri endpoint., alt: 'Open the eval test page and click Eval - CSP blocks it and sends a report.', }, { img: '../../img/docs/guides/csp/report_step9_violation_report.png', caption: <>Go back to the responders grid and expand the CSP Reporting responder. The violation report sent by the browser is now visible in the requests list., alt: 'Expand the CSP Reporting responder to see the received violation report.', }, ]} /> --- ## Webhooks # What is a webhook? 
A **webhook** is a mechanism that enables an application to receive automatic notifications or data updates by sending a request to a specified URL when a particular event or trigger occurs. There are various types of webhooks that serve different purposes. One such type is the responder, which is a special webhook that responds to requests with a certain predefined response. A responder is a handy tool when you need to simulate an HTTP endpoint that's not yet implemented or even create a quick ["honeypot"](https://en.wikipedia.org/wiki/Honeypot_(computing)) endpoint. Responders can also serve as a quick and easy way to test HTML, JavaScript, and CSS code. On this page, you can find several guides on how to create different types of responders. :::tip NOTE Each user on [**secutils.dev**](https://secutils.dev) is assigned a randomly generated dedicated subdomain. This subdomain can host user-specific responders at any path, including the root path. For instance, if your dedicated subdomain is `abcdefg`, creating a responder at `/my-responder` would make it accessible via `https://abcdefg.webhooks.secutils.dev/my-responder`. ::: ## Return a static HTML page In this guide you'll create a simple responder that returns a static HTML page: Navigate to Webhooks → Responders and click Create responder., alt: 'Navigate to Webhooks → Responders and click Create responder.', }, { img: '../img/docs/guides/webhooks/html_step2_form.png', caption: <>Fill in the responder form and click Save. 
, alt: 'Fill in the responder name, path, headers, and body.', }, { img: '../img/docs/guides/webhooks/html_step3_created.png', caption: 'The new HTML responder appears in the grid with its unique URL.', }, { img: '../img/docs/guides/webhooks/html_step4_result.png', caption: <>Click the responder URL to open it in a new tab and verify it renders Hello World., alt: 'Open the responder URL and verify the HTML page renders.', }, ]} />

## Emulate a JSON API endpoint

In this guide, you'll create a simple responder that returns a JSON value: Navigate to Webhooks → Responders and click Create responder., alt: 'Navigate to Webhooks → Responders and click Create responder.', }, { img: '../img/docs/guides/webhooks/json_step2_form.png', caption: <>Fill in the responder form and click Save. , alt: 'Configure the JSON responder with name, path, headers, and a JSON body.', }, { img: '../img/docs/guides/webhooks/json_step3_created.png', caption: 'The JSON responder appears in the grid with its unique URL.', }, ]} />

Use an HTTP client like **cURL** to verify the responder returns the expected JSON value:

```bash title="Query the JSON responder"
$ curl -s https://<your-subdomain>.webhooks.secutils.dev/json-responder | jq .
{
  "message": "Hello World"
}
```

## Use the honeypot endpoint to inspect incoming requests

In this guide, you'll create a responder that returns an HTML page with custom Iframely meta-tags, providing a rich preview in Notion. Additionally, the responder will track the five most recent incoming requests, allowing you to see exactly how Notion communicates with the responder's endpoint: Navigate to Webhooks → Responders and click Create responder., alt: 'Navigate to Webhooks → Responders and click Create responder.', }, { img: '../img/docs/guides/webhooks/tracking_step2_form.png', caption: <>Enable Advanced mode, set Tracking to 5, and fill in the rest of the form. Click Save.
, alt: 'Configure the honeypot responder with tracking enabled and rich meta-tags in the body.', }, { img: '../img/docs/guides/webhooks/tracking_step3_created.png', caption: 'The honeypot responder appears in the grid with its unique URL.', }, { img: '../img/docs/guides/webhooks/tracking_step4_request.png', caption: <>Copy its URL and create a bookmark in Notion to test the rich preview. Expand the responder's row to view tracked incoming requests., alt: 'Expand the responder row to view tracked incoming requests.', }, ]} /> ## Generate a dynamic response In this guide, you'll build a responder that uses a custom JavaScript script to generate a dynamic response based on the request's query string parameter: :::info NOTE The script should be provided in the form of an [Immediately Invoked Function Expression (IIFE)](https://developer.mozilla.org/en-US/docs/Glossary/IIFE). It runs within a restricted version of the [Deno JavaScript runtime](https://deno.com/) for each incoming request, producing an object capable of modifying the default response's status code, headers, or body. Request details are accessible through the global `context` variable. Refer to the [Annex: Responder script examples](/docs/guides/webhooks#annex-responder-script-examples) for a list of script examples, expected return value, and properties available in the global `context` variable. See [Deno Sandbox Runtime](/docs/guides/platform/deno_runtime) for the full reference of available `Deno.core` APIs. ::: Navigate to Webhooks → Responders and click Create responder., alt: 'Navigate to Webhooks → Responders and click Create responder.', }, { img: '../img/docs/guides/webhooks/dynamic_step2_form.png', caption: <>Enable Advanced mode, set Tracking to 5, and fill in the form with a custom script. 
, alt: 'Configure the dynamic responder with a custom script that reads the query string.', }, { img: '../img/docs/guides/webhooks/dynamic_step3_created.png', caption: 'The dynamic responder appears in the grid with its unique URL.', }, { img: '../img/docs/guides/webhooks/dynamic_step4_no_arg.png', caption: <>Click the responder URL to open it in a new tab. Without the arg query parameter the script returns the default message., alt: 'Open the dynamic responder URL without a query parameter.', }, { img: '../img/docs/guides/webhooks/dynamic_step5_with_arg.png', caption: <>Append ?arg=hello to the URL and reload. The script reads the query parameter and returns the dynamic reply., alt: 'Open the dynamic responder URL with ?arg=hello.', }, ]} />

## Annex: Responder script examples

In this section, you'll discover examples of responder scripts capable of constructing dynamic responses based on incoming request properties. Essentially, each script defines a JavaScript function running within a restricted version of the [Deno JavaScript runtime](https://deno.com/). This function has access to incoming request properties through the global `context` variable. The returned value can override the responder's default status code, headers, and body. For a complete reference of the `Deno.core` utilities available inside the sandbox (encoding, Base64, type checking, and more), see [Deno Sandbox Runtime](/docs/guides/platform/deno_runtime). The `context` argument has the following interface:

```typescript
interface Context {
  // An internet socket address of the client that made the request, if available.
  clientAddress?: string;
  // HTTP method of the received request.
  method: string;
  // HTTP headers of the received request.
  headers: Record<string, string>;
  // HTTP path of the received request.
  path: string;
  // Raw query string of the received request (without the leading `?`), if present.
  rawQuery?: string;
  // Parsed query string of the received request.
  query: Record<string, string>;
  // HTTP body of the received request in binary form.
  body: number[];
  // User secrets (decrypted key-value pairs). Manage secrets in Workspace → Secrets.
  secrets: Record<string, string>;
}
```

:::note Reverse-proxy headers
Incoming requests reach Secutils.dev through a reverse proxy (Nginx in self-hosted setups, Traefik in the hosted service), which injects synthetic headers used internally to reconstruct the original request - anything starting with `x-forwarded-` (e.g. `x-forwarded-for`, `x-forwarded-host`, `x-forwarded-proto`), plus `x-real-ip` and `x-replaced-path`. These remain visible in `context.headers` so scripts can use them when needed, but the request history table in the UI hides them by default in the **Headers** column and exposes them in a separate, hidden-by-default **Proxy headers** column. The **Client address** column already reflects the real client IP (resolved from `x-real-ip` / `x-forwarded-for`) - and `context.clientAddress` does the same.
:::

:::tip Secrets access
By default, no secrets are exposed to a responder. To enable secrets, open the responder's advanced settings and set the **Secrets → Access mode** to **All secrets** or **Selected secrets**. Once enabled, you can reference secrets in scripts via `context.secrets.KEY` and in static body/headers via `${secrets.KEY}` template syntax.
:::

The returned value has the following interface:

```typescript
interface ScriptResult {
  // HTTP status code to respond with. If not specified, the responder's default status code is used.
  statusCode?: number;
  // Optional HTTP headers of the response. If not specified, the responder's default headers are used.
  headers?: Record<string, string>;
  // Optional HTTP body of the response. If not specified, the responder's default body is used.
  // Accepts Uint8Array, string, object, or array - see Body auto-conversion.
  body?: Uint8Array | string | object;
  // When true, the request is not recorded in the responder's tracked request history.
  skipRequest?: boolean;
  // When true, the response sent to the client is also stored alongside the tracked request.
  trackResponse?: boolean;
}
```

:::tip Body auto-conversion
The `body` field accepts multiple types: `Uint8Array` for raw bytes, a **string** (auto-encoded to UTF-8), a **plain object or array** (auto-serialized to JSON), or a **number/boolean** (auto-stringified). See [Body auto-conversion](/docs/guides/platform/deno_runtime#body-auto-conversion) for the full conversion table.
:::

### Override response properties

This script overrides the responder's response with a custom status code, headers, and body:

```javascript
(async () => {
  return {
    statusCode: 201,
    headers: { "Content-Type": "application/json" },
    body: { a: 1, b: 2 },
  };
})();
```

### Inspect request properties

This script inspects the incoming request properties and renders them as an HTML page:

```javascript
(async () => {
  // Decode request body as JSON.
  const parsedJsonBody = context.body.length > 0
    ? JSON.parse(Deno.core.decode(new Uint8Array(context.body)))
    : {};

  // Override response with a custom HTML body.
  return {
    body: `<h2>Request headers</h2>
<table>
  <tr><th>Header</th><th>Value</th></tr>
  ${Object.entries(context.headers).map(([key, value]) => `<tr><td>${key}</td><td>${value}</td></tr>`).join('')}
</table>
<h2>Request query</h2>
<table>
  <tr><th>Key</th><th>Value</th></tr>
  ${Object.entries(context.query ?? {}).map(([key, value]) => `<tr><td>${key}</td><td>${value}</td></tr>`).join('')}
</table>
<h2>Request body</h2>
<pre>${JSON.stringify(parsedJsonBody, null, 2)}</pre>`,
  };
})();
```

### Use secrets in a response

This script reads a user secret and includes it in the response. Secrets are managed in **Workspace → Secrets** and are available via `context.secrets`:

```javascript
(async () => {
  const apiKey = context.secrets.THIRD_PARTY_API_KEY ?? 'not-set';
  return { headers: { 'Content-Type': 'application/json' }, body: { apiKey } };
})();
```

### Proxy requests to an upstream service (MITM)

Responder scripts can forward incoming requests to a real backend using `Deno.core.ops.op_proxy_request()`, inspect or modify the response, and return it to the client. This turns the responder into a **man-in-the-middle (MITM) proxy** - useful for debugging, testing, and API traffic inspection.

**Pure proxy** - forward every request as-is (with optional `insecure` for self-signed certs and `timeout` in milliseconds):

```javascript
(async () => {
  return await Deno.core.ops.op_proxy_request({
    url: 'https://real-backend:9200' + context.path,
    method: context.method,
    headers: context.headers,
    body: context.body,
    insecure: true, // accept self-signed certificates
    timeout: 5000, // fail after 5 seconds
  });
})()
```

**Transform the response** - e.g., inject a field into JSON responses.
Compressed responses (gzip, deflate, Brotli) are automatically decompressed, so `JSON.parse` works regardless of the upstream's `Content-Encoding`: ```javascript (async () => { const resp = await Deno.core.ops.op_proxy_request({ url: `https://real-backend:9200${context.path}`, method: context.method, headers: context.headers, body: context.body, }); if (resp.headers['content-type']?.includes('application/json')) { const body = JSON.parse(Deno.core.decode(new Uint8Array(resp.body))); body._proxied = true; return { statusCode: resp.statusCode, headers: resp.headers, body }; } return resp; })() ``` **Conditional proxy** - proxy only certain paths, mock the rest: ```javascript (async () => { if (context.path.startsWith('/_cluster')) { return await Deno.core.ops.op_proxy_request({ url: `https://real-backend:9200${context.path}`, method: context.method, headers: context.headers, body: context.body, }); } return { statusCode: 200, body: { mock: true } }; })() ``` See [`Deno.core.ops.op_proxy_request()`](/docs/guides/platform/deno_runtime#op_proxy_request) for the full API reference including request/response interfaces, error handling, and security considerations. ### Protect a responder with HTTP Basic auth Responder URLs are public by design - anyone who knows the URL can hit them. For HTML apps you self-host or for JSON APIs you want to lock down, the script can gate access behind a password using **HTTP Basic Authentication**: browsers show a native sign-in prompt automatically, and `curl -u user:pass` works for API clients. Store the password as a Secutils secret named `APP_PASSWORD` (managed in [**Workspace → Secrets**](/docs/guides/platform/secrets)) and grant this responder access to it via **Advanced settings → Secrets**. 
By default any username is accepted and only the password is checked; flip `REQUIRE_USERNAME` to `true` (and create an `APP_USER` secret) to require both: ```javascript (() => { // === Configuration === // Set REQUIRE_USERNAME to true to also require a username. // - false (default): any username is accepted; only APP_PASSWORD is checked. // - true: the snippet additionally reads the expected username from // the secret APP_USER. Create it in Workspace → Secrets. const REQUIRE_USERNAME = false; const expectedPassword = context.secrets.APP_PASSWORD; const expectedUsername = REQUIRE_USERNAME ? context.secrets.APP_USER : null; if (!expectedPassword || (REQUIRE_USERNAME && !expectedUsername)) { return { statusCode: 500, headers: { 'Content-Type': 'text/plain; charset=utf-8' }, body: REQUIRE_USERNAME ? 'Missing required secrets APP_USER and/or APP_PASSWORD. Create them in Workspace → Secrets and grant this responder access to them.' : 'Missing required secret APP_PASSWORD. Create it in Workspace → Secrets and grant this responder access to it.', }; } // Constant-time string comparison. const ctEq = (a, b) => { if (a.length !== b.length) return false; let r = 0; for (let i = 0; i < a.length; i++) r |= a.charCodeAt(i) ^ b.charCodeAt(i); return r === 0; }; // Pure-JS base64 decoder (no atob in the sandbox). 
const fromBase64 = (b64) => { const C = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; const s = b64.replace(/=+$/, ''); const out = []; for (let i = 0; i < s.length; i += 4) { const a = C.indexOf(s[i]); const b = C.indexOf(s[i + 1]); const c = C.indexOf(s[i + 2]); const d = C.indexOf(s[i + 3]); out.push((a << 2) | (b >> 4)); if (c >= 0) out.push(((b & 15) << 4) | (c >> 2)); if (d >= 0) out.push(((c & 3) << 6) | d); } return Deno.core.decode(new Uint8Array(out)); }; const auth = context.headers['authorization'] || ''; const match = /^Basic\s+(\S+)$/.exec(auth); if (match) { const decoded = fromBase64(match[1]); const colon = decoded.indexOf(':'); const username = colon >= 0 ? decoded.slice(0, colon) : ''; const password = colon >= 0 ? decoded.slice(colon + 1) : ''; const passwordOk = ctEq(password, expectedPassword); const usernameOk = !REQUIRE_USERNAME || ctEq(username, expectedUsername); if (passwordOk && usernameOk) { // Authenticated — fall through to the responder's default response. return null; } } return { statusCode: 401, headers: { 'WWW-Authenticate': 'Basic realm="Protected", charset="UTF-8"', 'Content-Type': 'text/plain; charset=utf-8', }, body: 'Authentication required', }; })(); ``` The script returns `null` on a successful match, which leaves the responder's default status code, headers, and body untouched - so the rest of your responder configuration (HTML page, JSON template, etc.) just works behind the auth gate. :::tip Browser caching and logout HTTP Basic credentials are remembered by the browser until the tab is closed (no programmatic logout). If you need a real logout button, use the cookie-session pattern in the next section instead. ::: ### Protect a responder with a login form (cookie session) For HTML apps, a styled login form gives a better UX than the native browser dialog and supports an explicit logout. 
This script renders a self-contained sign-in page that uses the **Borealis** design tokens (the same palette as the Secutils.dev sign-in screen), sets a session cookie on success, and clears it on `?_logout=1`. Like the Basic auth example, it uses a single `APP_PASSWORD` secret by default - the password value (URL-encoded) doubles as the cookie value, so there is no separate session-token secret to manage. Flip `REQUIRE_USERNAME` to `true` (and create an `APP_USER` secret) if you also want a username field. :::caution Set the responder method to ANY Because this script handles both the `GET` (form render) and the `POST` (login submission), the responder must accept both. Open the responder editor and set **Method** to `ANY`. ::: ```javascript (() => { // === Configuration === // Set REQUIRE_USERNAME to true to also show a Username field on the login form. // - false (default): only APP_PASSWORD is required. // - true: the snippet additionally reads the expected username from // the secret APP_USER. Create it in Workspace → Secrets. const REQUIRE_USERNAME = false; const PASSWORD = context.secrets.APP_PASSWORD; const USERNAME = REQUIRE_USERNAME ? context.secrets.APP_USER : null; const COOKIE_NAME = 'sec_auth'; const MAX_AGE_SEC = 86400; // 24 hours if (!PASSWORD || (REQUIRE_USERNAME && !USERNAME)) { return { statusCode: 500, headers: { 'Content-Type': 'text/plain; charset=utf-8' }, body: REQUIRE_USERNAME ? 'Missing required secrets APP_USER and/or APP_PASSWORD. Create them in Workspace → Secrets and grant this responder access to them.' : 'Missing required secret APP_PASSWORD. Create it in Workspace → Secrets and grant this responder access to it.', }; } const ctEq = (a, b) => { if (a.length !== b.length) return false; let r = 0; for (let i = 0; i < a.length; i++) r |= a.charCodeAt(i) ^ b.charCodeAt(i); return r === 0; }; // Parse application/x-www-form-urlencoded body manually // (URLSearchParams may not exist in the sandbox). 
const parseForm = (raw) => { const out = {}; for (const pair of raw.split('&')) { if (!pair) continue; const eq = pair.indexOf('='); const rk = eq < 0 ? pair : pair.slice(0, eq); const rv = eq < 0 ? '' : pair.slice(eq + 1); out[decodeURIComponent(rk.replace(/\+/g, ' '))] = decodeURIComponent(rv.replace(/\+/g, ' ')); } return out; }; const sessionCookie = `${COOKIE_NAME}=${encodeURIComponent(PASSWORD)}; HttpOnly; Secure; SameSite=Strict; Path=/; Max-Age=${MAX_AGE_SEC}`; const clearCookie = `${COOKIE_NAME}=; HttpOnly; Secure; SameSite=Strict; Path=/; Max-Age=0`; const usernameField = REQUIRE_USERNAME ? '\n ' : ''; const pwdAutofocus = REQUIRE_USERNAME ? '' : 'autofocus'; const renderLogin = (errorHtml) => ({ statusCode: 401, headers: { 'Content-Type': 'text/html; charset=utf-8', 'Cache-Control': 'no-store' }, body: ` Sign in
Sign in Enter the password to access this page. ${errorHtml} ${usernameField}
`, }); // Logout: ?_logout=1 clears the cookie and shows the login form again. if (context.query._logout === '1') { return { statusCode: 303, headers: { 'Set-Cookie': clearCookie, 'Location': context.path, 'Content-Type': 'text/plain; charset=utf-8', }, body: 'Signed out', }; } // Login form submission. if (context.method === 'POST' && context.query._auth === '1') { const form = parseForm(Deno.core.decode(new Uint8Array(context.body))); const submittedPassword = form.password || ''; const submittedUsername = form.username || ''; const passwordOk = ctEq(submittedPassword, PASSWORD); const usernameOk = !REQUIRE_USERNAME || ctEq(submittedUsername, USERNAME); if (passwordOk && usernameOk) { return { statusCode: 303, headers: { 'Set-Cookie': sessionCookie, 'Location': context.path, 'Content-Type': 'text/plain; charset=utf-8', }, body: 'Authenticated', }; } return renderLogin('Incorrect ' + (REQUIRE_USERNAME ? 'username or password' : 'password') + '.'); } // Existing session cookie? const cookieHeader = context.headers['cookie'] || ''; const matched = cookieHeader .split(';') .map((s) => s.trim()) .find((c) => c.startsWith(COOKIE_NAME + '=')); if (matched) { const value = decodeURIComponent(matched.slice(COOKIE_NAME.length + 1)); if (ctEq(value, PASSWORD)) { // Authenticated — fall through to the responder's default response. return null; } } return renderLogin(''); })(); ``` Drop a "Sign out" link anywhere in your app HTML pointing at `?_logout=1` to give users an explicit logout. The cookie is set with `HttpOnly`, `Secure`, `SameSite=Strict`, and a 24-hour `Max-Age` - tweak `MAX_AGE_SEC` if you want a shorter or longer session window. Because the cookie is path-scoped to `/`, a single sign-in covers every protected responder on the same subdomain. :::caution Single shared password This pattern is intended for a single shared password (one app, one team). 
It does not support per-user accounts, password rotation without invalidating all sessions, or rate-limiting brute-force attempts. For higher-stakes content, consider client-side encryption or a dedicated identity provider. ::: ### Selectively track requests By default every incoming request is recorded in the responder's tracked history (up to the configured limit). If you only care about certain requests, the script can return `skipRequest: true` to suppress tracking for the current request while still returning a normal response: ```javascript (async () => { // Only track non-health-check requests. const skip = context.path === '/healthz' || context.path === '/readyz'; return { body: 'ok', skipRequest: skip }; })(); ``` ### Track responses By default, only the incoming request is stored in the history. If you also need to inspect the response sent back to the client (for example, to debug upstream proxy results), the script can return `trackResponse: true`. The response status code, headers, and body are then stored alongside the request in the same history row: ```javascript (async () => { const resp = await Deno.core.ops.op_proxy_request({ url: `https://backend.example.com${context.path}`, method: context.method, headers: context.headers, body: context.body, }); return { ...resp, // Only track the response when the upstream returned an error. trackResponse: resp.statusCode >= 400, }; })() ``` The tracked response body is subject to a size limit (default 1 MB). If the response exceeds this limit, the stored copy is truncated, but the full response is still sent to the client. When a script **fails** (throws an error, times out, etc.), the error response is automatically captured in the tracked request without needing `trackResponse`. This provides crucial debugging visibility for script failures. :::tip If both `skipRequest: true` and `trackResponse: true` are set, `skipRequest` takes precedence and no tracking occurs at all. 
:::

### Export request history as HAR

The request history table includes an **Export as HAR** button that downloads the tracked requests (and responses, when available) as an [HTTP Archive (HAR)](http://www.softwareishard.com/blog/har-12-spec/) file. HAR files can be imported into browser DevTools, Charles Proxy, or other HTTP analysis tools for further inspection. The duration column shows the total server-side processing time for each request, and this value is included in the HAR file's timing information.

### Generate images and other binary content

Responders can return not only JSON, HTML, or plain text, but also binary data, such as images. This script demonstrates how you can generate a simple PNG image on the fly. PNG generation requires quite a bit of code, so for brevity, this guide assumes that you have already downloaded and edited the [`png-generator.js` script](https://secutils-dev.github.io/secutils-sandbox/responder-scripts/png-generator.js) from the [Secutils.dev Sandbox repository](https://github.com/secutils-dev/secutils-sandbox) (you can find the full source code [here](https://github.com/secutils-dev/secutils-sandbox/blob/6fe5bdf0ad8df23ea67a46e6624c8d6975f96f6a/responder-scripts/src/png-generator.ts)). The part you might want to edit is located at the bottom of the script:

```javascript
(() => {
  // …[Skipping definition of the `PngImage` class for brevity]…

  // Generate a custom 100x100 PNG image with a white background and red rectangle in the center.
  const png = new PngImage(100, 100, 10, { r: 255, g: 255, b: 255, a: 1 });
  const color = png.createRGBColor({ r: 255, g: 0, b: 0, a: 1 });
  png.drawRect(25, 25, 75, 75, color);
  return { body: png.getBuffer() };
})();
```

---

## API Reference

Secutils.dev exposes a REST API for managing all resources programmatically. The full API is described by an [OpenAPI 3.1](https://spec.openapis.org/oas/v3.1.0) specification and can be explored interactively.
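As a quick sanity check, both the specification and any authenticated endpoint can be exercised with **cURL**. A minimal sketch - the `su_ak_YOUR_API_KEY` token below is a placeholder for an API key you create yourself:

```bash title="Query the Secutils.dev API"
# Fetch the OpenAPI specification.
$ curl -s https://secutils.dev/api-docs/openapi.json | jq .info.title

# List your responders using an API key (replace the placeholder token).
$ curl -s -H 'Authorization: Bearer su_ak_YOUR_API_KEY' \
    https://secutils.dev/api/webhooks/responders | jq .
```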
| Resource | Description | |-------------------------|----------------------------------------------------------------------------------| | **Interactive docs** | [secutils.dev/api-docs](https://secutils.dev/api-docs) | | **OpenAPI spec (JSON)** | [secutils.dev/api-docs/openapi.json](https://secutils.dev/api-docs/openapi.json) | ## Available API groups | Tag | Base path | Description | |----------------|---------------------------------------------------------------------|------------------------------------------------------------------| | `webhooks` | `/api/webhooks/responders` | Create HTTP responders that capture and replay incoming requests | | `certificates` | `/api/certificates/templates`, `/api/certificates/private_keys` | Generate X.509 certificate templates and manage private keys | | `web_scraping` | `/api/web_scraping/page_trackers`, `/api/web_scraping/api_trackers` | Track changes to web pages and API endpoints | | `web_security` | `/api/web_security/csp` | Build, parse, and serialize Content Security Policy headers | | `api_keys` | `/api/user/api_keys` | Create and manage API keys for programmatic access | | `tags` | `/api/user/tags` | Organize resources with colored tags | | `secrets` | `/api/user/secrets` | Store encrypted secrets for use in scripts | | `scripts` | `/api/user/scripts` | Manage reusable JavaScript scripts for responders and trackers | | `settings` | `/api/user/settings` | Read and update user preferences | | `data` | `/api/user/data` | Export and import user data | ## Authentication All API endpoints require authentication. The following methods are supported: | Method | Format | Description | |--------------------|---------------------------------|---------------------------------------------------------------------------------------------------------------| | **Session cookie** | `id` cookie | Automatically set by the browser after login | | **API key** | `Authorization: Bearer su_ak_…` | Opaque token for programmatic/agent access. 
Create via the API keys page or the `/api/user/api_keys` endpoint | | **JWT** | `Authorization: Bearer eyJ…` | Service-account token (operator use only) | API keys are the recommended method for scripts, CI pipelines, and AI agents. They can have an optional expiration date and are independent of the browser session. The plaintext token is shown only once at creation - store it securely. Shared resources can be accessed anonymously with the `x-secutils-share-id` header. --- ## 2023 Changelog ## 1.0.0-alpha.4 **2023-12-26** ### [Secutils.dev API](https://github.com/secutils-dev/secutils) #### ⚠ BREAKING CHANGES * **platform:** switch to a new [database migration naming schema](https://github.com/secutils-dev/secutils/tree/main/migrations) and dedicated tables for user data * **platform:** use [proper REST URLs](https://github.com/secutils-dev/secutils/tree/main/tools/api/utils) for all utilities APIs #### Features * **platform:** add support for job retries (constant, linear, and exponential) ([secutils@f3decab](https://github.com/secutils-dev/secutils/commit/f3decab1d36b998fc6514398e64e951410742c05)) * **platform:** allow cross-origin requests to the utilities APIs ([secutils@c1b0dde](https://github.com/secutils-dev/secutils/commit/c1b0dde5e7dccd1890174532e0e1f7fc54b93ca4)) * **certificates:** introduce support for a new `Certificates -> Private keys` utility API ([secutils@ae8a581](https://github.com/secutils-dev/secutils/commit/ae8a5814c3c85342f8a997186b4a930bea6327cc), [secutils#8](https://github.com/secutils-dev/secutils/issues/8)) * **certificates:** allow sharing certificate templates ([secutils@1d57188](https://github.com/secutils-dev/secutils/commit/1d571889b31ae4d86a385834182582bab36cacf7)) * **web-scraping:** introduce `Web Scraping -> Content trackers` utility API ([secutils@b879bf1](https://github.com/secutils-dev/secutils/commit/b879bf1d6e065fd00322ce9a553a9af4499bbbdb)) * **web-scraping:** add diff support for the `Web Scraping -> Resources trackers` 
preview ([secutils@a647e79](https://github.com/secutils-dev/secutils/commit/a647e796e92839106355b5ea6d5985c5fba0039b)) * **web-scraping:** add support for custom HTTP request headers in `Web Scraping -> Resources trackers` utility API ([secutils@83f48c0](https://github.com/secutils-dev/secutils/commit/83f48c08a4a6be79ebadf0cb3c68b1371ccb17b8)) * **web-scraping:** notify users about failed attempts to check changes in resources or content ([secutils@473191e](https://github.com/secutils-dev/secutils/commit/473191e8f292701891192ed48f0d66ba13cd38f9)) * **web-security:** implement an API for importing content security policies (CSP) ([secutils@2db6c0a](https://github.com/secutils-dev/secutils/commit/2db6c0a04221f612880b9846b1e44430d17fae89)) * **webhooks:** add support for "subdomain"-based webhook URLs ([secutils@eada924](https://github.com/secutils-dev/secutils/commit/eada924dcd5381e7e03224f9f3c89b57df1da742)) #### Fixes * **platform:** use `secutils/{version}` as the `User-Agent` HTTP header for all outbound HTTP requests ([secutils@0a2d7e2](https://github.com/secutils-dev/secutils/commit/0a2d7e2b550667b278cdff3c7f40cba314fbe044)) * **web-scraping:** surface web page content and resources tracker errors in the API responses ([secutils@888c8ac](https://github.com/secutils-dev/secutils/commit/888c8acd4cedba1d9267df2135a79423d84be7a2)) * **webhooks:** properly handle webhook request for root path (`/`) ([secutils@a5c3dcd](https://github.com/secutils-dev/secutils/commit/a5c3dcd3ca793a9b1ebc74863d7e263f3e11aefd)) **Full Changelog**: [secutils@v1.0.0-alpha.3...v1.0.0-alpha.4](https://github.com/secutils-dev/secutils/compare/v1.0.0-alpha.3...v1.0.0-alpha.4) ### [Secutils.dev Web UI](https://github.com/secutils-dev/secutils-webui) #### Features * **platform:** add support for job retries in all web page tracker UIs (only constant strategy) ([secutils-webui@b44fd2d](https://github.com/secutils-dev/secutils-webui/commit/b44fd2df88c12d927251bea84c7155b70521aa72)) ![Secutils.dev 
UI - Retries](/img/docs/changelog_1.0.0_alpha.4_retries.png) * **certificates:** introduce UI for a new `Certificates -> Private keys` utility ([secutils-webui@a9462dd](https://github.com/secutils-dev/secutils-webui/commit/a9462dd6561c1686efaef42f21dc9d555217a8d9), check out [the guides](https://secutils.dev/docs/guides/digital_certificates/private_keys) to learn more) ![Secutils.dev UI - Private keys](/img/docs/changelog_1.0.0_alpha.4_private_keys.png) * **certificates:** allow sharing certificate templates ([secutils-webui@dc3a269](https://github.com/secutils-dev/secutils-webui/commit/dc3a26971520438875c6bef9428683fc12d96598), check out [the guides](https://secutils.dev/docs/guides/digital_certificates/certificate_templates#share-a-certificate-template) to learn more) ![Secutils.dev UI - Share certificate templates](/img/docs/changelog_1.0.0_alpha.4_share_certificate_templates.png) * **web-scraping:** add UI for custom HTTP headers for web page trackers ([secutils-webui@5ec9b00](https://github.com/secutils-dev/secutils-webui/commit/5ec9b00ab103782797294d49d982982d0b9075cc)) ![Secutils.dev UI - Trackers headers](/img/docs/changelog_1.0.0_alpha.4_tracker_headers.png) * **web-scraping:** introduce UI for a new `Web Scraping -> Page trackers` utility ([secutils-webui@dcde972](https://github.com/secutils-dev/secutils-webui/commit/dcde97204ce0191d7d810f49d10b8bec3e247b83), check out [the guides](https://secutils.dev/docs/guides/web_scraping/page) to learn more) ![Secutils.dev UI - Content trackers](/img/docs/changelog_1.0.0_alpha.4_content_trackers.png) * **web-scraping:** redesign web page tracker previews ([secutils-webui@22bea69](https://github.com/secutils-dev/secutils-webui/commit/22bea699d3ad74509edc03fa7c7f6e6d1ce57579)) ![Secutils.dev UI - Trackers preview](/img/docs/changelog_1.0.0_alpha.4_trackers_preview.png) * **web-security:** implement UI for importing content security policies (CSP) 
([secutils-webui@de60ab7](https://github.com/secutils-dev/secutils-webui/commit/de60ab7f2e96c0a85779fb1f5fdb129e2c378add), check out [the guides](https://secutils.dev/docs/guides/web_security/csp#import-a-content-security-policy-from-url) and [this blog post](https://secutils.dev/docs/blog/explore-websites-through-csp) to learn more) ![Secutils.dev UI - Import CSP](/img/docs/changelog_1.0.0_alpha.4_import_csp.png) * **webhooks:** add support for "subdomain"-based webhook URLs ([secutils-webui@edc77c3](https://github.com/secutils-dev/secutils-webui/commit/edc77c317ea869d744fbb31208dc635bdc9addb3), check out [the guides](https://secutils.dev/docs/guides/webhooks) to learn more) ![Secutils.dev UI - Responders subdomains](/img/docs/changelog_1.0.0_alpha.4_responders_subdomain.png) * **webhooks:** support responders with the same path, but different HTTP methods ([secutils-webui@ec43221](https://github.com/secutils-dev/secutils-webui/commit/ec4322107609205c6bde28e2c80cfeb20b5eec99)) ![Secutils.dev UI - Responders same path](/img/docs/changelog_1.0.0_alpha.4_responders_same_path.png) #### Fixes * **platform:** make sure grid items are rendered consistently ([secutils-webui@6213cc6](https://github.com/secutils-dev/secutils-webui/commit/6213cc642b037c4c7853f334e20c875af3e881e4)) * **certificates:** properly handle name change in certificate template editor ([secutils-webui@5134646](https://github.com/secutils-dev/secutils-webui/commit/5134646f421dfa3146c57fde367d4db655ee402d)) * **certificates:** fix docs links for certificate templates and private keys ([secutils-webui@87d1759](https://github.com/secutils-dev/secutils-webui/commit/87d1759da919eab35ee31e95717ac6a0b59f965a)) * **web-scraping:** use tracker ID as a unique identifier instead of name ([secutils-webui@6ead9be](https://github.com/secutils-dev/secutils-webui/commit/6ead9bed6ad4cebd51573f51682a1206f803d854)) * **web-security:** remove `X-User-Share-Id` from URL if it is invalid to avoid infinite reload loop 
([secutils-webui@64ea260](https://github.com/secutils-dev/secutils-webui/commit/64ea260478e5fb3d2f98121a4cfb91bb6a2f27dc)) #### Enhancements * **web-scraping:** render web page tracker name with indicators for scheduled checks and notifications ([secutils-webui@7fca493](https://github.com/secutils-dev/secutils-webui/commit/7fca493116f7a2ecf386270a9cb721303d92d305)) ![Secutils.dev UI - Trackers indicators](/img/docs/changelog_1.0.0_alpha.4_trackers_indicators.png) **Full Changelog**: [secutils-webui@v1.0.0-alpha.3...v1.0.0-alpha.4](https://github.com/secutils-dev/secutils-webui/compare/v1.0.0-alpha.3...v1.0.0-alpha.4) ### [Secutils.dev Web Scraper](https://github.com/secutils-dev/secutils-web-scraper) #### Features * **web-page:** add support for custom request HTTP headers ([secutils-web-scraper@6a743ea](https://github.com/secutils-dev/secutils-web-scraper/commit/6a743ea7dd0663a920f560fdfa4b2a7acff25977)) * **web-page:** disable browser cache and selectively proxy requests to bypass CSP/CORS restrictions ([secutils-web-scraper@6825861](https://github.com/secutils-dev/secutils-web-scraper/commit/6825861f80875652163445abb6370178030804de)) * **web-page:** disable CORS with `--disable-web-security` Chromium launch flag ([secutils-web-scraper@f9507eb](https://github.com/secutils-dev/secutils-web-scraper/commit/f9507eb9f78f5d3000a3998a66d814b344f0e05c)) * **web-page:** introduce new Web Content scraper API ([secutils-web-scraper@a7d9de0](https://github.com/secutils-dev/secutils-web-scraper/commit/a7d9de0cee0cc60851cc6a2aa7e82657f1c47a4d)) #### Fixes * **web-page:** bump web page `load` timeout from 5000ms to 10000ms ([secutils-web-scraper@a9ee6ad](https://github.com/secutils-dev/secutils-web-scraper/commit/a9ee6ad0c7dddda6ad157ac0a398106d3836645c)) * **web-page:** serialize content state even if it has exceeded allowed size ([secutils-web-scraper@7df4534](https://github.com/secutils-dev/secutils-web-scraper/commit/7df4534970183ed2ab06ba4276d6720f58744acf)) * 
**web-page:** use only web page URL, headers, and user scripts to calculate cache key ([secutils-web-scraper@bc163f5](https://github.com/secutils-dev/secutils-web-scraper/commit/bc163f5c9417ac975a643a97074c39a05bd81d4c)) * **web-page:** use proper cryptographic hash for the response cache key ([secutils-web-scraper@bec0919](https://github.com/secutils-dev/secutils-web-scraper/commit/bec091986aebb465a6e58de291696c6c0874d511)) #### Enhancements * **web-page:** use more stable Chrome DevTools Protocol to capture external resources ([secutils-web-scraper@129ca5a](https://github.com/secutils-dev/secutils-web-scraper/commit/129ca5ae716a8a6c5943f55019e7f532cb8e7ee4)) * **web-page:** pretty print HTML content ([secutils-web-scraper@cb613f4](https://github.com/secutils-dev/secutils-web-scraper/commit/cb613f4c506b8b62d2b5428d51ee3749d6cda9ba)) * **web-page:** use stable JSON stringifier to persist web page extracted content ([secutils-web-scraper@5165a83](https://github.com/secutils-dev/secutils-web-scraper/commit/5165a830445f2969a55c34e76c6c68528a93e3ff)) **Full Changelog**: [secutils-web-scraper@v1.0.0-alpha.3...v1.0.0-alpha.4](https://github.com/secutils-dev/secutils-web-scraper/compare/v1.0.0-alpha.3...v1.0.0-alpha.4) ## 1.0.0-alpha.3 **2023-10-03** ### [Secutils.dev API](https://github.com/secutils-dev/secutils) #### Features * **[Web Scraping]** Added support for automatic scheduled checks for changes in tracked web page resources: hourly, daily, weekly, monthly. Refer to the [documentation & guides](https://secutils.dev/docs/guides/web_scraping/page#detect-changes-in-web-page-resources) to learn more. ([secutils#20](https://github.com/secutils-dev/secutils/issues/20)) * **[Web Scraping]** Added support for email notifications when changes in tracked web page resources are detected. Refer to the [documentation & guides](https://secutils.dev/docs/guides/web_scraping/page#detect-changes-in-web-page-resources) to learn more. 
([secutils@7595eb](https://github.com/secutils-dev/secutils/commit/7595eb9d7a81fc96e4acec88f9a3488e0c2006d7)) * **[Web Scraping]** Added support for custom web resources tracker scripts (JavaScript) to assist with resource filtering and mapping. Refer to the [documentation & guides](https://secutils.dev/docs/guides/web_scraping/page#filter-web-page-resources) to learn more. ([secutils#19](https://github.com/secutils-dev/secutils/issues/19)) * **[Web Security]** Added support for shareable user resources to improve collaboration (only for CSP in this release). Refer to the [documentation & guides](https://secutils.dev/docs/guides/web_security/csp) to learn more. ([secutils#21](https://github.com/secutils-dev/secutils/issues/21)) * **[Digital Certificates]** Made key size (RSA, DSA) and curve name (EC) configurable in certificate templates. ([secutils#8](https://github.com/secutils-dev/secutils/issues/8)) #### Enhancements * **[Search]** Upgraded to Tantivy `v0.21.0` ([secutils@2a5c83](https://github.com/secutils-dev/secutils/commit/2a5c83378dc0edb8875140337de54b5bec2a49ea)) and switched to a lenient query parser for the keywords search to make search more error-tolerant. ([secutils@2f6c10](https://github.com/secutils-dev/secutils/commit/2f6c10bc5c47e0ef217fbd7874dd41ceda41ba8e)) * **[Misc]** Updated OpenSSL libs in the Docker image. ([secutils@e98a31](https://github.com/secutils-dev/secutils/commit/e98a31aee7e7782b832de23d620a1e2eb317e155)) * **[Misc]** Switched Docker image to use non-root user for better security.
([secutils@36555c](https://github.com/secutils-dev/secutils/commit/36555c974b75e2064ef642448503c08267dc9ef1)) * **[Misc]** Dependency upgrades **Full Changelog**: [secutils@v1.0.0-alpha.2...v1.0.0-alpha.3](https://github.com/secutils-dev/secutils/compare/v1.0.0-alpha.2...v1.0.0-alpha.3) ### [Secutils.dev Web UI](https://github.com/secutils-dev/secutils-webui) #### Features * **[Web Scraping]** Added UIs to support web page resources tracking enhancements provided by the latest Secutils.dev API. ([secutils-webui@d2a102](https://github.com/secutils-dev/secutils-webui/commit/d2a102617f360c02df64a004d88bddb4eff7b072), [secutils-webui@f485ff](https://github.com/secutils-dev/secutils-webui/commit/f485ff123fc8b4512fd48b7e8c2ab65cfb90c05a), [secutils-webui@683b5b](https://github.com/secutils-dev/secutils-webui/commit/683b5bfd602cc47617298f8caceaf6b9e62f41dd)) ![Secutils.dev UI - Web Scraping](https://secutils.dev/docs/img/docs/changelog_1.0.0_alpha.3_web_scraping.png) * **[Digital Certificates]** Added UIs to support certificate templates enhancements provided by the latest Secutils.dev API. ([secutils-webui@f83328](https://github.com/secutils-dev/secutils-webui/commit/f83328bdafd4cad21b6033fc92b2dd4ad348a605), [secutils-webui@569883](https://github.com/secutils-dev/secutils-webui/commit/5698837f6dda5fa0ffc5542754e5685c1b2c82e0), [secutils-webui@de81c8](https://github.com/secutils-dev/secutils-webui/commit/de81c85bea84c983e2bb6fd77eca9d867cd8a4fc)) ![Secutils.dev UI - Certificates Key Size](https://secutils.dev/docs/img/docs/changelog_1.0.0_alpha.3_certificates_key_size.png) ![Secutils.dev UI - Certificates Curve Name](https://secutils.dev/docs/img/docs/changelog_1.0.0_alpha.3_certificates_curve_name.png) * **[Web Security]** Added UIs to support shareable user resources to improve collaboration (only for CSP in this release). Refer to the [documentation & guides](https://secutils.dev/docs/guides/web_security/csp) to learn more. 
([secutils#21](https://github.com/secutils-dev/secutils/issues/21)) ![Secutils.dev UI - Sharing](https://secutils.dev/docs/img/docs/changelog_1.0.0_alpha.3_sharing.png) #### Enhancements * **[Misc]** Switched main Docker image to `nginxinc/nginx-unprivileged:alpine3.18-slim` for better security and a smaller image size. ([secutils-webui@b31692](https://github.com/secutils-dev/secutils-webui/commit/b31692259d96db376b40261e442d49ce78f8987f)) * **[Misc]** Dependency upgrades **Full Changelog**: [secutils-webui@v1.0.0-alpha.2...v1.0.0-alpha.3](https://github.com/secutils-dev/secutils-webui/compare/v1.0.0-alpha.2...v1.0.0-alpha.3) ### [Secutils.dev Web Scraper](https://github.com/secutils-dev/secutils-web-scraper) #### Features * **[Web Scraping]** Extended Resources API to support custom JavaScript scripts for resources filtering and mapping. ([secutils-web-scraper@ba5406](https://github.com/secutils-dev/secutils-web-scraper/commit/ba5406beb40e941ab9c9a4093cb3b9e109492be7))

```http
POST /api/resources
Accept: application/json
Content-Type: application/json

{
  "url": "https://secutils.dev",
  "scripts": {
    "resourceFilterMap": "return resource.type === 'script' ? resource : null;"
  }
}
```

#### Enhancements * **[Misc]** Enabled sandbox for the headless Chromium used to extract web page resources and switched Docker image to use non-root user for better security. ([secutils-web-scraper@4717f7](https://github.com/secutils-dev/secutils-web-scraper/commit/4717f744e093a7ac376558ef4ed59f23af9df9aa)) * **[Misc]** Dependency upgrades **Full Changelog**: [secutils-web-scraper@v1.0.0-alpha.2...v1.0.0-alpha.3](https://github.com/secutils-dev/secutils-web-scraper/compare/v1.0.0-alpha.2...v1.0.0-alpha.3) ## 1.0.0-alpha.2 **2023-07-25** ### Secutils.dev API #### Features * **[Web Scraping]** Added support for [web page resources tracking](../../guides/web_scraping/page) functionality ([secutils#14](https://github.com/secutils-dev/secutils/issues/14)).
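The `resourceFilterMap` script shown in the Resources API request above is plain JavaScript evaluated once per extracted resource: returning the resource keeps it, returning `null` drops it. A minimal Node.js sketch of those filtering semantics — the sample `resources` array is illustrative only, not data returned by the API:

```javascript
// Illustrative sample of resources a scraper might extract from a page.
const resources = [
  { url: 'https://secutils.dev/main.js', type: 'script' },
  { url: 'https://secutils.dev/main.css', type: 'stylesheet' },
  { url: 'https://secutils.dev/vendor.js', type: 'script' },
];

// Same body as in the API example: keep script resources, drop the rest.
const resourceFilterMap = (resource) =>
  resource.type === 'script' ? resource : null;

// A `null` return value excludes the resource from the tracked set.
const tracked = resources.map(resourceFilterMap).filter((r) => r !== null);
console.log(tracked.map((r) => r.url));
// → [ 'https://secutils.dev/main.js', 'https://secutils.dev/vendor.js' ]
```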
#### Enhancements * **[Digital Certificates]** Fall back to the latest version of the X.509 certificate defined by the spec if not specified by the client ([secutils#1](https://github.com/secutils-dev/secutils/issues/1)) * **[Search]** Switch to Tantivy `v0.20.0` and change data folder naming scheme to include search index version to support auto-reindexing ([secutils@ef9dbf](https://github.com/secutils-dev/secutils/commit/ef9dbf2baa0643f8c7874decb640ec67453047a2)) * **[Misc]** Bump Docker image to Alpine 3.18 ([secutils@9653ac](https://github.com/secutils-dev/secutils/commit/9653ac960b3f468744ca9ea53ef91b7ff1418e1e)) * **[Misc]** Add parameter validation for the utils actions APIs ([secutils@a02a01](https://github.com/secutils-dev/secutils/commit/a02a01a084f539984c2f39fd4a9ba5855a89a3d9)) * **[Misc]** Dependency upgrades **Full Changelog**: [secutils@v1.0.0-alpha.1...v1.0.0-alpha.2](https://github.com/secutils-dev/secutils/compare/v1.0.0-alpha.1...v1.0.0-alpha.2) ### Secutils.dev Web UI #### Features * **[Web Scraping]** Added UIs to support [web page resources tracking](../../guides/web_scraping/page) functionality ([secutils#14](https://github.com/secutils-dev/secutils/issues/14)).
#### Enhancements * **[Docs]** Update footer to include links to Blog and Documentation ([secutils-webui@8ce447](https://github.com/secutils-dev/secutils-webui/commit/8ce447e0c1f69dc66f7486f19502d687badbadd7)) * **[Misc]** Bump Docker "builder" image to `node:20-alpine3.18` ([secutils-webui@48a505](https://github.com/secutils-dev/secutils-webui/commit/48a50515957ed36de3bd29dd211c1cfdcf02ce65)) * **[Misc]** Dependency upgrades #### Fixes * **[Security]** Recover original URL after signin ([secutils#9](https://github.com/secutils-dev/secutils/issues/9)) * **[Misc]** Switch local watch port to `7171` ([secutils-webui@9adb12](https://github.com/secutils-dev/secutils-webui/commit/9adb128d8f2eacde94d835bbb63f8926e27dd98f)) **Full Changelog**: [secutils-webui@v1.0.0-alpha.1...v1.0.0-alpha.2](https://github.com/secutils-dev/secutils-webui/compare/v1.0.0-alpha.1...v1.0.0-alpha.2) ### Secutils.dev Web Scraper #### Features * **[Web Scraping]** Initial release of [Secutils.dev Web Scraper component](https://github.com/secutils-dev/secutils-web-scraper) :tada: **Full Changelog**: [secutils-web-scraper@main](https://github.com/secutils-dev/secutils-web-scraper/commits/main) ## 1.0.0-alpha.1 **2023-06-01** :::tip ANNOUNCEMENT This is the first public release of Secutils.dev 🎉 ::: ### Secutils.dev API #### Features * **[Webhooks]** Added support for basic [webhooks functionality](../../guides/webhooks). * **[Digital Certificates]** Added support for generation of the [certificate templates](../../guides/digital_certificates/certificate_templates). * **[Web Security]** Added support for generation of the [Content Security Policies (CSP)](../../guides/web_security/csp). ### Secutils.dev Web UI #### Features * **[Webhooks]** Added UIs to support basic [webhooks functionality](../../guides/webhooks). * **[Digital Certificates]** Added UIs to support generation of the [certificate templates](../../guides/digital_certificates/certificate_templates). 
* **[Web Security]** Added UIs to support generation of the [Content Security Policies (CSP)](../../guides/web_security/csp). --- ## 2024 Changelog ## 1.0.0-beta.1 **2024-05-20** ### [Secutils.dev API](https://github.com/secutils-dev/secutils) #### ⚠ BREAKING CHANGES * **platform:** migrate from SQLite to PostgreSQL as the main database ([secutils@6c73226](https://github.com/secutils-dev/secutils/commit/6c7322620164816a43584356797fde57e83edf9b)) * **platform, security:** move identity management to [Ory Kratos](https://github.com/ory/kratos) ([secutils@2035135](https://github.com/secutils-dev/secutils/commit/2035135472af85b0b94de7dd9b109b43131f8c13)) * **platform, config:** add support for the application TOML configuration file ([secutils@3446290](https://github.com/secutils-dev/secutils/commit/3446290231069942970119599acfb68cfaa64531)) #### Features * **platform, security:** introduce API to retrieve currently authenticated user ([secutils@a3a4471](https://github.com/secutils-dev/secutils/commit/a3a4471f9e7aa51ea44ece4d7512738a9177c37c)) * **platform, security:** introduce admin-only APIs to retrieve any users by ID and email ([secutils@7c8ec97](https://github.com/secutils-dev/secutils/commit/7c8ec9775ab4342b2cac379316280f4889067250)) * **platform, security:** introduce API to terminate user account ([secutils@8be73d4](https://github.com/secutils-dev/secutils/commit/8be73d40f79365fab3a78b9ff988c5ddecd36f2f)) * **webhooks:** introduce API to enable/disable responders ([secutils@35965fc](https://github.com/secutils-dev/secutils/commit/35965fc62772f76797acb11bccd8176d43c1864f)) * **platform:** add support for user subscriptions and tiers ([secutils@281f80f](https://github.com/secutils-dev/secutils/commit/281f80f7abb1ce211bfb3b21a3a20224e2bd11eb)) * **platform:** introduce subscription tier config ([secutils@72cfd03](https://github.com/secutils-dev/secutils/commit/72cfd031d8a010aff7936dffbd67430ac0240f42)) and make subscription management and feature overview URLs 
configurable ([secutils@70faae9](https://github.com/secutils-dev/secutils/commit/70faae96d04ee53f18e4d807dda6829a6b487624)) * **platform:** switch to a structured logger ([secutils@60e4a8e](https://github.com/secutils-dev/secutils/commit/60e4a8e44f3fbc1522bbd3cb70c42c5f02b1b02a)) and add support for more detailed structured logging ([secutils@998bdd3](https://github.com/secutils-dev/secutils/commit/998bdd378469e65a3ac54f7df7176707f2e9954c)) * **platform, security:** add support for JWT credentials ([secutils@6e6ca22](https://github.com/secutils-dev/secutils/commit/6e6ca224e4cd507eea1e8e006dd00d96297c5193)) * **platform, security:** add support for operator users and operator ephemeral service accounts ([secutils@88e4cfc](https://github.com/secutils-dev/secutils/commit/88e4cfc5fe582e29ded15a9582ffbe16a34fa5e4)) #### Fixes * **platform:** bump minimum Deno runtime heap size to `5mb` for basic tier ([secutils@7edef3a](https://github.com/secutils-dev/secutils/commit/7edef3a61a64b35db40e4d742a17d016846541c3)) * **platform:** expose all user subscription fields to the clients ([secutils@6c30e40](https://github.com/secutils-dev/secutils/commit/6c30e409245b8a54eae768583e387082d6c9225c)) * **platform:** make termination of the long-running user scripts more resilient ([secutils@3cff6fb](https://github.com/secutils-dev/secutils/commit/3cff6fb3700d10081e86254053941ddb0b04d995)) * **platform:** reset `JsRuntime` termination flag after termination ([secutils@f9e88e6](https://github.com/secutils-dev/secutils/commit/f9e88e61ecb687f2b6cd6ad4a25d4a21d8557e8b)) * **platform:** unify styles for account activation, password reset, and notifications emails ([secutils@f62635c](https://github.com/secutils-dev/secutils/commit/f62635c1d51824a31ba09ef374d0814310de559f)) * **webhooks, web-scraping:** adjust subscription default values ([secutils@9fe5780](https://github.com/secutils-dev/secutils/commit/9fe57804e2acda0144e38470b66903319bc4af81)) #### Performance Improvements * **platform:** 
acquire single database connection for data streams ([secutils@2fee287](https://github.com/secutils-dev/secutils/commit/2fee2872aa906ad4dd92ff48efb52f825f50a2d8)) **Full Changelog**: [secutils@v1.0.0-alpha.5...v1.0.0-beta.1](https://github.com/secutils-dev/secutils/compare/v1.0.0-alpha.5...v1.0.0-beta.1) ### [Secutils.dev Web UI](https://github.com/secutils-dev/secutils-webui) #### ⚠ BREAKING CHANGES * **platform, security:** migrate user authentication to Ory Kratos ([secutils-webui@b785d68](https://github.com/secutils-dev/secutils-webui/commit/b785d68e24ceacac159644808a07ef0a7ac9f5d8)) #### Features * **platform:** add `Account` UI to view and manage subscription details ([secutils-webui@3e09090](https://github.com/secutils-dev/secutils-webui/commit/3e090904edb09855553014c66b0df5c7fffbfc3f)) ![Secutils.dev UI - Account management](/img/docs/changelog_1.0.0_beta.1_platform_account_management.png) * **webhooks:** allow enabling/disabling responders ([secutils-webui@1104922](https://github.com/secutils-dev/secutils-webui/commit/11049221b749ae16843023b545519399098d4421)) ![Secutils.dev UI - Responder enable/disable switch](/img/docs/changelog_1.0.0_beta.1_responders_enable.png) * **platform:** allow zooming script editor content with the mouse wheel ([secutils-webui@187fdfd](https://github.com/secutils-dev/secutils-webui/commit/187fdfdc0132d748059cf7210242ea8599661de1)) * **web-scraping:** limit the number of tracker revisions and responder requests according to the user subscription ([secutils-webui@24da25e](https://github.com/secutils-dev/secutils-webui/commit/24da25ef8ff0266cf4a092aa796e90453c8a97f1) and [1d67a98](https://github.com/secutils-dev/secutils-webui/commit/1d67a98ba5bef76eec8f06905b9a2ee1cf07cb01)) #### Fixes * **platform:** redirect user to `/signin` after signout and do not cache `index.html` ([secutils-webui@bb581f6](https://github.com/secutils-dev/secutils-webui/commit/bb581f68f70b79f020f5967be1fd4512d6591b04)) *
**utils:** increase width of the `Actions` column for certificates, private keys, and CSP ([secutils-webui@f8a48d9](https://github.com/secutils-dev/secutils-webui/commit/f8a48d9ec24c271d746e1f14c10c9771772e7ce4)) **Full Changelog**: [secutils-webui@v1.0.0-alpha.5...v1.0.0-beta.1](https://github.com/secutils-dev/secutils-webui/compare/v1.0.0-alpha.5...v1.0.0-beta.1) ### [Secutils.dev Web Scraper](https://github.com/secutils-dev/secutils-web-scraper) #### Features * **platform:** support configurable `userAgent` header via `SECUTILS_WEB_SCRAPER_USER_AGENT` envvar ([secutils-web-scraper@030c8d9](https://github.com/secutils-dev/secutils-web-scraper/commit/030c8d988d7487baf0a099d43f93d3d76acb2d0c)) #### Fixes * **platform:** set proper path to the `main` module in `package.json` ([secutils-web-scraper@47aeda2](https://github.com/secutils-dev/secutils-web-scraper/commit/47aeda226fc47150fe6bfba38185164facbf1d2e)) **Full Changelog**: [secutils-web-scraper@v1.0.0-alpha.5...v1.0.0-beta.1](https://github.com/secutils-dev/secutils-web-scraper/compare/v1.0.0-alpha.5...v1.0.0-beta.1) ## 1.0.0-alpha.5 **2024-01-10** ### [Secutils.dev API](https://github.com/secutils-dev/secutils) #### ⚠ BREAKING CHANGES * **webhooks:** drop dedicated `delay` responder setting in favor of custom responder JavaScript extension ([secutils@5fe5d8a](https://github.com/secutils-dev/secutils/commit/5fe5d8a7cfa79ef2558777bf6d7799baba1d860c)) #### Features * **platform:** implement Deno-based `JsRuntime` to support user extensions and scripts ([secutils@98a5d8a](https://github.com/secutils-dev/secutils/commit/98a5d8a2a0419ec05e0ffdad7a397a3acce7eea0)) * **webhooks:** add support for custom responder JavaScript extensions ([secutils@5fe5d8a](https://github.com/secutils-dev/secutils/commit/5fe5d8a7cfa79ef2558777bf6d7799baba1d860c)) * **webhooks:** capture full client socket address in responder and expose it to the script context 
([secutils@430a9f9](https://github.com/secutils-dev/secutils/commit/430a9f9834e8bdc507ef4a3da82c8250de779ef8)) * **webhooks:** capture responder request path and query string ([secutils@67eb50a](https://github.com/secutils-dev/secutils/commit/67eb50a5b8be8bcf05cad2b59a632de35f114795)) **Full Changelog**: [secutils@v1.0.0-alpha.4...v1.0.0-alpha.5](https://github.com/secutils-dev/secutils/compare/v1.0.0-alpha.4...v1.0.0-alpha.5) ### [Secutils.dev Web UI](https://github.com/secutils-dev/secutils-webui) #### ⚠ BREAKING CHANGES * **webhooks:** drop UI for the dedicated `delay` responder setting in favor of custom responder JavaScript extension ([secutils-webui@7727f82](https://github.com/secutils-dev/secutils-webui/commit/7727f825b649cacf3f7c5838bdbb36508fabd569)) #### Features * **webhooks:** add UI to support custom responder JavaScript extensions ([secutils-webui@7727f82](https://github.com/secutils-dev/secutils-webui/commit/7727f825b649cacf3f7c5838bdbb36508fabd569), check out [the guides](https://secutils.dev/docs/guides/webhooks#generate-a-dynamic-response) to learn more) ![Secutils.dev UI - Responder scripts](/img/docs/changelog_1.0.0_alpha.5_responders_script.png) * **webhooks:** display `f` icon next to the responder name if it is configured with a script ([secutils-webui@ecb3af7](https://github.com/secutils-dev/secutils-webui/commit/ecb3af7368ed38384c0ab38ebb7ece86d1b16458)) ![Secutils.dev UI - Responder script indicator](/img/docs/changelog_1.0.0_alpha.5_responders_script_indicator.png) * **webhooks:** display full client socket address, responder request path and query string in captured requests grid ([secutils-webui@f78f5e2](https://github.com/secutils-dev/secutils-webui/commit/f78f5e28909a86691d9517b22360b0fd76e7f7dc)) ![Secutils.dev UI - Responder additional fields](/img/docs/changelog_1.0.0_alpha.5_responders_client_socket_and_query_string.png) **Full Changelog**: 
[secutils-webui@v1.0.0-alpha.4...v1.0.0-alpha.5](https://github.com/secutils-dev/secutils-webui/compare/v1.0.0-alpha.4...v1.0.0-alpha.5) ### [Secutils.dev Web Scraper](https://github.com/secutils-dev/secutils-web-scraper) Maintenance release (dependency upgrades and other chores). **Full Changelog**: [secutils-web-scraper@v1.0.0-alpha.4...v1.0.0-alpha.5](https://github.com/secutils-dev/secutils-web-scraper/compare/v1.0.0-alpha.4...v1.0.0-alpha.5) --- ## 2026 Changelog ## 1.0.0-beta.2 (in progress) **May 2024 – April 2026** :::info MONO-REPO Since `1.0.0-beta.1`, the project has transitioned to a **mono-repo**. The Web UI, documentation site, and developer tools now live in the main [secutils](https://github.com/secutils-dev/secutils) repository. The changelog below is organized by feature area instead of per-component. ::: ### ⚠ Breaking Changes * **platform:** consolidate Web UI, documentation, and developer tooling into the main repository ([secutils@4b5e054](https://github.com/secutils-dev/secutils/commit/4b5e0542eb545707769edd8ee0f8d2c328e835ec), [secutils@ad1431b](https://github.com/secutils-dev/secutils/commit/ad1431b203b296e9f2c934f6909e6a4fe1626797)) * **web-scraping:** migrate content and resource trackers to the [Retrack](https://github.com/secutils-dev/retrack) scheduling service, unifying both tracker types under a single "page trackers" umbrella ([secutils@9ed3c26](https://github.com/secutils-dev/secutils/commit/9ed3c261e88c203be61df74c7639a7bd99f4bc58)) * **platform:** remove support for favourite utils ([secutils@d038a09](https://github.com/secutils-dev/secutils/commit/d038a09ed0feb4143295280a4b12bdde3c9c6a16)) ### Web Scraping & Trackers #### Features * **web-scraping:** implement API trackers - track changes in arbitrary HTTP API responses alongside page trackers ([secutils@2c1f687](https://github.com/secutils-dev/secutils/commit/2c1f68753ac3cba94f8f92befb6f6b4982873fe0)) * **web-scraping:** add debug functionality to page trackers with live execution
output ([secutils@ed867f5](https://github.com/secutils-dev/secutils/commit/ed867f52f6e4bcd7303c41b32c03d9d3d90c8989)) * **web-scraping:** capture page screenshots in debug mode ([secutils@c7083c9](https://github.com/secutils-dev/secutils/commit/c7083c9da9fc966c8c88f5eada0d161aa1862226)) * **web-scraping:** support "Camoufox" browser engine for page trackers to improve stealth scraping ([secutils@c88c540](https://github.com/secutils-dev/secutils/commit/c88c540055aef28bceba1a738be2601d480e4938)) * **web-scraping:** add support for tracker execution log ([secutils@3b5a80e](https://github.com/secutils-dev/secutils/commit/3b5a80e719c836055cb2e1d67f9ca777d5ec67bd)) * **web-scraping:** introduce support for `scheduled_at` and `last_ran_at` tracker fields ([secutils@61febb8](https://github.com/secutils-dev/secutils/commit/61febb8bdddf6fff42fed500345b7987a1e29bb3)) * **web-scraping:** add support for importing Playwright scenarios as page tracker extractor scripts ([secutils@1947376](https://github.com/secutils-dev/secutils/commit/19473760eabc97173ab07c718d73e672d40e4a03)) * **web-scraping:** allow users to configure page trackers to bypass HTTPS errors ([secutils@98a9134](https://github.com/secutils-dev/secutils/commit/98a913438449b10fd6e859ac17566733539ff01f)) * **web-scraping:** allow users to define custom cron schedules for tracker jobs ([secutils@acbd82a](https://github.com/secutils-dev/secutils/commit/acbd82ad3933c5b3132ba8643af44af3b71e25a8)) * **web-scraping:** make tracker schedule picker more flexible ([secutils@563d1e0](https://github.com/secutils-dev/secutils/commit/563d1e0700a077e702cb8cb35a3daaf698d551e4)) * **web-scraping:** switch to Monaco-based diff viewer for tracker revisions ([secutils@3cb086a](https://github.com/secutils-dev/secutils/commit/3cb086a60282ad452856e5b5484d79ed94defbda)) * **web-scraping:** add diffs to tracker notification emails ([secutils@58688f3](https://github.com/secutils-dev/secutils/commit/58688f38d5651073ec58c150df18239d0a862038)) 
* **web-scraping:** support new Chart mode for numeric tracker revision values ([secutils@0c7d5a5](https://github.com/secutils-dev/secutils/commit/0c7d5a5a8c2f2170463872282cfbed3924951747))
* **web-scraping:** support full screen mode for Chart tracker revision view ([secutils@faf8f56](https://github.com/secutils-dev/secutils/commit/faf8f56a691a4464178e9a9456bcc35c4d856658))

#### Fixes

* **web-scraping:** keep non-scheduled trackers at the bottom when sorting by next/last run ([secutils@dcdc040](https://github.com/secutils-dev/secutils/commit/dcdc040b71ef319375e93b7e804fb585b9113656))
* **web-scraping:** properly handle non-string engine type ([secutils@d2e3288](https://github.com/secutils-dev/secutils/commit/d2e32882f51655aa05f30133b6df2713a8410e7e))

### Webhooks

#### Features

* **webhooks:** add support for MITM responders - intercept, inspect, and modify requests forwarded to an upstream server ([secutils@7f25620](https://github.com/secutils-dev/secutils/commit/7f25620755134b5fea4e070422d84c348423ee2a))
* **webhooks:** add support for response tracking in MITM mode ([secutils@a518340](https://github.com/secutils-dev/secutils/commit/a5183409c86372b0f68bde611eae38750ee53153))
* **webhooks:** add support for insecure proxy requests and custom proxy request timeouts ([secutils@a157c79](https://github.com/secutils-dev/secutils/commit/a157c7988722a67775af3a2c4bfcbc694a20ae1a))
* **webhooks:** auto-decompress proxy response body and simplify body handling in scripts ([secutils@ba7ab83](https://github.com/secutils-dev/secutils/commit/ba7ab83d2fe5da150e54fa7da950bd1b985ec7ca))
* **webhooks:** switch from custom responder subdomains to subdomain prefix ([secutils@d5522cf](https://github.com/secutils-dev/secutils/commit/d5522cf69b5cf63504d901e2f1318fc2bc39d6c8))
* **webhooks:** improve responders list layout, display last requested timestamp ([secutils@a52821c](https://github.com/secutils-dev/secutils/commit/a52821ccc9903f2380ec916bd5c5e2ffaf21b452))

#### Fixes

* **webhooks:** enforce HTTP/1.1 in `op_proxy_request` ([secutils@c7dcd4c](https://github.com/secutils-dev/secutils/commit/c7dcd4c2f4ea2f0d846bb8ea51ad8638c9338a0e))
* **webhooks:** return proper error message when saving responder with non-unique path and method ([secutils@fc038eb](https://github.com/secutils-dev/secutils/commit/fc038ebcf3c112990336207d86146f5d358f9404))

### Digital Certificates

#### Features

* **certificates:** add support for importing certificate templates ([secutils@969eef4](https://github.com/secutils-dev/secutils/commit/969eef4bfe99c467f1372c476345c5cf06032668))

#### Fixes

* **certificates:** improve private key handling error messages ([secutils@8b69f91](https://github.com/secutils-dev/secutils/commit/8b69f91781000c8846068bf73e1f4eb6ad12d33c))
* **certificates:** properly parse Distinguished Name fields with commas ([secutils@7d79d37](https://github.com/secutils-dev/secutils/commit/7d79d37df67287af6d4652796b80fb18cd8efc1e))

### Platform

#### Features

* **platform:** add support for user tags - label and filter items across all utilities ([secutils@c56d3a7](https://github.com/secutils-dev/secutils/commit/c56d3a76d98c43ecc0a0d6107551a08db464d6eb))
* **platform:** introduce support for user scripts - reusable JavaScript/TypeScript snippets ([secutils@3427ec8](https://github.com/secutils-dev/secutils/commit/3427ec8aa5c3f81133b6f7d812d109a11ac18dce))
* **platform:** introduce support for user secrets - securely store and reference sensitive values in scripts ([secutils@9d9bc39](https://github.com/secutils-dev/secutils/commit/9d9bc392062352114ad64babcc3bf266ebadcf22))
* **platform:** add support for data export and data import ([secutils@a879a84](https://github.com/secutils-dev/secutils/commit/a879a848097c65a4bef79fda5926fcc7e8d4cc36))
* **platform:** include user settings in data export/import ([secutils@a1afefc](https://github.com/secutils-dev/secutils/commit/a1afefc5802b7c058b806c5a38b53c33ebce4f97))
* **platform:** add `updated_at` field for all user data types ([secutils@beb8ebe](https://github.com/secutils-dev/secutils/commit/beb8ebec49909e0a1e0d7c83d5fed48241457029))
* **platform:** store global scope value as a user setting ([secutils@8f0b9f4](https://github.com/secutils-dev/secutils/commit/8f0b9f4e84459382b5447b31e7cf9ffb03143389))
* **platform:** use dedicated "empty state" for the case when no items are visible due to filters ([secutils@2e3b608](https://github.com/secutils-dev/secutils/commit/2e3b60837557436e264f4a25a5c7d0978197be26))
* **platform:** add support for user API keys ([secutils@64a261b](https://github.com/secutils-dev/secutils/commit/64a261b))
* **platform:** make top-level sidebar groups collapsible ([secutils@0df267e](https://github.com/secutils-dev/secutils/commit/0df267e))

#### Fixes

* **platform:** properly handle import conflicts in responders ([secutils@8adaf82](https://github.com/secutils-dev/secutils/commit/8adaf82c41500db7ac15e0603ed6ee55b6640fec))
* **platform:** properly handle large import sizes ([secutils@4438a40](https://github.com/secutils-dev/secutils/commit/4438a401bf6a7dbb4e5d292eb568e910ed183d89))
* **platform:** fix import for responders without history ([secutils@53de317](https://github.com/secutils-dev/secutils/commit/53de3176fe2319c8e89b2c3fea75d3145003d81f))
* **platform:** fix a typo in the activation email template ([secutils@8b98cf5](https://github.com/secutils-dev/secutils/commit/8b98cf5b5a7bce332025648fff46eb17d333b973))
* **platform:** disable bulk conflict resolution actions if there are no conflicts selected ([secutils@8c88a5b](https://github.com/secutils-dev/secutils/commit/8c88a5bd20620e1e11e0887ba4c1bd51a199a32a))
* **platform:** properly strip responder sub-domain prefixes during import when necessary ([secutils@1afd567](https://github.com/secutils-dev/secutils/commit/1afd567))
* **platform:** consistently handle expired session ([secutils@50b8ea1](https://github.com/secutils-dev/secutils/commit/50b8ea1))

### UI Improvements

#### Features

* **ui:** make sidebar collapsible with persistent state ([secutils@3fc90ad](https://github.com/secutils-dev/secutils/commit/3fc90ad6f34aa5f7cb5c04a3eb634efc2f839a2f), [secutils@23ddbf8](https://github.com/secutils-dev/secutils/commit/23ddbf86fe8defd49d58ca96efef2bd3d92095de))
* **ui:** add support for search filters in grids ([secutils@755d609](https://github.com/secutils-dev/secutils/commit/755d609ae10a682a5197e07ed99da15d5cf0df4d))
* **ui:** introduce "duplicate" action for all utilities ([secutils@035df34](https://github.com/secutils-dev/secutils/commit/035df34ef9f20e8a1361bf7330f04ac81438ce00))
* **ui:** add full screen support for the script editor ([secutils@f8fac34](https://github.com/secutils-dev/secutils/commit/f8fac34ee39674e5930ef7785c1abb12d919dc7e))
* **ui:** add support for auto-refresh functionality in responder requests grid ([secutils@99889bf](https://github.com/secutils-dev/secutils/commit/99889bf8c203c33c4fe3af4c857c217fb8030ad2))
* **ui:** ask user for confirmation before discarding unsaved changes ([secutils@4ac60e3](https://github.com/secutils-dev/secutils/commit/4ac60e3af32bfeaf8b6b6bb2fe67aac1e0997c9b))
* **ui:** add context menu item to copy entity ID ([secutils@788ccf0](https://github.com/secutils-dev/secutils/commit/788ccf035da890d48a35aeab983826d059594583))
* **ui:** add support for example scripts in the script editor ([secutils@960b524](https://github.com/secutils-dev/secutils/commit/960b524f10c7bd05dc678a5509d7f7a31e73bb26))
* **ui:** make home/welcome page more useful with recent items and summary ([secutils@5a01c23](https://github.com/secutils-dev/secutils/commit/5a01c239d765a5c7369d6c3f5c99b65051a5401f), [secutils@593ddec](https://github.com/secutils-dev/secutils/commit/593ddecdea00db660066588a93f98c17fc18176d))
* **ui:** add support for default "system" color mode ([secutils@b3398db](https://github.com/secutils-dev/secutils/commit/b3398db))
* **ui:** support Cmd/Ctrl-K to open workspace search ([secutils@fdeff0c](https://github.com/secutils-dev/secutils/commit/fdeff0c))
* **ui:** add sidebar icons for utilities and flatten CSP utility group ([secutils@4775932](https://github.com/secutils-dev/secutils/commit/4775932))
* **ui:** move tags, secrets and scripts management from settings to workspace ([secutils@97ed43d](https://github.com/secutils-dev/secutils/commit/97ed43d))

#### Fixes

* **ui:** update all util tables to default sort by "Last updated" descending ([secutils@c2e7897](https://github.com/secutils-dev/secutils/commit/c2e78978a60b460d62eefea1ec79d70974be485a))
* **ui:** fix side-bar navigation and improve hooks handling ([secutils@5e004fc](https://github.com/secutils-dev/secutils/commit/5e004fc44ab250473e3c4a85bd36fdddca994b50))
* **ui:** improve handling of the tags column width ([secutils@0e5c06a](https://github.com/secutils-dev/secutils/commit/0e5c06aa77e4b16b230a6ec0e1857b28480d48bc))
* **ui:** increase spacing between icons in responder and tracker grids ([secutils@0217c33](https://github.com/secutils-dev/secutils/commit/0217c33cee0414311ade8d7d4e532ec0aee7eb73))

#### Performance

* **ui:** lazy load editor flyouts to reduce initial bundle size ([secutils@6e5f2f9](https://github.com/secutils-dev/secutils/commit/6e5f2f98d01355ffd504bc9da2e720b92661c567))
* **ui:** load Scripts tab lazily to reduce main bundle size ([secutils@10d0d07](https://github.com/secutils-dev/secutils/commit/10d0d07ffa3a786a8e0b393a5542b6e26e518494))
* **ui:** replace `axios` with native `fetch` ([secutils@457efc0](https://github.com/secutils-dev/secutils/commit/457efc0153a64ae2e0101eddba17e1fe51bceca0))

### HTML Apps

* **html-app:** create JWT Debugger HTML app ([secutils@893e871](https://github.com/secutils-dev/secutils/commit/893e87165bb1fe5ddfacfe633fa986b71e048bde))
* **html-app:** create SAML Decoder HTML app ([secutils@8b609a8](https://github.com/secutils-dev/secutils/commit/8b609a86d7edaf4951f63cae83c928cbe437159b))
* **html-app:** create SAML Mock IDP HTML app ([secutils@af75f92](https://github.com/secutils-dev/secutils/commit/af75f92b04f2b539e7bdfeb380f1ed98682e9b8c))
* **html-app:** create Markdown-to-HTML converter HTML app ([secutils@c2cac23](https://github.com/secutils-dev/secutils/commit/c2cac23e9d64a1eac4d029c13d780f3211b71eda))

### API & OpenAPI

#### Features

* **api:** comprehensive OpenAPI spec coverage - all HTTP routes now documented with utoipa ([secutils@697bb57](https://github.com/secutils-dev/secutils/commit/697bb5744e639ec7cb9a46ae906ec792d12a8547), [secutils@803f752](https://github.com/secutils-dev/secutils/commit/803f75231676a9af50232cc9004aaf4a41990e09), [secutils@6f91db6](https://github.com/secutils-dev/secutils/commit/6f91db69a046ff6dac0280a1cb5cd6d5e4ce4afe), [secutils@3c1b19a](https://github.com/secutils-dev/secutils/commit/3c1b19af6e484c9c57fe69f3f0ebfc706a6b88d8), [secutils@fe7f6dc](https://github.com/secutils-dev/secutils/commit/fe7f6dcbc92b248b43060fa7bc61efc26f1bf5ce), [secutils@afb76ad](https://github.com/secutils-dev/secutils/commit/afb76adbb3cac5456ad19f63ec825d8c3539bd5c), [secutils@00cd555](https://github.com/secutils-dev/secutils/commit/00cd555333fa3fcd4099e492d60a15fef9eb4951))
* **api:** add authentication requirements to OpenAPI spec ([secutils@d1e6fcd](https://github.com/secutils-dev/secutils/commit/d1e6fcd4cbe1a461ee28587f02938017b2405559))
* **api:** include a link with the tracker ID in tracker notification emails ([secutils@43ec3e7](https://github.com/secutils-dev/secutils/commit/43ec3e70b2a5bf728d221be16b1ca7d3d1838a88))

#### Fixes

* **api:** throw `403 Forbidden` when operator credentials are invalid ([secutils@f2f8913](https://github.com/secutils-dev/secutils/commit/f2f891345141fba189b18336992bdbd4b34cb377))
* **security:** reorganize Ory error handling ([secutils@ec88825](https://github.com/secutils-dev/secutils/commit/ec88825c5d1d6a9d7fd2042797be4b2cded4ccb7))

### Documentation

* **docs:** allow importing samples directly from documentation ([secutils@748c60f](https://github.com/secutils-dev/secutils/commit/748c60fb885a22766188545bb87121bd91ab457a))
* **docs:** migrate towards fully automated docs screenshot generation ([secutils@20fe1b8](https://github.com/secutils-dev/secutils/commit/20fe1b8ac12071fd6ec8717ebdcada07682e95fc))
* **docs:** add `ARCHITECTURE.md` ([secutils@b4ec50f](https://github.com/secutils-dev/secutils/commit/b4ec50f3d9114c4b63338c67527a14d53b95c1a4))
* **docs:** rename `/llms.txt` to `/llms-index.txt` and `/llms-full.txt` to `/llms.txt` ([secutils@71e2f70](https://github.com/secutils-dev/secutils/commit/71e2f706d541855259b8d0259d48fe218b692351))

### Infrastructure & DevOps

#### Enhancements

* **api:** switch to `tracing` crate for structured logging ([secutils@3b8655c](https://github.com/secutils-dev/secutils/commit/3b8655c28b454f61259de65d9b85b3d099170410))
* **api:** migrate from unmaintained `trust_dns_resolver` to `hickory_resolver` ([secutils@2770170](https://github.com/secutils-dev/secutils/commit/277017036bbc75728add07d225e3c95e27268a0c))
* **api:** switch to Debian distroless runtime image ([secutils@87f945e](https://github.com/secutils-dev/secutils/commit/87f945ed3aaea57aa9590e6ae82411a8d9632b21))
* **security:** upgrade to Ory Kratos `1.2.0` ([secutils@1227b81](https://github.com/secutils-dev/secutils/commit/1227b8195ddbc8ae5eb99e70dc80d2b2d681af5f)), `1.3.0` ([secutils@17f70d4](https://github.com/secutils-dev/secutils/commit/17f70d4f8e4875fd069cb55bb44098bf431707ce)), `v25.4.0` ([secutils@a239293](https://github.com/secutils-dev/secutils/commit/a2392935483f3793e066fdc7a463bd5fd6be2547)), and `v26.2.0`
* **api:** switch to `jemalloc` memory allocator ([secutils@398a2fb](https://github.com/secutils-dev/secutils/commit/398a2fb))
* **api:** improve DB connection reliability and include DB status in API status ([secutils@0586ac2](https://github.com/secutils-dev/secutils/commit/0586ac2))
* **build:** introduce E2E test infrastructure with Playwright and Docker Compose ([secutils@663ad4c](https://github.com/secutils-dev/secutils/commit/663ad4ca0357c77efc8d88a4ad16b9522f8d358d))
* **build:** add commands to deploy components to private Docker registry ([secutils@dd62fd0](https://github.com/secutils-dev/secutils/commit/dd62fd0ac534860f17da1d7b6073cbc548b43f99))
* **build:** self-host Google fonts ([secutils@c6208be](https://github.com/secutils-dev/secutils/commit/c6208be721ba74c81c305665824c605727a740c4))
* **build:** pin base Docker images ([secutils@b9c44a6](https://github.com/secutils-dev/secutils/commit/b9c44a62aa824b755a9550bb8791cf7a201ef631))
* **web-security:** use `BTreeSet` instead of `HashSet` for CSP directives for stable ordering ([secutils@3f52864](https://github.com/secutils-dev/secutils/commit/3f528645a73cbe5d0078f2fa8110c232f10246d6))
* Dependency upgrades across all components

---

## What is Secutils.dev?

Secutils.dev is an [open-source](https://github.com/secutils-dev) security toolbox for engineers. It brings together the utilities you need to develop and test secure web applications - webhook responders, certificate generation, content security policies, and web page tracking - all in one straightforward interface.

![Secutils.dev workspace hub](/img/docs/home/workspace_hub.png)

## What can you do with it?

- **Webhooks** - Create mock HTTP APIs, test webhook integrations, and set up honeypot endpoints with custom JavaScript logic.
- **Digital Certificates** - Generate X.509 certificate templates and manage private keys for HTTPS, code signing, and more.
- **Content Security Policy** - Create, import from a live URL, and test Content Security Policies for your web applications.
- **Web Scraping** - Track changes in web pages and API responses over time with scheduled checks and notifications.

Head over to the [Guides](/docs/category/guides) to see each tool in action, or sign in and start exploring from the workspace hub.
## Get involved

If you have a question or idea, we encourage you to use [GitHub Discussions](https://github.com/secutils-dev/secutils/discussions). For bug reports, please submit them directly to [GitHub Issues](https://github.com/secutils-dev/secutils/issues). If you need to contact us for anything else, feel free to do so using the "Contact" form.

---

## Roadmap

You can find the roadmap for Secutils.dev on [GitHub](https://github.com/orgs/secutils-dev/projects/1).