It's an understatement to say that there is a lot of hype around AI these days. It seems to be integrated into everything. The company I work for, Elastic, is also keeping up with the trend by recently releasing the [Elasticsearch Relevance Engine™ (ESRE™)](https://www.elastic.co/search-labs/blog/articles/may-2023-launch-announcement) and the Elastic AI Assistant. Even though I'm typically quite skeptical about over-hyped stuff, I must admit that AI is certainly making waves!
Unsurprisingly, I find myself asking the same question: could integrating AI tools be useful to the users of Secutils.dev? Let's attempt to answer this question by exploring a simple proof-of-concept that I have been tinkering with this week!
For those who prefer a visual demonstration, here's a video showcasing the "Explain" functionality in the Webhooks feature. If you're interested in more details, keep reading below.
In this post, I will use the generic term "AI" without going into specific details, as it is sufficient to illustrate the point.
In my previous post, Exploring third-party services with webhooks, I demonstrated how Webhooks can be used in Secutils.dev to understand how third-party services implement certain functionalities. Specifically, we analyzed the HTTP requests made by Notion while rendering rich web bookmarks. However, to derive maximum value from this analysis, one would either need extensive experience in the field or have to search for information on every little piece to get a complete picture. This is where AI can step in and make the analysis significantly more efficient.
In the proof-of-concept, I integrated OpenAI so that users can ask the AI to "explain" specific HTTP requests received by the webhook. For instance, consider the following input:
The output generated by AI would look something like this:
Based on this information, we can make the following observations:
- The request is a HEAD request, which is typically used to retrieve only the headers of a resource without transferring the actual body.
- The request is sent to the host "secutils.dev".
- The headers include information related to tracing and logging, such as x-datadog-parent-id, x-datadog-trace-id, and x-datadog-sampling-priority. These headers suggest the usage of Datadog for tracing and monitoring purposes.
- The x-forwarded-for, x-forwarded-proto, x-forwarded-host, x-forwarded-port, and x-forwarded-server headers indicate that the request has been forwarded from another server, possibly a load balancer or proxy server.
- The user-agent header indicates that the request is made by a client identified as "NotionEmbedder", which might be a custom user agent used by the client application.
- The accept header indicates that the client can accept any type of response.
- The referer header suggests that the request was triggered from the URL https://www.notion.so/.
Not bad for a first attempt! I've highlighted the parts that I believe bring significant value to the user. And this is just a "generic" model; imagine how much more useful and cost-efficient it could be with a pre-trained, fine-tuned one. Due to the time constraints of this proof-of-concept, I didn't have a chance to implement an interactive prompt that would allow users to dig deeper and ask more specific follow-up questions. However, adding this feature would be straightforward and incredibly valuable to users.
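To give a rough idea of what such an integration involves, here's a minimal sketch of the "explain" flow: wrap the raw HTTP request captured by the webhook in a chat prompt and send it to OpenAI's chat completions endpoint. The function names, prompt wording, and model choice are my own assumptions for illustration, not the actual Secutils.dev implementation.

```python
import json
import os
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_explain_messages(raw_request: str) -> list[dict]:
    """Wrap the raw HTTP request received by the webhook in a chat prompt."""
    return [
        {"role": "system",
         "content": "You are a security analyst. Explain what the following "
                    "HTTP request does and what its headers reveal."},
        {"role": "user", "content": raw_request},
    ]

def explain_request(raw_request: str, api_key: str) -> str:
    """Send the prompt to the chat completions endpoint and return the reply."""
    payload = json.dumps({
        "model": "gpt-3.5-turbo",
        "messages": build_explain_messages(raw_request),
    }).encode()
    req = urllib.request.Request(
        OPENAI_URL,
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(explain_request("HEAD / HTTP/1.1\nhost: secutils.dev",
                          os.environ["OPENAI_API_KEY"]))
```

An interactive follow-up prompt would simply append the model's reply and the user's next question to the `messages` list before the next call.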
In summary, I strongly believe that AI in security tools has an incredibly promising future, with endless opportunities. Here are just a few examples that would be relevant to Secutils.dev:
- Users can provide multiple related requests to the AI, allowing it to gain more context and reconstruct the entire flow.
- Users can ask follow-up questions based on the initial analysis provided by the AI.
- Users can utilize AI to generate responses for auto-responders, such as "generate CORS headers", "generate HTML with xxx meta tags", or "generate HTML form and scripts for submission to xxx".
- Users can use AI to analyze web page resources and gain insights into the structure of web applications, the frameworks used, and potential problems that the web page might have.
- And the list goes on…
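Taking the "generate CORS headers" idea as an example, one straightforward shape for it would be to ask the model to answer with headers as JSON and have the auto-responder serve them. Everything below is an illustrative assumption, including the hand-written sample reply, which is not actual model output:

```python
import json

# Hand-written example of what a model reply to
# "generate CORS headers for any origin" might look like.
AI_REPLY = """{
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
  "Access-Control-Allow-Headers": "Content-Type, Authorization"
}"""

def headers_from_ai_reply(reply: str) -> dict[str, str]:
    """Parse the model's JSON reply into auto-responder headers,
    falling back to an empty set if the reply isn't valid JSON."""
    try:
        headers = json.loads(reply)
    except json.JSONDecodeError:
        return {}
    # Keep only string-valued entries so the responder can emit them verbatim.
    return {k: v for k, v in headers.items() if isinstance(v, str)}

print(headers_from_ai_reply(AI_REPLY)["Access-Control-Allow-Origin"])  # prints *
```

Constraining the model to structured output like this keeps the risky free-form text out of the response path: anything that doesn't parse is simply dropped.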
That wraps up today's post, thanks for taking the time to read it!