docs: Lighthouse multi LLM provider support (#9306)

Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
Chandrapal Badshah
2025-11-25 17:34:30 +05:30
committed by GitHub
parent 0d59441c5f
commit 5ebf455e04
6 changed files with 158 additions and 20 deletions


@@ -110,6 +110,7 @@
]
},
"user-guide/tutorials/prowler-app-lighthouse",
"user-guide/tutorials/prowler-app-lighthouse-multi-llm",
"user-guide/tutorials/prowler-cloud-public-ips",
{
"group": "Tutorials",

[Three binary image files added: 540 KiB, 347 KiB, and 173 KiB]


@@ -0,0 +1,126 @@
---
title: 'Using Multiple LLM Providers with Lighthouse'
---
import { VersionBadge } from "/snippets/version-badge.mdx"
<VersionBadge version="5.14.0" />
Prowler Lighthouse AI supports multiple Large Language Model (LLM) providers, giving you the flexibility to choose the one that best fits your infrastructure, compliance requirements, and cost considerations. This guide explains how to configure and use different LLM providers with Lighthouse AI.
## Supported Providers
Lighthouse AI supports the following LLM providers:
- **OpenAI**: For GPT models (GPT-4o, GPT-4, etc.)
- **Amazon Bedrock**: AWS service providing access to Claude, Llama, Titan, and other models
- **OpenAI Compatible**: Custom endpoints like OpenRouter, Ollama, or any OpenAI-compatible service
## Understanding Default Providers
All three providers can be configured for a tenant, but only one can be set as the default provider. The first configured provider automatically becomes the default.
When visiting Lighthouse AI chat, the default provider's default model loads automatically. Users can switch to any available LLM model (including those from non-default providers) using the dropdown in chat.
<img src="/images/prowler-app/lighthouse-switch-models.png" alt="Switch models in Lighthouse AI chat interface" />
## Configuring Providers
Navigate to **Configuration** → **Lighthouse AI** to see all three provider options with a **Connect** button under each.
<img src="/images/prowler-app/lighthouse-configuration.png" alt="Prowler Lighthouse Configuration" />
### Connecting a Provider
1. Click **Connect** under the desired provider
2. Enter the required credentials
3. Select a default model for that provider
4. Click **Connect** to save
### OpenAI
#### Required Information
- **API Key**: OpenAI API key (starts with `sk-` or `sk-proj-`)
<Note>
To generate an OpenAI API key, visit https://platform.openai.com/api-keys
</Note>
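To sanity-check a key before connecting it, you can call OpenAI's public `/v1/models` endpoint, which lists the models the key can access. The sketch below is illustrative only; the `verifyOpenAIKey` helper is not part of Prowler.

```typescript
// Illustrative sketch: verify an OpenAI API key before connecting it.
// A 200 response from /v1/models means the key is valid and active.
async function verifyOpenAIKey(apiKey: string): Promise<boolean> {
  // Covers both `sk-` and `sk-proj-` key formats.
  if (!apiKey.startsWith("sk-")) return false;

  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return res.ok;
}
```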
### Amazon Bedrock
#### Required Information
- **AWS Access Key ID**: AWS access key ID
- **AWS Secret Access Key**: AWS secret access key
- **AWS Region**: Region where Bedrock is available (e.g., `us-east-1`, `us-west-2`)
#### Required Permissions
The AWS user must have the `AmazonBedrockLimitedAccess` managed policy attached:
```
arn:aws:iam::aws:policy/AmazonBedrockLimitedAccess
```
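If you manage IAM programmatically, attaching the managed policy with the AWS SDK for JavaScript looks roughly like the sketch below; the user name passed in (e.g., `lighthouse-ai-user`) is a placeholder, not a Prowler requirement.

```typescript
import { IAMClient, AttachUserPolicyCommand } from "@aws-sdk/client-iam";

// Illustrative sketch: attach the Bedrock managed policy to the IAM
// user whose access keys will be configured in Lighthouse AI.
async function attachBedrockPolicy(userName: string): Promise<void> {
  const iam = new IAMClient({});
  await iam.send(
    new AttachUserPolicyCommand({
      UserName: userName, // e.g. "lighthouse-ai-user" (placeholder)
      PolicyArn: "arn:aws:iam::aws:policy/AmazonBedrockLimitedAccess",
    }),
  );
}
```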
<Note>
Currently, only AWS access key and secret key authentication is supported. Amazon Bedrock API key support will be available soon.
</Note>
<Note>
Available models depend on AWS region and account entitlements. Lighthouse AI displays only accessible models.
</Note>
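To check which models a given set of credentials can actually see in a region, something like the following sketch can help (it uses `@aws-sdk/client-bedrock` and is illustrative, not Prowler's implementation):

```typescript
import {
  BedrockClient,
  ListFoundationModelsCommand,
} from "@aws-sdk/client-bedrock";

// Illustrative sketch: list the foundation model IDs visible to a set
// of credentials in one region, mirroring how the model dropdown is
// populated from accessible models only.
async function listBedrockModelIds(
  accessKeyId: string,
  secretAccessKey: string,
  region: string,
): Promise<string[]> {
  const client = new BedrockClient({
    region,
    credentials: { accessKeyId, secretAccessKey },
  });
  const { modelSummaries } = await client.send(
    new ListFoundationModelsCommand({}),
  );
  return (modelSummaries ?? []).flatMap((m) => (m.modelId ? [m.modelId] : []));
}
```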
### OpenAI Compatible
Use this option to connect to any LLM provider that exposes an OpenAI-compatible API endpoint (OpenRouter, Ollama, etc.).
#### Required Information
- **API Key**: API key from the compatible service
- **Base URL**: API endpoint URL including the API version (e.g., `https://openrouter.ai/api/v1`)
#### Example: OpenRouter
1. Create an account at [OpenRouter](https://openrouter.ai/)
2. [Generate an API key](https://openrouter.ai/docs/guides/overview/auth/provisioning-api-keys) from the OpenRouter dashboard
3. Configure in Lighthouse AI:
- **API Key**: OpenRouter API key
- **Base URL**: `https://openrouter.ai/api/v1`
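For reference, client libraries append paths such as `/chat/completions` directly to the base URL, which is why the URL must already include the API version. The `chat` helper below is an illustrative sketch, not Prowler's code:

```typescript
// Illustrative sketch: call an OpenAI-compatible endpoint. Note that
// `/chat/completions` is appended directly to the configured base URL.
async function chat(baseUrl: string, apiKey: string, prompt: string) {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-4o", // OpenRouter-style model identifier
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// e.g. chat("https://openrouter.ai/api/v1", apiKey, "Hello");
```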
## Changing the Default Provider
To set a different provider as default:
1. Navigate to **Configuration** → **Lighthouse AI**
2. Click **Configure** under the provider you want as default
3. Click **Set as Default**
<img src="/images/prowler-app/lighthouse-set-default-provider.png" alt="Set default LLM provider" />
## Updating Provider Credentials
To update credentials for a connected provider:
1. Navigate to **Configuration** → **Lighthouse AI**
2. Click **Configure** under the provider
3. Enter the new credentials
4. Click **Update**
## Deleting a Provider
To remove a configured provider:
1. Navigate to **Configuration** → **Lighthouse AI**
2. Click **Configure** under the provider
3. Click **Delete**
## Model Recommendations
For best results with Lighthouse AI, the recommended model is `gpt-5` from OpenAI.
Models from other providers such as Amazon Bedrock and OpenAI Compatible endpoints can be connected and used, but performance is not guaranteed.
## Getting Help
For issues or suggestions, [reach out through our Slack channel](https://goto.prowler.com/slack).


@@ -12,15 +12,15 @@ Prowler Lighthouse AI is a Cloud Security Analyst chatbot that helps you underst
## How It Works
Prowler Lighthouse AI uses OpenAI's language models and integrates with your Prowler security findings data.
Prowler Lighthouse AI integrates Large Language Models (LLMs) with your Prowler security findings data.
Here's what's happening behind the scenes:
- The system uses a multi-agent architecture built with [LanggraphJS](https://github.com/langchain-ai/langgraphjs) for LLM logic and [Vercel AI SDK UI](https://sdk.vercel.ai/docs/ai-sdk-ui/overview) for frontend chatbot.
- It uses a ["supervisor" architecture](https://langchain-ai.lang.chat/langgraphjs/tutorials/multi_agent/agent_supervisor/) that interacts with different agents for specialized tasks. For example, `findings_agent` can analyze detected security findings, while `overview_agent` provides a summary of connected cloud accounts.
- The system connects to OpenAI models to understand, fetch the right data, and respond to the user's query.
- The system connects to the configured LLM provider to understand the user's query, fetch the right data, and respond to the query.
<Note>
Lighthouse AI is tested against `gpt-4o` and `gpt-4o-mini` OpenAI models.
Lighthouse AI supports multiple LLM providers including OpenAI, Amazon Bedrock, and OpenAI-compatible services. For configuration details, see [Using Multiple LLM Providers with Lighthouse](/user-guide/tutorials/prowler-app-lighthouse-multi-llm).
</Note>
- The supervisor agent is the main contact point. It is what users interact with directly from the chat interface. It coordinates with other agents to answer users' questions comprehensively.
@@ -30,16 +30,22 @@ Lighthouse AI is tested against `gpt-4o` and `gpt-4o-mini` OpenAI models.
All agents can only read relevant security data. They cannot modify your data or access sensitive information like configured secrets or tenant details.
</Note>
## Set up
Getting started with Prowler Lighthouse AI is easy:
1. Go to the configuration page in your Prowler dashboard.
2. Enter your OpenAI API key.
3. Select your preferred model. The recommended one for best results is `gpt-4o`.
4. (Optional) Add business context to improve response quality and prioritization.
1. Navigate to **Configuration** → **Lighthouse AI**
2. Click **Connect** under the desired provider (OpenAI, Amazon Bedrock, or OpenAI Compatible)
3. Enter the required credentials
4. Select a default model
5. Click **Connect** to save
<img src="/images/prowler-app/lighthouse-config.png" alt="Lighthouse AI Configuration" />
<Note>
For detailed configuration instructions for each provider, see [Using Multiple LLM Providers with Lighthouse](/user-guide/tutorials/prowler-app-lighthouse-multi-llm).
</Note>
<img src="/images/prowler-app/lighthouse-configuration.png" alt="Lighthouse AI Configuration" />
### Adding Business Context
@@ -95,15 +101,15 @@ Prowler Lighthouse AI is powerful, but there are limitations:
- **Continuous improvement**: Please report any issues, as the feature may make mistakes or encounter errors, despite extensive testing.
- **Access limitations**: Lighthouse AI can only access data the logged-in user can view. If you can't see certain information, Lighthouse AI can't see it either.
- **NextJS session dependence**: If your Prowler application session expires or logs out, Lighthouse AI will error out. Refresh and log back in to continue.
- **Response quality**: The response quality depends on the selected OpenAI model. For best results, use gpt-4o.
- **Response quality**: Response quality depends on the selected LLM provider and model. Choose models with strong tool-calling capabilities for best results. We recommend the `gpt-5` model from OpenAI.
### Getting Help
If you encounter issues with Prowler Lighthouse AI or have suggestions for improvements, please [reach out through our Slack channel](https://goto.prowler.com/slack).
### What Data Is Shared to OpenAI?
### What Data Is Shared to LLM Providers?
The following API endpoints are accessible to Prowler Lighthouse AI. Data from the following API endpoints could be shared with OpenAI depending on the scope of user's query:
The following API endpoints are accessible to Prowler Lighthouse AI. Data from these endpoints could be shared with the configured LLM provider depending on the scope of the user's query:
#### Accessible API Endpoints
@@ -183,30 +189,35 @@ Not all Prowler API endpoints are integrated with Lighthouse AI. They are intent
**Lighthouse AI Configuration:**
- List OpenAI configuration - `/api/v1/lighthouse-config`
- Retrieve OpenAI key and configuration - `/api/v1/lighthouse-config/{id}`
- List LLM providers - `/api/v1/lighthouse/providers`
- Retrieve LLM provider - `/api/v1/lighthouse/providers/{id}`
- List available models - `/api/v1/lighthouse/models`
- Retrieve tenant configuration - `/api/v1/lighthouse/configuration`
<Note>
Agents can only call GET endpoints; they don't have access to other HTTP methods.
</Note>
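As an illustration, reading the provider configuration through these GET endpoints might look like the sketch below; the API base URL and bearer-token authentication are assumptions for the example, not a documented client.

```typescript
// Illustrative sketch: list configured LLM providers via the GET-only
// Lighthouse endpoints. Provider secrets are never returned.
async function listLighthouseProviders(apiBase: string, token: string) {
  const res = await fetch(`${apiBase}/api/v1/lighthouse/providers`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```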
## FAQs
**1. Why only OpenAI models?**
**1. Which LLM providers are supported?**
During feature development, we evaluated other LLM models.
Lighthouse AI supports three providers:
- **Claude AI** - Claude models have [tier-based ratelimits](https://docs.anthropic.com/en/api/rate-limits#requirements-to-advance-tier). For Lighthouse AI to answer slightly complex questions, there are a handful of API calls to the LLM provider within few seconds. With Claude's tiering system, users must purchase $400 credits or convert their subscription to monthly invoicing after talking to their sales team. This pricing may not suit all Prowler users.
- **Gemini Models** - Gemini lacks a solid tool calling feature like OpenAI. It calls functions recursively until exceeding limits. Gemini-2.5-Pro-Experimental is better than previous models regarding tool calling and responding, but it's still experimental.
- **Deepseek V3** - Doesn't support system prompt messages.
- **OpenAI** - GPT models (GPT-5, GPT-4o, etc.)
- **Amazon Bedrock** - Claude, Llama, Titan, and other models via AWS
- **OpenAI Compatible** - Custom endpoints like OpenRouter, Ollama, or any OpenAI-compatible service
For detailed configuration instructions, see [Using Multiple LLM Providers with Lighthouse](/user-guide/tutorials/prowler-app-lighthouse-multi-llm).
**2. Why a multi-agent supervisor model?**
Context windows are limited. While demo data fits inside the context window, querying real-world data often exceeds it. A multi-agent architecture lets each agent fetch its own slice of data and return only the minimum required information to the supervisor, spreading context window usage across agents.
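A conceptual sketch of this pattern (not Lighthouse AI's actual implementation) might look like:

```typescript
// Conceptual sketch of the supervisor pattern: each agent fetches its
// own data and returns only a compact summary, so the supervisor's
// context window holds short summaries instead of raw findings.
type Agent = (question: string) => Promise<string>;

async function supervisor(
  question: string,
  agents: Record<string, Agent>,
  route: (q: string) => string[], // decides which agents to consult
): Promise<string> {
  const summaries: string[] = [];
  for (const name of route(question)) {
    summaries.push(`${name}: ${await agents[name](question)}`);
  }
  // Compose the final answer from the minimal summaries only.
  return summaries.join("\n");
}
```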
**3. Is my security data shared with OpenAI?**
**3. Is my security data shared with LLM providers?**
Minimal data is shared to generate useful responses. Agents can access security findings and remediation details when needed. Provider secrets are protected by design and cannot be read. The OpenAI key configured with Lighthouse AI is only accessible to our NextJS server and is never sent to LLMs. Resource metadata (names, tags, account/project IDs, etc) may be shared with OpenAI based on your query requirements.
Minimal data is shared to generate useful responses. Agents can access security findings and remediation details when needed. Provider secrets are protected by design and cannot be read. The LLM provider credentials configured with Lighthouse AI are only accessible to our NextJS server and are never sent to the LLM providers. Resource metadata (names, tags, account/project IDs, etc.) may be shared with your configured LLM provider based on your query requirements.
**4. Can the Lighthouse AI change my cloud environment?**