Mirror of https://github.com/prowler-cloud/prowler.git (synced 2026-05-15 00:33:27 +00:00)

Compare commits: 150 commits
Commit SHA1s:

5b3ba1320c, 079e65a097, c401104a61, ef4e28da03, ee2d3ed052, 66a04b5547, a97fb3d993, b68097ebea, bb1d76978a, b3b2bf6440,
5c76e09c21, 1fe934d26f, b200b7f4fe, fb9eda208e, f0b1c4c29e, a73a79f420, 5d4b7445f8, 13e4866507, 7d5c4d32ee, 7e03b423dd,
0ad5bbf350, 38f60966e5, 7bbc0d8e1b, edfef51e7a, 788113b539, 8ab77b7dba, e038b2fd11, 2e5f17538d, 54294c862b, ace2b88c07,
3de8159de9, 1a4ae33235, e0260b91e6, 66590f2128, 33bb2782f0, 2f61c88f74, b25ed9fd27, 191d51675c, 5b20fd1b3b, 02489a5eef,
f16f94acf3, 1e584c5b58, 1bb6bc148e, 166ab1d2c1, dd85ca7c72, b9aef85aa2, 601495166c, 61a66f2bbf, 8b0b9cad32, 000b48b492,
a564d6a04e, 82bacef7c7, a4ac7bb067, a41f8dcb18, 2bf93c0de6, 39710a6841, f330440c54, c3940c7454, df39f332e4, 4a364d91be,
4b99c7b651, c441423d6a, 7e7f160b9a, aaae73cd1c, c5e88f4a74, 5d4415d090, 5d840385df, f831171a21, 2740d73fe7, 1c906b37cd,
98056b7c85, f15ef0d16c, c42ce6242f, 702d652de1, fff02073cf, 23e3ea4a41, f9afb50ed9, 3b95aad6ce, ac5737d8c4, a452c8c3eb,
aa8be0b2fe, 46bf8e0fef, c0df0cd1a8, 80d58a7b50, 2c28d74598, 4feab1be55, 5bc9b09490, fcf817618a, cad97f25ac, b854563854,
573975f3fe, f4081f92a1, 374496e7ff, 2a9c2b926d, f2f1e6bce6, 25c823076f, 6ff559c0d4, 899db55f56, 22d801ade2, 1dc6d41198,
456712a0ef, 885ee62062, bbeccaf085, d1aca5641a, 3b7eba64aa, e9e0797642, aaa5abdead, 0a2749b716, 8f8bf63086, ea27817a2c,
9068e6bcd0, a4907d8098, caee7830a5, 65d2989bea, 6c34945829, ce859ddd1f, 0ca059b45b, dad100b87a, 662296aa0e, b6d49416f0,
42be77e82e, 63169289b0, 43d310356d, 59ae503681, bd62f56df4, 90fbad16b9, affd0c5ffb, 929bbe3550, eb7ef4a8b9, 017e19ac18,
be7680786a, efba5d2a8d, 44431a56de, 969ca8863a, 03c6f98db4, 8ebefb8aa1, c3694fdc5b, df10bc0c4c, e694b0f634, 81e3f87003,
7ffe2aeec9, 672aa6eb2f, 2e999f55f9, 18998b8867, ff4a186df6, b8dab5e0ed, 0b3142f7a8, f5dc0c9ee0, a230809095, e6d1b5639b
@@ -10,13 +10,15 @@ NEXT_PUBLIC_API_BASE_URL=${API_BASE_URL}
NEXT_PUBLIC_API_DOCS_URL=http://prowler-api:8080/api/v1/docs
AUTH_TRUST_HOST=true
UI_PORT=3000
# Temp URL for feeds need to use actual
RSS_FEED_URL=https://prowler.com/blog/rss
# openssl rand -base64 32
AUTH_SECRET="N/c6mnaS5+SWq81+819OrzQZlmx1Vxtp/orjttJSmw8="
# Google Tag Manager ID
NEXT_PUBLIC_GOOGLE_TAG_MANAGER_ID=""

#### Code Review Configuration ####
# Enable Claude Code standards validation on pre-push hook
# Set to 'true' to validate changes against AGENTS.md standards via Claude Code
# Set to 'false' to skip validation
CODE_REVIEW_ENABLED=true

#### Prowler API Configuration ####
PROWLER_API_VERSION="stable"
@@ -126,3 +128,13 @@ LANGSMITH_TRACING=false
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY=""
LANGCHAIN_PROJECT=""
+
+# RSS Feed Configuration
+# Multiple feed sources can be configured as a JSON array (must be valid JSON, no trailing commas)
+# Each source requires: id, name, type (github_releases|blog|custom), url, and enabled flag
+# IMPORTANT: Must be a single line with valid JSON (no newlines, no trailing commas)
+# Example with one source:
+RSS_FEED_SOURCES='[{"id":"prowler-releases","name":"Prowler Releases","type":"github_releases","url":"https://github.com/prowler-cloud/prowler/releases.atom","enabled":true}]'
+# Example with multiple sources (no trailing comma after last item):
+# RSS_FEED_SOURCES='[{"id":"prowler-releases","name":"Prowler Releases","type":"github_releases","url":"https://github.com/prowler-cloud/prowler/releases.atom","enabled":true},{"id":"prowler-blog","name":"Prowler Blog","type":"blog","url":"https://prowler.com/blog/rss","enabled":false}]'
+
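The `RSS_FEED_SOURCES` contract above (a single line of valid JSON, required keys on every source, no trailing commas) is easy to break when hand-editing an `.env` file. A minimal validation sketch, assuming a hypothetical `load_rss_feed_sources` helper that is not part of the Prowler codebase:

```python
import json
import os

REQUIRED_KEYS = {"id", "name", "type", "url", "enabled"}
ALLOWED_TYPES = {"github_releases", "blog", "custom"}


def load_rss_feed_sources(env_value: str) -> list[dict]:
    # json.loads rejects trailing commas and unquoted keys outright.
    sources = json.loads(env_value)
    for source in sources:
        missing = REQUIRED_KEYS - source.keys()
        if missing:
            raise ValueError(f"feed source {source.get('id')!r} is missing {missing}")
        if source["type"] not in ALLOWED_TYPES:
            raise ValueError(f"unknown feed type {source['type']!r}")
    return sources


sources = load_rss_feed_sources(os.environ["RSS_FEED_SOURCES"])
```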
@@ -22,6 +22,13 @@ Please add a detailed description of how to review this PR.
- [ ] Review if is needed to change the [Readme.md](https://github.com/prowler-cloud/prowler/blob/master/README.md)
- [ ] Ensure new entries are added to [CHANGELOG.md](https://github.com/prowler-cloud/prowler/blob/master/prowler/CHANGELOG.md), if applicable.
+
+#### UI
+- [ ] All issue/task requirements work as expected on the UI
+- [ ] Screenshots/Video of the functionality flow (if applicable) - Mobile (X < 640px)
+- [ ] Screenshots/Video of the functionality flow (if applicable) - Table (640px > X < 1024px)
+- [ ] Screenshots/Video of the functionality flow (if applicable) - Desktop (X > 1024px)
+- [ ] Ensure new entries are added to [CHANGELOG.md](https://github.com/prowler-cloud/prowler/blob/master/ui/CHANGELOG.md), if applicable.

#### API
- [ ] Verify if API specs need to be regenerated.
- [ ] Check if version updates are required (e.g., specs, Poetry, etc.).
@@ -45,12 +45,12 @@ jobs:
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0

      - name: Initialize CodeQL
-       uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
+       uses: github/codeql-action/init@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
        with:
          languages: ${{ matrix.language }}
          config-file: ./.github/codeql/api-codeql-config.yml

      - name: Perform CodeQL Analysis
-       uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
+       uses: github/codeql-action/analyze@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
        with:
          category: '/language:${{ matrix.language }}'
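All of the workflow bumps in this compare follow the same hygiene pattern: each action is pinned to a full commit SHA, with the human-readable release tag kept in a trailing comment. A hedged sketch of how one could cross-check that a pinned SHA matches its advertised tag via the public GitHub REST API (the `sha_for_tag` helper is hypothetical, not part of this repository; the expected prefix below comes from the diff's own comment):

```python
import requests


def sha_for_tag(repo: str, tag: str) -> str:
    """Resolve a release tag to the commit SHA it points at."""
    ref = requests.get(
        f"https://api.github.com/repos/{repo}/git/ref/tags/{tag}", timeout=10
    )
    ref.raise_for_status()
    obj = ref.json()["object"]
    if obj["type"] == "tag":  # annotated tag: dereference to the underlying commit
        tag_obj = requests.get(obj["url"], timeout=10)
        tag_obj.raise_for_status()
        return tag_obj.json()["object"]["sha"]
    return obj["sha"]


# Example: verify the CodeQL bump above against its "# v4.31.2" comment.
assert sha_for_tag("github/codeql-action", "v4.31.2").startswith("0499de31")
```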
@@ -38,7 +38,7 @@ jobs:

      - name: Backport PR
        if: steps.label_check.outputs.label_check == 'success'
-       uses: sorenlouv/backport-github-action@ad888e978060bc1b2798690dd9d03c4036560947 # v9.5.1
+       uses: sorenlouv/backport-github-action@516854e7c9f962b9939085c9a92ea28411d1ae90 # v10.2.0
        with:
          github_token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
          auto_backport_label_prefix: ${{ env.BACKPORT_LABEL_PREFIX }}
@@ -26,6 +26,6 @@ jobs:

    steps:
      - name: Check PR title format
-       uses: agenthunt/conventional-commit-checker-action@9e552d650d0e205553ec7792d447929fc78e012b # v2.0.0
+       uses: agenthunt/conventional-commit-checker-action@f1823f632e95a64547566dcd2c7da920e67117ad # v2.0.1
        with:
          pr-title-regex: '^(feat|fix|docs|style|refactor|perf|test|chore|build|ci|revert)(\([^)]+\))?!?: .+'
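The `pr-title-regex` above encodes the Conventional Commits grammar: a type keyword, an optional parenthesized scope, an optional `!` breaking-change marker, then a colon, a space, and a subject. A quick sketch of what it accepts, with example titles invented for illustration:

```python
import re

PATTERN = r"^(feat|fix|docs|style|refactor|perf|test|chore|build|ci|revert)(\([^)]+\))?!?: .+"

assert re.match(PATTERN, "feat(api): add OCI provider")
assert re.match(PATTERN, "fix!: correct unique provider constraint")
assert not re.match(PATTERN, "Added OCI provider")  # no type prefix
assert not re.match(PATTERN, "feat:missing space")  # space after the colon is required
```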
@@ -28,6 +28,6 @@ jobs:
          fetch-depth: 0

      - name: Scan for secrets with TruffleHog
-       uses: trufflesecurity/trufflehog@ad6fc8fb446b8fafbf7ea8193d2d6bfd42f45690 # v3.90.11
+       uses: trufflesecurity/trufflehog@b84c3d14d189e16da175e2c27fa8136603783ffc # v3.90.12
        with:
          extra_args: '--results=verified,unknown'
@@ -83,7 +83,7 @@ jobs:

      - name: Update PR comment with changelog status
        if: github.event.pull_request.head.repo.full_name == github.repository
-       uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
+       uses: peter-evans/create-or-update-comment@e8674b075228eee787fea43ef493e45ece1004c9 # v5.0.0
        with:
          issue-number: ${{ github.event.pull_request.number }}
          comment-id: ${{ steps.find-comment.outputs.comment-id }}
@@ -97,7 +97,7 @@ jobs:
          body-includes: '<!-- conflict-checker-comment -->'

      - name: Create or update comment
-       uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
+       uses: peter-evans/create-or-update-comment@e8674b075228eee787fea43ef493e45ece1004c9 # v5.0.0
        with:
          comment-id: ${{ steps.find-comment.outputs.comment-id }}
          issue-number: ${{ github.event.pull_request.number }}
@@ -382,7 +382,7 @@ jobs:
            no-changelog

      - name: Create draft release
-       uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836 # v2.3.3
+       uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090 # v2.4.1
        with:
          tag_name: ${{ env.PROWLER_VERSION }}
          name: Prowler ${{ env.PROWLER_VERSION }}
@@ -52,12 +52,12 @@ jobs:
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0

      - name: Initialize CodeQL
-       uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
+       uses: github/codeql-action/init@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
        with:
          languages: ${{ matrix.language }}
          config-file: ./.github/codeql/sdk-codeql-config.yml

      - name: Perform CodeQL Analysis
-       uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
+       uses: github/codeql-action/analyze@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
        with:
          category: '/language:${{ matrix.language }}'
@@ -39,7 +39,7 @@ jobs:
        run: pip install boto3

      - name: Configure AWS credentials
-       uses: aws-actions/configure-aws-credentials@a03048d87541d1d9fcf2ecf528a4a65ba9bd7838 # v5.0.0
+       uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
        with:
          aws-region: ${{ env.AWS_REGION }}
          role-to-assume: ${{ secrets.DEV_IAM_ROLE_ARN }}
@@ -48,12 +48,12 @@ jobs:
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0

      - name: Initialize CodeQL
-       uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
+       uses: github/codeql-action/init@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
        with:
          languages: ${{ matrix.language }}
          config-file: ./.github/codeql/ui-codeql-config.yml

      - name: Perform CodeQL Analysis
-       uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
+       uses: github/codeql-action/analyze@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
        with:
          category: '/language:${{ matrix.language }}'
@@ -75,7 +75,7 @@ jobs:
          echo "All database fixtures loaded successfully!"
          '
      - name: Setup Node.js environment
-       uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
+       uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
        with:
          node-version: '20.x'
          cache: 'npm'
+6 -1
@@ -39,6 +39,12 @@ secrets-*/
# JUnit Reports
junit-reports/

+# Test and coverage artifacts
+*_coverage.xml
+pytest_*.xml
+.coverage
+htmlcov/
+
# VSCode files
.vscode/

@@ -64,7 +70,6 @@ junit-reports/
ui/.env*
api/.env*
mcp_server/.env*
.env.local

# Coverage
.coverage*
+1 -1
@@ -10,4 +10,4 @@
Want some swag as appreciation for your contribution?

# Prowler Developer Guide
-https://docs.prowler.com/projects/prowler-open-source/en/latest/developer-guide/introduction/
+https://goto.prowler.com/devguide
@@ -56,7 +56,7 @@ Prowler includes hundreds of built-in controls to ensure compliance with standar

Prowler App is a web-based application that simplifies running Prowler across your cloud provider accounts. It provides a user-friendly interface to visualize the results and streamline your security assessments.




>For more details, refer to the [Prowler App Documentation](https://docs.prowler.com/projects/prowler-open-source/en/latest/#prowler-app-installation)

@@ -73,26 +73,26 @@ prowler <provider>
```console
prowler dashboard
```



# Prowler at a Glance
> [!Tip]
> For the most accurate and up-to-date information about checks, services, frameworks, and categories, visit [**Prowler Hub**](https://hub.prowler.com).

-| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) | Support | Stage | Interface |
-|---|---|---|---|---|---|---|---|
-| AWS | 576 | 82 | 38 | 10 | Official | Stable | UI, API, CLI |
-| GCP | 79 | 13 | 11 | 3 | Official | Stable | UI, API, CLI |
-| Azure | 162 | 19 | 12 | 4 | Official | Stable | UI, API, CLI |
-| Kubernetes | 83 | 7 | 5 | 7 | Official | Stable | UI, API, CLI |
+| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) | Support | Interface |
+|---|---|---|---|---|---|---|
+| AWS | 576 | 82 | 38 | 10 | Official | UI, API, CLI |
+| GCP | 79 | 13 | 12 | 3 | Official | UI, API, CLI |
+| Azure | 162 | 19 | 12 | 4 | Official | UI, API, CLI |
+| Kubernetes | 83 | 7 | 5 | 7 | Official | UI, API, CLI |
-| GitHub | 17 | 2 | 1 | 0 | Official | Stable | UI, API, CLI |
-| M365 | 70 | 7 | 3 | 2 | Official | Stable | UI, API, CLI |
-| OCI | 51 | 13 | 1 | 10 | Official | Stable | CLI |
-| IaC | [See `trivy` docs.](https://trivy.dev/latest/docs/coverage/iac/) | N/A | N/A | N/A | Official | Beta | CLI |
-| MongoDB Atlas | 10 | 3 | 0 | 0 | Official | Beta | CLI |
-| LLM | [See `promptfoo` docs.](https://www.promptfoo.dev/docs/red-team/plugins/) | N/A | N/A | N/A | Official | Beta | CLI |
-| NHN | 6 | 2 | 1 | 0 | Unofficial | Beta | CLI |
+| M365 | 70 | 7 | 3 | 2 | Official | UI, API, CLI |
+| OCI | 51 | 13 | 1 | 10 | Official | UI, API, CLI |
+| IaC | [See `trivy` docs.](https://trivy.dev/latest/docs/coverage/iac/) | N/A | N/A | N/A | Official | UI, API, CLI |
+| MongoDB Atlas | 10 | 3 | 0 | 0 | Official | CLI, API |
+| LLM | [See `promptfoo` docs.](https://www.promptfoo.dev/docs/red-team/plugins/) | N/A | N/A | N/A | Official | CLI |
+| NHN | 6 | 2 | 1 | 0 | Unofficial | CLI |

> [!Note]
> The numbers in the table are updated periodically.
@@ -2,6 +2,27 @@

All notable changes to the **Prowler API** are documented in this file.

+## [1.15.0] (Prowler UNRELEASED)
+
+### Added
+- IaC (Infrastructure as Code) provider support for remote repositories [(#8751)](https://github.com/prowler-cloud/prowler/pull/8751)
+- Extend `GET /api/v1/providers` with provider-type filters and optional pagination disable to support the new Overview filters [(#8975)](https://github.com/prowler-cloud/prowler/pull/8975)
+- New endpoint to retrieve the number of providers grouped by provider type [(#8975)](https://github.com/prowler-cloud/prowler/pull/8975)
+- Support for configuring multiple LLM providers [(#8772)](https://github.com/prowler-cloud/prowler/pull/8772)
+- Support C5 compliance framework for Azure provider [(#9081)](https://github.com/prowler-cloud/prowler/pull/9081)
+- Support for Oracle Cloud Infrastructure (OCI) provider [(#8927)](https://github.com/prowler-cloud/prowler/pull/8927)
+- Support muting findings based on simple rules with custom reason [(#9051)](https://github.com/prowler-cloud/prowler/pull/9051)
+- Support C5 compliance framework for the GCP provider [(#9097)](https://github.com/prowler-cloud/prowler/pull/9097)
+- Support for Amazon Bedrock and OpenAI compatible providers in Lighthouse AI [(#8957)](https://github.com/prowler-cloud/prowler/pull/8957)
+- Support for MongoDB Atlas provider [(#9167)](https://github.com/prowler-cloud/prowler/pull/9167)
+
+---
+

## [1.14.2] (Prowler 5.13.2)

### Fixed
- Update unique constraint for `Provider` model to exclude soft-deleted entries, resolving duplicate errors when re-deleting providers.

## [1.14.1] (Prowler 5.13.1)

### Fixed
@@ -5,6 +5,9 @@ LABEL maintainer="https://github.com/prowler-cloud/api"
ARG POWERSHELL_VERSION=7.5.0
ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}

+ARG TRIVY_VERSION=0.66.0
+ENV TRIVY_VERSION=${TRIVY_VERSION}
+
# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y --no-install-recommends \
    wget \
@@ -36,6 +39,24 @@ RUN ARCH=$(uname -m) && \
    ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh && \
    rm /tmp/powershell.tar.gz

+# Install Trivy for IaC scanning
+RUN ARCH=$(uname -m) && \
+    if [ "$ARCH" = "x86_64" ]; then \
+        TRIVY_ARCH="Linux-64bit" ; \
+    elif [ "$ARCH" = "aarch64" ]; then \
+        TRIVY_ARCH="Linux-ARM64" ; \
+    else \
+        echo "Unsupported architecture for Trivy: $ARCH" && exit 1 ; \
+    fi && \
+    wget --progress=dot:giga "https://github.com/aquasecurity/trivy/releases/download/v${TRIVY_VERSION}/trivy_${TRIVY_VERSION}_${TRIVY_ARCH}.tar.gz" -O /tmp/trivy.tar.gz && \
+    tar zxf /tmp/trivy.tar.gz -C /tmp && \
+    mv /tmp/trivy /usr/local/bin/trivy && \
+    chmod +x /usr/local/bin/trivy && \
+    rm /tmp/trivy.tar.gz && \
+    # Create trivy cache directory with proper permissions
+    mkdir -p /tmp/.cache/trivy && \
+    chmod 777 /tmp/.cache/trivy
+
# Add prowler user
RUN addgroup --gid 1000 prowler && \
    adduser --uid 1000 --gid 1000 --disabled-password --gecos "" prowler
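The RUN block above maps the container's CPU architecture to Trivy's release asset name before downloading. The same mapping restated as a small sketch (asset and URL shapes copied from the Dockerfile above, not from Trivy's documentation):

```python
# Mapping used by the Dockerfile's Trivy install step.
TRIVY_ASSET_BY_ARCH = {
    "x86_64": "Linux-64bit",
    "aarch64": "Linux-ARM64",
}


def trivy_url(version: str, arch: str) -> str:
    asset = TRIVY_ASSET_BY_ARCH[arch]  # KeyError mirrors the Dockerfile's exit 1
    return (
        "https://github.com/aquasecurity/trivy/releases/download/"
        f"v{version}/trivy_{version}_{asset}.tar.gz"
    )


assert trivy_url("0.66.0", "x86_64").endswith("trivy_0.66.0_Linux-64bit.tar.gz")
```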
Generated
+4 -59
@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 2.1.3 and should not be changed by hand.
+# This file is automatically @generated by Poetry 2.2.0 and should not be changed by hand.

[[package]]
name = "about-time"
@@ -1164,18 +1164,6 @@ files = [
    {file = "charset_normalizer-3.4.3.tar.gz", hash = "sha256:6fce4b8500244f6fcb71465d4a4930d132ba9ab8e71a7859e6a5d59851068d14"},
]

-[[package]]
-name = "circuitbreaker"
-version = "2.1.3"
-description = "Python Circuit Breaker pattern implementation"
-optional = false
-python-versions = "*"
-groups = ["main"]
-files = [
-    {file = "circuitbreaker-2.1.3-py3-none-any.whl", hash = "sha256:87ba6a3ed03fdc7032bc175561c2b04d52ade9d5faf94ca2b035fbdc5e6b1dd1"},
-    {file = "circuitbreaker-2.1.3.tar.gz", hash = "sha256:1a4baee510f7bea3c91b194dcce7c07805fe96c4423ed5594b75af438531d084"},
-]
-
[[package]]
name = "click"
version = "8.2.1"
@@ -4058,29 +4046,6 @@ rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]

-[[package]]
-name = "oci"
-version = "2.160.3"
-description = "Oracle Cloud Infrastructure Python SDK"
-optional = false
-python-versions = "*"
-groups = ["main"]
-files = [
-    {file = "oci-2.160.3-py3-none-any.whl", hash = "sha256:858bff3e697098bdda44833d2476bfb4632126f0182178e7dbde4dbd156d71f0"},
-    {file = "oci-2.160.3.tar.gz", hash = "sha256:57514889be3b713a8385d86e3ba8a33cf46e3563c2a7e29a93027fb30b8a2537"},
-]
-
-[package.dependencies]
-certifi = "*"
-circuitbreaker = {version = ">=1.3.1,<3.0.0", markers = "python_version >= \"3.7\""}
-cryptography = ">=3.2.1,<46.0.0"
-pyOpenSSL = ">=17.5.0,<25.0.0"
-python-dateutil = ">=2.5.3,<3.0.0"
-pytz = ">=2016.10"
-
-[package.extras]
-adk = ["docstring-parser (>=0.16) ; python_version >= \"3.10\" and python_version < \"4\"", "mcp (>=1.6.0) ; python_version >= \"3.10\" and python_version < \"4\"", "pydantic (>=2.10.6) ; python_version >= \"3.10\" and python_version < \"4\"", "rich (>=13.9.4) ; python_version >= \"3.10\" and python_version < \"4\""]
-
[[package]]
name = "openai"
version = "1.101.0"
@@ -4669,7 +4634,6 @@ markdown = "3.9.0"
microsoft-kiota-abstractions = "1.9.2"
msgraph-sdk = "1.23.0"
numpy = "2.0.2"
-oci = "2.160.3"
pandas = "2.2.3"
py-iam-expand = "0.1.0"
py-ocsf-models = "0.5.0"
@@ -4686,8 +4650,8 @@ tzlocal = "5.3.1"
[package.source]
type = "git"
url = "https://github.com/prowler-cloud/prowler.git"
-reference = "v5.13"
-resolved_reference = "b1856e42f0143a64e8cc26c7aa3c7643bd1083d3"
+reference = "master"
+resolved_reference = "a52697bfdfee83d14a49c11dcbe96888b5cd767e"

[[package]]
name = "psutil"
@@ -5172,25 +5136,6 @@ cffi = ">=1.4.1"
docs = ["sphinx (>=1.6.5)", "sphinx-rtd-theme"]
tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"]

-[[package]]
-name = "pyopenssl"
-version = "24.3.0"
-description = "Python wrapper module around the OpenSSL library"
-optional = false
-python-versions = ">=3.7"
-groups = ["main"]
-files = [
-    {file = "pyOpenSSL-24.3.0-py3-none-any.whl", hash = "sha256:e474f5a473cd7f92221cc04976e48f4d11502804657a08a989fb3be5514c904a"},
-    {file = "pyopenssl-24.3.0.tar.gz", hash = "sha256:49f7a019577d834746bc55c5fce6ecbcec0f2b4ec5ce1cf43a9a173b8138bb36"},
-]
-
-[package.dependencies]
-cryptography = ">=41.0.5,<45"
-
-[package.extras]
-docs = ["sphinx (!=5.2.0,!=5.2.0.post0,!=7.2.5)", "sphinx_rtd_theme"]
-test = ["pretend", "pytest (>=3.0.1)", "pytest-rerunfailures"]
-
[[package]]
name = "pyparsing"
version = "3.2.3"
@@ -6841,4 +6786,4 @@
[metadata]
lock-version = "2.1"
python-versions = ">=3.11,<3.13"
-content-hash = "8fcb616e55530e7940019d3da33e955b026b9105e1216a3c5f39b411c015b6d7"
+content-hash = "3c9164d668d37d6373eb5200bbe768232ead934d9312b9c68046b1df922789f3"
+2 -2
@@ -24,7 +24,7 @@ dependencies = [
    "drf-spectacular-jsonapi==0.5.1",
    "gunicorn==23.0.0",
    "lxml==5.3.2",
-   "prowler @ git+https://github.com/prowler-cloud/prowler.git@v5.13",
+   "prowler @ git+https://github.com/prowler-cloud/prowler.git@master",
    "psycopg2-binary==2.9.9",
    "pytest-celery[redis] (>=1.0.1,<2.0.0)",
    "sentry-sdk[django] (>=2.20.0,<3.0.0)",
@@ -43,7 +43,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
-version = "1.14.1"
+version = "1.15.0"

[project.scripts]
celery = "src.backend.config.settings.celery"
@@ -1,3 +0,0 @@
-# from django.contrib import admin
-
-# Register your models here.
@@ -27,7 +27,10 @@ from api.models import (
    Finding,
    Integration,
    Invitation,
+   LighthouseProviderConfiguration,
+   LighthouseProviderModels,
    Membership,
+   MuteRule,
    OverviewStatusChoices,
    PermissionChoices,
    Processor,
@@ -245,6 +248,14 @@ class ProviderFilter(FilterSet):
        choices=Provider.ProviderChoices.choices,
        lookup_expr="in",
    )
+   provider_type = ChoiceFilter(
+       choices=Provider.ProviderChoices.choices, field_name="provider"
+   )
+   provider_type__in = ChoiceInFilter(
+       field_name="provider",
+       choices=Provider.ProviderChoices.choices,
+       lookup_expr="in",
+   )

    class Meta:
        model = Provider
@@ -928,3 +939,62 @@ class TenantApiKeyFilter(FilterSet):
            "revoked": ["exact"],
            "name": ["exact", "icontains"],
        }


class LighthouseProviderConfigFilter(FilterSet):
    provider_type = ChoiceFilter(
        choices=LighthouseProviderConfiguration.LLMProviderChoices.choices
    )
    provider_type__in = ChoiceInFilter(
        choices=LighthouseProviderConfiguration.LLMProviderChoices.choices,
        field_name="provider_type",
        lookup_expr="in",
    )
    is_active = BooleanFilter()

    class Meta:
        model = LighthouseProviderConfiguration
        fields = {
            "provider_type": ["exact", "in"],
            "is_active": ["exact"],
        }


class LighthouseProviderModelsFilter(FilterSet):
    provider_type = ChoiceFilter(
        choices=LighthouseProviderConfiguration.LLMProviderChoices.choices,
        field_name="provider_configuration__provider_type",
    )
    provider_type__in = ChoiceInFilter(
        choices=LighthouseProviderConfiguration.LLMProviderChoices.choices,
        field_name="provider_configuration__provider_type",
        lookup_expr="in",
    )

    # Allow filtering by model id
    model_id = CharFilter(field_name="model_id", lookup_expr="exact")
    model_id__icontains = CharFilter(field_name="model_id", lookup_expr="icontains")
    model_id__in = CharInFilter(field_name="model_id", lookup_expr="in")

    class Meta:
        model = LighthouseProviderModels
        fields = {
            "model_id": ["exact", "icontains", "in"],
        }


class MuteRuleFilter(FilterSet):
    inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
    updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
    created_by = UUIDFilter(field_name="created_by__id", lookup_expr="exact")

    class Meta:
        model = MuteRule
        fields = {
            "id": ["exact", "in"],
            "name": ["exact", "icontains"],
            "reason": ["icontains"],
            "enabled": ["exact"],
            "inserted_at": ["gte", "lte"],
            "updated_at": ["gte", "lte"],
        }
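The new `provider_type` / `provider_type__in` filters ride on the query-string conventions the API already uses for its JSON:API endpoints. A hedged sketch of how a client might call them (the host and token are placeholders; the exact `filter[...]` parameter naming is assumed from the API's existing filter style):

```python
import requests

BASE = "https://api.example.com/api/v1"  # placeholder host
HEADERS = {"Authorization": "Bearer <token>"}

# Only AWS and GCP providers.
response = requests.get(
    f"{BASE}/providers",
    params={"filter[provider_type__in]": "aws,gcp"},
    headers=HEADERS,
    timeout=10,
)
response.raise_for_status()
```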
@@ -0,0 +1,266 @@
# Generated by Django 5.1.12 on 2025-10-09 07:50

import json
import logging
import uuid

import django.db.models.deletion
from config.custom_logging import BackendLogger
from cryptography.fernet import Fernet
from django.conf import settings
from django.db import migrations, models

import api.rls
from api.db_router import MainRouter

logger = logging.getLogger(BackendLogger.API)


def migrate_lighthouse_configs_forward(apps, schema_editor):
    """
    Migrate data from old LighthouseConfiguration to new multi-provider models.
    Old system: one LighthouseConfiguration per tenant (always OpenAI).
    """
    LighthouseConfiguration = apps.get_model("api", "LighthouseConfiguration")
    LighthouseProviderConfiguration = apps.get_model(
        "api", "LighthouseProviderConfiguration"
    )
    LighthouseTenantConfiguration = apps.get_model(
        "api", "LighthouseTenantConfiguration"
    )
    LighthouseProviderModels = apps.get_model("api", "LighthouseProviderModels")

    fernet = Fernet(settings.SECRETS_ENCRYPTION_KEY.encode())

    # Migrate only tenants that actually have a LighthouseConfiguration
    for old_config in (
        LighthouseConfiguration.objects.using(MainRouter.admin_db)
        .select_related("tenant")
        .all()
    ):
        tenant = old_config.tenant
        tenant_id = str(tenant.id)

        try:
            # Create OpenAI provider configuration for this tenant
            api_key_decrypted = fernet.decrypt(bytes(old_config.api_key)).decode()
            credentials_encrypted = fernet.encrypt(
                json.dumps({"api_key": api_key_decrypted}).encode()
            )
            provider_config = LighthouseProviderConfiguration.objects.using(
                MainRouter.admin_db
            ).create(
                tenant=tenant,
                provider_type="openai",
                credentials=credentials_encrypted,
                is_active=old_config.is_active,
            )

            # Create tenant configuration from old values
            LighthouseTenantConfiguration.objects.using(MainRouter.admin_db).create(
                tenant=tenant,
                business_context=old_config.business_context or "",
                default_provider="openai",
                default_models={"openai": old_config.model},
            )

            # Create initial provider model record
            LighthouseProviderModels.objects.using(MainRouter.admin_db).create(
                tenant=tenant,
                provider_configuration=provider_config,
                model_id=old_config.model,
                model_name=old_config.model,
                default_parameters={},
            )

        except Exception:
            logger.exception(
                "Failed to migrate lighthouse config for tenant %s", tenant_id
            )
            continue


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0049_compliancerequirementoverview_passed_failed_findings"),
    ]

    operations = [
        migrations.CreateModel(
            name="LighthouseProviderConfiguration",
            fields=[
                (
                    "id",
                    models.UUIDField(
                        default=uuid.uuid4,
                        editable=False,
                        primary_key=True,
                        serialize=False,
                    ),
                ),
                ("inserted_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                (
                    "provider_type",
                    models.CharField(
                        choices=[("openai", "OpenAI")],
                        help_text="LLM provider name",
                        max_length=50,
                    ),
                ),
                ("base_url", models.URLField(blank=True, null=True)),
                (
                    "credentials",
                    models.BinaryField(
                        help_text="Encrypted JSON credentials for the provider"
                    ),
                ),
                ("is_active", models.BooleanField(default=True)),
            ],
            options={
                "db_table": "lighthouse_provider_configurations",
                "abstract": False,
            },
        ),
        migrations.CreateModel(
            name="LighthouseProviderModels",
            fields=[
                (
                    "id",
                    models.UUIDField(
                        default=uuid.uuid4,
                        editable=False,
                        primary_key=True,
                        serialize=False,
                    ),
                ),
                ("inserted_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                ("model_id", models.CharField(max_length=100)),
                ("model_name", models.CharField(max_length=100)),
                ("default_parameters", models.JSONField(blank=True, default=dict)),
            ],
            options={
                "db_table": "lighthouse_provider_models",
                "abstract": False,
            },
        ),
        migrations.CreateModel(
            name="LighthouseTenantConfiguration",
            fields=[
                (
                    "id",
                    models.UUIDField(
                        default=uuid.uuid4,
                        editable=False,
                        primary_key=True,
                        serialize=False,
                    ),
                ),
                ("inserted_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                ("business_context", models.TextField(blank=True, default="")),
                ("default_provider", models.CharField(blank=True, max_length=50)),
                ("default_models", models.JSONField(blank=True, default=dict)),
            ],
            options={
                "db_table": "lighthouse_tenant_config",
                "abstract": False,
            },
        ),
        migrations.AddField(
            model_name="lighthouseproviderconfiguration",
            name="tenant",
            field=models.ForeignKey(
                on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
            ),
        ),
        migrations.AddField(
            model_name="lighthouseprovidermodels",
            name="provider_configuration",
            field=models.ForeignKey(
                on_delete=django.db.models.deletion.CASCADE,
                related_name="available_models",
                to="api.lighthouseproviderconfiguration",
            ),
        ),
        migrations.AddField(
            model_name="lighthouseprovidermodels",
            name="tenant",
            field=models.ForeignKey(
                on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
            ),
        ),
        migrations.AddField(
            model_name="lighthousetenantconfiguration",
            name="tenant",
            field=models.ForeignKey(
                on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
            ),
        ),
        migrations.AddIndex(
            model_name="lighthouseproviderconfiguration",
            index=models.Index(
                fields=["tenant_id", "provider_type"], name="lh_pc_tenant_type_idx"
            ),
        ),
        migrations.AddConstraint(
            model_name="lighthouseproviderconfiguration",
            constraint=api.rls.RowLevelSecurityConstraint(
                "tenant_id",
                name="rls_on_lighthouseproviderconfiguration",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ),
        migrations.AddConstraint(
            model_name="lighthouseproviderconfiguration",
            constraint=models.UniqueConstraint(
                fields=("tenant_id", "provider_type"),
                name="unique_provider_config_per_tenant",
            ),
        ),
        migrations.AddIndex(
            model_name="lighthouseprovidermodels",
            index=models.Index(
                fields=["tenant_id", "provider_configuration"],
                name="lh_prov_models_cfg_idx",
            ),
        ),
        migrations.AddConstraint(
            model_name="lighthouseprovidermodels",
            constraint=api.rls.RowLevelSecurityConstraint(
                "tenant_id",
                name="rls_on_lighthouseprovidermodels",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ),
        migrations.AddConstraint(
            model_name="lighthouseprovidermodels",
            constraint=models.UniqueConstraint(
                fields=("tenant_id", "provider_configuration", "model_id"),
                name="unique_provider_model_per_configuration",
            ),
        ),
        migrations.AddConstraint(
            model_name="lighthousetenantconfiguration",
            constraint=api.rls.RowLevelSecurityConstraint(
                "tenant_id",
                name="rls_on_lighthousetenantconfiguration",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ),
        migrations.AddConstraint(
            model_name="lighthousetenantconfiguration",
            constraint=models.UniqueConstraint(
                fields=("tenant_id",), name="unique_tenant_lighthouse_config"
            ),
        ),
        # Migrate data from old LighthouseConfiguration to new tables
        # This runs after all tables, indexes, and constraints are created
        # The old Lighthouse configuration table is not removed, so reverse_code is noop
        # During rollbacks, the old Lighthouse configuration remains intact while the new tables are removed
        migrations.RunPython(
            migrate_lighthouse_configs_forward,
            reverse_code=migrations.RunPython.noop,
        ),
    ]
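The heart of `migrate_lighthouse_configs_forward` is a decrypt-and-re-encrypt step: the legacy row stores a Fernet-encrypted raw API key, while the new row stores Fernet-encrypted JSON. The round trip in isolation (a sketch with a freshly generated key standing in for `settings.SECRETS_ENCRYPTION_KEY`):

```python
import json

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # stand-in for settings.SECRETS_ENCRYPTION_KEY
fernet = Fernet(key)

old_api_key = fernet.encrypt(b"sk-example")  # legacy column value (made-up key)
api_key_decrypted = fernet.decrypt(old_api_key).decode()
credentials = fernet.encrypt(json.dumps({"api_key": api_key_decrypted}).encode())

assert json.loads(fernet.decrypt(credentials))["api_key"] == "sk-example"
```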
@@ -0,0 +1,34 @@
# Generated by Django 5.1.7 on 2025-10-14 00:00

from django.db import migrations

import api.db_utils


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0050_lighthouse_multi_llm"),
    ]

    operations = [
        migrations.AlterField(
            model_name="provider",
            name="provider",
            field=api.db_utils.ProviderEnumField(
                choices=[
                    ("aws", "AWS"),
                    ("azure", "Azure"),
                    ("gcp", "GCP"),
                    ("kubernetes", "Kubernetes"),
                    ("m365", "M365"),
                    ("github", "GitHub"),
                    ("oraclecloud", "Oracle Cloud Infrastructure"),
                ],
                default="aws",
            ),
        ),
        migrations.RunSQL(
            "ALTER TYPE provider ADD VALUE IF NOT EXISTS 'oraclecloud';",
            reverse_sql=migrations.RunSQL.noop,
        ),
    ]
@@ -0,0 +1,117 @@
# Generated by Django 5.1.13 on 2025-10-22 11:56

import uuid

import django.contrib.postgres.fields
import django.core.validators
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models

import api.rls


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0051_oraclecloud_provider"),
    ]

    operations = [
        migrations.CreateModel(
            name="MuteRule",
            fields=[
                (
                    "id",
                    models.UUIDField(
                        default=uuid.uuid4,
                        editable=False,
                        primary_key=True,
                        serialize=False,
                    ),
                ),
                ("inserted_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                (
                    "name",
                    models.CharField(
                        help_text="Human-readable name for this rule",
                        max_length=100,
                        validators=[django.core.validators.MinLengthValidator(3)],
                    ),
                ),
                (
                    "reason",
                    models.TextField(
                        help_text="Reason for muting",
                        max_length=500,
                        validators=[django.core.validators.MinLengthValidator(3)],
                    ),
                ),
                (
                    "enabled",
                    models.BooleanField(
                        default=True, help_text="Whether this rule is currently enabled"
                    ),
                ),
                (
                    "finding_uids",
                    django.contrib.postgres.fields.ArrayField(
                        base_field=models.CharField(max_length=255),
                        help_text="List of finding UIDs to mute",
                        size=None,
                    ),
                ),
            ],
            options={
                "db_table": "mute_rules",
                "abstract": False,
            },
        ),
        migrations.AddField(
            model_name="finding",
            name="muted_at",
            field=models.DateTimeField(
                blank=True, help_text="Timestamp when this finding was muted", null=True
            ),
        ),
        migrations.AlterField(
            model_name="tenantapikey",
            name="name",
            field=models.CharField(
                max_length=100,
                validators=[django.core.validators.MinLengthValidator(3)],
            ),
        ),
        migrations.AddField(
            model_name="muterule",
            name="created_by",
            field=models.ForeignKey(
                help_text="User who created this rule",
                null=True,
                on_delete=django.db.models.deletion.SET_NULL,
                related_name="created_mute_rules",
                to=settings.AUTH_USER_MODEL,
            ),
        ),
        migrations.AddField(
            model_name="muterule",
            name="tenant",
            field=models.ForeignKey(
                on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
            ),
        ),
        migrations.AddConstraint(
            model_name="muterule",
            constraint=api.rls.RowLevelSecurityConstraint(
                "tenant_id",
                name="rls_on_muterule",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ),
        migrations.AddConstraint(
            model_name="muterule",
            constraint=models.UniqueConstraint(
                fields=("tenant_id", "name"), name="unique_mute_rule_name_per_tenant"
            ),
        ),
    ]
@@ -0,0 +1,25 @@
# Generated by Django 5.1.12 on 2025-10-14 11:46

from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0052_mute_rules"),
    ]

    operations = [
        migrations.AlterField(
            model_name="lighthouseproviderconfiguration",
            name="provider_type",
            field=models.CharField(
                choices=[
                    ("openai", "OpenAI"),
                    ("bedrock", "AWS Bedrock"),
                    ("openai_compatible", "OpenAI Compatible"),
                ],
                help_text="LLM provider name",
                max_length=50,
            ),
        )
    ]
@@ -0,0 +1,35 @@
# Generated by Django 5.1.10 on 2025-09-09 09:25

from django.db import migrations

import api.db_utils


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0053_lighthouse_bedrock_openai_compatible"),
    ]

    operations = [
        migrations.AlterField(
            model_name="provider",
            name="provider",
            field=api.db_utils.ProviderEnumField(
                choices=[
                    ("aws", "AWS"),
                    ("azure", "Azure"),
                    ("gcp", "GCP"),
                    ("kubernetes", "Kubernetes"),
                    ("m365", "M365"),
                    ("github", "GitHub"),
                    ("oci", "Oracle Cloud Infrastructure"),
                    ("iac", "IaC"),
                ],
                default="aws",
            ),
        ),
        migrations.RunSQL(
            "ALTER TYPE provider ADD VALUE IF NOT EXISTS 'iac';",
            reverse_sql=migrations.RunSQL.noop,
        ),
    ]
@@ -0,0 +1,32 @@
# Generated by Django 5.1.13 on 2025-11-05 08:37

from django.db import migrations

import api.db_utils


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0054_iac_provider"),
    ]

    operations = [
        migrations.AlterField(
            model_name="provider",
            name="provider",
            field=api.db_utils.ProviderEnumField(
                choices=[
                    ("aws", "AWS"),
                    ("azure", "Azure"),
                    ("gcp", "GCP"),
                    ("kubernetes", "Kubernetes"),
                    ("m365", "M365"),
                    ("github", "GitHub"),
                    ("mongodbatlas", "MongoDB Atlas"),
                    ("iac", "IaC"),
                    ("oraclecloud", "Oracle Cloud Infrastructure"),
                ],
                default="aws",
            ),
        ),
    ]
@@ -0,0 +1,24 @@
# Generated by Django 5.1.13 on 2025-11-06 09:20

from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0055_mongodbatlas_provider"),
    ]

    operations = [
        migrations.RemoveConstraint(
            model_name="provider",
            name="unique_provider_uids",
        ),
        migrations.AddConstraint(
            model_name="provider",
            constraint=models.UniqueConstraint(
                condition=models.Q(("is_deleted", False)),
                fields=("tenant_id", "provider", "uid"),
                name="unique_provider_uids",
            ),
        ),
    ]
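The conditional constraint created above compiles to a PostgreSQL partial unique index: uniqueness is enforced only across rows where `is_deleted` is false, which is what fixes the re-deletion duplicates mentioned in the changelog. A sketch of the behavior this enables, with made-up identifiers and assuming a tenant and RLS context are already in place:

```python
# Hypothetical illustration; not code from the diff.
p1 = Provider.objects.create(tenant=tenant, provider="aws", uid="123456789012")
p1.is_deleted = True
p1.save()

# Allowed now: the partial unique index ignores soft-deleted rows.
p2 = Provider.objects.create(tenant=tenant, provider="aws", uid="123456789012")

# A second *live* row with the same (provider, uid) still raises IntegrityError.
```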
+281 -26
@@ -284,6 +284,9 @@ class Provider(RowLevelSecurityProtectedModel):
        KUBERNETES = "kubernetes", _("Kubernetes")
        M365 = "m365", _("M365")
        GITHUB = "github", _("GitHub")
+       MONGODBATLAS = "mongodbatlas", _("MongoDB Atlas")
+       IAC = "iac", _("IaC")
+       ORACLECLOUD = "oraclecloud", _("Oracle Cloud Infrastructure")

    @staticmethod
    def validate_aws_uid(value):
@@ -354,6 +357,40 @@ class Provider(RowLevelSecurityProtectedModel):
                pointer="/data/attributes/uid",
            )

    @staticmethod
    def validate_iac_uid(value):
        # Validate that it's a valid repository URL (git URL format)
        if not re.match(
            r"^(https?://|git@|ssh://)[^\s/]+[^\s]*\.git$|^(https?://)[^\s/]+[^\s]*$",
            value,
        ):
            raise ModelValidationError(
                detail="IaC provider ID must be a valid repository URL (e.g., https://github.com/user/repo or https://github.com/user/repo.git).",
                code="iac-uid",
                pointer="/data/attributes/uid",
            )

    @staticmethod
    def validate_oraclecloud_uid(value):
        if not re.match(
            r"^ocid1\.([a-z0-9_-]+)\.([a-z0-9_-]+)\.([a-z0-9_-]*)\.([a-z0-9]+)$", value
        ):
            raise ModelValidationError(
                detail="Oracle Cloud Infrastructure provider ID must be a valid tenancy OCID in the format: "
                "ocid1.<resource_type>.<realm>.<region>.<unique_id>",
                code="oraclecloud-uid",
                pointer="/data/attributes/uid",
            )

    @staticmethod
    def validate_mongodbatlas_uid(value):
        if not re.match(r"^[0-9a-fA-F]{24}$", value):
            raise ModelValidationError(
                detail="MongoDB Atlas organization ID must be a 24-character hexadecimal string.",
                code="mongodbatlas-uid",
                pointer="/data/attributes/uid",
            )

    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(auto_now=True, editable=False)
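The three new validators are single-regex gates. A quick sketch of what the OCI and MongoDB Atlas patterns accept, with identifiers invented for illustration:

```python
import re

OCID_RE = r"^ocid1\.([a-z0-9_-]+)\.([a-z0-9_-]+)\.([a-z0-9_-]*)\.([a-z0-9]+)$"
ATLAS_RE = r"^[0-9a-fA-F]{24}$"

# Made-up OCID: type=tenancy, realm=oc1, empty region segment, unique id.
assert re.match(OCID_RE, "ocid1.tenancy.oc1..aaaaaaaaexample")
assert not re.match(OCID_RE, "ocid2.tenancy.oc1..aaaaaaaaexample")

# Atlas organization IDs are exactly 24 hex characters.
assert re.match(ATLAS_RE, "5a0a1e7e0f2912c554080adc")
assert not re.match(ATLAS_RE, "not-a-hex-id")
```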
@@ -388,7 +425,8 @@ class Provider(RowLevelSecurityProtectedModel):

        constraints = [
            models.UniqueConstraint(
-               fields=("tenant_id", "provider", "uid", "is_deleted"),
+               fields=("tenant_id", "provider", "uid"),
+               condition=Q(is_deleted=False),
                name="unique_provider_uids",
            ),
            RowLevelSecurityConstraint(
@@ -810,6 +848,9 @@ class Finding(PostgresPartitionedModel, RowLevelSecurityProtectedModel):
    muted_reason = models.TextField(
        blank=True, null=True, validators=[MinLengthValidator(3)], max_length=500
    )
+   muted_at = models.DateTimeField(
+       null=True, blank=True, help_text="Timestamp when this finding was muted"
+   )
    compliance = models.JSONField(default=dict, null=True, blank=True)

    # Denormalize resource data for performance
@@ -1873,22 +1914,6 @@ class LighthouseConfiguration(RowLevelSecurityProtectedModel):
    def clean(self):
        super().clean()

-       # Validate temperature
-       if not 0 <= self.temperature <= 1:
-           raise ModelValidationError(
-               detail="Temperature must be between 0 and 1",
-               code="invalid_temperature",
-               pointer="/data/attributes/temperature",
-           )
-
-       # Validate max_tokens
-       if not 500 <= self.max_tokens <= 5000:
-           raise ModelValidationError(
-               detail="Max tokens must be between 500 and 5000",
-               code="invalid_max_tokens",
-               pointer="/data/attributes/max_tokens",
-           )
-
    @property
    def api_key_decoded(self):
        """Return the decrypted API key, or None if unavailable or invalid."""
@@ -1913,15 +1938,6 @@ class LighthouseConfiguration(RowLevelSecurityProtectedModel):
                code="invalid_api_key",
                pointer="/data/attributes/api_key",
            )
-
-       # Validate OpenAI API key format
-       openai_key_pattern = r"^sk-[\w-]+T3BlbkFJ[\w-]+$"
-       if not re.match(openai_key_pattern, value):
-           raise ModelValidationError(
-               detail="Invalid OpenAI API key format.",
-               code="invalid_api_key",
-               pointer="/data/attributes/api_key",
-           )
        self.api_key = fernet.encrypt(value.encode())

    def save(self, *args, **kwargs):
@@ -1947,6 +1963,59 @@ class LighthouseConfiguration(RowLevelSecurityProtectedModel):
        resource_name = "lighthouse-configurations"


class MuteRule(RowLevelSecurityProtectedModel):
    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(auto_now=True, editable=False)

    # Rule metadata
    name = models.CharField(
        max_length=100,
        validators=[MinLengthValidator(3)],
        help_text="Human-readable name for this rule",
    )
    reason = models.TextField(
        validators=[MinLengthValidator(3)],
        max_length=500,
        help_text="Reason for muting",
    )
    enabled = models.BooleanField(
        default=True, help_text="Whether this rule is currently enabled"
    )

    # Audit fields
    created_by = models.ForeignKey(
        User,
        on_delete=models.SET_NULL,
        null=True,
        related_name="created_mute_rules",
        help_text="User who created this rule",
    )

    # Rule criteria - array of finding UIDs
    finding_uids = ArrayField(
        models.CharField(max_length=255), help_text="List of finding UIDs to mute"
    )

    class Meta(RowLevelSecurityProtectedModel.Meta):
        db_table = "mute_rules"

        constraints = [
            RowLevelSecurityConstraint(
                field="tenant_id",
                name="rls_on_%(class)s",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
            models.UniqueConstraint(
                fields=("tenant_id", "name"),
                name="unique_mute_rule_name_per_tenant",
            ),
        ]

    class JSONAPIMeta:
        resource_name = "mute-rules"


class Processor(RowLevelSecurityProtectedModel):
    class ProcessorChoices(models.TextChoices):
        MUTELIST = "mutelist", _("Mutelist")
@@ -1984,3 +2053,189 @@ class Processor(RowLevelSecurityProtectedModel):

    class JSONAPIMeta:
        resource_name = "processors"


class LighthouseProviderConfiguration(RowLevelSecurityProtectedModel):
    """
    Per-tenant configuration for an LLM provider (credentials, base URL, activation).

    One configuration per provider type per tenant.
    """

    class LLMProviderChoices(models.TextChoices):
        OPENAI = "openai", _("OpenAI")
        BEDROCK = "bedrock", _("AWS Bedrock")
        OPENAI_COMPATIBLE = "openai_compatible", _("OpenAI Compatible")

    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(auto_now=True, editable=False)

    provider_type = models.CharField(
        max_length=50,
        choices=LLMProviderChoices.choices,
        help_text="LLM provider name",
    )

    # For OpenAI-compatible providers
    base_url = models.URLField(blank=True, null=True)

    # Encrypted JSON for provider-specific auth
    credentials = models.BinaryField(
        blank=False, null=False, help_text="Encrypted JSON credentials for the provider"
    )

    is_active = models.BooleanField(default=True)

    def __str__(self):
        return f"{self.get_provider_type_display()} ({self.tenant_id})"

    def clean(self):
        super().clean()

    @property
    def credentials_decoded(self):
        if not self.credentials:
            return None
        try:
            decrypted_data = fernet.decrypt(bytes(self.credentials))
            return json.loads(decrypted_data.decode())
        except (InvalidToken, json.JSONDecodeError) as e:
            logger.warning("Failed to decrypt provider credentials: %s", e)
            return None
        except Exception as e:
            logger.exception(
                "Unexpected error while decrypting provider credentials: %s", e
            )
            return None

    @credentials_decoded.setter
    def credentials_decoded(self, value):
        """
        Set and encrypt credentials (assumes serializer performed validation).
        """
        if not value:
            raise ModelValidationError(
                detail="Credentials are required",
                code="invalid_credentials",
                pointer="/data/attributes/credentials",
            )
        self.credentials = fernet.encrypt(json.dumps(value).encode())

    class Meta(RowLevelSecurityProtectedModel.Meta):
        db_table = "lighthouse_provider_configurations"

        constraints = [
            RowLevelSecurityConstraint(
                field="tenant_id",
                name="rls_on_%(class)s",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
            models.UniqueConstraint(
                fields=["tenant_id", "provider_type"],
                name="unique_provider_config_per_tenant",
            ),
        ]

        indexes = [
            models.Index(
                fields=["tenant_id", "provider_type"],
                name="lh_pc_tenant_type_idx",
            ),
        ]

    class JSONAPIMeta:
        resource_name = "lighthouse-providers"


class LighthouseTenantConfiguration(RowLevelSecurityProtectedModel):
    """
    Tenant-level Lighthouse settings (business context and defaults).
    One record per tenant.
    """

    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(auto_now=True, editable=False)

    business_context = models.TextField(blank=True, default="")

    # Preferred provider key (e.g., "openai", "bedrock", "openai_compatible")
    default_provider = models.CharField(max_length=50, blank=True)

    # Mapping of provider -> model id, e.g., {"openai": "gpt-4o", "bedrock": "anthropic.claude-v2"}
    default_models = models.JSONField(default=dict, blank=True)

    def __str__(self):
        return f"Lighthouse Tenant Config for {self.tenant_id}"

    def clean(self):
        super().clean()

    class Meta(RowLevelSecurityProtectedModel.Meta):
        db_table = "lighthouse_tenant_config"

        constraints = [
            RowLevelSecurityConstraint(
                field="tenant_id",
                name="rls_on_%(class)s",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
            models.UniqueConstraint(
                fields=["tenant_id"], name="unique_tenant_lighthouse_config"
            ),
        ]

    class JSONAPIMeta:
        resource_name = "lighthouse-configurations"


class LighthouseProviderModels(RowLevelSecurityProtectedModel):
    """
    Per-tenant, per-provider configuration list of available LLM models.
    RLS-protected; populated via provider API using tenant-scoped credentials.
    """

    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(auto_now=True, editable=False)

    # Scope to a specific provider configuration within a tenant
    provider_configuration = models.ForeignKey(
        LighthouseProviderConfiguration,
        on_delete=models.CASCADE,
        related_name="available_models",
    )
    model_id = models.CharField(max_length=100)

    # Human-friendly model name
    model_name = models.CharField(max_length=100)

    # Model-specific default parameters (e.g., temperature, max_tokens)
    default_parameters = models.JSONField(default=dict, blank=True)

    def __str__(self):
        return f"{self.provider_configuration.provider_type}:{self.model_id} ({self.tenant_id})"

    class Meta(RowLevelSecurityProtectedModel.Meta):
        db_table = "lighthouse_provider_models"
        constraints = [
            RowLevelSecurityConstraint(
                field="tenant_id",
                name="rls_on_%(class)s",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
            models.UniqueConstraint(
                fields=["tenant_id", "provider_configuration", "model_id"],
                name="unique_provider_model_per_configuration",
            ),
        ]
        indexes = [
            models.Index(
                fields=["tenant_id", "provider_configuration"],
                name="lh_prov_models_cfg_idx",
            ),
        ]

    class JSONAPIMeta:
        resource_name = "lighthouse-models"
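Taken together, `LighthouseTenantConfiguration.default_provider` and `default_models` drive model selection across the per-provider configurations. A hypothetical resolver, sketched against the models above (not part of the diff), shows how a caller might pick the model to use for a tenant:

```python
# Hypothetical helper: falls back to the first active provider configuration
# when the tenant has not set a default provider.
def resolve_default_model(tenant_id):
    cfg = LighthouseTenantConfiguration.objects.get(tenant_id=tenant_id)
    provider = cfg.default_provider
    if not provider:
        first = (
            LighthouseProviderConfiguration.objects
            .filter(tenant_id=tenant_id, is_active=True)
            .first()
        )
        if first is None:
            return None, None
        provider = first.provider_type
    return provider, (cfg.default_models or {}).get(provider)
```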
@@ -6,7 +6,14 @@ from django.dispatch import receiver
from django_celery_results.backends.database import DatabaseBackend

from api.db_utils import delete_related_daily_task
from api.models import Membership, Provider, TenantAPIKey, User
from api.models import (
    LighthouseProviderConfiguration,
    LighthouseTenantConfiguration,
    Membership,
    Provider,
    TenantAPIKey,
    User,
)


def create_task_result_on_publish(sender=None, headers=None, **kwargs):  # noqa: F841
@@ -56,3 +63,33 @@ def revoke_membership_api_keys(sender, instance, **kwargs):  # noqa: F841
    TenantAPIKey.objects.filter(
        entity=instance.user, tenant_id=instance.tenant.id
    ).update(revoked=True)


@receiver(pre_delete, sender=LighthouseProviderConfiguration)
def cleanup_lighthouse_defaults_before_delete(sender, instance, **kwargs):  # noqa: F841
    """
    Ensure tenant Lighthouse defaults do not reference a soon-to-be-deleted provider.

    This runs for both per-instance deletes and queryset (bulk) deletes.
    """
    try:
        tenant_cfg = LighthouseTenantConfiguration.objects.get(
            tenant_id=instance.tenant_id
        )
    except LighthouseTenantConfiguration.DoesNotExist:
        return

    updated = False
    defaults = tenant_cfg.default_models or {}

    if instance.provider_type in defaults:
        defaults.pop(instance.provider_type, None)
        tenant_cfg.default_models = defaults
        updated = True

    if tenant_cfg.default_provider == instance.provider_type:
        tenant_cfg.default_provider = ""
        updated = True

    if updated:
        tenant_cfg.save()
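A short illustration (assuming a saved configuration row and a valid tenant_id; values are placeholders) of why the docstring mentions bulk deletes: Django emits pre_delete once per instance on queryset deletes as well, so both paths below trigger the cleanup receiver.

    # Per-instance delete fires cleanup_lighthouse_defaults_before_delete.
    LighthouseProviderConfiguration.objects.get(
        tenant_id=tenant_id, provider_type="openai"
    ).delete()

    # A queryset delete also sends pre_delete for every matched row.
    LighthouseProviderConfiguration.objects.filter(tenant_id=tenant_id).delete()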
(+2323 −36 — file diff suppressed because it is too large)
@@ -20,8 +20,11 @@ from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.aws.lib.security_hub.security_hub import SecurityHubConnection
from prowler.providers.azure.azure_provider import AzureProvider
from prowler.providers.gcp.gcp_provider import GcpProvider
from prowler.providers.github.github_provider import GithubProvider
from prowler.providers.kubernetes.kubernetes_provider import KubernetesProvider
from prowler.providers.m365.m365_provider import M365Provider
from prowler.providers.mongodbatlas.mongodbatlas_provider import MongodbatlasProvider
from prowler.providers.oraclecloud.oraclecloud_provider import OraclecloudProvider


class TestMergeDicts:
@@ -108,6 +111,9 @@ class TestReturnProwlerProvider:
            (Provider.ProviderChoices.AZURE.value, AzureProvider),
            (Provider.ProviderChoices.KUBERNETES.value, KubernetesProvider),
            (Provider.ProviderChoices.M365.value, M365Provider),
            (Provider.ProviderChoices.GITHUB.value, GithubProvider),
            (Provider.ProviderChoices.MONGODBATLAS.value, MongodbatlasProvider),
            (Provider.ProviderChoices.ORACLECLOUD.value, OraclecloudProvider),
        ],
    )
    def test_return_prowler_provider(self, provider_type, expected_provider):
@@ -203,6 +209,14 @@ class TestGetProwlerProviderKwargs:
                Provider.ProviderChoices.GITHUB.value,
                {"organizations": ["provider_uid"]},
            ),
            (
                Provider.ProviderChoices.ORACLECLOUD.value,
                {},
            ),
            (
                Provider.ProviderChoices.MONGODBATLAS.value,
                {"atlas_organization_id": "provider_uid"},
            ),
        ],
    )
    def test_get_prowler_provider_kwargs(self, provider_type, expected_extra_kwargs):
(File diff suppressed because it is too large)
@@ -18,8 +18,11 @@ from prowler.providers.azure.azure_provider import AzureProvider
from prowler.providers.common.models import Connection
from prowler.providers.gcp.gcp_provider import GcpProvider
from prowler.providers.github.github_provider import GithubProvider
from prowler.providers.iac.iac_provider import IacProvider
from prowler.providers.kubernetes.kubernetes_provider import KubernetesProvider
from prowler.providers.m365.m365_provider import M365Provider
from prowler.providers.mongodbatlas.mongodbatlas_provider import MongodbatlasProvider
from prowler.providers.oraclecloud.oraclecloud_provider import OraclecloudProvider


class CustomOAuth2Client(OAuth2Client):
@@ -65,8 +68,11 @@ def return_prowler_provider(
    | AzureProvider
    | GcpProvider
    | GithubProvider
    | IacProvider
    | KubernetesProvider
    | M365Provider
    | MongodbatlasProvider
    | OraclecloudProvider
]:
    """Return the Prowler provider class based on the given provider type.

@@ -74,7 +80,7 @@ def return_prowler_provider(
        provider (Provider): The provider object containing the provider type and associated secrets.

    Returns:
        AwsProvider | AzureProvider | GcpProvider | GithubProvider | KubernetesProvider | M365Provider: The corresponding provider class.
        AwsProvider | AzureProvider | GcpProvider | GithubProvider | IacProvider | KubernetesProvider | M365Provider | OraclecloudProvider | MongodbatlasProvider: The corresponding provider class.

    Raises:
        ValueError: If the provider type specified in `provider.provider` is not supported.
@@ -92,6 +98,12 @@ def return_prowler_provider(
            prowler_provider = M365Provider
        case Provider.ProviderChoices.GITHUB.value:
            prowler_provider = GithubProvider
        case Provider.ProviderChoices.MONGODBATLAS.value:
            prowler_provider = MongodbatlasProvider
        case Provider.ProviderChoices.IAC.value:
            prowler_provider = IacProvider
        case Provider.ProviderChoices.ORACLECLOUD.value:
            prowler_provider = OraclecloudProvider
        case _:
            raise ValueError(f"Provider type {provider.provider} not supported")
    return prowler_provider
@@ -128,6 +140,21 @@ def get_prowler_provider_kwargs(
            **prowler_provider_kwargs,
            "organizations": [provider.uid],
        }
    elif provider.provider == Provider.ProviderChoices.IAC.value:
        # For IaC provider, uid contains the repository URL
        # Extract the access token if present in the secret
        prowler_provider_kwargs = {
            "scan_repository_url": provider.uid,
        }
        if "access_token" in provider.secret.secret:
            prowler_provider_kwargs["oauth_app_token"] = provider.secret.secret[
                "access_token"
            ]
    elif provider.provider == Provider.ProviderChoices.MONGODBATLAS.value:
        prowler_provider_kwargs = {
            **prowler_provider_kwargs,
            "atlas_organization_id": provider.uid,
        }

    if mutelist_processor:
        mutelist_content = mutelist_processor.configuration.get("Mutelist", {})
@@ -145,8 +172,11 @@ def initialize_prowler_provider(
    | AzureProvider
    | GcpProvider
    | GithubProvider
    | IacProvider
    | KubernetesProvider
    | M365Provider
    | MongodbatlasProvider
    | OraclecloudProvider
):
    """Initialize a Prowler provider instance based on the given provider type.

@@ -155,8 +185,8 @@ def initialize_prowler_provider(
        mutelist_processor (Processor): The mutelist processor object containing the mutelist configuration.

    Returns:
        AwsProvider | AzureProvider | GcpProvider | GithubProvider | KubernetesProvider | M365Provider: An instance of the corresponding provider class
        (`AwsProvider`, `AzureProvider`, `GcpProvider`, `GithubProvider`, `KubernetesProvider` or `M365Provider`) initialized with the
        AwsProvider | AzureProvider | GcpProvider | GithubProvider | IacProvider | KubernetesProvider | M365Provider | OraclecloudProvider | MongodbatlasProvider: An instance of the corresponding provider class
        (`AwsProvider`, `AzureProvider`, `GcpProvider`, `GithubProvider`, `IacProvider`, `KubernetesProvider`, `M365Provider`, `OraclecloudProvider` or `MongodbatlasProvider`) initialized with the
        provider's secrets.
    """
    prowler_provider = return_prowler_provider(provider)
@@ -180,9 +210,23 @@ def prowler_provider_connection_test(provider: Provider) -> Connection:
    except Provider.secret.RelatedObjectDoesNotExist as secret_error:
        return Connection(is_connected=False, error=secret_error)

    return prowler_provider.test_connection(
        **prowler_provider_kwargs, provider_id=provider.uid, raise_on_exception=False
    )
    # For IaC provider, construct the kwargs properly for test_connection
    if provider.provider == Provider.ProviderChoices.IAC.value:
        # Don't pass repository_url from secret, use scan_repository_url with the UID
        iac_test_kwargs = {
            "scan_repository_url": provider.uid,
            "raise_on_exception": False,
        }
        # Add access_token if present in the secret
        if "access_token" in prowler_provider_kwargs:
            iac_test_kwargs["access_token"] = prowler_provider_kwargs["access_token"]
        return prowler_provider.test_connection(**iac_test_kwargs)
    else:
        return prowler_provider.test_connection(
            **prowler_provider_kwargs,
            provider_id=provider.uid,
            raise_on_exception=False,
        )


def prowler_integration_connection_test(integration: Integration) -> Connection:
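Taken together, the helpers above dispatch on Provider.provider; a compact usage sketch (the Provider row and its uid are hypothetical, not from the diff):

    provider = Provider.objects.get(uid="example-uid")        # placeholder lookup
    provider_class = return_prowler_provider(provider)        # e.g., OraclecloudProvider
    kwargs = get_prowler_provider_kwargs(provider)            # secret plus uid-derived kwargs
    connection = prowler_provider_connection_test(provider)   # Connection(is_connected=...)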
@@ -12,6 +12,24 @@ from api.models import StateChoices, Task
from api.v1.serializers import TaskSerializer


class DisablePaginationMixin:
    disable_pagination_query_param = "page[disable]"
    disable_pagination_truthy_values = {"true"}

    def should_disable_pagination(self) -> bool:
        if not hasattr(self, "request"):
            return False
        value = self.request.query_params.get(self.disable_pagination_query_param)
        if value is None:
            return False
        return str(value).lower() in self.disable_pagination_truthy_values

    def paginate_queryset(self, queryset):
        if self.should_disable_pagination():
            return None
        return super().paginate_queryset(queryset)
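In DRF, a paginate_queryset that returns None makes the view skip pagination entirely, so a client can opt out per request; an illustrative call against the ProviderViewSet that mixes this in later in the diff:

    # GET /api/v1/providers?page[disable]=true
    # -> paginate_queryset() returns None and the full queryset is serialized
    #    in a single unpaginated response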


class PaginateByPkMixin:
    """
    Mixin to paginate on a list of PKs (cheaper than heavy JOINs),
@@ -0,0 +1,209 @@
import re

from drf_spectacular.utils import extend_schema_field
from rest_framework_json_api import serializers


class OpenAICredentialsSerializer(serializers.Serializer):
    api_key = serializers.CharField()

    def validate_api_key(self, value: str) -> str:
        pattern = r"^sk-[\w-]+$"
        if not re.match(pattern, value or ""):
            raise serializers.ValidationError("Invalid OpenAI API key format.")
        return value

    def to_internal_value(self, data):
        """Check for unknown fields before DRF filters them out."""
        if not isinstance(data, dict):
            raise serializers.ValidationError(
                {"non_field_errors": ["Credentials must be an object"]}
            )

        allowed_fields = set(self.fields.keys())
        provided_fields = set(data.keys())
        extra_fields = provided_fields - allowed_fields

        if extra_fields:
            raise serializers.ValidationError(
                {
                    "non_field_errors": [
                        f"Unknown fields in credentials: {', '.join(sorted(extra_fields))}"
                    ]
                }
            )

        return super().to_internal_value(data)


class BedrockCredentialsSerializer(serializers.Serializer):
    """
    Serializer for AWS Bedrock credentials validation.

    Validates long-term AWS credentials (AKIA) and region format.
    """

    access_key_id = serializers.CharField()
    secret_access_key = serializers.CharField()
    region = serializers.CharField()

    def validate_access_key_id(self, value: str) -> str:
        """Validate AWS access key ID format (AKIA for long-term credentials)."""
        pattern = r"^AKIA[0-9A-Z]{16}$"
        if not re.match(pattern, value or ""):
            raise serializers.ValidationError(
                "Invalid AWS access key ID format. Must be AKIA followed by 16 alphanumeric characters."
            )
        return value

    def validate_secret_access_key(self, value: str) -> str:
        """Validate AWS secret access key format (40 base64 characters)."""
        pattern = r"^[A-Za-z0-9/+=]{40}$"
        if not re.match(pattern, value or ""):
            raise serializers.ValidationError(
                "Invalid AWS secret access key format. Must be 40 base64 characters."
            )
        return value

    def validate_region(self, value: str) -> str:
        """Validate AWS region format."""
        pattern = r"^[a-z]{2}-[a-z]+-\d+$"
        if not re.match(pattern, value or ""):
            raise serializers.ValidationError(
                "Invalid AWS region format. Expected format like 'us-east-1' or 'eu-west-2'."
            )
        return value

    def to_internal_value(self, data):
        """Check for unknown fields before DRF filters them out."""
        if not isinstance(data, dict):
            raise serializers.ValidationError(
                {"non_field_errors": ["Credentials must be an object"]}
            )

        allowed_fields = set(self.fields.keys())
        provided_fields = set(data.keys())
        extra_fields = provided_fields - allowed_fields

        if extra_fields:
            raise serializers.ValidationError(
                {
                    "non_field_errors": [
                        f"Unknown fields in credentials: {', '.join(sorted(extra_fields))}"
                    ]
                }
            )

        return super().to_internal_value(data)


class BedrockCredentialsUpdateSerializer(BedrockCredentialsSerializer):
    """
    Serializer for AWS Bedrock credentials during UPDATE operations.

    Inherits all validation logic from BedrockCredentialsSerializer but makes
    all fields optional to support partial updates.
    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Make all fields optional for updates
        for field in self.fields.values():
            field.required = False


class OpenAICompatibleCredentialsSerializer(serializers.Serializer):
    """
    Minimal serializer for OpenAI-compatible credentials.

    Many OpenAI-compatible providers do not use the same key format as OpenAI.
    We only require a non-empty API key string. Additional fields can be added later
    without breaking existing configurations.
    """

    api_key = serializers.CharField()

    def validate_api_key(self, value: str) -> str:
        if not isinstance(value, str) or not value.strip():
            raise serializers.ValidationError("API key is required.")
        return value.strip()

    def to_internal_value(self, data):
        """Check for unknown fields before DRF filters them out."""
        if not isinstance(data, dict):
            raise serializers.ValidationError(
                {"non_field_errors": ["Credentials must be an object"]}
            )

        allowed_fields = set(self.fields.keys())
        provided_fields = set(data.keys())
        extra_fields = provided_fields - allowed_fields

        if extra_fields:
            raise serializers.ValidationError(
                {
                    "non_field_errors": [
                        f"Unknown fields in credentials: {', '.join(sorted(extra_fields))}"
                    ]
                }
            )

        return super().to_internal_value(data)


@extend_schema_field(
    {
        "oneOf": [
            {
                "type": "object",
                "title": "OpenAI Credentials",
                "properties": {
                    "api_key": {
                        "type": "string",
                        "description": "OpenAI API key. Must start with 'sk-' followed by alphanumeric characters, "
                        "hyphens, or underscores.",
                        "pattern": "^sk-[\\w-]+$",
                    }
                },
                "required": ["api_key"],
            },
            {
                "type": "object",
                "title": "AWS Bedrock Credentials",
                "properties": {
                    "access_key_id": {
                        "type": "string",
                        "description": "AWS access key ID.",
                        "pattern": "^AKIA[0-9A-Z]{16}$",
                    },
                    "secret_access_key": {
                        "type": "string",
                        "description": "AWS secret access key.",
                        "pattern": "^[A-Za-z0-9/+=]{40}$",
                    },
                    "region": {
                        "type": "string",
                        "description": "AWS region identifier where Bedrock is available. Examples: us-east-1, "
                        "us-west-2, eu-west-1, ap-northeast-1.",
                        "pattern": "^[a-z]{2}-[a-z]+-\\d+$",
                    },
                },
                "required": ["access_key_id", "secret_access_key", "region"],
            },
            {
                "type": "object",
                "title": "OpenAI Compatible Credentials",
                "properties": {
                    "api_key": {
                        "type": "string",
                        "description": "API key for OpenAI-compatible provider. The format varies by provider. "
                        "Note: The 'base_url' field (separate from credentials) is required when using this provider type.",
                    }
                },
                "required": ["api_key"],
            },
        ]
    }
)
class LighthouseCredentialsField(serializers.JSONField):
    pass
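The unknown-field check above rejects extra keys instead of silently dropping them, as DRF does by default; a quick sketch of the resulting behavior (the AWS key values are the documentation examples AWS publishes, not real credentials):

    s = OpenAICredentialsSerializer(data={"api_key": "sk-abc123"})
    assert s.is_valid()

    s = OpenAICredentialsSerializer(data={"api_key": "sk-abc123", "org": "acme"})
    assert not s.is_valid()  # unknown field "org" is rejected by to_internal_value

    s = BedrockCredentialsSerializer(
        data={
            "access_key_id": "AKIAIOSFODNN7EXAMPLE",  # matches ^AKIA[0-9A-Z]{16}$
            "secret_access_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",  # 40 base64 chars
            "region": "us-east-1",  # matches ^[a-z]{2}-[a-z]+-\d+$
        }
    )
    assert s.is_valid()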
@@ -239,6 +239,71 @@ from rest_framework_json_api import serializers
            },
            "required": ["github_app_id", "github_app_key"],
        },
        {
            "type": "object",
            "title": "IaC Repository Credentials",
            "properties": {
                "repository_url": {
                    "type": "string",
                    "description": "Repository URL to scan for IaC files.",
                },
                "access_token": {
                    "type": "string",
                    "description": "Optional access token for private repositories.",
                },
            },
            "required": ["repository_url"],
        },
        {
            "type": "object",
            "title": "Oracle Cloud Infrastructure (OCI) API Key Credentials",
            "properties": {
                "user": {
                    "type": "string",
                    "description": "The OCID of the user to authenticate with.",
                },
                "fingerprint": {
                    "type": "string",
                    "description": "The fingerprint of the API signing key.",
                },
                "key_file": {
                    "type": "string",
                    "description": "The path to the private key file for API signing. Either key_file or key_content must be provided.",
                },
                "key_content": {
                    "type": "string",
                    "description": "The content of the private key for API signing (base64 encoded). Either key_file or key_content must be provided.",
                },
                "tenancy": {
                    "type": "string",
                    "description": "The OCID of the tenancy.",
                },
                "region": {
                    "type": "string",
                    "description": "The OCI region identifier (e.g., us-ashburn-1, us-phoenix-1).",
                },
                "pass_phrase": {
                    "type": "string",
                    "description": "The passphrase for the private key, if encrypted.",
                },
            },
            "required": ["user", "fingerprint", "tenancy", "region"],
        },
        {
            "type": "object",
            "title": "MongoDB Atlas API Key",
            "properties": {
                "atlas_public_key": {
                    "type": "string",
                    "description": "MongoDB Atlas API public key.",
                },
                "atlas_private_key": {
                    "type": "string",
                    "description": "MongoDB Atlas API private key.",
                },
            },
            "required": ["atlas_public_key", "atlas_private_key"],
        },
    ]
    }
)
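An example OCI secret payload matching the schema above (all OCIDs and the fingerprint are placeholders):

    oci_secret = {
        "user": "ocid1.user.oc1..exampleuniqueid",
        "fingerprint": "aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99",
        "tenancy": "ocid1.tenancy.oc1..exampleuniqueid",
        "region": "us-ashburn-1",
        # one of the two key inputs must be supplied:
        "key_content": "<base64-encoded PEM private key>",  # or "key_file": "/path/to/key.pem"
    }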
@@ -6,8 +6,10 @@ from django.conf import settings
from django.contrib.auth import authenticate
from django.contrib.auth.models import update_last_login
from django.contrib.auth.password_validation import validate_password
from django.db import IntegrityError
from drf_spectacular.utils import extend_schema_field
from jwt.exceptions import InvalidKeyError
from rest_framework.reverse import reverse
from rest_framework.validators import UniqueTogetherValidator
from rest_framework_json_api import serializers
from rest_framework_json_api.relations import SerializerMethodResourceRelatedField
@@ -25,7 +27,11 @@ from api.models import (
    Invitation,
    InvitationRoleRelationship,
    LighthouseConfiguration,
    LighthouseProviderConfiguration,
    LighthouseProviderModels,
    LighthouseTenantConfiguration,
    Membership,
    MuteRule,
    Processor,
    Provider,
    ProviderGroup,
@@ -54,6 +60,13 @@ from api.v1.serializer_utils.integrations import (
    S3ConfigSerializer,
    SecurityHubConfigSerializer,
)
from api.v1.serializer_utils.lighthouse import (
    BedrockCredentialsSerializer,
    BedrockCredentialsUpdateSerializer,
    LighthouseCredentialsField,
    OpenAICompatibleCredentialsSerializer,
    OpenAICredentialsSerializer,
)
from api.v1.serializer_utils.processors import ProcessorConfigField
from api.v1.serializer_utils.providers import ProviderSecretField
from prowler.lib.mutelist.mutelist import Mutelist
@@ -1349,10 +1362,16 @@ class BaseWriteProviderSecretSerializer(BaseWriteSerializer):
            serializer = GCPProviderSecret(data=secret)
        elif provider_type == Provider.ProviderChoices.GITHUB.value:
            serializer = GithubProviderSecret(data=secret)
        elif provider_type == Provider.ProviderChoices.IAC.value:
            serializer = IacProviderSecret(data=secret)
        elif provider_type == Provider.ProviderChoices.KUBERNETES.value:
            serializer = KubernetesProviderSecret(data=secret)
        elif provider_type == Provider.ProviderChoices.M365.value:
            serializer = M365ProviderSecret(data=secret)
        elif provider_type == Provider.ProviderChoices.ORACLECLOUD.value:
            serializer = OracleCloudProviderSecret(data=secret)
        elif provider_type == Provider.ProviderChoices.MONGODBATLAS.value:
            serializer = MongoDBAtlasProviderSecret(data=secret)
        else:
            raise serializers.ValidationError(
                {"provider": f"Provider type not supported {provider_type}"}
@@ -1449,6 +1468,14 @@ class GCPServiceAccountProviderSecret(serializers.Serializer):
        resource_name = "provider-secrets"


class MongoDBAtlasProviderSecret(serializers.Serializer):
    atlas_public_key = serializers.CharField()
    atlas_private_key = serializers.CharField()

    class Meta:
        resource_name = "provider-secrets"


class KubernetesProviderSecret(serializers.Serializer):
    kubeconfig_content = serializers.CharField()

@@ -1466,6 +1493,27 @@ class GithubProviderSecret(serializers.Serializer):
        resource_name = "provider-secrets"


class IacProviderSecret(serializers.Serializer):
    repository_url = serializers.CharField()
    access_token = serializers.CharField(required=False)

    class Meta:
        resource_name = "provider-secrets"


class OracleCloudProviderSecret(serializers.Serializer):
    user = serializers.CharField()
    fingerprint = serializers.CharField()
    key_file = serializers.CharField(required=False)
    key_content = serializers.CharField(required=False)
    tenancy = serializers.CharField()
    region = serializers.CharField()
    pass_phrase = serializers.CharField(required=False)

    class Meta:
        resource_name = "provider-secrets"


class AWSRoleAssumptionProviderSecret(serializers.Serializer):
    role_arn = serializers.CharField()
    external_id = serializers.CharField()
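One observation on OracleCloudProviderSecret above: the schema text earlier states that either key_file or key_content must be provided, but both fields are merely optional here, with no cross-field check. A validate() enforcing the documented rule would look roughly like this (a sketch, not part of the diff):

    def validate(self, attrs):
        # Enforce the documented "key_file or key_content" requirement.
        if not attrs.get("key_file") and not attrs.get("key_content"):
            raise serializers.ValidationError(
                "Either key_file or key_content must be provided."
            )
        return attrs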
@@ -2099,6 +2147,17 @@ class OverviewProviderSerializer(serializers.Serializer):
        }


class OverviewProviderCountSerializer(serializers.Serializer):
    id = serializers.CharField(source="provider")
    count = serializers.IntegerField()

    class JSONAPIMeta:
        resource_name = "providers-count-overview"

    def get_root_meta(self, _resource, _many):
        return {"version": "v1"}


class OverviewFindingSerializer(serializers.Serializer):
    id = serializers.CharField(default="n/a")
    new = serializers.IntegerField()
@@ -2750,6 +2809,16 @@ class LighthouseConfigCreateSerializer(RLSSerializer, BaseWriteSerializer):
            "updated_at": {"read_only": True},
        }

    def validate_temperature(self, value):
        if not 0 <= value <= 1:
            raise ValidationError("Temperature must be between 0 and 1.")
        return value

    def validate_max_tokens(self, value):
        if not 500 <= value <= 5000:
            raise ValidationError("Max tokens must be between 500 and 5000.")
        return value

    def validate(self, attrs):
        tenant_id = self.context.get("request").tenant_id
        if LighthouseConfiguration.objects.filter(tenant_id=tenant_id).exists():
@@ -2758,6 +2827,11 @@ class LighthouseConfigCreateSerializer(RLSSerializer, BaseWriteSerializer):
                    "tenant_id": "Lighthouse configuration already exists for this tenant."
                }
            )
        api_key = attrs.get("api_key")
        if api_key is not None:
            OpenAICredentialsSerializer(data={"api_key": api_key}).is_valid(
                raise_exception=True
            )
        return super().validate(attrs)

    def create(self, validated_data):
@@ -2802,6 +2876,24 @@ class LighthouseConfigUpdateSerializer(BaseWriteSerializer):
            "max_tokens": {"required": False},
        }

    def validate_temperature(self, value):
        if not 0 <= value <= 1:
            raise ValidationError("Temperature must be between 0 and 1.")
        return value

    def validate_max_tokens(self, value):
        if not 500 <= value <= 5000:
            raise ValidationError("Max tokens must be between 500 and 5000.")
        return value

    def validate(self, attrs):
        api_key = attrs.get("api_key", None)
        if api_key is not None:
            OpenAICredentialsSerializer(data={"api_key": api_key}).is_valid(
                raise_exception=True
            )
        return super().validate(attrs)

    def update(self, instance, validated_data):
        api_key = validated_data.pop("api_key", None)
        instance = super().update(instance, validated_data)
@@ -2931,3 +3023,606 @@ class TenantApiKeyUpdateSerializer(RLSSerializer, BaseWriteSerializer):
        ):
            raise ValidationError("An API key with this name already exists.")
        return value


# Lighthouse: Provider configurations


class LighthouseProviderConfigSerializer(RLSSerializer):
    """
    Read serializer for LighthouseProviderConfiguration.
    """

    # Decrypted credentials are only returned in to_representation when requested
    credentials = serializers.JSONField(required=False, read_only=True)

    class Meta:
        model = LighthouseProviderConfiguration
        fields = [
            "id",
            "inserted_at",
            "updated_at",
            "provider_type",
            "base_url",
            "is_active",
            "credentials",
            "url",
        ]
        extra_kwargs = {
            "id": {"read_only": True},
            "inserted_at": {"read_only": True},
            "updated_at": {"read_only": True},
            "is_active": {"read_only": True},
            "url": {"read_only": True, "view_name": "lighthouse-providers-detail"},
        }

    class JSONAPIMeta:
        resource_name = "lighthouse-providers"

    def to_representation(self, instance):
        data = super().to_representation(instance)
        # Support JSON:API fields filter: fields[lighthouse-providers]=credentials,base_url
        fields_param = self.context.get("request", None) and self.context[
            "request"
        ].query_params.get("fields[lighthouse-providers]", "")

        creds = instance.credentials_decoded

        requested_fields = (
            [f.strip() for f in fields_param.split(",")] if fields_param else []
        )

        if "credentials" in requested_fields:
            # Return full decrypted credentials JSON
            data["credentials"] = creds
        else:
            # Return masked credentials by default
            def mask_value(value):
                if isinstance(value, str):
                    return "*" * len(value)
                if isinstance(value, dict):
                    return {k: mask_value(v) for k, v in value.items()}
                if isinstance(value, list):
                    return [mask_value(v) for v in value]
                return value

            # Always return masked credentials, even if creds is None
            if creds is not None:
                data["credentials"] = mask_value(creds)
            else:
                # If credentials_decoded returns None, return None for credentials field
                data["credentials"] = None

        return data
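The representation above masks secrets by default and only decrypts when the client asks for the credentials field explicitly through the JSON:API sparse-fieldsets parameter; illustrative request/response pairs (values hypothetical):

    # GET /api/v1/lighthouse/providers
    #   "credentials": {"api_key": "**********"}     # masked, same string lengths
    # GET /api/v1/lighthouse/providers?fields[lighthouse-providers]=credentials
    #   "credentials": {"api_key": "sk-abc123..."}   # decrypted on request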
class LighthouseProviderConfigCreateSerializer(RLSSerializer, BaseWriteSerializer):
    """
    Create serializer for LighthouseProviderConfiguration.
    Accepts credentials as JSON; stored encrypted via credentials_decoded.
    """

    credentials = LighthouseCredentialsField(write_only=True, required=True)
    base_url = serializers.URLField(
        required=False,
        allow_null=True,
        help_text="Base URL for the LLM provider API. Required for 'openai_compatible' provider type.",
    )

    class Meta:
        model = LighthouseProviderConfiguration
        fields = [
            "provider_type",
            "base_url",
            "credentials",
            "is_active",
        ]
        extra_kwargs = {
            "is_active": {"required": False},
            "provider_type": {
                "help_text": "LLM provider type. Determines which credential format to use. "
                "See 'credentials' field documentation for provider-specific requirements."
            },
        }

    def create(self, validated_data):
        credentials = validated_data.pop("credentials")

        instance = LighthouseProviderConfiguration(**validated_data)
        instance.tenant_id = self.context.get("tenant_id")
        instance.credentials_decoded = credentials

        try:
            instance.save()
            return instance
        except IntegrityError:
            raise ValidationError(
                {
                    "provider_type": "Configuration for this provider already exists for the tenant."
                }
            )

    def validate(self, attrs):
        provider_type = attrs.get("provider_type")
        credentials = attrs.get("credentials") or {}
        base_url = attrs.get("base_url")

        if provider_type == LighthouseProviderConfiguration.LLMProviderChoices.OPENAI:
            try:
                OpenAICredentialsSerializer(data=credentials).is_valid(
                    raise_exception=True
                )
            except ValidationError as e:
                details = e.detail.copy()
                for key, value in details.items():
                    e.detail[f"credentials/{key}"] = value
                    del e.detail[key]
                raise e
        elif (
            provider_type == LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK
        ):
            try:
                BedrockCredentialsSerializer(data=credentials).is_valid(
                    raise_exception=True
                )
            except ValidationError as e:
                details = e.detail.copy()
                for key, value in details.items():
                    e.detail[f"credentials/{key}"] = value
                    del e.detail[key]
                raise e
        elif (
            provider_type
            == LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE
        ):
            if not base_url:
                raise ValidationError({"base_url": "Base URL is required."})
            try:
                OpenAICompatibleCredentialsSerializer(data=credentials).is_valid(
                    raise_exception=True
                )
            except ValidationError as e:
                details = e.detail.copy()
                for key, value in details.items():
                    e.detail[f"credentials/{key}"] = value
                    del e.detail[key]
                raise e

        return super().validate(attrs)
class LighthouseProviderConfigUpdateSerializer(BaseWriteSerializer):
    """
    Update serializer for LighthouseProviderConfiguration.
    """

    credentials = LighthouseCredentialsField(write_only=True, required=False)
    base_url = serializers.URLField(
        required=False,
        allow_null=True,
        help_text="Base URL for the LLM provider API. Required for 'openai_compatible' provider type.",
    )

    class Meta:
        model = LighthouseProviderConfiguration
        fields = [
            "id",
            "provider_type",
            "base_url",
            "credentials",
            "is_active",
        ]
        extra_kwargs = {
            "id": {"read_only": True},
            "provider_type": {"read_only": True},
            "is_active": {"required": False},
        }

    def update(self, instance, validated_data):
        credentials = validated_data.pop("credentials", None)

        for attr, value in validated_data.items():
            setattr(instance, attr, value)

        if credentials is not None:
            # Merge partial credentials with existing ones
            # New values overwrite existing ones, but unspecified fields are preserved
            existing_credentials = instance.credentials_decoded or {}
            merged_credentials = {**existing_credentials, **credentials}
            instance.credentials_decoded = merged_credentials

        instance.save()
        return instance

    def validate(self, attrs):
        provider_type = getattr(self.instance, "provider_type", None)
        credentials = attrs.get("credentials", None)
        base_url = attrs.get("base_url", None)

        if (
            credentials is not None
            and provider_type
            == LighthouseProviderConfiguration.LLMProviderChoices.OPENAI
        ):
            try:
                OpenAICredentialsSerializer(data=credentials).is_valid(
                    raise_exception=True
                )
            except ValidationError as e:
                details = e.detail.copy()
                for key, value in details.items():
                    e.detail[f"credentials/{key}"] = value
                    del e.detail[key]
                raise e
        elif (
            credentials is not None
            and provider_type
            == LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK
        ):
            try:
                BedrockCredentialsUpdateSerializer(data=credentials).is_valid(
                    raise_exception=True
                )
            except ValidationError as e:
                details = e.detail.copy()
                for key, value in details.items():
                    e.detail[f"credentials/{key}"] = value
                    del e.detail[key]
                raise e
        elif (
            credentials is not None
            and provider_type
            == LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE
        ):
            if base_url is None:
                pass
            elif not base_url:
                raise ValidationError({"base_url": "Base URL cannot be empty."})
            try:
                OpenAICompatibleCredentialsSerializer(data=credentials).is_valid(
                    raise_exception=True
                )
            except ValidationError as e:
                details = e.detail.copy()
                for key, value in details.items():
                    e.detail[f"credentials/{key}"] = value
                    del e.detail[key]
                raise e

        return super().validate(attrs)
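The credential merge in update() lets a PATCH carry only the field being rotated; a tiny illustration of the dict-merge semantics with placeholder values:

    existing = {
        "access_key_id": "AKIAIOSFODNN7EXAMPLE",
        "secret_access_key": "<40-char secret>",
        "region": "us-east-1",
    }
    incoming = {"region": "eu-west-1"}  # PATCH sends just one field
    merged = {**existing, **incoming}
    # access_key_id and secret_access_key are preserved; only region is replaced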
# Lighthouse: Tenant configuration


class LighthouseTenantConfigSerializer(RLSSerializer):
    """
    Read serializer for LighthouseTenantConfiguration.
    """

    # Build singleton URL without pk
    url = serializers.SerializerMethodField()

    def get_url(self, obj):
        request = self.context.get("request")
        return reverse("lighthouse-configurations", request=request)

    class Meta:
        model = LighthouseTenantConfiguration
        fields = [
            "id",
            "inserted_at",
            "updated_at",
            "business_context",
            "default_provider",
            "default_models",
            "url",
        ]
        extra_kwargs = {
            "id": {"read_only": True},
            "inserted_at": {"read_only": True},
            "updated_at": {"read_only": True},
            "url": {"read_only": True},
        }


class LighthouseTenantConfigUpdateSerializer(BaseWriteSerializer):
    class Meta:
        model = LighthouseTenantConfiguration
        fields = [
            "id",
            "business_context",
            "default_provider",
            "default_models",
        ]
        extra_kwargs = {
            "id": {"read_only": True},
        }

    def validate(self, attrs):
        request = self.context.get("request")
        tenant_id = self.context.get("tenant_id") or (
            getattr(request, "tenant_id", None) if request else None
        )

        default_provider = attrs.get(
            "default_provider", getattr(self.instance, "default_provider", "")
        )
        default_models = attrs.get(
            "default_models", getattr(self.instance, "default_models", {})
        )

        if default_provider:
            supported = set(LighthouseProviderConfiguration.LLMProviderChoices.values)
            if default_provider not in supported:
                raise ValidationError(
                    {"default_provider": f"Unsupported provider '{default_provider}'."}
                )
            if not LighthouseProviderConfiguration.objects.filter(
                tenant_id=tenant_id, provider_type=default_provider, is_active=True
            ).exists():
                raise ValidationError(
                    {
                        "default_provider": f"No active configuration found for '{default_provider}'."
                    }
                )

        if default_models is not None and not isinstance(default_models, dict):
            raise ValidationError(
                {"default_models": "Must be an object mapping provider -> model_id."}
            )

        for provider_type, model_id in (default_models or {}).items():
            provider_cfg = LighthouseProviderConfiguration.objects.filter(
                tenant_id=tenant_id, provider_type=provider_type, is_active=True
            ).first()
            if not provider_cfg:
                raise ValidationError(
                    {
                        "default_models": f"No active configuration for provider '{provider_type}'."
                    }
                )
            if not LighthouseProviderModels.objects.filter(
                tenant_id=tenant_id,
                provider_configuration=provider_cfg,
                model_id=model_id,
            ).exists():
                raise ValidationError(
                    {
                        "default_models": f"Invalid model '{model_id}' for provider '{provider_type}'."
                    }
                )

        return super().validate(attrs)
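A minimal example of the tenant-config PATCH this serializer validates (JSON:API body, placeholder values); both checks above must pass: the provider has an active configuration, and the model id exists in that provider's catalog.

    payload = {
        "data": {
            "type": "lighthouse-configurations",
            "attributes": {
                "default_provider": "openai",
                "default_models": {"openai": "gpt-4o"},
            },
        }
    }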
# Lighthouse: Provider models


class LighthouseProviderModelsSerializer(RLSSerializer):
    """
    Read serializer for LighthouseProviderModels.
    """

    provider_configuration = serializers.ResourceRelatedField(read_only=True)

    class Meta:
        model = LighthouseProviderModels
        fields = [
            "id",
            "inserted_at",
            "updated_at",
            "provider_configuration",
            "model_id",
            "model_name",
            "default_parameters",
            "url",
        ]
        extra_kwargs = {
            "id": {"read_only": True},
            "inserted_at": {"read_only": True},
            "updated_at": {"read_only": True},
            "url": {"read_only": True, "view_name": "lighthouse-models-detail"},
        }


class LighthouseProviderModelsCreateSerializer(RLSSerializer, BaseWriteSerializer):
    provider_configuration = serializers.ResourceRelatedField(
        queryset=LighthouseProviderConfiguration.objects.all()
    )

    class Meta:
        model = LighthouseProviderModels
        fields = [
            "provider_configuration",
            "model_id",
            "default_parameters",
        ]
        extra_kwargs = {
            "default_parameters": {"required": False},
        }


class LighthouseProviderModelsUpdateSerializer(BaseWriteSerializer):
    class Meta:
        model = LighthouseProviderModels
        fields = [
            "id",
            "default_parameters",
        ]
        extra_kwargs = {
            "id": {"read_only": True},
        }
# Mute Rules


class MuteRuleSerializer(RLSSerializer):
    """
    Serializer for reading MuteRule instances.
    """

    finding_uids = serializers.ListField(
        child=serializers.CharField(),
        read_only=True,
        help_text="List of finding UIDs that are muted by this rule",
    )

    class Meta:
        model = MuteRule
        fields = [
            "id",
            "inserted_at",
            "updated_at",
            "name",
            "reason",
            "enabled",
            "created_by",
            "finding_uids",
        ]

    included_serializers = {
        "created_by": "api.v1.serializers.UserIncludeSerializer",
    }


class MuteRuleCreateSerializer(RLSSerializer, BaseWriteSerializer):
    """
    Serializer for creating new MuteRule instances.

    Accepts finding_ids in the request, converts them to UIDs, and stores in finding_uids.
    """

    finding_ids = serializers.ListField(
        child=serializers.UUIDField(),
        write_only=True,
        required=True,
        help_text="List of Finding IDs to mute (will be converted to UIDs)",
    )
    finding_uids = serializers.ListField(
        child=serializers.CharField(),
        read_only=True,
        help_text="List of finding UIDs that are muted by this rule",
    )

    class Meta:
        model = MuteRule
        fields = [
            "id",
            "inserted_at",
            "updated_at",
            "name",
            "reason",
            "enabled",
            "created_by",
            "finding_ids",
            "finding_uids",
        ]
        extra_kwargs = {
            "id": {"read_only": True},
            "inserted_at": {"read_only": True},
            "updated_at": {"read_only": True},
            "enabled": {"read_only": True},
            "created_by": {"read_only": True},
        }

    def validate_name(self, value):
        """Validate that the name is unique within the tenant."""
        tenant_id = self.context.get("tenant_id")
        if MuteRule.objects.filter(tenant_id=tenant_id, name=value).exists():
            raise ValidationError("A mute rule with this name already exists.")
        return value

    def validate_finding_ids(self, value):
        """Validate that all finding IDs exist and belong to the tenant."""
        if not value:
            raise ValidationError("At least one finding_id must be provided.")

        tenant_id = self.context.get("tenant_id")

        # Check that all findings exist and belong to this tenant
        findings = Finding.all_objects.filter(tenant_id=tenant_id, id__in=value)
        found_ids = set(findings.values_list("id", flat=True))
        provided_ids = set(value)

        missing_ids = provided_ids - found_ids
        if missing_ids:
            raise ValidationError(
                f"The following finding IDs do not exist or do not belong to your tenant: {missing_ids}"
            )

        return value

    def validate(self, data):
        """Validate the entire mute rule, including overlap detection."""
        data = super().validate(data)

        tenant_id = self.context.get("tenant_id")
        finding_ids = data.get("finding_ids", [])

        if not finding_ids:
            return data

        # Convert finding IDs to UIDs (deduplicate in case multiple findings have same UID)
        findings = Finding.all_objects.filter(id__in=finding_ids, tenant_id=tenant_id)
        finding_uids = list(set(findings.values_list("uid", flat=True)))

        # Check for overlaps with existing enabled rules
        existing_rules = MuteRule.objects.filter(tenant_id=tenant_id, enabled=True)

        for rule in existing_rules:
            overlap = set(finding_uids) & set(rule.finding_uids)
            if overlap:
                raise ConflictException(
                    detail=f"The following finding UIDs are already muted by rule '{rule.name}': {overlap}"
                )

        # Store finding_uids in validated_data for create
        data["finding_uids"] = finding_uids

        return data

    def create(self, validated_data):
        """Create a new mute rule and set created_by."""
        # Remove finding_ids from validated_data (we've already converted to finding_uids)
        validated_data.pop("finding_ids", None)

        # Set created_by to the current user
        request = self.context.get("request")
        if request and hasattr(request, "user"):
            validated_data["created_by"] = request.user

        return super().create(validated_data)


class MuteRuleUpdateSerializer(BaseWriteSerializer):
    """
    Serializer for updating MuteRule instances.
    """

    class Meta:
        model = MuteRule
        fields = [
            "id",
            "name",
            "reason",
            "enabled",
        ]
        extra_kwargs = {
            "id": {"read_only": True},
            "name": {"required": False},
            "reason": {"required": False},
            "enabled": {"required": False},
        }

    def validate_name(self, value):
        """Validate that the name is unique within the tenant, excluding current instance."""
        tenant_id = self.context.get("tenant_id")
        if (
            MuteRule.objects.filter(tenant_id=tenant_id, name=value)
            .exclude(id=self.instance.id)
            .exists()
        ):
            raise ValidationError("A mute rule with this name already exists.")
        return value
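The id-to-UID conversion plus overlap check above is the core of rule creation; a condensed walkthrough with hypothetical values:

    # Several findings may share one UID (e.g., across scans), so UIDs are
    # deduplicated before comparison.
    finding_uids = ["finding-uid-1", "finding-uid-2"]

    # If an enabled rule already covers "finding-uid-2", creation stops with
    # a ConflictException (HTTP 409) naming the conflicting rule.
    overlap = set(finding_uids) & {"finding-uid-2"}  # -> {"finding-uid-2"}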
@@ -17,7 +17,11 @@ from api.v1.views import (
    InvitationAcceptViewSet,
    InvitationViewSet,
    LighthouseConfigViewSet,
    LighthouseProviderConfigViewSet,
    LighthouseProviderModelsViewSet,
    LighthouseTenantConfigViewSet,
    MembershipViewSet,
    MuteRuleViewSet,
    OverviewViewSet,
    ProcessorViewSet,
    ProviderGroupProvidersRelationshipView,
@@ -34,12 +38,12 @@ from api.v1.views import (
    ScheduleViewSet,
    SchemaView,
    TaskViewSet,
    TenantApiKeyViewSet,
    TenantFinishACSView,
    TenantMembersViewSet,
    TenantViewSet,
    UserRoleRelationshipView,
    UserViewSet,
    TenantApiKeyViewSet,
)

router = routers.DefaultRouter(trailing_slash=False)
@@ -67,6 +71,17 @@ router.register(
    basename="lighthouseconfiguration",
)
router.register(r"api-keys", TenantApiKeyViewSet, basename="api-key")
router.register(
    r"lighthouse/providers",
    LighthouseProviderConfigViewSet,
    basename="lighthouse-providers",
)
router.register(
    r"lighthouse/models",
    LighthouseProviderModelsViewSet,
    basename="lighthouse-models",
)
router.register(r"mute-rules", MuteRuleViewSet, basename="mute-rule")

tenants_router = routers.NestedSimpleRouter(router, r"tenants", lookup="tenant")
tenants_router.register(
@@ -137,6 +152,13 @@ urlpatterns = [
        ),
        name="provider_group-providers-relationship",
    ),
    path(
        "lighthouse/configuration",
        LighthouseTenantConfigViewSet.as_view(
            {"get": "list", "patch": "partial_update"}
        ),
        name="lighthouse-configurations",
    ),
    # API endpoint to start SAML SSO flow
    path(
        "auth/saml/initiate/", SAMLInitiateAPIView.as_view(), name="api_saml_initiate"
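In sum, the routing above exposes the following Lighthouse and mute-rule endpoints (paths relative to the API root; the per-object connection and refresh-models actions come from the viewset later in this diff):

    # GET/POST            lighthouse/providers
    # GET/PATCH/DELETE    lighthouse/providers/<id>
    # POST                lighthouse/providers/<id>/connection
    # POST                lighthouse/providers/<id>/refresh-models
    # GET/POST/...        lighthouse/models
    # GET, PATCH          lighthouse/configuration   (singleton, no pk)
    # GET/POST/...        mute-rules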
@@ -1,5 +1,6 @@
import fnmatch
import glob
import json
import logging
import os
from datetime import datetime, timedelta, timezone
@@ -60,11 +61,14 @@ from tasks.tasks import (
    backfill_scan_resource_summaries_task,
    check_integration_connection_task,
    check_lighthouse_connection_task,
    check_lighthouse_provider_connection_task,
    check_provider_connection_task,
    delete_provider_task,
    delete_tenant_task,
    jira_integration_task,
    mute_historical_findings_task,
    perform_scan_task,
    refresh_lighthouse_provider_models_task,
)

from api.base_views import BaseRLSViewSet, BaseTenantViewset, BaseUserViewset
@@ -84,7 +88,10 @@ from api.filters import (
    InvitationFilter,
    LatestFindingFilter,
    LatestResourceFilter,
    LighthouseProviderConfigFilter,
    LighthouseProviderModelsFilter,
    MembershipFilter,
    MuteRuleFilter,
    ProcessorFilter,
    ProviderFilter,
    ProviderGroupFilter,
@@ -106,7 +113,11 @@ from api.models import (
    Integration,
    Invitation,
    LighthouseConfiguration,
    LighthouseProviderConfiguration,
    LighthouseProviderModels,
    LighthouseTenantConfiguration,
    Membership,
    MuteRule,
    Processor,
    Provider,
    ProviderGroup,
@@ -139,7 +150,7 @@ from api.utils import (
    validate_invitation,
)
from api.uuid_utils import datetime_to_uuid7, uuid7_start
from api.v1.mixins import PaginateByPkMixin, TaskManagementMixin
from api.v1.mixins import DisablePaginationMixin, PaginateByPkMixin, TaskManagementMixin
from api.v1.serializers import (
    ComplianceOverviewAttributesSerializer,
    ComplianceOverviewDetailSerializer,
@@ -160,8 +171,18 @@ from api.v1.serializers import (
    LighthouseConfigCreateSerializer,
    LighthouseConfigSerializer,
    LighthouseConfigUpdateSerializer,
    LighthouseProviderConfigCreateSerializer,
    LighthouseProviderConfigSerializer,
    LighthouseProviderConfigUpdateSerializer,
    LighthouseProviderModelsSerializer,
    LighthouseTenantConfigSerializer,
    LighthouseTenantConfigUpdateSerializer,
    MembershipSerializer,
    MuteRuleCreateSerializer,
    MuteRuleSerializer,
    MuteRuleUpdateSerializer,
    OverviewFindingSerializer,
    OverviewProviderCountSerializer,
    OverviewProviderSerializer,
    OverviewServiceSerializer,
    OverviewSeveritySerializer,
@@ -307,7 +328,7 @@ class SchemaView(SpectacularAPIView):

    def get(self, request, *args, **kwargs):
        spectacular_settings.TITLE = "Prowler API"
        spectacular_settings.VERSION = "1.14.1"
        spectacular_settings.VERSION = "1.15.0"
        spectacular_settings.DESCRIPTION = (
            "Prowler API specification.\n\nThis file is auto-generated."
        )
@@ -399,6 +420,12 @@ class SchemaView(SpectacularAPIView):
                "description": "Endpoints for API keys management. These can be used as an alternative to JWT "
                "authorization.",
            },
            {
                "name": "Mute Rules",
                "description": "Endpoints for simple mute rules management. These can be used as an alternative to the"
                " Mutelist Processor if you need to mute specific findings across your tenant with a "
                "specific reason.",
            },
        ]
        return super().get(request, *args, **kwargs)
@@ -1417,7 +1444,7 @@ class ProviderGroupProvidersRelationshipView(RelationshipView, BaseRLSViewSet):
    )
@method_decorator(CACHE_DECORATOR, name="list")
@method_decorator(CACHE_DECORATOR, name="retrieve")
class ProviderViewSet(BaseRLSViewSet):
class ProviderViewSet(DisablePaginationMixin, BaseRLSViewSet):
    queryset = Provider.objects.all()
    serializer_class = ProviderSerializer
    http_method_names = ["get", "post", "patch", "delete"]
@@ -3677,6 +3704,13 @@ class ComplianceOverviewViewSet(BaseRLSViewSet, TaskManagementMixin):
            "each provider are considered in the aggregation to ensure accurate and up-to-date insights."
        ),
    ),
    providers_count=extend_schema(
        summary="Get provider counts grouped by type",
        description=(
            "Retrieve the number of providers grouped by provider type. "
            "This endpoint counts every provider in the tenant, including those without completed scans."
        ),
    ),
    findings=extend_schema(
        summary="Get aggregated findings data",
        description=(
@@ -3728,6 +3762,8 @@ class OverviewViewSet(BaseRLSViewSet):
    def get_serializer_class(self):
        if self.action == "providers":
            return OverviewProviderSerializer
        elif self.action == "providers_count":
            return OverviewProviderCountSerializer
        elif self.action == "findings":
            return OverviewFindingSerializer
        elif self.action == "findings_severity":
@@ -3815,6 +3851,36 @@ class OverviewViewSet(BaseRLSViewSet):
            status=status.HTTP_200_OK,
        )

    @action(
        detail=False,
        methods=["get"],
        url_path="providers/count",
        url_name="providers-count",
    )
    def providers_count(self, request):
        tenant_id = self.request.tenant_id
        providers_qs = Provider.objects.filter(tenant_id=tenant_id)

        if hasattr(self, "allowed_providers"):
            allowed_ids = list(self.allowed_providers.values_list("id", flat=True))
            if not allowed_ids:
                overview = []
                return Response(
                    self.get_serializer(overview, many=True).data,
                    status=status.HTTP_200_OK,
                )
            providers_qs = providers_qs.filter(id__in=allowed_ids)

        overview = (
            providers_qs.values("provider")
            .annotate(count=Count("id"))
            .order_by("provider")
        )
        return Response(
            self.get_serializer(overview, many=True).data,
            status=status.HTTP_200_OK,
        )

    @action(detail=False, methods=["get"], url_name="findings")
    def findings(self, request):
        tenant_id = self.request.tenant_id
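The values/annotate pipeline above yields rows like {"provider": "aws", "count": 3}, which OverviewProviderCountSerializer renders with the provider type as the resource id; an illustrative response for GET /api/v1/overviews/providers/count (shape inferred from the serializer, values hypothetical):

    # {
    #   "data": [
    #     {"type": "providers-count-overview", "id": "aws", "attributes": {"count": 3}},
    #     {"type": "providers-count-overview", "id": "gcp", "attributes": {"count": 1}}
    #   ],
    #   "meta": {"version": "v1"}
    # }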
@@ -4177,21 +4243,25 @@ class IntegrationJiraViewSet(BaseRLSViewSet):
        tags=["Lighthouse AI"],
        summary="List all Lighthouse AI configurations",
        description="Retrieve a list of all Lighthouse AI configurations.",
        deprecated=True,
    ),
    create=extend_schema(
        tags=["Lighthouse AI"],
        summary="Create a new Lighthouse AI configuration",
        description="Create a new Lighthouse AI configuration with the specified details.",
        deprecated=True,
    ),
    partial_update=extend_schema(
        tags=["Lighthouse AI"],
        summary="Partially update a Lighthouse AI configuration",
        description="Update certain fields of an existing Lighthouse AI configuration.",
        deprecated=True,
    ),
    destroy=extend_schema(
        tags=["Lighthouse AI"],
        summary="Delete a Lighthouse AI configuration",
        description="Remove a Lighthouse AI configuration by its ID.",
        deprecated=True,
    ),
    connection=extend_schema(
        tags=["Lighthouse AI"],
@@ -4199,6 +4269,7 @@ class IntegrationJiraViewSet(BaseRLSViewSet):
        description="Verify the connection to the OpenAI API for a specific Lighthouse AI configuration.",
        request=None,
        responses={202: OpenApiResponse(response=TaskSerializer)},
        deprecated=True,
    ),
)
class LighthouseConfigViewSet(BaseRLSViewSet):
@@ -4249,6 +4320,255 @@ class LighthouseConfigViewSet(BaseRLSViewSet):
|
||||
)
|
||||
|
||||
|
||||
@extend_schema_view(
    list=extend_schema(
        tags=["Lighthouse AI"],
        summary="List all LLM provider configurations",
        description="Retrieve all LLM provider configurations for the current tenant.",
    ),
    retrieve=extend_schema(
        tags=["Lighthouse AI"],
        summary="Retrieve LLM provider configuration",
        description="Get details for a specific provider configuration in the current tenant.",
    ),
    create=extend_schema(
        tags=["Lighthouse AI"],
        summary="Create LLM provider configuration",
        description="Create a per-tenant configuration for an LLM provider. Only one configuration per provider type "
        "is allowed per tenant.",
    ),
    partial_update=extend_schema(
        tags=["Lighthouse AI"],
        summary="Update LLM provider configuration",
        description="Partially update a provider configuration (e.g., base_url, is_active).",
    ),
    destroy=extend_schema(
        tags=["Lighthouse AI"],
        summary="Delete LLM provider configuration",
        description="Delete a provider configuration. Any tenant defaults that reference this provider are cleared "
        "during deletion.",
    ),
)
class LighthouseProviderConfigViewSet(BaseRLSViewSet):
    queryset = LighthouseProviderConfiguration.objects.all()
    serializer_class = LighthouseProviderConfigSerializer
    http_method_names = ["get", "post", "patch", "delete"]
    filterset_class = LighthouseProviderConfigFilter

    def get_queryset(self):
        if getattr(self, "swagger_fake_view", False):
            return LighthouseProviderConfiguration.objects.none()
        return LighthouseProviderConfiguration.objects.filter(
            tenant_id=self.request.tenant_id
        )

    def get_serializer_class(self):
        if self.action == "create":
            return LighthouseProviderConfigCreateSerializer
        elif self.action == "partial_update":
            return LighthouseProviderConfigUpdateSerializer
        elif self.action in ["connection", "refresh_models"]:
            return TaskSerializer
        return super().get_serializer_class()

    def create(self, request, *args, **kwargs):
        serializer = self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        instance = serializer.save()

        read_serializer = LighthouseProviderConfigSerializer(
            instance, context=self.get_serializer_context()
        )
        headers = self.get_success_headers(read_serializer.data)
        return Response(
            data=read_serializer.data,
            status=status.HTTP_201_CREATED,
            headers=headers,
        )

    def partial_update(self, request, *args, **kwargs):
        instance = self.get_object()
        serializer = self.get_serializer(
            instance,
            data=request.data,
            partial=True,
            context=self.get_serializer_context(),
        )
        serializer.is_valid(raise_exception=True)
        serializer.save()
        read_serializer = LighthouseProviderConfigSerializer(
            instance, context=self.get_serializer_context()
        )
        return Response(data=read_serializer.data, status=status.HTTP_200_OK)

    @extend_schema(
        tags=["Lighthouse AI"],
        summary="Check LLM provider connection",
        description="Validate provider credentials asynchronously and toggle is_active.",
        request=None,
        responses={202: OpenApiResponse(response=TaskSerializer)},
    )
    @action(detail=True, methods=["post"], url_name="connection")
    def connection(self, request, pk=None):
        instance = self.get_object()

        with transaction.atomic():
            task = check_lighthouse_provider_connection_task.delay(
                provider_config_id=str(instance.id), tenant_id=self.request.tenant_id
            )

        prowler_task = Task.objects.get(id=task.id)
        serializer = TaskSerializer(prowler_task)
        return Response(
            data=serializer.data,
            status=status.HTTP_202_ACCEPTED,
            headers={
                "Content-Location": reverse(
                    "task-detail", kwargs={"pk": prowler_task.id}
                )
            },
        )

    @extend_schema(
        tags=["Lighthouse AI"],
        summary="Refresh LLM models catalog",
        description="Fetch available models for this provider configuration and upsert them into the catalog. Supports OpenAI, OpenAI-compatible, and AWS Bedrock providers.",
        request=None,
        responses={202: OpenApiResponse(response=TaskSerializer)},
    )
    @action(
        detail=True,
        methods=["post"],
        url_path="refresh-models",
        url_name="refresh-models",
    )
    def refresh_models(self, request, pk=None):
        instance = self.get_object()

        with transaction.atomic():
            task = refresh_lighthouse_provider_models_task.delay(
                provider_config_id=str(instance.id), tenant_id=self.request.tenant_id
            )

        prowler_task = Task.objects.get(id=task.id)
        serializer = TaskSerializer(prowler_task)
        return Response(
            data=serializer.data,
            status=status.HTTP_202_ACCEPTED,
            headers={
                "Content-Location": reverse(
                    "task-detail", kwargs={"pk": prowler_task.id}
                )
            },
        )
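
A minimal client-side sketch of how the two asynchronous actions above are consumed; the URL paths, port, and token are illustrative, not taken from the router:

import requests  # assumption: any HTTP client works; requests is used for illustration

API = "http://localhost:8080/api/v1"  # hypothetical API root
HEADERS = {"Authorization": "Bearer <access_token>"}  # placeholder credentials

# POST to the connection action returns 202 Accepted with a Task in the body
resp = requests.post(
    f"{API}/lighthouse/provider-configurations/<id>/connection", headers=HEADERS
)
assert resp.status_code == 202
# Content-Location points at the task detail endpoint; poll it for the result
task_url = resp.headers["Content-Location"]
task = requests.get(task_url, headers=HEADERS).json()
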
@extend_schema_view(
    list=extend_schema(
        tags=["Lighthouse AI"],
        summary="Get Lighthouse AI Tenant config",
        description="Retrieve current tenant-level Lighthouse AI settings. Returns a single configuration object.",
    ),
    partial_update=extend_schema(
        tags=["Lighthouse AI"],
        summary="Update Lighthouse AI Tenant config",
        description="Update tenant-level settings. Validates that the default provider is configured and active and that default model IDs exist for the chosen providers. Auto-creates the configuration if it doesn't exist.",
    ),
)
class LighthouseTenantConfigViewSet(BaseRLSViewSet):
    """
    Singleton endpoint for tenant-level Lighthouse AI configuration.

    This viewset implements a true singleton pattern:
    - GET returns the single configuration object (or 404 if not found)
    - PATCH updates/creates the configuration (upsert semantics)
    - No ID is required in the URL
    """

    queryset = LighthouseTenantConfiguration.objects.all()
    serializer_class = LighthouseTenantConfigSerializer
    http_method_names = ["get", "patch"]

    def get_queryset(self):
        if getattr(self, "swagger_fake_view", False):
            return LighthouseTenantConfiguration.objects.none()
        return LighthouseTenantConfiguration.objects.filter(
            tenant_id=self.request.tenant_id
        )

    def get_serializer_class(self):
        if self.action == "partial_update":
            return LighthouseTenantConfigUpdateSerializer
        return super().get_serializer_class()

    def get_object(self):
        """Retrieve the singleton instance for the current tenant."""
        obj = LighthouseTenantConfiguration.objects.filter(
            tenant_id=self.request.tenant_id
        ).first()
        if obj is None:
            raise NotFound("Tenant Lighthouse configuration not found")
        self.check_object_permissions(self.request, obj)
        return obj

    def list(self, request, *args, **kwargs):
        """GET endpoint for the singleton - returns a single object, not an array."""
        instance = self.get_object()
        serializer = self.get_serializer(instance)
        return Response(serializer.data)

    def partial_update(self, request, *args, **kwargs):
        """PATCH endpoint for the singleton - no pk required. Auto-creates the configuration if it does not exist."""
        # Auto-create the tenant config if it doesn't exist (upsert semantics)
        instance, created = LighthouseTenantConfiguration.objects.get_or_create(
            tenant_id=self.request.tenant_id,
            defaults={},
        )

        # Extract attributes from the JSON:API payload
        try:
            payload = json.loads(request.body)
            attributes = payload.get("data", {}).get("attributes", {})
        except (json.JSONDecodeError, AttributeError):
            raise ValidationError("Invalid JSON:API payload")

        serializer = self.get_serializer(instance, data=attributes, partial=True)
        serializer.is_valid(raise_exception=True)
        serializer.save()
        read_serializer = LighthouseTenantConfigSerializer(
            instance, context=self.get_serializer_context()
        )
        return Response(read_serializer.data, status=status.HTTP_200_OK)
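
The PATCH handler above parses a raw JSON:API envelope itself, so the request body must nest attributes under data. A hedged sketch of a valid payload follows; the attribute names are illustrative, and the real fields come from LighthouseTenantConfigUpdateSerializer:

payload = {
    "data": {
        "type": "lighthouse-tenant-configurations",  # illustrative JSON:API type
        "attributes": {
            "default_provider": "openai",  # assumed attribute name
        },
    }
}
# partial_update() extracts payload["data"]["attributes"] and validates them;
# anything outside that path is ignored, and malformed JSON yields a 400.
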
@extend_schema_view(
    list=extend_schema(
        tags=["Lighthouse AI"],
        summary="List all LLM models",
        description="List available LLM models per configured provider for the current tenant.",
    ),
    retrieve=extend_schema(
        tags=["Lighthouse AI"],
        summary="Retrieve LLM model details",
        description="Get details for a specific LLM model.",
    ),
)
class LighthouseProviderModelsViewSet(BaseRLSViewSet):
    queryset = LighthouseProviderModels.objects.all()
    serializer_class = LighthouseProviderModelsSerializer
    filterset_class = LighthouseProviderModelsFilter
    # Expose as read-only catalog collection
    http_method_names = ["get"]

    def get_queryset(self):
        if getattr(self, "swagger_fake_view", False):
            return LighthouseProviderModels.objects.none()
        return LighthouseProviderModels.objects.filter(tenant_id=self.request.tenant_id)

    def get_serializer_class(self):
        return super().get_serializer_class()


@extend_schema_view(
    list=extend_schema(
        tags=["Processor"],
@@ -4379,3 +4699,95 @@ class TenantApiKeyViewSet(BaseRLSViewSet):

        serializer = self.get_serializer(instance)
        return Response(data=serializer.data, status=status.HTTP_200_OK)

# MuteRules
@extend_schema_view(
    list=extend_schema(
        tags=["Mute Rules"],
        summary="List all mute rules",
        description="Retrieve a list of all mute rules with filtering options.",
    ),
    retrieve=extend_schema(
        tags=["Mute Rules"],
        summary="Retrieve a mute rule",
        description="Fetch detailed information about a specific mute rule by ID.",
    ),
    create=extend_schema(
        tags=["Mute Rules"],
        summary="Create a new mute rule",
        description="Create a new mute rule by providing finding IDs, name, and reason. "
        "The rule will immediately mute the selected findings and launch a background task "
        "to mute all historical findings with matching UIDs.",
        request=MuteRuleCreateSerializer,
    ),
    partial_update=extend_schema(
        tags=["Mute Rules"],
        summary="Partially update a mute rule",
        description="Update certain fields of an existing mute rule (e.g., name, reason, enabled).",
        request=MuteRuleUpdateSerializer,
        responses={200: MuteRuleSerializer},
    ),
    destroy=extend_schema(
        tags=["Mute Rules"],
        summary="Delete a mute rule",
        description="Remove a mute rule from the system. Note: Previously muted findings remain muted.",
    ),
)
class MuteRuleViewSet(BaseRLSViewSet):
    queryset = MuteRule.objects.all()
    serializer_class = MuteRuleSerializer
    filterset_class = MuteRuleFilter
    http_method_names = ["get", "post", "patch", "delete"]
    search_fields = ["name", "reason"]
    ordering = ["-inserted_at"]
    ordering_fields = [
        "name",
        "enabled",
        "inserted_at",
        "updated_at",
    ]
    required_permissions = [Permissions.MANAGE_SCANS]

    def get_queryset(self):
        queryset = MuteRule.objects.filter(tenant_id=self.request.tenant_id)
        return queryset.select_related("created_by")

    def get_serializer_class(self):
        if self.action == "create":
            return MuteRuleCreateSerializer
        elif self.action == "partial_update":
            return MuteRuleUpdateSerializer
        return super().get_serializer_class()

    def create(self, request, *args, **kwargs):
        serializer = self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)

        # Create the mute rule
        mute_rule = serializer.save()

        tenant_id = str(request.tenant_id)
        finding_ids = request.data.get("finding_ids", [])

        # Immediately mute the selected findings
        Finding.all_objects.filter(
            id__in=finding_ids, tenant_id=tenant_id, muted=False
        ).update(
            muted=True,
            muted_at=mute_rule.inserted_at,
            muted_reason=mute_rule.reason,
        )

        # Launch background task for historical muting
        with transaction.atomic():
            mute_historical_findings_task.apply_async(
                kwargs={"tenant_id": tenant_id, "mute_rule_id": str(mute_rule.id)}
            )

        # Return the created mute rule
        serializer = self.get_serializer(mute_rule)
        return Response(
            data=serializer.data,
            status=status.HTTP_201_CREATED,
        )
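
A sketch of the create flow from the client's perspective, assuming a JSON body shaped like the serializer fields referenced in the code (finding_ids, name, and reason appear in the view; the endpoint path is illustrative):

body = {
    "name": "Quarterly exception",
    "reason": "Accepted risk until next review",
    "finding_ids": ["<finding-uuid-1>", "<finding-uuid-2>"],
}
# POST /api/v1/mute-rules (path illustrative) immediately mutes the selected
# findings, then mute_historical_findings_task mutes matching historical UIDs
# asynchronously, so older findings may appear unmuted for a short window.
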
@@ -1,7 +1,5 @@
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path("admin/", admin.site.urls),
    path("api/v1/", include("api.v1.urls")),
]

@@ -23,6 +23,7 @@ from api.models import (
    Invitation,
    LighthouseConfiguration,
    Membership,
    MuteRule,
    Processor,
    Provider,
    ProviderGroup,
@@ -499,8 +500,29 @@ def providers_fixture(tenants_fixture):
        alias="m365_testing",
        tenant_id=tenant.id,
    )
    provider7 = Provider.objects.create(
        provider="oraclecloud",
        uid="ocid1.tenancy.oc1..aaaaaaaa3dwoazoox4q7wrvriywpokp5grlhgnkwtyt6dmwyou7no6mdmzda",
        alias="oci_testing",
        tenant_id=tenant.id,
    )
    provider8 = Provider.objects.create(
        provider="mongodbatlas",
        uid="64b1d3c0e4b03b1234567890",
        alias="mongodbatlas_testing",
        tenant_id=tenant.id,
    )

-    return provider1, provider2, provider3, provider4, provider5, provider6
+    return (
+        provider1,
+        provider2,
+        provider3,
+        provider4,
+        provider5,
+        provider6,
+        provider7,
+        provider8,
+    )


@pytest.fixture
@@ -1419,6 +1441,34 @@ def api_keys_fixture(tenants_fixture, create_test_user):
    return [api_key1, api_key2, api_key3]


@pytest.fixture
def mute_rules_fixture(tenants_fixture, create_test_user, findings_fixture):
    """Create test mute rules for testing."""
    tenant = tenants_fixture[0]
    user = create_test_user

    # Create two mute rules: one enabled, one disabled
    mute_rule1 = MuteRule.objects.create(
        tenant_id=tenant.id,
        name="Test Rule 1",
        reason="Security exception for testing",
        enabled=True,
        created_by=user,
        finding_uids=[findings_fixture[0].uid],
    )

    mute_rule2 = MuteRule.objects.create(
        tenant_id=tenant.id,
        name="Test Rule 2",
        reason="Compliance exception approved",
        enabled=False,
        created_by=user,
        finding_uids=[findings_fixture[1].uid],
    )

    return mute_rule1, mute_rule2


def get_authorization_header(access_token: str) -> dict:
    return {"Authorization": f"Bearer {access_token}"}

@@ -21,6 +21,8 @@ from prowler.lib.outputs.compliance.aws_well_architected.aws_well_architected im
    AWSWellArchitected,
)
from prowler.lib.outputs.compliance.c5.c5_aws import AWSC5
from prowler.lib.outputs.compliance.c5.c5_azure import AzureC5
from prowler.lib.outputs.compliance.c5.c5_gcp import GCPC5
from prowler.lib.outputs.compliance.ccc.ccc_aws import CCC_AWS
from prowler.lib.outputs.compliance.ccc.ccc_azure import CCC_Azure
from prowler.lib.outputs.compliance.ccc.ccc_gcp import CCC_GCP
@@ -30,6 +32,7 @@ from prowler.lib.outputs.compliance.cis.cis_gcp import GCPCIS
from prowler.lib.outputs.compliance.cis.cis_github import GithubCIS
from prowler.lib.outputs.compliance.cis.cis_kubernetes import KubernetesCIS
from prowler.lib.outputs.compliance.cis.cis_m365 import M365CIS
from prowler.lib.outputs.compliance.cis.cis_oraclecloud import OracleCloudCIS
from prowler.lib.outputs.compliance.ens.ens_aws import AWSENS
from prowler.lib.outputs.compliance.ens.ens_azure import AzureENS
from prowler.lib.outputs.compliance.ens.ens_gcp import GCPENS
@@ -87,6 +90,7 @@ COMPLIANCE_CLASS_MAP = {
        (lambda name: name.startswith("iso27001_"), AzureISO27001),
        (lambda name: name == "ccc_azure", CCC_Azure),
        (lambda name: name == "prowler_threatscore_azure", ProwlerThreatScoreAzure),
        (lambda name: name == "c5_azure", AzureC5),
    ],
    "gcp": [
        (lambda name: name.startswith("cis_"), GCPCIS),
@@ -95,6 +99,7 @@ COMPLIANCE_CLASS_MAP = {
        (lambda name: name.startswith("iso27001_"), GCPISO27001),
        (lambda name: name == "prowler_threatscore_gcp", ProwlerThreatScoreGCP),
        (lambda name: name == "ccc_gcp", CCC_GCP),
        (lambda name: name == "c5_gcp", GCPC5),
    ],
    "kubernetes": [
        (lambda name: name.startswith("cis_"), KubernetesCIS),
@@ -108,6 +113,13 @@ COMPLIANCE_CLASS_MAP = {
    "github": [
        (lambda name: name.startswith("cis_"), GithubCIS),
    ],
    "iac": [
        # IaC provider doesn't have specific compliance frameworks yet
        # Trivy handles its own compliance checks
    ],
    "oraclecloud": [
        (lambda name: name.startswith("cis_"), OracleCloudCIS),
    ],
}
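
A minimal lookup sketch, assuming only the (predicate, class) structure visible above: for a given provider, the first predicate that matches the framework name wins.

def resolve_compliance_class(provider: str, framework_name: str):
    # Iterate this provider's (predicate, class) pairs in declaration order
    for predicate, compliance_class in COMPLIANCE_CLASS_MAP.get(provider, []):
        if predicate(framework_name):
            return compliance_class
    return None

# e.g., resolve_compliance_class("oraclecloud", "cis_2.0_oraclecloud") -> OracleCloudCIS
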
@@ -0,0 +1,452 @@
from typing import Dict

import boto3
import openai
from botocore.exceptions import BotoCoreError, ClientError
from celery.utils.log import get_task_logger

from api.models import LighthouseProviderConfiguration, LighthouseProviderModels

logger = get_task_logger(__name__)


def _extract_openai_api_key(
    provider_cfg: LighthouseProviderConfiguration,
) -> str | None:
    """
    Safely extract the OpenAI API key from a provider configuration.

    Args:
        provider_cfg (LighthouseProviderConfiguration): The provider configuration instance
            containing the credentials.

    Returns:
        str | None: The API key string if present and valid, otherwise None.
    """
    creds = provider_cfg.credentials_decoded
    if not isinstance(creds, dict):
        return None
    api_key = creds.get("api_key")
    if not isinstance(api_key, str) or not api_key:
        return None
    return api_key


def _extract_openai_compatible_params(
    provider_cfg: LighthouseProviderConfiguration,
) -> Dict[str, str] | None:
    """
    Extract base_url and api_key for OpenAI-compatible providers.
    """
    creds = provider_cfg.credentials_decoded
    base_url = provider_cfg.base_url
    if not isinstance(creds, dict):
        return None
    api_key = creds.get("api_key")
    if not isinstance(api_key, str) or not api_key:
        return None
    if not isinstance(base_url, str) or not base_url:
        return None
    return {"base_url": base_url, "api_key": api_key}


def _extract_bedrock_credentials(
    provider_cfg: LighthouseProviderConfiguration,
) -> Dict[str, str] | None:
    """
    Safely extract AWS Bedrock credentials from a provider configuration.

    Args:
        provider_cfg (LighthouseProviderConfiguration): The provider configuration instance
            containing the credentials.

    Returns:
        Dict[str, str] | None: Dictionary with 'access_key_id', 'secret_access_key', and
            'region' if present and valid, otherwise None.
    """
    creds = provider_cfg.credentials_decoded
    if not isinstance(creds, dict):
        return None

    access_key_id = creds.get("access_key_id")
    secret_access_key = creds.get("secret_access_key")
    region = creds.get("region")

    # Validate all required fields are present and are strings
    if (
        not isinstance(access_key_id, str)
        or not access_key_id
        or not isinstance(secret_access_key, str)
        or not secret_access_key
        or not isinstance(region, str)
        or not region
    ):
        return None

    return {
        "access_key_id": access_key_id,
        "secret_access_key": secret_access_key,
        "region": region,
    }
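
For reference, the credential payloads the three extractors above accept, reconstructed from the keys they read (values are placeholders):

# OpenAI:            {"api_key": "sk-..."}
# OpenAI-compatible: {"api_key": "..."}  plus base_url stored on the configuration row
# AWS Bedrock:       {"access_key_id": "AKIA...",
#                     "secret_access_key": "...",
#                     "region": "us-east-1"}
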
def check_lighthouse_provider_connection(provider_config_id: str) -> Dict:
    """
    Validate a Lighthouse provider configuration by calling the provider API and
    toggle its active state accordingly.

    Args:
        provider_config_id: The primary key of the `LighthouseProviderConfiguration`
            to validate.

    Returns:
        dict: A result dictionary with the following keys:
            - "connected" (bool): Whether the provider credentials are valid.
            - "error" (str | None): The error message when not connected, otherwise None.

    Side Effects:
        - Updates and persists `is_active` on the `LighthouseProviderConfiguration`.

    Raises:
        LighthouseProviderConfiguration.DoesNotExist: If no configuration exists with the given ID.
    """
    provider_cfg = LighthouseProviderConfiguration.objects.get(pk=provider_config_id)

    try:
        if (
            provider_cfg.provider_type
            == LighthouseProviderConfiguration.LLMProviderChoices.OPENAI
        ):
            api_key = _extract_openai_api_key(provider_cfg)
            if not api_key:
                provider_cfg.is_active = False
                provider_cfg.save()
                return {"connected": False, "error": "API key is invalid or missing"}

            # Test the connection by listing models
            client = openai.OpenAI(api_key=api_key)
            _ = client.models.list()

        elif (
            provider_cfg.provider_type
            == LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK
        ):
            bedrock_creds = _extract_bedrock_credentials(provider_cfg)
            if not bedrock_creds:
                provider_cfg.is_active = False
                provider_cfg.save()
                return {
                    "connected": False,
                    "error": "AWS credentials are invalid or missing",
                }

            # Test the connection by listing foundation models
            bedrock_client = boto3.client(
                "bedrock",
                aws_access_key_id=bedrock_creds["access_key_id"],
                aws_secret_access_key=bedrock_creds["secret_access_key"],
                region_name=bedrock_creds["region"],
            )
            _ = bedrock_client.list_foundation_models()

        elif (
            provider_cfg.provider_type
            == LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE
        ):
            params = _extract_openai_compatible_params(provider_cfg)
            if not params:
                provider_cfg.is_active = False
                provider_cfg.save()
                return {
                    "connected": False,
                    "error": "Base URL or API key is invalid or missing",
                }

            # Test the connection using the OpenAI SDK with a custom base_url
            # Note: base_url should include the version (e.g., https://openrouter.ai/api/v1)
            client = openai.OpenAI(
                api_key=params["api_key"],
                base_url=params["base_url"],
            )
            _ = client.models.list()

        else:
            return {"connected": False, "error": "Unsupported provider type"}

        # Connection successful
        provider_cfg.is_active = True
        provider_cfg.save()
        return {"connected": True, "error": None}

    except Exception as e:
        logger.warning(
            "%s connection check failed: %s", provider_cfg.provider_type, str(e)
        )
        provider_cfg.is_active = False
        provider_cfg.save()
        return {"connected": False, "error": str(e)}
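
A direct-call sketch (in production this runs through the Celery task wrapper, which also sets the tenant context):

# result = check_lighthouse_provider_connection(str(provider_cfg.id))
# -> {"connected": True, "error": None} on success, with is_active persisted;
# -> {"connected": False, "error": "..."} on any failure, with is_active cleared.
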
def _fetch_openai_models(api_key: str) -> Dict[str, str]:
    """
    Fetch available models from the OpenAI API.

    Args:
        api_key: OpenAI API key for authentication.

    Returns:
        Dict mapping model_id to model_name. For OpenAI, both are the same,
        as the API doesn't provide separate display names.

    Raises:
        Exception: If the API call fails.
    """
    client = openai.OpenAI(api_key=api_key)
    models = client.models.list()
    # OpenAI uses model.id for both the ID and the display name
    return {m.id: m.id for m in getattr(models, "data", [])}


def _fetch_openai_compatible_models(base_url: str, api_key: str) -> Dict[str, str]:
    """
    Fetch available models from an OpenAI-compatible API using the OpenAI SDK.

    Returns a mapping of model_id -> model_name. Prefers the 'name' attribute
    if available (e.g., from OpenRouter), otherwise falls back to 'id'.

    Note: base_url should include the version (e.g., https://openrouter.ai/api/v1)
    """
    client = openai.OpenAI(api_key=api_key, base_url=base_url)
    models = client.models.list()

    available_models: Dict[str, str] = {}
    for model in models.data:
        model_id = model.id
        # Prefer the provider-supplied human-friendly name when available
        name = getattr(model, "name", None)
        if name:
            available_models[model_id] = name
        else:
            available_models[model_id] = model_id

    return available_models

def _fetch_bedrock_models(bedrock_creds: Dict[str, str]) -> Dict[str, str]:
    """
    Fetch available models from AWS Bedrock with entitlement verification.

    This function:
    1. Lists foundation models with TEXT modality support
    2. Lists inference profiles with TEXT modality support
    3. Verifies the user has entitlement access to each model

    Args:
        bedrock_creds: Dictionary with 'access_key_id', 'secret_access_key', and 'region'.

    Returns:
        Dict mapping model_id to model_name for all accessible models.

    Raises:
        BotoCoreError, ClientError: If AWS API calls fail.
    """
    bedrock_client = boto3.client(
        "bedrock",
        aws_access_key_id=bedrock_creds["access_key_id"],
        aws_secret_access_key=bedrock_creds["secret_access_key"],
        region_name=bedrock_creds["region"],
    )

    models_to_check: Dict[str, str] = {}

    # Step 1: Get foundation models with TEXT modality
    foundation_response = bedrock_client.list_foundation_models()
    model_summaries = foundation_response.get("modelSummaries", [])

    for model in model_summaries:
        # Check if the model supports TEXT input and output modalities
        input_modalities = model.get("inputModalities", [])
        output_modalities = model.get("outputModalities", [])

        if "TEXT" not in input_modalities or "TEXT" not in output_modalities:
            continue

        model_id = model.get("modelId")
        if not model_id:
            continue

        inference_types = model.get("inferenceTypesSupported", [])

        # Only include models with ON_DEMAND inference support
        if "ON_DEMAND" in inference_types:
            models_to_check[model_id] = model["modelName"]

    # Step 2: Get inference profiles
    try:
        inference_profiles_response = bedrock_client.list_inference_profiles()
        inference_profiles = inference_profiles_response.get(
            "inferenceProfileSummaries", []
        )

        for profile in inference_profiles:
            # Check if the profile supports TEXT modality
            input_modalities = profile.get("inputModalities", [])
            output_modalities = profile.get("outputModalities", [])

            if "TEXT" not in input_modalities or "TEXT" not in output_modalities:
                continue

            profile_id = profile.get("inferenceProfileId")
            if profile_id:
                models_to_check[profile_id] = profile["inferenceProfileName"]

    except (BotoCoreError, ClientError) as e:
        logger.info(
            "Could not fetch inference profiles in %s: %s",
            bedrock_creds["region"],
            str(e),
        )

    # Step 3: Verify entitlement availability for each model
    available_models: Dict[str, str] = {}

    for model_id, model_name in models_to_check.items():
        try:
            availability = bedrock_client.get_foundation_model_availability(
                modelId=model_id
            )

            entitlement = availability.get("entitlementAvailability")

            # Only include models the user has access to
            if entitlement == "AVAILABLE":
                available_models[model_id] = model_name
            else:
                logger.debug(
                    "Skipping model %s - entitlement status: %s", model_id, entitlement
                )

        except (BotoCoreError, ClientError) as e:
            logger.debug(
                "Could not check availability for model %s: %s", model_id, str(e)
            )
            continue

    return available_models
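
An illustrative return value; the exact IDs and names depend on the region and on which model entitlements the account has accepted:

# {
#     "anthropic.claude-3-haiku-20240307-v1:0": "Claude 3 Haiku",
#     "us.anthropic.claude-3-5-sonnet-20240620-v1:0": "Claude 3.5 Sonnet",
# }
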
def refresh_lighthouse_provider_models(provider_config_id: str) -> Dict:
    """
    Refresh the catalog of models for a Lighthouse provider configuration.

    Fetches the current list of models from the provider, upserts entries into
    `LighthouseProviderModels`, and deletes stale entries no longer returned.

    Args:
        provider_config_id: The primary key of the `LighthouseProviderConfiguration`
            whose models should be refreshed.

    Returns:
        dict: A result dictionary with the following keys on success:
            - "created" (int): Number of new model rows created.
            - "updated" (int): Number of existing model rows updated.
            - "deleted" (int): Number of stale model rows removed.
        If an error occurs, the dictionary will contain an "error" (str) field instead.

    Raises:
        LighthouseProviderConfiguration.DoesNotExist: If no configuration exists with the given ID.
    """
    provider_cfg = LighthouseProviderConfiguration.objects.get(pk=provider_config_id)
    fetched_models: Dict[str, str] = {}

    # Fetch models from the appropriate provider
    try:
        if (
            provider_cfg.provider_type
            == LighthouseProviderConfiguration.LLMProviderChoices.OPENAI
        ):
            api_key = _extract_openai_api_key(provider_cfg)
            if not api_key:
                return {
                    "created": 0,
                    "updated": 0,
                    "deleted": 0,
                    "error": "API key is invalid or missing",
                }
            fetched_models = _fetch_openai_models(api_key)

        elif (
            provider_cfg.provider_type
            == LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK
        ):
            bedrock_creds = _extract_bedrock_credentials(provider_cfg)
            if not bedrock_creds:
                return {
                    "created": 0,
                    "updated": 0,
                    "deleted": 0,
                    "error": "AWS credentials are invalid or missing",
                }
            fetched_models = _fetch_bedrock_models(bedrock_creds)

        elif (
            provider_cfg.provider_type
            == LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE
        ):
            params = _extract_openai_compatible_params(provider_cfg)
            if not params:
                return {
                    "created": 0,
                    "updated": 0,
                    "deleted": 0,
                    "error": "Base URL or API key is invalid or missing",
                }
            fetched_models = _fetch_openai_compatible_models(
                params["base_url"], params["api_key"]
            )

        else:
            return {
                "created": 0,
                "updated": 0,
                "deleted": 0,
                "error": "Unsupported provider type",
            }

    except Exception as e:
        logger.warning(
            "Unexpected error refreshing %s models: %s",
            provider_cfg.provider_type,
            str(e),
        )
        return {"created": 0, "updated": 0, "deleted": 0, "error": str(e)}

    # Upsert models into the catalog
    created = 0
    updated = 0

    for model_id, model_name in fetched_models.items():
        obj, was_created = LighthouseProviderModels.objects.update_or_create(
            tenant_id=provider_cfg.tenant_id,
            provider_configuration=provider_cfg,
            model_id=model_id,
            defaults={
                "model_name": model_name,
                "default_parameters": {},
            },
        )
        if was_created:
            created += 1
        else:
            updated += 1

    # Delete stale models not present anymore
    deleted, _ = (
        LighthouseProviderModels.objects.filter(
            tenant_id=provider_cfg.tenant_id, provider_configuration=provider_cfg
        )
        .exclude(model_id__in=fetched_models.keys())
        .delete()
    )

    return {"created": created, "updated": updated, "deleted": deleted}
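
A direct-call sketch of the refresh job (normally dispatched via its Celery task wrapper):

# refresh_lighthouse_provider_models(str(provider_cfg.id))
# -> {"created": 12, "updated": 3, "deleted": 1} on success (counts illustrative),
# -> {"created": 0, "updated": 0, "deleted": 0, "error": "..."} on failure.
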
@@ -0,0 +1,64 @@
from celery.utils.log import get_task_logger
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE
from tasks.utils import batched

from api.db_utils import rls_transaction
from api.models import Finding, MuteRule

logger = get_task_logger(__name__)


def mute_historical_findings(tenant_id: str, mute_rule_id: str):
    """
    Mute historical findings that match the given mute rule.

    This function processes findings in batches, updating their muted status
    and adding the mute reason.

    Args:
        tenant_id (str): The tenant ID for RLS context
        mute_rule_id (str): The ID of the mute rule to apply

    Returns:
        dict: Summary of the muting operation with a findings_muted count
    """
    findings_muted_count = 0

    # Get the list of UIDs to mute and the reason
    with rls_transaction(tenant_id):
        mute_rule = MuteRule.objects.get(id=mute_rule_id, tenant_id=tenant_id)
        finding_uids = mute_rule.finding_uids
        mute_reason = mute_rule.reason
        muted_at = mute_rule.inserted_at

    # Query findings that match the UIDs and are not already muted
    with rls_transaction(tenant_id):
        findings_to_mute = Finding.objects.filter(
            tenant_id=tenant_id, uid__in=finding_uids, muted=False
        )
        total_findings = findings_to_mute.count()

        logger.info(
            f"Processing {total_findings} findings for mute rule {mute_rule_id}"
        )

        if total_findings > 0:
            for batch, is_last in batched(
                findings_to_mute.iterator(), DJANGO_FINDINGS_BATCH_SIZE
            ):
                batch_ids = [f.id for f in batch]
                updated_count = Finding.all_objects.filter(
                    id__in=batch_ids, tenant_id=tenant_id
                ).update(
                    muted=True,
                    muted_at=muted_at,
                    muted_reason=mute_reason,
                )
                findings_muted_count += updated_count

    logger.info(f"Muted {findings_muted_count} findings for rule {mute_rule_id}")

    return {
        "findings_muted": findings_muted_count,
        "rule_id": mute_rule_id,
    }
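
`batched` is imported from tasks.utils and is assumed here to yield (chunk, is_last) pairs over an iterator; a behaviorally equivalent sketch for reference:

def batched_sketch(iterable, size):
    # Yields (list_of_items, is_last) without materializing the whole iterator
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch, False
            batch = []
    yield batch, True  # final (possibly empty) chunk flagged as last
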
@@ -30,6 +30,7 @@ from api.exceptions import ProviderConnectionError
from api.models import (
    ComplianceRequirementOverview,
    Finding,
    MuteRule,
    Processor,
    Provider,
    Resource,
@@ -301,6 +302,20 @@ def perform_prowler_scan(
        logger.error(f"Error processing mutelist rules: {e}")
        mutelist_processor = None

    # Load enabled mute rules for this tenant
    with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
        try:
            active_mute_rules = MuteRule.objects.filter(
                tenant_id=tenant_id, enabled=True
            ).values_list("finding_uids", "reason")

            mute_rules_cache = {}
            for finding_uids, reason in active_mute_rules:
                for uid in finding_uids:
                    mute_rules_cache[uid] = reason
        except Exception as e:
            logger.error(f"Error loading mute rules: {e}")
            mute_rules_cache = {}
    try:
        with rls_transaction(tenant_id):
            try:
@@ -449,11 +464,22 @@ def perform_prowler_scan(
            if not last_first_seen_at:
                last_first_seen_at = datetime.now(tz=timezone.utc)

-            # If the finding is muted at this time the reason must be the configured Mutelist
-            muted_reason = "Muted by mutelist" if finding.muted else None
+            # Determine if finding should be muted and why
+            # Priority: mutelist processor (highest) > manual mute rules
+            is_muted = False
+            muted_reason = None
+
+            # Check mutelist processor first (highest priority)
+            if finding.muted:
+                is_muted = True
+                muted_reason = "Muted by mutelist"
+            # If not muted by mutelist, check manual mute rules
+            elif finding_uid in mute_rules_cache:
+                is_muted = True
+                muted_reason = mute_rules_cache[finding_uid]

            # Increment failed_findings_count cache if the finding status is FAIL and not muted
-            if status == FindingStatus.FAIL and not finding.muted:
+            if status == FindingStatus.FAIL and not is_muted:
                resource_uid = finding.resource_uid
                resource_failed_findings_cache[resource_uid] += 1

@@ -472,7 +498,8 @@ def perform_prowler_scan(
                check_id=finding.check_id,
                scan=scan_instance,
                first_seen_at=last_first_seen_at,
-                muted=finding.muted,
+                muted=is_muted,
+                muted_at=datetime.now(tz=timezone.utc) if is_muted else None,
                muted_reason=muted_reason,
                compliance=finding.compliance,
            )

@@ -27,6 +27,11 @@ from tasks.jobs.integrations import (
    upload_s3_integration,
    upload_security_hub_integration,
)
from tasks.jobs.lighthouse_providers import (
    check_lighthouse_provider_connection,
    refresh_lighthouse_provider_models,
)
from tasks.jobs.muting import mute_historical_findings
from tasks.jobs.report import generate_threatscore_report_job
from tasks.jobs.scan import (
    aggregate_findings,
@@ -524,6 +529,24 @@ def check_lighthouse_connection_task(lighthouse_config_id: str, tenant_id: str =
    return check_lighthouse_connection(lighthouse_config_id=lighthouse_config_id)


@shared_task(base=RLSTask, name="lighthouse-provider-connection-check")
@set_tenant
def check_lighthouse_provider_connection_task(
    provider_config_id: str, tenant_id: str | None = None
) -> dict:
    """Task wrapper to validate provider credentials and set is_active."""
    return check_lighthouse_provider_connection(provider_config_id=provider_config_id)


@shared_task(base=RLSTask, name="lighthouse-provider-models-refresh")
@set_tenant
def refresh_lighthouse_provider_models_task(
    provider_config_id: str, tenant_id: str | None = None
) -> dict:
    """Task wrapper to refresh the provider models catalog for the given configuration."""
    return refresh_lighthouse_provider_models(provider_config_id=provider_config_id)


@shared_task(name="integration-check")
def check_integrations_task(tenant_id: str, provider_id: str, scan_id: str = None):
    """
@@ -659,3 +682,25 @@ def generate_threatscore_report_task(tenant_id: str, scan_id: str, provider_id:
    return generate_threatscore_report_job(
        tenant_id=tenant_id, scan_id=scan_id, provider_id=provider_id
    )


@shared_task(name="findings-mute-historical")
def mute_historical_findings_task(tenant_id: str, mute_rule_id: str):
    """
    Background task to mute all historical findings matching a mute rule.

    This task processes findings in batches to avoid memory issues with large datasets.
    It updates the Finding.muted, Finding.muted_at, and Finding.muted_reason fields
    for all findings whose UID is in the mute rule's finding_uids list.

    Args:
        tenant_id (str): The tenant ID for RLS context.
        mute_rule_id (str): The primary key of the MuteRule to apply.

    Returns:
        dict: A dictionary containing:
            - 'findings_muted' (int): Total number of findings muted.
            - 'rule_id' (str): The mute rule ID.
    """
    return mute_historical_findings(tenant_id, mute_rule_id)
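
Enqueueing sketches for the wrappers above, matching the .delay / .apply_async call styles used by the viewsets earlier in this diff (argument values are placeholders):

# check_lighthouse_provider_connection_task.delay(
#     provider_config_id="<config-uuid>", tenant_id="<tenant-uuid>"
# )
# mute_historical_findings_task.apply_async(
#     kwargs={"tenant_id": "<tenant-uuid>", "mute_rule_id": "<rule-uuid>"}
# )
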
@@ -0,0 +1,532 @@
from datetime import datetime, timezone
from uuid import uuid4

import pytest
from django.core.exceptions import ObjectDoesNotExist
from tasks.jobs.muting import mute_historical_findings

from api.models import Finding, MuteRule
from prowler.lib.check.models import Severity
from prowler.lib.outputs.finding import Status


@pytest.mark.django_db
class TestMuteHistoricalFindings:
    """
    Test suite for the mute_historical_findings function.

    This class tests the batch processing of findings to update their muted status
    based on MuteRule criteria.
    """

    @pytest.fixture(scope="function")
    def test_user(self, create_test_user):
        """Create a test user for mute rule creation."""
        return create_test_user

    @pytest.fixture(scope="function")
    def mute_rule_with_findings(self, tenants_fixture, findings_fixture, test_user):
        """
        Create a mute rule that targets the first finding in the fixture.
        """
        tenant = tenants_fixture[0]
        finding = findings_fixture[0]

        mute_rule = MuteRule.objects.create(
            tenant_id=tenant.id,
            name="Test Mute Rule",
            reason="Testing mute functionality",
            enabled=True,
            created_by=test_user,
            finding_uids=[finding.uid],
        )

        return mute_rule

    @pytest.fixture(scope="function")
    def mute_rule_multiple_findings(self, scans_fixture, test_user):
        """
        Create multiple unmuted findings and a mute rule targeting all of them.
        """
        scan = scans_fixture[0]
        tenant_id = scan.tenant_id

        # Create 5 unmuted findings
        finding_uids = []
        for i in range(5):
            finding = Finding.objects.create(
                tenant_id=tenant_id,
                uid=f"test_finding_uid_mute_{i}",
                scan=scan,
                status=Status.FAIL,
                status_extended=f"Test status {i}",
                impact=Severity.high,
                severity=Severity.high,
                raw_result={
                    "status": Status.FAIL,
                    "impact": Severity.high,
                    "severity": Severity.high,
                },
                check_id=f"test_check_id_{i}",
                check_metadata={
                    "CheckId": f"test_check_id_{i}",
                    "Description": f"Test description {i}",
                },
                muted=False,
            )
            finding_uids.append(finding.uid)

        # Create mute rule targeting all findings
        mute_rule = MuteRule.objects.create(
            tenant_id=tenant_id,
            name="Test Multiple Findings Mute Rule",
            reason="Testing batch muting",
            enabled=True,
            created_by=test_user,
            finding_uids=finding_uids,
        )

        return mute_rule, finding_uids

    @pytest.fixture(scope="function")
    def mute_rule_already_muted(self, findings_fixture, test_user):
        """
        Create a mute rule that targets an already-muted finding.
        """
        tenant_id = findings_fixture[1].tenant_id
        already_muted_finding = findings_fixture[1]

        mute_rule = MuteRule.objects.create(
            tenant_id=tenant_id,
            name="Test Already Muted Rule",
            reason="Testing already muted findings",
            enabled=True,
            created_by=test_user,
            finding_uids=[already_muted_finding.uid],
        )

        return mute_rule

    @pytest.fixture(scope="function")
    def mute_rule_mixed_findings(self, scans_fixture, test_user):
        """
        Create a mute rule with a mix of muted and unmuted findings.
        """
        scan = scans_fixture[0]
        tenant_id = scan.tenant_id

        # Create 3 unmuted findings
        unmuted_uids = []
        for i in range(3):
            finding = Finding.objects.create(
                tenant_id=tenant_id,
                uid=f"unmuted_finding_{i}",
                scan=scan,
                status=Status.FAIL,
                status_extended=f"Unmuted status {i}",
                impact=Severity.medium,
                severity=Severity.medium,
                raw_result={
                    "status": Status.FAIL,
                    "impact": Severity.medium,
                    "severity": Severity.medium,
                },
                check_id=f"unmuted_check_{i}",
                check_metadata={
                    "CheckId": f"unmuted_check_{i}",
                    "Description": f"Unmuted description {i}",
                },
                muted=False,
            )
            unmuted_uids.append(finding.uid)

        # Create 2 already muted findings
        muted_uids = []
        for i in range(2):
            finding = Finding.objects.create(
                tenant_id=tenant_id,
                uid=f"muted_finding_{i}",
                scan=scan,
                status=Status.FAIL,
                status_extended=f"Muted status {i}",
                impact=Severity.low,
                severity=Severity.low,
                raw_result={
                    "status": Status.FAIL,
                    "impact": Severity.low,
                    "severity": Severity.low,
                },
                check_id=f"muted_check_{i}",
                check_metadata={
                    "CheckId": f"muted_check_{i}",
                    "Description": f"Muted description {i}",
                },
                muted=True,
                muted_at=datetime.now(timezone.utc),
                muted_reason="Already muted",
            )
            muted_uids.append(finding.uid)

        # Create mute rule targeting all findings
        all_uids = unmuted_uids + muted_uids
        mute_rule = MuteRule.objects.create(
            tenant_id=tenant_id,
            name="Test Mixed Findings Rule",
            reason="Testing mixed muted/unmuted findings",
            enabled=True,
            created_by=test_user,
            finding_uids=all_uids,
        )

        return mute_rule, unmuted_uids, muted_uids

    @pytest.fixture(scope="function")
    def mute_rule_batch_test(self, scans_fixture, test_user):
        """
        Create enough findings to test batch processing (>1000 for the default batch size).
        """
        scan = scans_fixture[0]
        tenant_id = scan.tenant_id

        # Create 1500 findings to exceed the default batch size of 1000
        finding_uids = []
        for i in range(1500):
            finding = Finding.objects.create(
                tenant_id=tenant_id,
                uid=f"batch_test_finding_{i}",
                scan=scan,
                status=Status.FAIL,
                status_extended=f"Batch test status {i}",
                impact=Severity.critical,
                severity=Severity.critical,
                raw_result={
                    "status": Status.FAIL,
                    "impact": Severity.critical,
                    "severity": Severity.critical,
                },
                check_id=f"batch_test_check_{i}",
                check_metadata={
                    "CheckId": f"batch_test_check_{i}",
                    "Description": f"Batch test description {i}",
                },
                muted=False,
            )
            finding_uids.append(finding.uid)

        # Create mute rule targeting all findings
        mute_rule = MuteRule.objects.create(
            tenant_id=tenant_id,
            name="Test Batch Processing Rule",
            reason="Testing batch processing functionality",
            enabled=True,
            created_by=test_user,
            finding_uids=finding_uids,
        )

        return mute_rule, finding_uids

    def test_mute_historical_findings_single_finding(
        self, mute_rule_with_findings, findings_fixture
    ):
        """
        Test muting a single historical finding.
        """
        mute_rule = mute_rule_with_findings
        tenant_id = str(mute_rule.tenant_id)
        finding = findings_fixture[0]

        # Ensure the finding is not muted before execution
        finding.refresh_from_db()
        assert finding.muted is False
        assert finding.muted_at is None
        assert finding.muted_reason is None

        # Execute the muting function
        result = mute_historical_findings(tenant_id, str(mute_rule.id))

        # Verify return value
        assert result["findings_muted"] == 1
        assert result["rule_id"] == str(mute_rule.id)

        # Verify the finding was muted
        finding.refresh_from_db()
        assert finding.muted is True
        assert finding.muted_at == mute_rule.inserted_at
        assert finding.muted_reason == mute_rule.reason

    def test_mute_historical_findings_multiple_findings(
        self, mute_rule_multiple_findings
    ):
        """
        Test muting multiple historical findings.
        """
        mute_rule, finding_uids = mute_rule_multiple_findings
        tenant_id = str(mute_rule.tenant_id)

        # Verify all findings are unmuted
        findings = Finding.objects.filter(tenant_id=tenant_id, uid__in=finding_uids)
        assert findings.count() == 5
        for finding in findings:
            assert finding.muted is False

        # Execute the muting function
        result = mute_historical_findings(tenant_id, str(mute_rule.id))

        # Verify return value
        assert result["findings_muted"] == 5
        assert result["rule_id"] == str(mute_rule.id)

        # Verify all findings were muted
        findings = Finding.objects.filter(tenant_id=tenant_id, uid__in=finding_uids)
        for finding in findings:
            assert finding.muted is True
            assert finding.muted_at == mute_rule.inserted_at
            assert finding.muted_reason == mute_rule.reason

    def test_mute_historical_findings_already_muted(
        self, mute_rule_already_muted, findings_fixture
    ):
        """
        Test that already-muted findings are not counted or updated.
        """
        mute_rule = mute_rule_already_muted
        tenant_id = str(mute_rule.tenant_id)
        finding = findings_fixture[1]

        # Verify the finding is already muted
        finding.refresh_from_db()
        assert finding.muted is True
        original_muted_at = finding.muted_at
        original_muted_reason = finding.muted_reason

        # Execute the muting function
        result = mute_historical_findings(tenant_id, str(mute_rule.id))

        # Verify no findings were muted
        assert result["findings_muted"] == 0
        assert result["rule_id"] == str(mute_rule.id)

        # Verify the finding's mute status did not change
        finding.refresh_from_db()
        assert finding.muted is True
        assert finding.muted_at == original_muted_at
        assert finding.muted_reason == original_muted_reason

    def test_mute_historical_findings_mixed_status(self, mute_rule_mixed_findings):
        """
        Test muting when some findings are already muted and others are not.
        """
        mute_rule, unmuted_uids, muted_uids = mute_rule_mixed_findings
        tenant_id = str(mute_rule.tenant_id)

        # Execute the muting function
        result = mute_historical_findings(tenant_id, str(mute_rule.id))

        # Verify only unmuted findings were counted
        assert result["findings_muted"] == 3
        assert result["rule_id"] == str(mute_rule.id)

        # Verify unmuted findings are now muted
        unmuted_findings = Finding.objects.filter(
            tenant_id=tenant_id, uid__in=unmuted_uids
        )
        for finding in unmuted_findings:
            assert finding.muted is True
            assert finding.muted_at == mute_rule.inserted_at
            assert finding.muted_reason == mute_rule.reason

        # Verify already-muted findings remained unchanged
        already_muted_findings = Finding.objects.filter(
            tenant_id=tenant_id, uid__in=muted_uids
        )
        for finding in already_muted_findings:
            assert finding.muted is True
            assert finding.muted_reason == "Already muted"

    def test_mute_historical_findings_nonexistent_rule(self, tenants_fixture):
        """
        Test that a nonexistent mute rule raises ObjectDoesNotExist.
        """
        tenant_id = str(tenants_fixture[0].id)
        nonexistent_rule_id = str(uuid4())

        with pytest.raises(ObjectDoesNotExist):
            mute_historical_findings(tenant_id, nonexistent_rule_id)

    def test_mute_historical_findings_no_matching_findings(
        self, tenants_fixture, test_user
    ):
        """
        Test muting when no findings match the rule's UIDs.
        """
        tenant_id = str(tenants_fixture[0].id)

        # Create a mute rule with non-existent finding UIDs
        mute_rule = MuteRule.objects.create(
            tenant_id=tenant_id,
            name="Test No Match Rule",
            reason="Testing no matching findings",
            enabled=True,
            created_by=test_user,
            finding_uids=[
                "nonexistent_uid_1",
                "nonexistent_uid_2",
                "nonexistent_uid_3",
            ],
        )

        # Execute the muting function
        result = mute_historical_findings(tenant_id, str(mute_rule.id))

        # Verify no findings were muted
        assert result["findings_muted"] == 0
        assert result["rule_id"] == str(mute_rule.id)

    def test_mute_historical_findings_batch_processing(self, mute_rule_batch_test):
        """
        Test that large numbers of findings are processed in batches correctly.
        """
        mute_rule, finding_uids = mute_rule_batch_test
        tenant_id = str(mute_rule.tenant_id)

        # Verify all findings exist and are unmuted
        findings = Finding.objects.filter(tenant_id=tenant_id, uid__in=finding_uids)
        assert findings.count() == 1500
        for finding in findings:
            assert finding.muted is False

        # Execute the muting function
        result = mute_historical_findings(tenant_id, str(mute_rule.id))

        # Verify return value
        assert result["findings_muted"] == 1500
        assert result["rule_id"] == str(mute_rule.id)

        # Verify all findings were muted
        findings = Finding.objects.filter(tenant_id=tenant_id, uid__in=finding_uids)
        for finding in findings:
            assert finding.muted is True
            assert finding.muted_at == mute_rule.inserted_at
            assert finding.muted_reason == mute_rule.reason

    def test_mute_historical_findings_preserves_muted_at_timestamp(
        self, mute_rule_with_findings, findings_fixture
    ):
        """
        Test that muted_at is set to the rule's inserted_at, not the current time.
        """
        mute_rule = mute_rule_with_findings
        tenant_id = str(mute_rule.tenant_id)
        finding = findings_fixture[0]

        # Execute the muting function
        result = mute_historical_findings(tenant_id, str(mute_rule.id))

        # Verify the finding was muted
        assert result["findings_muted"] == 1

        # Verify muted_at matches the rule's inserted_at timestamp
        finding.refresh_from_db()
        assert finding.muted_at == mute_rule.inserted_at
        assert finding.muted_at is not None

    def test_mute_historical_findings_partial_match(self, scans_fixture, test_user):
        """
        Test muting when only some of the rule's UIDs exist as findings.
        """
        scan = scans_fixture[0]
        tenant_id = str(scan.tenant_id)

        # Create 3 findings
        existing_uids = []
        for i in range(3):
            finding = Finding.objects.create(
                tenant_id=tenant_id,
                uid=f"partial_match_finding_{i}",
                scan=scan,
                status=Status.FAIL,
                status_extended=f"Partial match status {i}",
                impact=Severity.high,
                severity=Severity.high,
                raw_result={
                    "status": Status.FAIL,
                    "impact": Severity.high,
                    "severity": Severity.high,
                },
                check_id=f"partial_match_check_{i}",
                check_metadata={
                    "CheckId": f"partial_match_check_{i}",
                    "Description": f"Partial match description {i}",
                },
                muted=False,
            )
            existing_uids.append(finding.uid)

        # Create a mute rule with both existing and non-existing UIDs
        all_uids = existing_uids + [
            "nonexistent_uid_1",
            "nonexistent_uid_2",
        ]
        mute_rule = MuteRule.objects.create(
            tenant_id=tenant_id,
            name="Test Partial Match Rule",
            reason="Testing partial matching",
            enabled=True,
            created_by=test_user,
            finding_uids=all_uids,
        )

        # Execute the muting function
        result = mute_historical_findings(tenant_id, str(mute_rule.id))

        # Verify only existing findings were muted
        assert result["findings_muted"] == 3
        assert result["rule_id"] == str(mute_rule.id)

        # Verify the existing findings were muted
        findings = Finding.objects.filter(tenant_id=tenant_id, uid__in=existing_uids)
        assert findings.count() == 3
        for finding in findings:
            assert finding.muted is True
            assert finding.muted_at == mute_rule.inserted_at
            assert finding.muted_reason == mute_rule.reason

    def test_mute_historical_findings_empty_uids(self, tenants_fixture, test_user):
        """
        Test muting when the rule has an empty finding_uids array.
        """
        tenant_id = str(tenants_fixture[0].id)

        # Create a mute rule with empty finding_uids
        mute_rule = MuteRule.objects.create(
            tenant_id=tenant_id,
            name="Test Empty UIDs Rule",
            reason="Testing empty UIDs",
            enabled=True,
            created_by=test_user,
            finding_uids=[],
        )

        # Execute the muting function
        result = mute_historical_findings(tenant_id, str(mute_rule.id))

        # Verify no findings were muted
        assert result["findings_muted"] == 0
        assert result["rule_id"] == str(mute_rule.id)

    def test_mute_historical_findings_return_format(self, mute_rule_with_findings):
        """
        Test that the return value has the correct format and fields.
        """
        mute_rule = mute_rule_with_findings
        tenant_id = str(mute_rule.tenant_id)

        result = mute_historical_findings(tenant_id, str(mute_rule.id))

        # Verify return value structure
        assert isinstance(result, dict)
        assert "findings_muted" in result
        assert "rule_id" in result
        assert isinstance(result["findings_muted"], int)
        assert isinstance(result["rule_id"], str)
        assert result["rule_id"] == str(mute_rule.id)

@@ -18,7 +18,15 @@ from tasks.utils import CustomEncoder

from api.db_router import MainRouter
from api.exceptions import ProviderConnectionError
from api.models import Finding, Provider, Resource, Scan, StateChoices, StatusChoices
from api.models import (
    Finding,
    MuteRule,
    Provider,
    Resource,
    Scan,
    StateChoices,
    StatusChoices,
)
from prowler.lib.check.models import Severity

@@ -739,6 +747,561 @@ class TestPerformScan:
        # Assert that failed_findings_count was reset to 0 during the scan
        assert resource.failed_findings_count == 0

    def test_perform_prowler_scan_with_active_mute_rules(
        self,
        tenants_fixture,
        scans_fixture,
        providers_fixture,
    ):
        """Test active MuteRule mutes findings with correct reason"""
        with (
            patch("api.db_utils.rls_transaction"),
            patch(
                "tasks.jobs.scan.initialize_prowler_provider"
            ) as mock_initialize_prowler_provider,
            patch("tasks.jobs.scan.ProwlerScan") as mock_prowler_scan_class,
            patch(
                "tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE",
                new_callable=dict,
            ),
            patch("api.compliance.PROWLER_CHECKS", new_callable=dict),
        ):
            tenant = tenants_fixture[0]
            scan = scans_fixture[0]
            provider = providers_fixture[0]

            provider.provider = Provider.ProviderChoices.AWS
            provider.save()

            tenant_id = str(tenant.id)
            scan_id = str(scan.id)
            provider_id = str(provider.id)

            # Create active MuteRule with specific finding UIDs
            mute_rule_reason = "Accepted risk - production exception"
            finding_uid_1 = "finding_to_mute_1"
            finding_uid_2 = "finding_to_mute_2"

            MuteRule.objects.create(
                tenant_id=tenant_id,
                name="Production Exception Rule",
                reason=mute_rule_reason,
                enabled=True,
                finding_uids=[finding_uid_1, finding_uid_2],
            )

            # Mock findings: one FAIL and one PASS, both should be muted
            muted_fail_finding = MagicMock()
            muted_fail_finding.uid = finding_uid_1
            muted_fail_finding.status = StatusChoices.FAIL
            muted_fail_finding.status_extended = "muted fail"
            muted_fail_finding.severity = Severity.high
            muted_fail_finding.check_id = "muted_fail_check"
            muted_fail_finding.get_metadata.return_value = {"key": "value"}
            muted_fail_finding.resource_uid = "resource_uid_1"
            muted_fail_finding.resource_name = "resource_1"
            muted_fail_finding.region = "us-east-1"
            muted_fail_finding.service_name = "ec2"
            muted_fail_finding.resource_type = "instance"
            muted_fail_finding.resource_tags = {}
            muted_fail_finding.muted = False
            muted_fail_finding.raw = {}
            muted_fail_finding.resource_metadata = {}
            muted_fail_finding.resource_details = {}
            muted_fail_finding.partition = "aws"
            muted_fail_finding.compliance = {}

            muted_pass_finding = MagicMock()
            muted_pass_finding.uid = finding_uid_2
            muted_pass_finding.status = StatusChoices.PASS
            muted_pass_finding.status_extended = "muted pass"
            muted_pass_finding.severity = Severity.medium
            muted_pass_finding.check_id = "muted_pass_check"
            muted_pass_finding.get_metadata.return_value = {"key": "value"}
            muted_pass_finding.resource_uid = "resource_uid_2"
            muted_pass_finding.resource_name = "resource_2"
            muted_pass_finding.region = "us-east-1"
            muted_pass_finding.service_name = "s3"
            muted_pass_finding.resource_type = "bucket"
            muted_pass_finding.resource_tags = {}
            muted_pass_finding.muted = False
            muted_pass_finding.raw = {}
            muted_pass_finding.resource_metadata = {}
            muted_pass_finding.resource_details = {}
            muted_pass_finding.partition = "aws"
            muted_pass_finding.compliance = {}

            # Mock the ProwlerScan instance
            mock_prowler_scan_instance = MagicMock()
            mock_prowler_scan_instance.scan.return_value = [
                (100, [muted_fail_finding, muted_pass_finding])
            ]
            mock_prowler_scan_class.return_value = mock_prowler_scan_instance

            # Mock prowler_provider
            mock_prowler_provider_instance = MagicMock()
            mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
            mock_initialize_prowler_provider.return_value = (
                mock_prowler_provider_instance
            )

            # Call the function under test
            perform_prowler_scan(tenant_id, scan_id, provider_id, [])

            # Verify findings are muted with correct reason
            fail_finding_db = Finding.objects.get(uid=finding_uid_1)
            pass_finding_db = Finding.objects.get(uid=finding_uid_2)

            assert fail_finding_db.muted
            assert fail_finding_db.muted_reason == mute_rule_reason
            assert fail_finding_db.muted_at is not None

            assert pass_finding_db.muted
            assert pass_finding_db.muted_reason == mute_rule_reason
            assert pass_finding_db.muted_at is not None

            # Verify failed_findings_count is 0 for muted FAIL finding
            resource_1 = Resource.objects.get(uid="resource_uid_1")
            assert resource_1.failed_findings_count == 0

    def test_perform_prowler_scan_with_inactive_mute_rules(
        self,
        tenants_fixture,
        scans_fixture,
        providers_fixture,
    ):
        """Test inactive MuteRule does not mute findings"""
        with (
            patch("api.db_utils.rls_transaction"),
            patch(
                "tasks.jobs.scan.initialize_prowler_provider"
            ) as mock_initialize_prowler_provider,
            patch("tasks.jobs.scan.ProwlerScan") as mock_prowler_scan_class,
            patch(
                "tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE",
                new_callable=dict,
            ),
            patch("api.compliance.PROWLER_CHECKS", new_callable=dict),
        ):
            tenant = tenants_fixture[0]
            scan = scans_fixture[0]
            provider = providers_fixture[0]

            provider.provider = Provider.ProviderChoices.AWS
            provider.save()

            tenant_id = str(tenant.id)
            scan_id = str(scan.id)
            provider_id = str(provider.id)

            # Create inactive MuteRule
            finding_uid = "finding_inactive_rule"
            MuteRule.objects.create(
                tenant_id=tenant_id,
                name="Inactive Rule",
                reason="Should not apply",
                enabled=False,
                finding_uids=[finding_uid],
            )

            # Mock FAIL finding
            fail_finding = MagicMock()
            fail_finding.uid = finding_uid
            fail_finding.status = StatusChoices.FAIL
            fail_finding.status_extended = "test fail"
            fail_finding.severity = Severity.high
            fail_finding.check_id = "fail_check"
            fail_finding.get_metadata.return_value = {"key": "value"}
            fail_finding.resource_uid = "resource_uid_inactive"
            fail_finding.resource_name = "resource_inactive"
            fail_finding.region = "us-east-1"
            fail_finding.service_name = "ec2"
            fail_finding.resource_type = "instance"
            fail_finding.resource_tags = {}
            fail_finding.muted = False
            fail_finding.raw = {}
            fail_finding.resource_metadata = {}
            fail_finding.resource_details = {}
            fail_finding.partition = "aws"
            fail_finding.compliance = {}

            # Mock the ProwlerScan instance
            mock_prowler_scan_instance = MagicMock()
            mock_prowler_scan_instance.scan.return_value = [(100, [fail_finding])]
            mock_prowler_scan_class.return_value = mock_prowler_scan_instance

            # Mock prowler_provider
            mock_prowler_provider_instance = MagicMock()
            mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
            mock_initialize_prowler_provider.return_value = (
                mock_prowler_provider_instance
            )

            # Call the function under test
            perform_prowler_scan(tenant_id, scan_id, provider_id, [])

            # Verify finding is NOT muted
            finding_db = Finding.objects.get(uid=finding_uid)
            assert not finding_db.muted
            assert finding_db.muted_reason is None
            assert finding_db.muted_at is None

            # Verify failed_findings_count increments for FAIL finding
            resource = Resource.objects.get(uid="resource_uid_inactive")
            assert resource.failed_findings_count == 1

    def test_perform_prowler_scan_mutelist_overrides_mute_rules(
        self,
        tenants_fixture,
        scans_fixture,
        providers_fixture,
    ):
        """Test mutelist processor takes precedence over MuteRule"""
        with (
            patch("api.db_utils.rls_transaction"),
            patch(
                "tasks.jobs.scan.initialize_prowler_provider"
            ) as mock_initialize_prowler_provider,
            patch("tasks.jobs.scan.ProwlerScan") as mock_prowler_scan_class,
            patch(
                "tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE",
                new_callable=dict,
            ),
            patch("api.compliance.PROWLER_CHECKS", new_callable=dict),
        ):
            tenant = tenants_fixture[0]
            scan = scans_fixture[0]
            provider = providers_fixture[0]

            provider.provider = Provider.ProviderChoices.AWS
            provider.save()

            tenant_id = str(tenant.id)
            scan_id = str(scan.id)
            provider_id = str(provider.id)

            # Create active MuteRule
            finding_uid = "finding_both_rules"
            MuteRule.objects.create(
                tenant_id=tenant_id,
                name="Manual Mute Rule",
                reason="Muted by manual rule",
                enabled=True,
                finding_uids=[finding_uid],
            )

            # Mock finding with mutelist processor muted=True
            muted_finding = MagicMock()
            muted_finding.uid = finding_uid
            muted_finding.status = StatusChoices.FAIL
            muted_finding.status_extended = "test"
            muted_finding.severity = Severity.high
            muted_finding.check_id = "test_check"
            muted_finding.get_metadata.return_value = {"key": "value"}
            muted_finding.resource_uid = "resource_both"
            muted_finding.resource_name = "resource_both"
            muted_finding.region = "us-east-1"
            muted_finding.service_name = "ec2"
            muted_finding.resource_type = "instance"
            muted_finding.resource_tags = {}
            muted_finding.muted = True
            muted_finding.raw = {}
            muted_finding.resource_metadata = {}
            muted_finding.resource_details = {}
            muted_finding.partition = "aws"
            muted_finding.compliance = {}

            # Mock the ProwlerScan instance
            mock_prowler_scan_instance = MagicMock()
            mock_prowler_scan_instance.scan.return_value = [(100, [muted_finding])]
            mock_prowler_scan_class.return_value = mock_prowler_scan_instance

            # Mock prowler_provider
            mock_prowler_provider_instance = MagicMock()
            mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
            mock_initialize_prowler_provider.return_value = (
                mock_prowler_provider_instance
            )

            # Call the function under test
            perform_prowler_scan(tenant_id, scan_id, provider_id, [])

            # Verify mutelist reason takes precedence
            finding_db = Finding.objects.get(uid=finding_uid)
            assert finding_db.muted
            assert finding_db.muted_reason == "Muted by mutelist"
            assert finding_db.muted_at is not None

            # Verify failed_findings_count is 0
            resource = Resource.objects.get(uid="resource_both")
            assert resource.failed_findings_count == 0

    def test_perform_prowler_scan_mute_rules_multiple_findings(
        self,
        tenants_fixture,
        scans_fixture,
        providers_fixture,
    ):
        """Test MuteRule with multiple finding UIDs mutes all findings"""
        with (
            patch("api.db_utils.rls_transaction"),
            patch(
                "tasks.jobs.scan.initialize_prowler_provider"
            ) as mock_initialize_prowler_provider,
            patch("tasks.jobs.scan.ProwlerScan") as mock_prowler_scan_class,
            patch(
                "tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE",
                new_callable=dict,
            ),
            patch("api.compliance.PROWLER_CHECKS", new_callable=dict),
        ):
            tenant = tenants_fixture[0]
            scan = scans_fixture[0]
            provider = providers_fixture[0]

            provider.provider = Provider.ProviderChoices.AWS
            provider.save()

            tenant_id = str(tenant.id)
            scan_id = str(scan.id)
            provider_id = str(provider.id)

            # Create MuteRule with multiple finding UIDs
            mute_rule_reason = "Bulk exception for dev environment"
            finding_uids = [
                "bulk_finding_1",
                "bulk_finding_2",
                "bulk_finding_3",
                "bulk_finding_4",
            ]
            MuteRule.objects.create(
                tenant_id=tenant_id,
                name="Bulk Mute Rule",
                reason=mute_rule_reason,
                enabled=True,
                finding_uids=finding_uids,
            )

            # Mock multiple findings with mixed statuses
            findings = []
            for i, uid in enumerate(finding_uids):
                finding = MagicMock()
                finding.uid = uid
                finding.status = (
                    StatusChoices.FAIL if i % 2 == 0 else StatusChoices.PASS
                )
                finding.status_extended = f"test {i}"
                finding.severity = Severity.medium
                finding.check_id = f"check_{i}"
                finding.get_metadata.return_value = {"key": f"value_{i}"}
                finding.resource_uid = f"resource_bulk_{i}"
                finding.resource_name = f"resource_{i}"
                finding.region = "us-west-2"
                finding.service_name = "lambda"
                finding.resource_type = "function"
                finding.resource_tags = {}
                finding.muted = False
                finding.raw = {}
                finding.resource_metadata = {}
                finding.resource_details = {}
                finding.partition = "aws"
                finding.compliance = {}
                findings.append(finding)

            # Mock the ProwlerScan instance
            mock_prowler_scan_instance = MagicMock()
            mock_prowler_scan_instance.scan.return_value = [(100, findings)]
            mock_prowler_scan_class.return_value = mock_prowler_scan_instance

            # Mock prowler_provider
            mock_prowler_provider_instance = MagicMock()
            mock_prowler_provider_instance.get_regions.return_value = ["us-west-2"]
            mock_initialize_prowler_provider.return_value = (
                mock_prowler_provider_instance
            )

            # Call the function under test
            perform_prowler_scan(tenant_id, scan_id, provider_id, [])

            # Verify all findings are muted with same reason
            for uid in finding_uids:
                finding_db = Finding.objects.get(uid=uid)
                assert finding_db.muted
                assert finding_db.muted_reason == mute_rule_reason
                assert finding_db.muted_at is not None

            # Verify all resources have failed_findings_count = 0
            for i in range(len(finding_uids)):
                resource = Resource.objects.get(uid=f"resource_bulk_{i}")
                assert resource.failed_findings_count == 0

    def test_perform_prowler_scan_mute_rules_error_handling(
        self,
        tenants_fixture,
        scans_fixture,
        providers_fixture,
    ):
        """Test scan continues when MuteRule loading fails"""
        with (
            patch("api.db_utils.rls_transaction"),
            patch(
                "tasks.jobs.scan.initialize_prowler_provider"
            ) as mock_initialize_prowler_provider,
            patch("tasks.jobs.scan.ProwlerScan") as mock_prowler_scan_class,
            patch(
                "tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE",
                new_callable=dict,
            ),
            patch("api.compliance.PROWLER_CHECKS", new_callable=dict),
            patch("api.models.MuteRule.objects.filter") as mock_mute_rule_filter,
        ):
            tenant = tenants_fixture[0]
            scan = scans_fixture[0]
            provider = providers_fixture[0]

            provider.provider = Provider.ProviderChoices.AWS
            provider.save()

            tenant_id = str(tenant.id)
            scan_id = str(scan.id)
            provider_id = str(provider.id)

            # Mock MuteRule.objects.filter to raise exception
            mock_mute_rule_filter.side_effect = Exception("Database error")

            # Mock finding
            finding = MagicMock()
            finding.uid = "finding_error_handling"
            finding.status = StatusChoices.FAIL
            finding.status_extended = "test"
            finding.severity = Severity.high
            finding.check_id = "test_check"
            finding.get_metadata.return_value = {"key": "value"}
            finding.resource_uid = "resource_error"
            finding.resource_name = "resource_error"
            finding.region = "us-east-1"
            finding.service_name = "ec2"
            finding.resource_type = "instance"
            finding.resource_tags = {}
            finding.muted = False
            finding.raw = {}
            finding.resource_metadata = {}
            finding.resource_details = {}
            finding.partition = "aws"
            finding.compliance = {}

            # Mock the ProwlerScan instance
            mock_prowler_scan_instance = MagicMock()
            mock_prowler_scan_instance.scan.return_value = [(100, [finding])]
            mock_prowler_scan_class.return_value = mock_prowler_scan_instance

            # Mock prowler_provider
            mock_prowler_provider_instance = MagicMock()
            mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
            mock_initialize_prowler_provider.return_value = (
                mock_prowler_provider_instance
            )

            # Call the function under test - should not raise
            perform_prowler_scan(tenant_id, scan_id, provider_id, [])

            # Verify scan completed successfully
            scan.refresh_from_db()
            assert scan.state == StateChoices.COMPLETED

            # Verify finding is not muted (mute_rules_cache was empty dict)
            finding_db = Finding.objects.get(uid="finding_error_handling")
            assert not finding_db.muted
            assert finding_db.muted_reason is None

            # Verify failed_findings_count increments
            resource = Resource.objects.get(uid="resource_error")
            assert resource.failed_findings_count == 1

    def test_perform_prowler_scan_muted_at_timestamp(
        self,
        tenants_fixture,
        scans_fixture,
        providers_fixture,
    ):
        """Test muted_at timestamp is set correctly for muted findings"""
        with (
            patch("api.db_utils.rls_transaction"),
            patch(
                "tasks.jobs.scan.initialize_prowler_provider"
            ) as mock_initialize_prowler_provider,
            patch("tasks.jobs.scan.ProwlerScan") as mock_prowler_scan_class,
            patch(
                "tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE",
                new_callable=dict,
            ),
            patch("api.compliance.PROWLER_CHECKS", new_callable=dict),
        ):
            tenant = tenants_fixture[0]
            scan = scans_fixture[0]
            provider = providers_fixture[0]

            provider.provider = Provider.ProviderChoices.AWS
            provider.save()

            tenant_id = str(tenant.id)
            scan_id = str(scan.id)
            provider_id = str(provider.id)

            # Create active MuteRule
            finding_uid = "finding_timestamp_test"
            MuteRule.objects.create(
                tenant_id=tenant_id,
                name="Timestamp Test Rule",
                reason="Testing timestamp",
                enabled=True,
                finding_uids=[finding_uid],
            )

            # Mock finding
            finding = MagicMock()
            finding.uid = finding_uid
            finding.status = StatusChoices.FAIL
            finding.status_extended = "test"
            finding.severity = Severity.high
            finding.check_id = "test_check"
            finding.get_metadata.return_value = {"key": "value"}
            finding.resource_uid = "resource_timestamp"
            finding.resource_name = "resource_timestamp"
            finding.region = "us-east-1"
            finding.service_name = "ec2"
            finding.resource_type = "instance"
            finding.resource_tags = {}
            finding.muted = False
            finding.raw = {}
            finding.resource_metadata = {}
            finding.resource_details = {}
            finding.partition = "aws"
            finding.compliance = {}

            # Mock the ProwlerScan instance
            mock_prowler_scan_instance = MagicMock()
            mock_prowler_scan_instance.scan.return_value = [(100, [finding])]
            mock_prowler_scan_class.return_value = mock_prowler_scan_instance

            # Mock prowler_provider
            mock_prowler_provider_instance = MagicMock()
            mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
            mock_initialize_prowler_provider.return_value = (
                mock_prowler_provider_instance
            )

            # Capture time before and after scan
            before_scan = datetime.now(timezone.utc)
            perform_prowler_scan(tenant_id, scan_id, provider_id, [])
            after_scan = datetime.now(timezone.utc)

            # Verify muted_at is within the scan time window
            finding_db = Finding.objects.get(uid=finding_uid)
            assert finding_db.muted
            assert finding_db.muted_at is not None
            assert before_scan <= finding_db.muted_at <= after_scan


# TODO Add tests for aggregations

@@ -1,16 +1,24 @@
import uuid
from unittest.mock import MagicMock, patch

import openai
import pytest
from botocore.exceptions import ClientError
from tasks.tasks import (
    _perform_scan_complete_tasks,
    check_integrations_task,
    check_lighthouse_provider_connection_task,
    generate_outputs_task,
    refresh_lighthouse_provider_models_task,
    s3_integration_task,
    security_hub_integration_task,
)

from api.models import Integration
from api.models import (
    Integration,
    LighthouseProviderConfiguration,
    LighthouseProviderModels,
)


# TODO Move this to outputs/reports jobs
@@ -1097,3 +1105,363 @@ class TestCheckIntegrationsTask:

        assert result is False
        mock_upload.assert_called_once_with(self.tenant_id, self.provider_id, scan_id)


@pytest.mark.django_db
class TestCheckLighthouseProviderConnectionTask:
    def setup_method(self):
        self.tenant_id = str(uuid.uuid4())

    @pytest.mark.parametrize(
        "provider_type,credentials,base_url,expected_result",
        [
            (
                LighthouseProviderConfiguration.LLMProviderChoices.OPENAI,
                {"api_key": "sk-test123"},
                None,
                {"connected": True, "error": None},
            ),
            (
                LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE,
                {"api_key": "sk-test123"},
                "https://openrouter.ai/api/v1",
                {"connected": True, "error": None},
            ),
            (
                LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
                {
                    "access_key_id": "AKIA123",
                    "secret_access_key": "secret",
                    "region": "us-east-1",
                },
                None,
                {"connected": True, "error": None},
            ),
        ],
    )
    def test_check_connection_success_all_providers(
        self, tenants_fixture, provider_type, credentials, base_url, expected_result
    ):
        """Test successful connection check for all provider types."""
        # Create provider configuration
        provider_cfg = LighthouseProviderConfiguration(
            tenant_id=tenants_fixture[0].id,
            provider_type=provider_type,
            base_url=base_url,
            is_active=False,
        )
        provider_cfg.credentials_decoded = credentials
        provider_cfg.save()

        # Mock the appropriate API calls
        with (
            patch("tasks.jobs.lighthouse_providers.openai.OpenAI") as mock_openai,
            patch("tasks.jobs.lighthouse_providers.boto3.client") as mock_boto3,
        ):
            mock_client = MagicMock()
            mock_client.models.list.return_value = MagicMock()
            mock_client.list_foundation_models.return_value = {}
            mock_openai.return_value = mock_client
            mock_boto3.return_value = mock_client

            # Execute
            result = check_lighthouse_provider_connection_task(
                provider_config_id=str(provider_cfg.id),
                tenant_id=str(tenants_fixture[0].id),
            )

            # Assert
            assert result == expected_result
            provider_cfg.refresh_from_db()
            assert provider_cfg.is_active is True

    @pytest.mark.parametrize(
        "provider_type,credentials,base_url,exception_to_raise",
        [
            (
                LighthouseProviderConfiguration.LLMProviderChoices.OPENAI,
                {"api_key": "sk-invalid"},
                None,
                openai.AuthenticationError(
                    "Invalid API key", response=MagicMock(), body=None
                ),
            ),
            (
                LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE,
                {"api_key": "sk-invalid"},
                "https://openrouter.ai/api/v1",
                openai.APIConnectionError(request=MagicMock()),
            ),
            (
                LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
                {
                    "access_key_id": "AKIA123",
                    "secret_access_key": "secret",
                    "region": "us-east-1",
                },
                None,
                ClientError(
                    {"Error": {"Code": "AccessDenied", "Message": "Access Denied"}},
                    "list_foundation_models",
                ),
            ),
        ],
    )
    def test_check_connection_api_failure(
        self,
        tenants_fixture,
        provider_type,
        credentials,
        base_url,
        exception_to_raise,
    ):
        """Test connection check when API calls fail."""
        # Create provider configuration
        provider_cfg = LighthouseProviderConfiguration(
            tenant_id=tenants_fixture[0].id,
            provider_type=provider_type,
            base_url=base_url,
            is_active=True,
        )
        provider_cfg.credentials_decoded = credentials
        provider_cfg.save()

        # Mock the API to raise exception
        with (
            patch("tasks.jobs.lighthouse_providers.openai.OpenAI") as mock_openai,
            patch("tasks.jobs.lighthouse_providers.boto3.client") as mock_boto3,
        ):
            mock_client = MagicMock()
            if (
                provider_type
                == LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK
            ):
                mock_client.list_foundation_models.side_effect = exception_to_raise
                mock_boto3.return_value = mock_client
            else:
                mock_client.models.list.side_effect = exception_to_raise
                mock_openai.return_value = mock_client

            # Execute
            result = check_lighthouse_provider_connection_task(
                provider_config_id=str(provider_cfg.id),
                tenant_id=str(tenants_fixture[0].id),
            )

            # Assert
            assert result["connected"] is False
            assert result["error"] is not None
            provider_cfg.refresh_from_db()
            assert provider_cfg.is_active is False

    def test_check_connection_updates_active_status(self, tenants_fixture):
        """Test that connection check toggles is_active from True to False on failure."""
        # Create provider with is_active=True
        provider_cfg = LighthouseProviderConfiguration(
            tenant_id=tenants_fixture[0].id,
            provider_type=LighthouseProviderConfiguration.LLMProviderChoices.OPENAI,
            base_url=None,
            is_active=True,
        )
        provider_cfg.credentials_decoded = {"api_key": "sk-test123"}
        provider_cfg.save()

        # Mock API to fail
        with patch("tasks.jobs.lighthouse_providers.openai.OpenAI") as mock_openai:
            mock_client = MagicMock()
            mock_client.models.list.side_effect = openai.AuthenticationError(
                "Invalid", response=MagicMock(), body=None
            )
            mock_openai.return_value = mock_client

            # Execute
            result = check_lighthouse_provider_connection_task(
                provider_config_id=str(provider_cfg.id),
                tenant_id=str(tenants_fixture[0].id),
            )

            # Assert status changed
            assert result["connected"] is False
            provider_cfg.refresh_from_db()
            assert provider_cfg.is_active is False

    def test_check_connection_provider_does_not_exist(self, tenants_fixture):
        """Test that checking non-existent provider raises DoesNotExist."""
        non_existent_id = str(uuid.uuid4())

        with pytest.raises(LighthouseProviderConfiguration.DoesNotExist):
            check_lighthouse_provider_connection_task(
                provider_config_id=non_existent_id,
                tenant_id=str(tenants_fixture[0].id),
            )


@pytest.mark.django_db
class TestRefreshLighthouseProviderModelsTask:
    def setup_method(self):
        self.tenant_id = str(uuid.uuid4())

    @pytest.mark.parametrize(
        "provider_type,credentials,base_url,mock_models,expected_count",
        [
            (
                LighthouseProviderConfiguration.LLMProviderChoices.OPENAI,
                {"api_key": "sk-test123"},
                None,
                {"gpt-5": "gpt-5", "gpt-4o": "gpt-4o"},
                2,
            ),
            (
                LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE,
                {"api_key": "sk-test123"},
                "https://openrouter.ai/api/v1",
                {"model-1": "Model One", "model-2": "Model Two"},
                2,
            ),
            (
                LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
                {
                    "access_key_id": "AKIA123",
                    "secret_access_key": "secret",
                    "region": "us-east-1",
                },
                None,
                {"openai.gpt-oss-120b-1:0": "gpt-oss-120b"},
                1,
            ),
        ],
    )
    def test_refresh_models_create_new(
        self,
        tenants_fixture,
        provider_type,
        credentials,
        base_url,
        mock_models,
        expected_count,
    ):
        """Test creating new models for all provider types."""
        # Create provider configuration
        provider_cfg = LighthouseProviderConfiguration(
            tenant_id=tenants_fixture[0].id,
            provider_type=provider_type,
            base_url=base_url,
            is_active=True,
        )
        provider_cfg.credentials_decoded = credentials
        provider_cfg.save()

        # Mock the fetch functions
        with (
            patch(
                "tasks.jobs.lighthouse_providers._fetch_openai_models",
                return_value=mock_models,
            ),
            patch(
                "tasks.jobs.lighthouse_providers._fetch_openai_compatible_models",
                return_value=mock_models,
            ),
            patch(
                "tasks.jobs.lighthouse_providers._fetch_bedrock_models",
                return_value=mock_models,
            ),
        ):
            # Execute
            result = refresh_lighthouse_provider_models_task(
                provider_config_id=str(provider_cfg.id),
                tenant_id=str(tenants_fixture[0].id),
            )

            # Assert
            assert result["created"] == expected_count
            assert result["updated"] == 0
            assert result["deleted"] == 0
            assert (
                LighthouseProviderModels.objects.filter(
                    provider_configuration=provider_cfg
                ).count()
                == expected_count
            )

    def test_refresh_models_mixed_operations(self, tenants_fixture):
        """Test mixed create, update, and delete operations."""
        # Create provider configuration
        provider_cfg = LighthouseProviderConfiguration(
            tenant_id=tenants_fixture[0].id,
            provider_type=LighthouseProviderConfiguration.LLMProviderChoices.OPENAI,
            base_url=None,
            is_active=True,
        )
        provider_cfg.credentials_decoded = {"api_key": "sk-test123"}
        provider_cfg.save()

        # Create 2 existing models (A, B)
        LighthouseProviderModels.objects.create(
            tenant_id=tenants_fixture[0].id,
            provider_configuration=provider_cfg,
            model_id="model-a",
            model_name="Model A",
        )
        LighthouseProviderModels.objects.create(
            tenant_id=tenants_fixture[0].id,
            provider_configuration=provider_cfg,
            model_id="model-b",
            model_name="Model B",
        )

        # Mock API to return models B (existing), C (new) - A will be deleted
        mock_models = {"model-b": "Model B", "model-c": "Model C"}
        with patch(
            "tasks.jobs.lighthouse_providers._fetch_openai_models",
            return_value=mock_models,
        ):
            # Execute
            result = refresh_lighthouse_provider_models_task(
                provider_config_id=str(provider_cfg.id),
                tenant_id=str(tenants_fixture[0].id),
            )

            # Assert
            assert result["created"] == 1  # model-c created
            assert result["updated"] == 1  # model-b updated
            assert result["deleted"] == 1  # model-a deleted

            # Verify only B and C exist
            remaining_models = LighthouseProviderModels.objects.filter(
                provider_configuration=provider_cfg
            )
            assert remaining_models.count() == 2
            assert set(remaining_models.values_list("model_id", flat=True)) == {
                "model-b",
                "model-c",
            }

    def test_refresh_models_api_exception(self, tenants_fixture):
        """Test refresh when API raises an exception."""
        # Create provider configuration
        provider_cfg = LighthouseProviderConfiguration(
            tenant_id=tenants_fixture[0].id,
            provider_type=LighthouseProviderConfiguration.LLMProviderChoices.OPENAI,
            base_url=None,
            is_active=True,
        )
        provider_cfg.credentials_decoded = {"api_key": "sk-test123"}
        provider_cfg.save()

        # Mock fetch to raise exception
        with patch(
            "tasks.jobs.lighthouse_providers._fetch_openai_models",
            side_effect=openai.APIError("API Error", request=MagicMock(), body=None),
        ):
            # Execute
            result = refresh_lighthouse_provider_models_task(
                provider_config_id=str(provider_cfg.id),
                tenant_id=str(tenants_fixture[0].id),
            )

            # Assert
            assert result["created"] == 0
            assert result["updated"] == 0
            assert result["deleted"] == 0
            assert "error" in result
            assert result["error"] is not None

@@ -35,7 +35,8 @@ dashboard = dash.Dash(

# Logo
prowler_logo = html.Img(
    src="https://prowler.com/wp-content/uploads/logo-dashboard.png", alt="Prowler Logo"
    src="https://cdn.prod.website-files.com/68c4ec3f9fb7b154fbcb6e36/68ffb46d40ed7faa37a592a5_prowler-logo.png",
    alt="Prowler Logo",
)

menu_icons = {

@@ -0,0 +1,43 @@
import warnings

from dashboard.common_methods import get_section_containers_3_levels

warnings.filterwarnings("ignore")


def get_table(data):
    # Prefix each requirement description with its ID for display.
    data["REQUIREMENTS_DESCRIPTION"] = (
        data["REQUIREMENTS_ID"] + " - " + data["REQUIREMENTS_DESCRIPTION"]
    )

    # Truncate long descriptions and section names so table cells stay readable.
    data["REQUIREMENTS_DESCRIPTION"] = data["REQUIREMENTS_DESCRIPTION"].apply(
        lambda x: x[:150] + "..." if len(str(x)) > 150 else x
    )

    data["REQUIREMENTS_ATTRIBUTES_SECTION"] = data[
        "REQUIREMENTS_ATTRIBUTES_SECTION"
    ].apply(lambda x: x[:80] + "..." if len(str(x)) > 80 else x)

    data["REQUIREMENTS_ATTRIBUTES_SUBSECTION"] = data[
        "REQUIREMENTS_ATTRIBUTES_SUBSECTION"
    ].apply(lambda x: x[:150] + "..." if len(str(x)) > 150 else x)

    # Keep only the columns the section containers need.
    aux = data[
        [
            "REQUIREMENTS_DESCRIPTION",
            "REQUIREMENTS_ATTRIBUTES_SECTION",
            "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
            "CHECKID",
            "STATUS",
            "REGION",
            "ACCOUNTID",
            "RESOURCEID",
        ]
    ]

    return get_section_containers_3_levels(
        aux,
        "REQUIREMENTS_ATTRIBUTES_SECTION",
        "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
        "REQUIREMENTS_DESCRIPTION",
    )
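For context, a minimal sketch of how this new table builder would be driven. The row below is invented; real input comes from Prowler's compliance CSV output, which supplies these columns:

```python
import pandas as pd

# Hypothetical single-row input with the columns get_table() expects.
df = pd.DataFrame(
    [
        {
            "REQUIREMENTS_ID": "1.1",
            "REQUIREMENTS_DESCRIPTION": "Ensure MFA is enabled for all users",
            "REQUIREMENTS_ATTRIBUTES_SECTION": "Identity and Access Management",
            "REQUIREMENTS_ATTRIBUTES_SUBSECTION": "Authentication",
            "CHECKID": "iam_user_mfa_enabled",
            "STATUS": "FAIL",
            "REGION": "us-east-1",
            "ACCOUNTID": "123456789012",
            "RESOURCEID": "arn:aws:iam::123456789012:user/example",
        }
    ]
)

# Returns Dash containers grouped section -> subsection -> requirement.
containers = get_table(df)
```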
@@ -0,0 +1,25 @@
import warnings

from dashboard.common_methods import get_section_containers_format3

warnings.filterwarnings("ignore")


def get_table(data):

    aux = data[
        [
            "REQUIREMENTS_ID",
            "REQUIREMENTS_ATTRIBUTES_SECTION",
            "REQUIREMENTS_DESCRIPTION",
            "CHECKID",
            "STATUS",
            "REGION",
            "ACCOUNTID",
            "RESOURCEID",
        ]
    ].copy()

    return get_section_containers_format3(
        aux, "REQUIREMENTS_ATTRIBUTES_SECTION", "REQUIREMENTS_ID"
    )
@@ -0,0 +1,24 @@
import warnings

from dashboard.common_methods import get_section_containers_format3

warnings.filterwarnings("ignore")


def get_table(data):
    aux = data[
        [
            "REQUIREMENTS_ID",
            "REQUIREMENTS_ATTRIBUTES_SECTION",
            "REQUIREMENTS_DESCRIPTION",
            "CHECKID",
            "STATUS",
            "REGION",
            "ACCOUNTID",
            "RESOURCEID",
        ]
    ].copy()

    return get_section_containers_format3(
        aux, "REQUIREMENTS_ATTRIBUTES_SECTION", "REQUIREMENTS_ID"
    )
@@ -125,11 +125,11 @@ Each check **must** populate the `report.status` and `report.status_extended` fi

The severity of each check is defined in the metadata file using the `Severity` field. Severity values are always lowercase and must be one of the predefined categories below.

- `critical` – Issue that must be addressed immediately.
- `high` – Issue that should be addressed as soon as possible.
- `medium` – Issue that should be addressed within a reasonable timeframe.
- `low` – Issue that can be addressed in the future.
- `informational` – Not an issue but provides valuable information.
- `critical` – Highest potential impact with broad exposure that could affect core security boundaries or business operations.
- `high` – Substantial potential impact with significant exposure that could affect important security controls or resources.
- `medium` – Moderate potential impact with limited exposure that weakens defense layers but has contained scope.
- `low` – Minimal potential impact with negligible exposure that represents minor gaps in security posture.
- `informational` – Provides valuable information but does not affect the security posture.

If the check involves multiple scenarios that may alter its severity, adjustments can be made dynamically within the check's logic using the `report.check_metadata.Severity` attribute:

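The diff cuts off before the example that follows; as an illustrative sketch (the classes below are simplified stand-ins, not Prowler's real check or report types), dynamic severity adjustment looks like this:

```python
# Minimal, self-contained sketch of the pattern described above.
class CheckMetadata:
    Severity = "high"  # default severity from the check's metadata file


class Report:
    def __init__(self):
        self.status = "PASS"
        self.status_extended = ""
        self.check_metadata = CheckMetadata()


def execute(port: int, open_to_internet: bool) -> Report:
    report = Report()
    if open_to_internet:
        report.status = "FAIL"
        report.status_extended = f"Port {port} is exposed to the Internet."
        # Escalate the severity for especially sensitive ports.
        report.check_metadata.Severity = "critical" if port in (22, 3389) else "medium"
    return report


print(execute(22, True).check_metadata.Severity)  # critical
```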
@@ -98,7 +98,7 @@
          ]
        },
        "user-guide/tutorials/prowler-app-rbac",
        "user-guide/providers/prowler-app-api-keys",
        "user-guide/tutorials/prowler-app-api-keys",
        "user-guide/tutorials/prowler-app-mute-findings",
        {
          "group": "Integrations",

@@ -4,33 +4,29 @@ title: "Configuration"

Configure your MCP client to connect to Prowler MCP Server.

## Step 1: Get Your API Key (Optional)
## Step 1: Get Your API Key

<Note>
**Authentication is optional**: Prowler Hub and Prowler Documentation features work without authentication. An API key is only required for Prowler Cloud and Prowler App (Self-Managed) features.
</Note>

To use Prowler Cloud or Prowler App (Self-Managed) features. To get the API key, please refer to the [API Keys](/user-guide/providers/prowler-app-api-keys) guide.
An API key is required for Prowler Cloud and Prowler App (Self-Managed) features. To get one, refer to the [API Keys](/user-guide/tutorials/prowler-app-api-keys) guide.

<Warning>
Keep the API key secure. Never share it publicly or commit it to version control.
</Warning>

## Step 2: Configure Your MCP Client
## Step 2: Configure Your MCP Host/Client

Choose the configuration based on your deployment:

- **STDIO Mode**: Local installation only (runs as subprocess).
- **HTTP Mode**: Prowler Cloud MCP Server or self-hosted Prowler MCP Server.
- **STDIO Mode**: Local installation only (runs as a subprocess of your MCP client).

### HTTP Mode (Prowler Cloud MCP Server or self-hosted Prowler MCP Server)
### HTTP Mode

<Tabs>
<Tab title="Native HTTP Support (Cursor, VSCode)">
**Clients that support HTTP with custom headers natively**

For example: Cursor, VSCode, LobeChat, etc.

<Tab title="Generic Native HTTP Support">
**Configuration:**
```json
{
@@ -38,20 +34,15 @@ Choose the configuration based on your deployment:
    "prowler": {
      "url": "https://mcp.prowler.com/mcp", // or your self-hosted Prowler MCP Server URL
      "headers": {
        "Authorization": "Bearer pk_your_api_key_here"
        "Authorization": "Bearer <your-api-key-here>"
      }
    }
  }
}
```

</Tab>

<Tab title="Using mcp-remote (Claude Desktop)">
**For clients without native HTTP support (like Claude Desktop)**

For example: Claude Desktop.

<Tab title="Generic without Native HTTP Support">
**Configuration:**
```json
{
@@ -65,26 +56,79 @@ Choose the configuration based on your deployment:
        "Authorization: Bearer ${PROWLER_APP_API_KEY}"
      ],
      "env": {
        "PROWLER_APP_API_KEY": "pk_your_api_key_here"
        "PROWLER_APP_API_KEY": "<your-api-key-here>"
      }
    }
  }
}
```

<Info>
The `mcp-remote` tool acts as a bridge for clients that don't support HTTP natively. Learn more at [mcp-remote on npm](https://www.npmjs.com/package/mcp-remote).
</Info>

</Tab>

<Tab title="Claude Desktop">
1. Open Claude Desktop settings
2. Go to the "Developer" tab
3. Click the "Edit Config" button
4. Edit the `claude_desktop_config.json` file with your favorite editor
5. Add the following configuration:
```json
{
  "mcpServers": {
    "prowler": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.prowler.com/mcp",
        "--header",
        "Authorization: Bearer ${PROWLER_APP_API_KEY}"
      ],
      "env": {
        "PROWLER_APP_API_KEY": "<your-api-key-here>"
      }
    }
  }
}
```
</Tab>

<Tab title="Claude Code">
Run the following command:
```bash
export PROWLER_APP_API_KEY="<your-api-key-here>"
claude mcp add --transport http prowler https://mcp.prowler.com/mcp --header "Authorization: Bearer $PROWLER_APP_API_KEY" --scope user
```
</Tab>

<Tab title="Cursor">
1. Open Cursor settings
2. Go to "Tools & MCP"
3. Click the "New MCP Server" button
4. Add the following to the JSON configuration:
```json
{
  "mcpServers": {
    "prowler": {
      "url": "https://mcp.prowler.com/mcp",
      "headers": {
        "Authorization": "Bearer <your-api-key-here>"
      }
    }
  }
}
```
</Tab>

</Tabs>

### STDIO Mode (Local Installation Only)
### STDIO Mode

STDIO mode is only available when running the MCP server locally.

<Tabs>
<Tab title="Using uvx">
<Tab title="Generic uvx installation">
**Run from source or local installation**

```json
@@ -94,7 +138,7 @@ STDIO mode is only available when running the MCP server locally.
      "command": "uvx",
      "args": ["/absolute/path/to/prowler/mcp_server/"],
      "env": {
        "PROWLER_APP_API_KEY": "pk_your_api_key_here",
        "PROWLER_APP_API_KEY": "<your-api-key-here>",
        "PROWLER_API_BASE_URL": "https://api.prowler.com"
      }
    }
@@ -108,7 +152,7 @@ STDIO mode is only available when running the MCP server locally.

</Tab>

<Tab title="Using Docker">
<Tab title="Generic Docker installation">
**Run with Docker image**

```json
@@ -121,7 +165,7 @@ STDIO mode is only available when running the MCP server locally.
        "--rm",
        "-i",
        "--env",
        "PROWLER_APP_API_KEY=pk_your_api_key_here",
        "PROWLER_APP_API_KEY=<your-api-key-here>",
        "--env",
        "PROWLER_API_BASE_URL=https://api.prowler.com",
        "prowlercloud/prowler-mcp"
@@ -154,7 +198,7 @@ Prowler MCP Server supports two authentication methods to connect to Prowler Clo
Use your Prowler API key directly in the Bearer token:

```
Authorization: Bearer pk_your_api_key_here
Authorization: Bearer <your-api-key-here>
```

This is the recommended method for most users.

@@ -1,5 +1,5 @@
---
title: 'Installation'
title: "Installation"
---

### Installation
@@ -9,41 +9,40 @@ Prowler App supports multiple installation methods based on your environment.
Refer to the [Prowler App Tutorial](/user-guide/tutorials/prowler-app) for detailed usage instructions.

<Warning>
Prowler configuration is based on `.env` files. Every version of Prowler can have differences in that file, so please use the file that corresponds to that version, repository branch, or tag.
</Warning>

<Tabs>
<Tab title="Docker Compose">
_Requirements_:

* `Docker Compose` installed: https://docs.docker.com/compose/install/.
- `Docker Compose` installed: https://docs.docker.com/compose/install/.

_Commands_:

``` bash
curl -LO https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/docker-compose.yml
curl -LO https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/.env
```bash
VERSION=$(curl -s https://api.github.com/repos/prowler-cloud/prowler/releases/latest | jq -r .tag_name)
curl -sLO "https://raw.githubusercontent.com/prowler-cloud/prowler/refs/tags/${VERSION}/docker-compose.yml"
curl -sLO "https://raw.githubusercontent.com/prowler-cloud/prowler/refs/tags/${VERSION}/.env"
docker compose up -d
```

> Containers are built for `linux/amd64`. If your workstation's architecture is different, please set `DOCKER_DEFAULT_PLATFORM=linux/amd64` in your environment or use the `--platform linux/amd64` flag in the docker command.

</Tab>
<Tab title="GitHub">
_Requirements_:

* `git` installed.
* `poetry` installed: [poetry installation](https://python-poetry.org/docs/#installation).
* `npm` installed: [npm installation](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
* `Docker Compose` installed: https://docs.docker.com/compose/install/.
- `git` installed.
- `poetry` installed: [poetry installation](https://python-poetry.org/docs/#installation).
- `npm` installed: [npm installation](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
- `Docker Compose` installed: https://docs.docker.com/compose/install/.

<Warning>
Make sure to have `api/.env` and `ui/.env.local` files with the required environment variables. You can find the required environment variables in the [`api/.env.template`](https://github.com/prowler-cloud/prowler/blob/master/api/.env.example) and [`ui/.env.template`](https://github.com/prowler-cloud/prowler/blob/master/ui/.env.template) files.
</Warning>

_Commands to run the API_:

```bash
git clone https://github.com/prowler-cloud/prowler && \
cd prowler/api && \
poetry install && \
@@ -57,17 +56,15 @@ Prowler configuration is based in `.env` files. Every version of Prowler can hav
```

<Warning>
Starting from Poetry v2.0.0, `poetry shell` has been deprecated in favor of `poetry env activate`.

If your poetry version is below 2.0.0 you must keep using `poetry shell` to activate your environment. In case you have any doubts, consult the Poetry environment activation guide: https://python-poetry.org/docs/managing-environments/#activating-the-environment
</Warning>

> Now, you can access the API documentation at http://localhost:8080/api/v1/docs.

_Commands to run the API Worker_:

```bash
git clone https://github.com/prowler-cloud/prowler && \
cd prowler/api && \
poetry install && \
@@ -80,7 +77,7 @@ Prowler configuration is based in `.env` files. Every version of Prowler can hav

_Commands to run the API Scheduler_:

```bash
git clone https://github.com/prowler-cloud/prowler && \
cd prowler/api && \
poetry install && \
@@ -93,7 +90,7 @@ Prowler configuration is based in `.env` files. Every version of Prowler can hav

_Commands to run the UI_:

```bash
git clone https://github.com/prowler-cloud/prowler && \
cd prowler/ui && \
npm install && \
@@ -104,10 +101,11 @@ Prowler configuration is based in `.env` files. Every version of Prowler can hav
> Enjoy Prowler App at http://localhost:3000 by signing up with your email and password.

<Warning>
Google and GitHub authentication is only available in [Prowler Cloud](https://prowler.com).
</Warning>
</Tab>
</Tabs>

### Update Prowler App

Upgrade Prowler App installation using one of two options:
@@ -129,13 +127,12 @@ docker compose pull --policy always

The `--policy always` flag ensures that Docker pulls the latest images even if they already exist locally.

<Note>
**What Gets Preserved During Upgrade**

Everything is preserved; nothing will be deleted after the update.
</Note>

### Troubleshooting

If containers don't start, check logs for errors:
@@ -155,7 +152,6 @@ docker compose pull
docker compose up -d
```

### Container versions

The available versions of Prowler CLI are the following:
@@ -171,6 +167,5 @@ The available versions of Prowler CLI are the following:
The container images are available here:

- Prowler App:
  - [DockerHub - Prowler UI](https://hub.docker.com/r/prowlercloud/prowler-ui/tags)
  - [DockerHub - Prowler API](https://hub.docker.com/r/prowlercloud/prowler-api/tags)

@@ -61,6 +61,59 @@ The Prowler MCP Server enables powerful workflows through AI assistants:
- "What authentication methods does Prowler support for Azure?"
- "How can I contribute with a new security check to Prowler?"

### Example: Creating a custom dashboard with Prowler extracted data

The following example shows how to create a dashboard using the Prowler MCP Server and Claude Desktop.

**Used Prompt:**
```
Generate me a security dashboard for the Prowler open source project using live data from Prowler MCP tools.

REQUIREMENTS:
1. Fetch real-time data from Prowler Findings using MCP tools
2. Create a single self-contained HTML file and display it
3. Dashboard must be production-ready with modern design

DATA TO FETCH:
Use these MCP tools in this order:
1. Prowler app list providers - To get all available configured provider in the account
2. Prowler app get latest findings - To get findings information, if there are so many you can use the filter_fields to get less information, or pagination to get in different batches
3. For most critical findings you can get more context and remediation with Prowler Hub to get remediations for example

DESIGN REQUIREMENTS:
- Dark theme (gradient background: #0a0e27 to #131830)
- Card-based layout with glassmorphism effects
- Color scheme:
  * Primary green
  * Secondary purple
- Modern, professional look
- Animated "LIVE DATA" indicator (pulsing green badge)
- Hover effects on all cards (lift, glow, border color change)
- Responsive grid layout
- Mobile-responsive breakpoints at 768px
- Single HTML file with all CSS and JavaScript embedded
- No external dependencies

SPECIFIC DETAILS TO INCLUDE:
- Show actual counts from the data (don't hardcode numbers)
- Add timestamp showing when dashboard was generated
- Link to GitHub repository: https://github.com/prowler-cloud/prowler

OUTPUT:
Generate the complete HTML file and display it
```

**Video:**
<iframe
  className="w-full aspect-video rounded-xl"
  src="https://www.youtube.com/embed/li29KNmYd4g?si=P3m6eB2z0Cqqse_H"
  title="Prowler MCP Server - Creating a dashboard"
  frameBorder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
  allowFullScreen
></iframe>


## Deployment Options

Prowler MCP Server can be used in three ways:

(Diff viewer note: four binary image files added, not shown: 235 KiB, 236 KiB, 328 KiB, and 298 KiB.)
@@ -31,9 +31,9 @@ The supported providers right now are:
| [Kubernetes](/user-guide/providers/kubernetes/in-cluster) | Official | UI, API, CLI |
| [M365](/user-guide/providers/microsoft365/getting-started-m365) | Official | UI, API, CLI |
| [Github](/user-guide/providers/github/getting-started-github) | Official | UI, API, CLI |
| [Oracle Cloud](/user-guide/providers/oci/getting-started-oci) | Official | CLI |
| [Infra as Code](/user-guide/providers/iac/getting-started-iac) | Official | CLI |
| [MongoDB Atlas](/user-guide/providers/mongodbatlas/getting-started-mongodbatlas) | Official | CLI |
| [Oracle Cloud](/user-guide/providers/oci/getting-started-oci) | Official | UI, API, CLI |
| [Infra as Code](/user-guide/providers/iac/getting-started-iac) | Official | UI, API, CLI |
| [MongoDB Atlas](/user-guide/providers/mongodbatlas/getting-started-mongodbatlas) | Official | CLI, API |
| [LLM](/user-guide/providers/llm/getting-started-llm) | Official | CLI |
| **NHN** | Unofficial | CLI |

@@ -48,4 +48,4 @@ For more information about the checks and compliance of each provider visit [Pro
<Card title="Development Guide" icon="pen-to-square" href="/developer-guide/introduction">
  Interested in contributing to Prowler?
</Card>
</Columns>

@@ -0,0 +1,12 @@
export const VersionBadge = ({ version }) => {
  return (
    <code className="version-badge-container">
      <p className="version-badge">
        <span className="version-badge-label">Added in:</span>
        <code className="version-badge-version">{version}</code>
      </p>
    </code>
  );
};
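For context, a hypothetical usage of the new snippet in an MDX docs page (the import path and version string are illustrative, not taken from the diff):

```jsx
import { VersionBadge } from "/snippets/version-badge.jsx";

<VersionBadge version="5.14.0" />
```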
@@ -0,0 +1,51 @@
/* Version Badge Styling */
.version-badge-container {
  display: inline-block;
  margin: 0 0 1rem 0;
  padding: 0;
}

.version-badge {
  display: inline-flex;
  align-items: center;
  margin: 0;
  padding: 0.375rem 0.75rem;
  background: linear-gradient(135deg, #1a1a1a 0%, #000000 100%);
  color: #ffffff;
  border-radius: 1.25rem;
  font-weight: 400;
  font-size: 0.875rem;
  line-height: 1.25rem;
  border: 1px solid rgba(0, 0, 0, 0.15);
  box-shadow: none;
}

.version-badge-label {
  font-weight: 400;
  opacity: 1;
}

.version-badge-version {
  background: rgba(255, 255, 255, 0.12);
  padding: 0.125rem 0.5rem;
  border-radius: 0.875rem;
  font-family: ui-monospace, SFMono-Regular, 'SF Mono', Menlo, Monaco, 'Cascadia Code', 'Roboto Mono', Consolas, 'Courier New', monospace;
  font-weight: 600;
  font-size: 0.875rem;
  color: #ffffff;
  border: none;
}

.dark .version-badge {
  background: #55B685;
  color: #000000;
  border: 2px solid rgba(85, 182, 133, 0.3);
  box-shadow: none;
}

.dark .version-badge-version {
  background: rgba(0, 0, 0, 0.1);
  color: #000000;
  border: none;
}

@@ -24,7 +24,7 @@ Standard results will be shown and additionally the framework information as the

**If Prowler can't find a resource related to a check from a compliance requirement, that requirement won't appear in the output**
</Note>

## List Available Compliance Frameworks

To see which compliance frameworks are covered by Prowler, use the `--list-compliance` option:

@@ -34,7 +34,7 @@ prowler <provider> --list-compliance

Or you can visit [Prowler Hub](https://hub.prowler.com/compliance).

## List Requirements of Compliance Frameworks

To list requirements for a compliance framework, use the `--list-compliance-requirements` option:

```sh
@@ -94,7 +94,7 @@ The following list includes all the Azure checks with configurable variables tha

### Configurable Checks

## Kubernetes

### Configurable Checks
The following list includes all the Kubernetes checks with configurable variables that can be changed in the configuration yaml file:
@@ -2,7 +2,7 @@
title: 'Integrations'
---

## Integration with Slack

Prowler can be integrated with [Slack](https://slack.com/) to send a summary of the execution, once a Slack App has been configured in your channel, with the following command:
@@ -14,7 +14,7 @@ prowler <provider> -V/-v/--version

Prowler provides various execution settings.

### Verbose Execution

To enable verbose mode in Prowler, similar to Version 2, use:
@@ -54,7 +54,7 @@ To run Prowler without color formatting:
prowler <provider> --no-color
```

### Checks in Prowler

Prowler provides various security checks per cloud provider. Use the following options to list, execute, or exclude specific checks:
@@ -96,7 +96,7 @@ prowler <provider> -e/--excluded-checks ec2 rds
prowler <provider> -C/--checks-file <checks_list>.json
```

## Custom Checks in Prowler

Prowler supports custom security checks, allowing users to define their own logic.
@@ -27,7 +27,7 @@ If any of the criteria do not match, the check is not muted.
Remember that the mutelist can use regular expressions.

</Note>

## Mutelist Specification

<Note>
- For the Azure provider, the Account ID is the Subscription Name and the Region is the Location.
@@ -40,10 +40,9 @@ The Mutelist file uses the [YAML](https://en.wikipedia.org/wiki/YAML) format wit
```yaml
### Account, Check and/or Region can be * to apply for all the cases.
### Resources and tags are lists that can have either Regex or Keywords.
### Multiple tags in the list are "ANDed" together (ALL must match).
### Use regex alternation (|) within a single tag for "OR" logic (e.g., "env=dev|env=stg").
### For each check you can use Exceptions to unmute specific Accounts, Regions, Resources and/or Tags.
### All conditions (Account, Check, Region, Resource, Tags) are ANDed together.
### Tags is an optional list that matches on tuples of 'key=value' and are "ANDed" together.
### Use an alternation Regex to match one of multiple tags with "ORed" logic.
### For each check you can except Accounts, Regions, Resources and/or Tags.
########################### MUTELIST EXAMPLE ###########################
Mutelist:
  Accounts:
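The AND/OR tag semantics above can be made concrete with a small, self-contained sketch; this only illustrates the matching rules and is not Prowler's actual implementation:

```python
import re

def tags_match(mutelist_tags: list[str], resource_tags: list[str]) -> bool:
    # Every mutelist tag pattern must match at least one resource tag (AND),
    # while alternation (|) inside a single pattern provides OR logic.
    return all(
        any(re.search(pattern, tag) for tag in resource_tags)
        for pattern in mutelist_tags
    )

# One pattern with alternation: matches dev OR stg resources.
print(tags_match(["env=dev|env=stg"], ["env=stg", "team=sec"]))  # True
# Two separate patterns: BOTH tags must be present on the resource.
print(tags_match(["env=dev", "team=ops"], ["env=dev", "team=sec"]))  # False
```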
@@ -10,7 +10,7 @@ This can help for really large accounts, but please be aware of AWS API rate lim
2. **API Rate Limits**: Most of the rate limits in AWS are applied at the API level. Each API call to an AWS service counts towards the rate limit for that service.
3. **Throttling Responses**: When you exceed the rate limit for a service, AWS responds with a throttling error. In AWS SDKs, these are typically represented as `ThrottlingException` or `RateLimitExceeded` errors.

For information on Prowler's retrier configuration please refer to this [page](https://docs.prowler.com/user-guide/providers/aws/boto3-configuration/).
For information on Prowler's retrier configuration please refer to this [page](https://docs.prowler.cloud/en/latest/tutorials/aws/boto3-configuration/).

<Note>
You might need to increase the `--aws-retries-max-attempts` parameter from the default value of 3. The retrier follows an exponential backoff strategy.
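For reference, the equivalent retrier in plain boto3 looks like the sketch below; Prowler wires the same setting through `--aws-retries-max-attempts` (the service and region here are arbitrary examples):

```python
import boto3
from botocore.config import Config

# "standard" mode retries throttling errors with exponential backoff.
retry_config = Config(retries={"max_attempts": 5, "mode": "standard"})
ec2 = boto3.client("ec2", region_name="us-east-1", config=retry_config)
print(ec2.describe_regions()["Regions"][0]["RegionName"])
```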
@@ -24,6 +24,6 @@ By default, it extracts resources from all the regions, you could use `-f`/`--fi

![Prowler Quick Inventory](../../img/quick-inventory.png)

## Objections

The inventorying process is carried out with `resourcegroupstaggingapi` calls, which means that only resources that have or have had tags will appear (except for IAM and S3 resources, which are retrieved with direct Boto3 API calls).
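For context, the underlying AWS API can be exercised directly with boto3, which makes the limitation easy to see: untagged resources simply never come back from this endpoint (the region below is an arbitrary example):

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi", region_name="us-east-1")
paginator = tagging.get_paginator("get_resources")
# Only resources that have (or have had) tags are returned here.
for page in paginator.paginate():
    for resource in page["ResourceTagMappingList"]:
        print(resource["ResourceARN"])
```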
@@ -22,7 +22,7 @@ prowler <provider> --output-formats json-asff

All compliance-related reports are automatically generated when Prowler is executed. These outputs are stored in the `/output/compliance` directory.

## Custom Output Flags

By default, Prowler creates a file inside the `output` directory named: `prowler-output-ACCOUNT_NUM-OUTPUT_DATE.format`.
@@ -53,7 +53,7 @@ Both flags can be used simultaneously to provide a custom directory and filename

By default, the timestamp format of the output files is ISO 8601. This can be changed with the `--unix-timestamp` flag, which generates the timestamp fields in pure Unix timestamp format.

## Supported Output Formats

Prowler natively supports the following reporting output formats:
@@ -14,7 +14,7 @@ prowler <provider> --scan-unused-services

## Services Ignored

### AWS

#### ACM (AWS Certificate Manager)
@@ -22,21 +22,21 @@ Certificates stored in ACM without active usage in AWS resources are excluded. B

- `acm_certificates_expiration_check`

#### Athena

Upon AWS account creation, Athena provisions a default primary workgroup for the user. Prowler verifies if this workgroup is enabled and used by checking for queries within the last 45 days. If Athena is unused, findings related to its checks will not appear.

- `athena_workgroup_encryption`
- `athena_workgroup_enforce_configuration`

#### AWS CloudTrail

AWS CloudTrail should have at least one trail with a data event to record all S3 object-level API operations. Before flagging this issue, Prowler verifies if S3 buckets exist in the account.

- `cloudtrail_s3_dataevents_read_enabled`
- `cloudtrail_s3_dataevents_write_enabled`

#### AWS Elastic Compute Cloud (EC2)

If Amazon Elastic Block Store (EBS) default encryption is not enabled, sensitive data at rest will remain unprotected in EC2. However, Prowler will only generate a finding if EBS volumes exist where default encryption could be enforced.
@@ -56,7 +56,7 @@ Prowler scans only attached security groups to report vulnerabilities in activel

- `ec2_networkacl_allow_ingress_X_port`

#### AWS Glue

AWS Glue best practices recommend encrypting metadata and connection passwords in Data Catalogs.
@@ -71,7 +71,7 @@ Amazon Inspector is a vulnerability discovery service that automates continuous

- `inspector2_is_enabled`

#### Amazon Macie

Amazon Macie leverages machine learning to automatically discover, classify, and protect sensitive data in S3 buckets. Prowler only generates findings if Macie is disabled and there are S3 buckets in the AWS account.
@@ -83,7 +83,7 @@ A network firewall is essential for monitoring and controlling traffic within a

- `networkfirewall_in_all_vpc`

#### Amazon S3

To prevent unintended data exposure:
@@ -91,7 +91,7 @@ Public Access Block should be enabled at the account level. Prowler only checks

- `s3_account_level_public_access_blocks`

#### Virtual Private Cloud (VPC)

VPC settings directly impact network security and availability.
@@ -2,6 +2,10 @@
title: "Prowler ThreatScore Documentation"
---

<Info>This feature is only available in Prowler Cloud/App.</Info>

## Introduction

The **Prowler ThreatScore** is a comprehensive compliance scoring system that provides a unified metric for assessing your organization's security posture across compliance frameworks. It aggregates findings from individual security checks into a single, normalized score ranging from 0 to 100.
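Conceptually, a normalized 0–100 score behaves like the toy calculation below; note that this is only an illustration of normalization, not the actual ThreatScore formula, which also applies product-defined weighting:

```python
def normalized_score(passed: int, failed: int) -> float:
    # Fraction of passing checks, scaled to 0-100 (illustration only).
    total = passed + failed
    return 100.0 * passed / total if total else 100.0

print(normalized_score(passed=720, failed=80))  # 90.0
```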
@@ -1,6 +1,7 @@
---
title: "Getting Started with the IaC Provider"
---
import { VersionBadge } from "/snippets/version-badge.mdx"

Prowler's Infrastructure as Code (IaC) provider enables scanning of local or remote infrastructure code for security and compliance issues using [Trivy](https://trivy.dev/). This provider supports a wide range of IaC frameworks, allowing assessment of code before deployment.
@@ -22,8 +23,46 @@ The IaC provider leverages [Trivy](https://trivy.dev/latest/docs/scanner/vulnera
- Mutelist logic ([filtering](https://trivy.dev/latest/docs/configuration/filtering/)) is handled by Trivy, not Prowler.
- Results are output in the same formats as other Prowler providers (CSV, JSON, HTML, etc.).

## Prowler App

<VersionBadge version="5.14.0" />

### Step 1: Access Prowler Cloud/App

1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app)
2. Go to "Configuration" > "Cloud Providers"

![Cloud Providers Page](/images/prowler-cloud/cloud-providers-page.png)

3. Click "Add Cloud Provider"

![Add a Cloud Provider](/images/prowler-cloud/add-cloud-provider.png)

4. Select "Infrastructure as Code"

![Select Infrastructure as Code](/images/prowler-cloud/iac/select-iac.png)

5. Add the Repository URL and an optional alias, then click "Next"

![Add Repository URL](/images/prowler-cloud/iac/add-repository-url.png)

### Step 2: Enter Authentication Details

6. Optionally provide the [authentication](/user-guide/providers/iac/authentication) details for private repositories, then click "Next"

![Add Authentication Details](/images/prowler-cloud/iac/add-authentication-details.png)

### Step 3: Verify Connection & Start Scan

7. Review the provider configuration and click "Launch scan" to initiate the scan

![Launch Scan](/images/prowler-cloud/iac/launch-scan.png)

## Prowler CLI

<VersionBadge version="5.8.0" />

### Usage

Use the `iac` argument to run Prowler with the IaC provider. Specify the directory or repository to scan, frameworks to include, and paths to exclude.
@@ -32,7 +32,7 @@ The AWS Organizations Bulk Provisioning tool simplifies multi-account onboarding
* ProwlerRole (or custom role) deployed across all target accounts
* Prowler API key (from Prowler Cloud or self-hosted Prowler App)
    * For self-hosted Prowler App, remember to [point to your API base URL](./bulk-provider-provisioning#custom-api-endpoints)
    * Learn how to create API keys: [Prowler App API Keys](../providers/prowler-app-api-keys)
    * Learn how to create API keys: [Prowler App API Keys](../tutorials/prowler-app-api-keys)

### Deploying ProwlerRole Across AWS Organizations
@@ -101,7 +101,7 @@ To create an API key:
4. Provide a descriptive name and optionally set an expiration date
5. Copy the generated API key (it will only be shown once)

For detailed instructions, see: [Prowler App API Keys](../providers/prowler-app-api-keys)
For detailed instructions, see: [Prowler App API Keys](../tutorials/prowler-app-api-keys)

## Basic Usage
@@ -28,7 +28,7 @@ The Bulk Provider Provisioning tool automates the creation of cloud providers in
* Python 3.7 or higher
* Prowler API key (from Prowler Cloud or self-hosted Prowler App)
    * For self-hosted Prowler App, remember to [point to your API base URL](#custom-api-endpoints)
    * Learn how to create API keys: [Prowler App API Keys](../providers/prowler-app-api-keys)
    * Learn how to create API keys: [Prowler App API Keys](../tutorials/prowler-app-api-keys)
* Authentication credentials for target cloud providers

### Installation
@@ -57,7 +57,7 @@ To create an API key:
4. Provide a descriptive name and optionally set an expiration date
5. Copy the generated API key (it will only be shown once)

For detailed instructions, see: [Prowler App API Keys](../providers/prowler-app-api-keys)
For detailed instructions, see: [Prowler App API Keys](../tutorials/prowler-app-api-keys)

## Configuration File Structure
@@ -2,6 +2,10 @@
title: 'API Keys'
---

import { VersionBadge } from "/snippets/version-badge.mdx"

<VersionBadge version="5.13.0" />

API key authentication in Prowler App provides an alternative to JWT tokens and empowers automation, CI/CD pipelines, and third-party integrations. This guide explains how to create, manage, and safeguard API keys when working with the Prowler API.

## API Key Advantages
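In automation scenarios, the API key is sent on every request; a minimal sketch follows. The endpoint path and the `Api-Key` authorization scheme below are assumptions for illustration only — confirm the exact format against your deployment's API docs:

```python
import os

import requests

# Assumed base URL and header scheme; adjust to your Prowler API deployment.
API_BASE_URL = "https://api.prowler.com/api/v1"
headers = {"Authorization": f"Api-Key {os.environ['PROWLER_API_KEY']}"}

response = requests.get(f"{API_BASE_URL}/findings", headers=headers, timeout=30)
response.raise_for_status()
print(response.json())
```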
@@ -1,6 +1,9 @@
---
title: "Jira Integration"
---
import { VersionBadge } from "/snippets/version-badge.mdx"

<VersionBadge version="5.12.0" />

Prowler App enables automatic export of security findings to Jira, providing seamless integration with Atlassian's work item tracking and project management platform. This comprehensive guide demonstrates how to configure and manage Jira integrations to streamline security incident management and enhance team collaboration across security workflows.
@@ -2,6 +2,10 @@
title: 'Prowler Lighthouse AI'
---

import { VersionBadge } from "/snippets/version-badge.mdx"

<VersionBadge version="5.8.0" />

Prowler Lighthouse AI is a Cloud Security Analyst chatbot that helps you understand, prioritize, and remediate security findings in your cloud environments. It's designed to provide security expertise for teams without dedicated resources, acting as your 24/7 virtual cloud security analyst.

<img src="/images/prowler-app/lighthouse-intro.png" alt="Prowler Lighthouse" />
@@ -1,6 +1,9 @@
---
title: 'Mute Findings (Mutelist)'
---
import { VersionBadge } from "/snippets/version-badge.mdx"

<VersionBadge version="5.9.0" />

Prowler App allows users to mute specific findings to focus on the most critical security issues. This comprehensive guide demonstrates how to effectively use the Mutelist feature to manage and prioritize security findings.
@@ -2,6 +2,10 @@
title: 'Managing Users and Role-Based Access Control (RBAC)'
---

import { VersionBadge } from "/snippets/version-badge.mdx"

<VersionBadge version="5.1.0" />

**Prowler App** supports multiple users within a single tenant, enabling seamless collaboration by allowing team members to easily share insights and manage security findings.

[Roles](#roles) help you control user permissions, determining what actions each user can perform and the data they can access within Prowler. By default, each account includes an immutable **admin** role, ensuring that your account always retains administrative access.
@@ -2,6 +2,10 @@
title: 'Amazon S3 Integration'
---

import { VersionBadge } from "/snippets/version-badge.mdx"

<VersionBadge version="5.10.0" />

**Prowler App** allows automatic export of scan results to Amazon S3 buckets, providing seamless integration with existing data workflows and storage infrastructure. This comprehensive guide demonstrates configuration and management of Amazon S3 integrations to streamline security finding management and reporting.

When enabled and configured, scan results are automatically stored in the configured bucket. Results are provided in `csv`, `html` and `json-ocsf` formats, offering flexibility for custom integrations:
@@ -1,6 +1,9 @@
---
title: "AWS Security Hub Integration"
---
import { VersionBadge } from "/snippets/version-badge.mdx"

<VersionBadge version="5.11.0" />

Prowler App enables automatic export of security findings to AWS Security Hub, providing seamless integration with AWS's native security and compliance service. This comprehensive guide demonstrates how to configure and manage AWS Security Hub integrations to centralize security findings and enhance compliance tracking across AWS environments.
@@ -2,6 +2,10 @@
title: 'Social Login Configuration'
---

import { VersionBadge } from "/snippets/version-badge.mdx"

<VersionBadge version="5.5.0" />

**Prowler App** supports social login using Google and GitHub OAuth providers. This document guides you through configuring the required environment variables to enable social authentication.

<img src="/images/prowler-app/social-login/social_login_buttons.png" alt="Social login buttons" width="700" />
@@ -2,6 +2,10 @@
title: 'SAML Single Sign-On (SSO)'
---

import { VersionBadge } from "/snippets/version-badge.mdx"

<VersionBadge version="5.9.0" />

This guide provides comprehensive instructions to configure SAML-based Single Sign-On (SSO) in Prowler App. This configuration allows users to authenticate using the organization's Identity Provider (IdP).

This document is divided into two main sections:
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.1.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand.

[[package]]
name = "about-time"
@@ -4420,106 +4420,127 @@ typing-extensions = {version = ">=4.4.0", markers = "python_version < \"3.13\""}

[[package]]
name = "regex"
version = "2024.11.6"
version = "2025.9.18"
description = "Alternative regular expression module, to replace re."
optional = false
python-versions = ">=3.8"
groups = ["dev"]
python-versions = ">=3.9"
groups = ["dev", "docs"]
files = [
    {file = "regex-2024.11.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:ff590880083d60acc0433f9c3f713c51f7ac6ebb9adf889c79a261ecf541aa91"},
    {file = "regex-2024.11.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:658f90550f38270639e83ce492f27d2c8d2cd63805c65a13a14d36ca126753f0"},
    {file = "regex-2024.11.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:164d8b7b3b4bcb2068b97428060b2a53be050085ef94eca7f240e7947f1b080e"},
    {file = "regex-2024.11.6-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3660c82f209655a06b587d55e723f0b813d3a7db2e32e5e7dc64ac2a9e86fde"},
    {file = "regex-2024.11.6-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d22326fcdef5e08c154280b71163ced384b428343ae16a5ab2b3354aed12436e"},
    {file = "regex-2024.11.6-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f1ac758ef6aebfc8943560194e9fd0fa18bcb34d89fd8bd2af18183afd8da3a2"},
    {file = "regex-2024.11.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:997d6a487ff00807ba810e0f8332c18b4eb8d29463cfb7c820dc4b6e7562d0cf"},
    {file = "regex-2024.11.6-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:02a02d2bb04fec86ad61f3ea7f49c015a0681bf76abb9857f945d26159d2968c"},
    {file = "regex-2024.11.6-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f02f93b92358ee3f78660e43b4b0091229260c5d5c408d17d60bf26b6c900e86"},
    {file = "regex-2024.11.6-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:06eb1be98df10e81ebaded73fcd51989dcf534e3c753466e4b60c4697a003b67"},
    {file = "regex-2024.11.6-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:040df6fe1a5504eb0f04f048e6d09cd7c7110fef851d7c567a6b6e09942feb7d"},
    {file = "regex-2024.11.6-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:fdabbfc59f2c6edba2a6622c647b716e34e8e3867e0ab975412c5c2f79b82da2"},
    {file = "regex-2024.11.6-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:8447d2d39b5abe381419319f942de20b7ecd60ce86f16a23b0698f22e1b70008"},
    {file = "regex-2024.11.6-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:da8f5fc57d1933de22a9e23eec290a0d8a5927a5370d24bda9a6abe50683fe62"},
    {file = "regex-2024.11.6-cp310-cp310-win32.whl", hash = "sha256:b489578720afb782f6ccf2840920f3a32e31ba28a4b162e13900c3e6bd3f930e"},
    {file = "regex-2024.11.6-cp310-cp310-win_amd64.whl", hash = "sha256:5071b2093e793357c9d8b2929dfc13ac5f0a6c650559503bb81189d0a3814519"},
    {file = "regex-2024.11.6-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:5478c6962ad548b54a591778e93cd7c456a7a29f8eca9c49e4f9a806dcc5d638"},
    {file = "regex-2024.11.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2c89a8cc122b25ce6945f0423dc1352cb9593c68abd19223eebbd4e56612c5b7"},
    {file = "regex-2024.11.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:94d87b689cdd831934fa3ce16cc15cd65748e6d689f5d2b8f4f4df2065c9fa20"},
    {file = "regex-2024.11.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1062b39a0a2b75a9c694f7a08e7183a80c63c0d62b301418ffd9c35f55aaa114"},
    {file = "regex-2024.11.6-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:167ed4852351d8a750da48712c3930b031f6efdaa0f22fa1933716bfcd6bf4a3"},
    {file = "regex-2024.11.6-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2d548dafee61f06ebdb584080621f3e0c23fff312f0de1afc776e2a2ba99a74f"},
    {file = "regex-2024.11.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2a19f302cd1ce5dd01a9099aaa19cae6173306d1302a43b627f62e21cf18ac0"},
    {file = "regex-2024.11.6-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bec9931dfb61ddd8ef2ebc05646293812cb6b16b60cf7c9511a832b6f1854b55"},
    {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:9714398225f299aa85267fd222f7142fcb5c769e73d7733344efc46f2ef5cf89"},
    {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:202eb32e89f60fc147a41e55cb086db2a3f8cb82f9a9a88440dcfc5d37faae8d"},
    {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:4181b814e56078e9b00427ca358ec44333765f5ca1b45597ec7446d3a1ef6e34"},
    {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:068376da5a7e4da51968ce4c122a7cd31afaaec4fccc7856c92f63876e57b51d"},
    {file = "regex-2024.11.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ac10f2c4184420d881a3475fb2c6f4d95d53a8d50209a2500723d831036f7c45"},
    {file = "regex-2024.11.6-cp311-cp311-win32.whl", hash = "sha256:c36f9b6f5f8649bb251a5f3f66564438977b7ef8386a52460ae77e6070d309d9"},
    {file = "regex-2024.11.6-cp311-cp311-win_amd64.whl", hash = "sha256:02e28184be537f0e75c1f9b2f8847dc51e08e6e171c6bde130b2687e0c33cf60"},
    {file = "regex-2024.11.6-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:52fb28f528778f184f870b7cf8f225f5eef0a8f6e3778529bdd40c7b3920796a"},
    {file = "regex-2024.11.6-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:fdd6028445d2460f33136c55eeb1f601ab06d74cb3347132e1c24250187500d9"},
    {file = "regex-2024.11.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:805e6b60c54bf766b251e94526ebad60b7de0c70f70a4e6210ee2891acb70bf2"},
    {file = "regex-2024.11.6-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b85c2530be953a890eaffde05485238f07029600e8f098cdf1848d414a8b45e4"},
    {file = "regex-2024.11.6-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:bb26437975da7dc36b7efad18aa9dd4ea569d2357ae6b783bf1118dabd9ea577"},
    {file = "regex-2024.11.6-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:abfa5080c374a76a251ba60683242bc17eeb2c9818d0d30117b4486be10c59d3"},
    {file = "regex-2024.11.6-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70b7fa6606c2881c1db9479b0eaa11ed5dfa11c8d60a474ff0e095099f39d98e"},
    {file = "regex-2024.11.6-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0c32f75920cf99fe6b6c539c399a4a128452eaf1af27f39bce8909c9a3fd8cbe"},
    {file = "regex-2024.11.6-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:982e6d21414e78e1f51cf595d7f321dcd14de1f2881c5dc6a6e23bbbbd68435e"},
    {file = "regex-2024.11.6-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:a7c2155f790e2fb448faed6dd241386719802296ec588a8b9051c1f5c481bc29"},
    {file = "regex-2024.11.6-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:149f5008d286636e48cd0b1dd65018548944e495b0265b45e1bffecce1ef7f39"},
    {file = "regex-2024.11.6-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:e5364a4502efca094731680e80009632ad6624084aff9a23ce8c8c6820de3e51"},
    {file = "regex-2024.11.6-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:0a86e7eeca091c09e021db8eb72d54751e527fa47b8d5787caf96d9831bd02ad"},
    {file = "regex-2024.11.6-cp312-cp312-win32.whl", hash = "sha256:32f9a4c643baad4efa81d549c2aadefaeba12249b2adc5af541759237eee1c54"},
    {file = "regex-2024.11.6-cp312-cp312-win_amd64.whl", hash = "sha256:a93c194e2df18f7d264092dc8539b8ffb86b45b899ab976aa15d48214138e81b"},
    {file = "regex-2024.11.6-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:a6ba92c0bcdf96cbf43a12c717eae4bc98325ca3730f6b130ffa2e3c3c723d84"},
    {file = "regex-2024.11.6-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:525eab0b789891ac3be914d36893bdf972d483fe66551f79d3e27146191a37d4"},
    {file = "regex-2024.11.6-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:086a27a0b4ca227941700e0b31425e7a28ef1ae8e5e05a33826e17e47fbfdba0"},
    {file = "regex-2024.11.6-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bde01f35767c4a7899b7eb6e823b125a64de314a8ee9791367c9a34d56af18d0"},
    {file = "regex-2024.11.6-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b583904576650166b3d920d2bcce13971f6f9e9a396c673187f49811b2769dc7"},
    {file = "regex-2024.11.6-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1c4de13f06a0d54fa0d5ab1b7138bfa0d883220965a29616e3ea61b35d5f5fc7"},
    {file = "regex-2024.11.6-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3cde6e9f2580eb1665965ce9bf17ff4952f34f5b126beb509fee8f4e994f143c"},
    {file = "regex-2024.11.6-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0d7f453dca13f40a02b79636a339c5b62b670141e63efd511d3f8f73fba162b3"},
    {file = "regex-2024.11.6-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:59dfe1ed21aea057a65c6b586afd2a945de04fc7db3de0a6e3ed5397ad491b07"},
    {file = "regex-2024.11.6-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:b97c1e0bd37c5cd7902e65f410779d39eeda155800b65fc4d04cc432efa9bc6e"},
    {file = "regex-2024.11.6-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:f9d1e379028e0fc2ae3654bac3cbbef81bf3fd571272a42d56c24007979bafb6"},
    {file = "regex-2024.11.6-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:13291b39131e2d002a7940fb176e120bec5145f3aeb7621be6534e46251912c4"},
    {file = "regex-2024.11.6-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4f51f88c126370dcec4908576c5a627220da6c09d0bff31cfa89f2523843316d"},
    {file = "regex-2024.11.6-cp313-cp313-win32.whl", hash = "sha256:63b13cfd72e9601125027202cad74995ab26921d8cd935c25f09c630436348ff"},
    {file = "regex-2024.11.6-cp313-cp313-win_amd64.whl", hash = "sha256:2b3361af3198667e99927da8b84c1b010752fa4b1115ee30beaa332cabc3ef1a"},
    {file = "regex-2024.11.6-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:3a51ccc315653ba012774efca4f23d1d2a8a8f278a6072e29c7147eee7da446b"},
    {file = "regex-2024.11.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ad182d02e40de7459b73155deb8996bbd8e96852267879396fb274e8700190e3"},
    {file = "regex-2024.11.6-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:ba9b72e5643641b7d41fa1f6d5abda2c9a263ae835b917348fc3c928182ad467"},
    {file = "regex-2024.11.6-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40291b1b89ca6ad8d3f2b82782cc33807f1406cf68c8d440861da6304d8ffbbd"},
    {file = "regex-2024.11.6-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cdf58d0e516ee426a48f7b2c03a332a4114420716d55769ff7108c37a09951bf"},
    {file = "regex-2024.11.6-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a36fdf2af13c2b14738f6e973aba563623cb77d753bbbd8d414d18bfaa3105dd"},
    {file = "regex-2024.11.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d1cee317bfc014c2419a76bcc87f071405e3966da434e03e13beb45f8aced1a6"},
    {file = "regex-2024.11.6-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:50153825ee016b91549962f970d6a4442fa106832e14c918acd1c8e479916c4f"},
    {file = "regex-2024.11.6-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ea1bfda2f7162605f6e8178223576856b3d791109f15ea99a9f95c16a7636fb5"},
    {file = "regex-2024.11.6-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:df951c5f4a1b1910f1a99ff42c473ff60f8225baa1cdd3539fe2819d9543e9df"},
    {file = "regex-2024.11.6-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:072623554418a9911446278f16ecb398fb3b540147a7828c06e2011fa531e773"},
    {file = "regex-2024.11.6-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:f654882311409afb1d780b940234208a252322c24a93b442ca714d119e68086c"},
    {file = "regex-2024.11.6-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:89d75e7293d2b3e674db7d4d9b1bee7f8f3d1609428e293771d1a962617150cc"},
    {file = "regex-2024.11.6-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:f65557897fc977a44ab205ea871b690adaef6b9da6afda4790a2484b04293a5f"},
    {file = "regex-2024.11.6-cp38-cp38-win32.whl", hash = "sha256:6f44ec28b1f858c98d3036ad5d7d0bfc568bdd7a74f9c24e25f41ef1ebfd81a4"},
    {file = "regex-2024.11.6-cp38-cp38-win_amd64.whl", hash = "sha256:bb8f74f2f10dbf13a0be8de623ba4f9491faf58c24064f32b65679b021ed0001"},
    {file = "regex-2024.11.6-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:5704e174f8ccab2026bd2f1ab6c510345ae8eac818b613d7d73e785f1310f839"},
    {file = "regex-2024.11.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:220902c3c5cc6af55d4fe19ead504de80eb91f786dc102fbd74894b1551f095e"},
    {file = "regex-2024.11.6-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5e7e351589da0850c125f1600a4c4ba3c722efefe16b297de54300f08d734fbf"},
    {file = "regex-2024.11.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5056b185ca113c88e18223183aa1a50e66507769c9640a6ff75859619d73957b"},
    {file = "regex-2024.11.6-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2e34b51b650b23ed3354b5a07aab37034d9f923db2a40519139af34f485f77d0"},
    {file = "regex-2024.11.6-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5670bce7b200273eee1840ef307bfa07cda90b38ae56e9a6ebcc9f50da9c469b"},
    {file = "regex-2024.11.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:08986dce1339bc932923e7d1232ce9881499a0e02925f7402fb7c982515419ef"},
    {file = "regex-2024.11.6-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:93c0b12d3d3bc25af4ebbf38f9ee780a487e8bf6954c115b9f015822d3bb8e48"},
    {file = "regex-2024.11.6-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:764e71f22ab3b305e7f4c21f1a97e1526a25ebdd22513e251cf376760213da13"},
    {file = "regex-2024.11.6-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:f056bf21105c2515c32372bbc057f43eb02aae2fda61052e2f7622c801f0b4e2"},
    {file = "regex-2024.11.6-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:69ab78f848845569401469da20df3e081e6b5a11cb086de3eed1d48f5ed57c95"},
    {file = "regex-2024.11.6-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:86fddba590aad9208e2fa8b43b4c098bb0ec74f15718bb6a704e3c63e2cef3e9"},
    {file = "regex-2024.11.6-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:684d7a212682996d21ca12ef3c17353c021fe9de6049e19ac8481ec35574a70f"},
    {file = "regex-2024.11.6-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:a03e02f48cd1abbd9f3b7e3586d97c8f7a9721c436f51a5245b3b9483044480b"},
    {file = "regex-2024.11.6-cp39-cp39-win32.whl", hash = "sha256:41758407fc32d5c3c5de163888068cfee69cb4c2be844e7ac517a52770f9af57"},
    {file = "regex-2024.11.6-cp39-cp39-win_amd64.whl", hash = "sha256:b2837718570f95dd41675328e111345f9b7095d821bac435aac173ac80b19983"},
    {file = "regex-2024.11.6.tar.gz", hash = "sha256:7ab159b063c52a0333c884e4679f8d7a85112ee3078fe3d9004b2dd875585519"},
    {file = "regex-2025.9.18-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:12296202480c201c98a84aecc4d210592b2f55e200a1d193235c4db92b9f6788"},
    {file = "regex-2025.9.18-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:220381f1464a581f2ea988f2220cf2a67927adcef107d47d6897ba5a2f6d51a4"},
    {file = "regex-2025.9.18-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:87f681bfca84ebd265278b5daa1dcb57f4db315da3b5d044add7c30c10442e61"},
    {file = "regex-2025.9.18-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:34d674cbba70c9398074c8a1fcc1a79739d65d1105de2a3c695e2b05ea728251"},
    {file = "regex-2025.9.18-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:385c9b769655cb65ea40b6eea6ff763cbb6d69b3ffef0b0db8208e1833d4e746"},
    {file = "regex-2025.9.18-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:8900b3208e022570ae34328712bef6696de0804c122933414014bae791437ab2"},
    {file = "regex-2025.9.18-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c204e93bf32cd7a77151d44b05eb36f469d0898e3fba141c026a26b79d9914a0"},
    {file = "regex-2025.9.18-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3acc471d1dd7e5ff82e6cacb3b286750decd949ecd4ae258696d04f019817ef8"},
    {file = "regex-2025.9.18-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:6479d5555122433728760e5f29edb4c2b79655a8deb681a141beb5c8a025baea"},
    {file = "regex-2025.9.18-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:431bd2a8726b000eb6f12429c9b438a24062a535d06783a93d2bcbad3698f8a8"},
    {file = "regex-2025.9.18-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:0cc3521060162d02bd36927e20690129200e5ac9d2c6d32b70368870b122db25"},
    {file = "regex-2025.9.18-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:a021217b01be2d51632ce056d7a837d3fa37c543ede36e39d14063176a26ae29"},
    {file = "regex-2025.9.18-cp310-cp310-win32.whl", hash = "sha256:4a12a06c268a629cb67cc1d009b7bb0be43e289d00d5111f86a2efd3b1949444"},
    {file = "regex-2025.9.18-cp310-cp310-win_amd64.whl", hash = "sha256:47acd811589301298c49db2c56bde4f9308d6396da92daf99cba781fa74aa450"},
    {file = "regex-2025.9.18-cp310-cp310-win_arm64.whl", hash = "sha256:16bd2944e77522275e5ee36f867e19995bcaa533dcb516753a26726ac7285442"},
    {file = "regex-2025.9.18-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:51076980cd08cd13c88eb7365427ae27f0d94e7cebe9ceb2bb9ffdae8fc4d82a"},
    {file = "regex-2025.9.18-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:828446870bd7dee4e0cbeed767f07961aa07f0ea3129f38b3ccecebc9742e0b8"},
    {file = "regex-2025.9.18-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c28821d5637866479ec4cc23b8c990f5bc6dd24e5e4384ba4a11d38a526e1414"},
    {file = "regex-2025.9.18-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:726177ade8e481db669e76bf99de0b278783be8acd11cef71165327abd1f170a"},
    {file = "regex-2025.9.18-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f5cca697da89b9f8ea44115ce3130f6c54c22f541943ac8e9900461edc2b8bd4"},
    {file = "regex-2025.9.18-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:dfbde38f38004703c35666a1e1c088b778e35d55348da2b7b278914491698d6a"},
    {file = "regex-2025.9.18-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f2f422214a03fab16bfa495cfec72bee4aaa5731843b771860a471282f1bf74f"},
    {file = "regex-2025.9.18-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a295916890f4df0902e4286bc7223ee7f9e925daa6dcdec4192364255b70561a"},
    {file = "regex-2025.9.18-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:5db95ff632dbabc8c38c4e82bf545ab78d902e81160e6e455598014f0abe66b9"},
    {file = "regex-2025.9.18-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:fb967eb441b0f15ae610b7069bdb760b929f267efbf522e814bbbfffdf125ce2"},
    {file = "regex-2025.9.18-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f04d2f20da4053d96c08f7fde6e1419b7ec9dbcee89c96e3d731fca77f411b95"},
    {file = "regex-2025.9.18-cp311-cp311-win32.whl", hash = "sha256:895197241fccf18c0cea7550c80e75f185b8bd55b6924fcae269a1a92c614a07"},
    {file = "regex-2025.9.18-cp311-cp311-win_amd64.whl", hash = "sha256:7e2b414deae99166e22c005e154a5513ac31493db178d8aec92b3269c9cce8c9"},
    {file = "regex-2025.9.18-cp311-cp311-win_arm64.whl", hash = "sha256:fb137ec7c5c54f34a25ff9b31f6b7b0c2757be80176435bf367111e3f71d72df"},
    {file = "regex-2025.9.18-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:436e1b31d7efd4dcd52091d076482031c611dde58bf9c46ca6d0a26e33053a7e"},
    {file = "regex-2025.9.18-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c190af81e5576b9c5fdc708f781a52ff20f8b96386c6e2e0557a78402b029f4a"},
    {file = "regex-2025.9.18-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:e4121f1ce2b2b5eec4b397cc1b277686e577e658d8f5870b7eb2d726bd2300ab"},
    {file = "regex-2025.9.18-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:300e25dbbf8299d87205e821a201057f2ef9aa3deb29caa01cd2cac669e508d5"},
    {file = "regex-2025.9.18-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:7b47fcf9f5316c0bdaf449e879407e1b9937a23c3b369135ca94ebc8d74b1742"},
    {file = "regex-2025.9.18-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:57a161bd3acaa4b513220b49949b07e252165e6b6dc910ee7617a37ff4f5b425"},
    {file = "regex-2025.9.18-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4f130c3a7845ba42de42f380fff3c8aebe89a810747d91bcf56d40a069f15352"},
    {file = "regex-2025.9.18-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:5f96fa342b6f54dcba928dd452e8d8cb9f0d63e711d1721cd765bb9f73bb048d"},
    {file = "regex-2025.9.18-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:0f0d676522d68c207828dcd01fb6f214f63f238c283d9f01d85fc664c7c85b56"},
    {file = "regex-2025.9.18-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:40532bff8a1a0621e7903ae57fce88feb2e8a9a9116d341701302c9302aef06e"},
    {file = "regex-2025.9.18-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:039f11b618ce8d71a1c364fdee37da1012f5a3e79b1b2819a9f389cd82fd6282"},
    {file = "regex-2025.9.18-cp312-cp312-win32.whl", hash = "sha256:e1dd06f981eb226edf87c55d523131ade7285137fbde837c34dc9d1bf309f459"},
    {file = "regex-2025.9.18-cp312-cp312-win_amd64.whl", hash = "sha256:3d86b5247bf25fa3715e385aa9ff272c307e0636ce0c9595f64568b41f0a9c77"},
    {file = "regex-2025.9.18-cp312-cp312-win_arm64.whl", hash = "sha256:032720248cbeeae6444c269b78cb15664458b7bb9ed02401d3da59fe4d68c3a5"},
    {file = "regex-2025.9.18-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:2a40f929cd907c7e8ac7566ac76225a77701a6221bca937bdb70d56cb61f57b2"},
    {file = "regex-2025.9.18-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:c90471671c2cdf914e58b6af62420ea9ecd06d1554d7474d50133ff26ae88feb"},
    {file = "regex-2025.9.18-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:1a351aff9e07a2dabb5022ead6380cff17a4f10e4feb15f9100ee56c4d6d06af"},
    {file = "regex-2025.9.18-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:bc4b8e9d16e20ddfe16430c23468a8707ccad3365b06d4536142e71823f3ca29"},
    {file = "regex-2025.9.18-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:4b8cdbddf2db1c5e80338ba2daa3cfa3dec73a46fff2a7dda087c8efbf12d62f"},
    {file = "regex-2025.9.18-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a276937d9d75085b2c91fb48244349c6954f05ee97bba0963ce24a9d915b8b68"},
    {file = "regex-2025.9.18-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:92a8e375ccdc1256401c90e9dc02b8642894443d549ff5e25e36d7cf8a80c783"},
    {file = "regex-2025.9.18-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:0dc6893b1f502d73037cf807a321cdc9be29ef3d6219f7970f842475873712ac"},
    {file = "regex-2025.9.18-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:a61e85bfc63d232ac14b015af1261f826260c8deb19401c0597dbb87a864361e"},
    {file = "regex-2025.9.18-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:1ef86a9ebc53f379d921fb9a7e42b92059ad3ee800fcd9e0fe6181090e9f6c23"},
    {file = "regex-2025.9.18-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:d3bc882119764ba3a119fbf2bd4f1b47bc56c1da5d42df4ed54ae1e8e66fdf8f"},
    {file = "regex-2025.9.18-cp313-cp313-win32.whl", hash = "sha256:3810a65675845c3bdfa58c3c7d88624356dd6ee2fc186628295e0969005f928d"},
    {file = "regex-2025.9.18-cp313-cp313-win_amd64.whl", hash = "sha256:16eaf74b3c4180ede88f620f299e474913ab6924d5c4b89b3833bc2345d83b3d"},
    {file = "regex-2025.9.18-cp313-cp313-win_arm64.whl", hash = "sha256:4dc98ba7dd66bd1261927a9f49bd5ee2bcb3660f7962f1ec02617280fc00f5eb"},
    {file = "regex-2025.9.18-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:fe5d50572bc885a0a799410a717c42b1a6b50e2f45872e2b40f4f288f9bce8a2"},
    {file = "regex-2025.9.18-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:1b9d9a2d6cda6621551ca8cf7a06f103adf72831153f3c0d982386110870c4d3"},
    {file = "regex-2025.9.18-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:13202e4c4ac0ef9a317fff817674b293c8f7e8c68d3190377d8d8b749f566e12"},
    {file = "regex-2025.9.18-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:874ff523b0fecffb090f80ae53dc93538f8db954c8bb5505f05b7787ab3402a0"},
    {file = "regex-2025.9.18-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:d13ab0490128f2bb45d596f754148cd750411afc97e813e4b3a61cf278a23bb6"},
    {file = "regex-2025.9.18-cp313-cp313t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:05440bc172bc4b4b37fb9667e796597419404dbba62e171e1f826d7d2a9ebcef"},
    {file = "regex-2025.9.18-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5514b8e4031fdfaa3d27e92c75719cbe7f379e28cacd939807289bce76d0e35a"},
    {file = "regex-2025.9.18-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:65d3c38c39efce73e0d9dc019697b39903ba25b1ad45ebbd730d2cf32741f40d"},
    {file = "regex-2025.9.18-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:ae77e447ebc144d5a26d50055c6ddba1d6ad4a865a560ec7200b8b06bc529368"},
    {file = "regex-2025.9.18-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:e3ef8cf53dc8df49d7e28a356cf824e3623764e9833348b655cfed4524ab8a90"},
    {file = "regex-2025.9.18-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:9feb29817df349c976da9a0debf775c5c33fc1c8ad7b9f025825da99374770b7"},
    {file = "regex-2025.9.18-cp313-cp313t-win32.whl", hash = "sha256:168be0d2f9b9d13076940b1ed774f98595b4e3c7fc54584bba81b3cc4181742e"},
    {file = "regex-2025.9.18-cp313-cp313t-win_amd64.whl", hash = "sha256:d59ecf3bb549e491c8104fea7313f3563c7b048e01287db0a90485734a70a730"},
    {file = "regex-2025.9.18-cp313-cp313t-win_arm64.whl", hash = "sha256:dbef80defe9fb21310948a2595420b36c6d641d9bea4c991175829b2cc4bc06a"},
    {file = "regex-2025.9.18-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:c6db75b51acf277997f3adcd0ad89045d856190d13359f15ab5dda21581d9129"},
    {file = "regex-2025.9.18-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8f9698b6f6895d6db810e0bda5364f9ceb9e5b11328700a90cae573574f61eea"},
    {file = "regex-2025.9.18-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:29cd86aa7cb13a37d0f0d7c21d8d949fe402ffa0ea697e635afedd97ab4b69f1"},
    {file = "regex-2025.9.18-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7c9f285a071ee55cd9583ba24dde006e53e17780bb309baa8e4289cd472bcc47"},
    {file = "regex-2025.9.18-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:5adf266f730431e3be9021d3e5b8d5ee65e563fec2883ea8093944d21863b379"},
    {file = "regex-2025.9.18-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:1137cabc0f38807de79e28d3f6e3e3f2cc8cfb26bead754d02e6d1de5f679203"},
    {file = "regex-2025.9.18-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7cc9e5525cada99699ca9223cce2d52e88c52a3d2a0e842bd53de5497c604164"},
    {file = "regex-2025.9.18-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:bbb9246568f72dce29bcd433517c2be22c7791784b223a810225af3b50d1aafb"},
    {file = "regex-2025.9.18-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:6a52219a93dd3d92c675383efff6ae18c982e2d7651c792b1e6d121055808743"},
    {file = "regex-2025.9.18-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:ae9b3840c5bd456780e3ddf2f737ab55a79b790f6409182012718a35c6d43282"},
    {file = "regex-2025.9.18-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d488c236ac497c46a5ac2005a952c1a0e22a07be9f10c3e735bc7d1209a34773"},
    {file = "regex-2025.9.18-cp314-cp314-win32.whl", hash = "sha256:0c3506682ea19beefe627a38872d8da65cc01ffa25ed3f2e422dffa1474f0788"},
    {file = "regex-2025.9.18-cp314-cp314-win_amd64.whl", hash = "sha256:57929d0f92bebb2d1a83af372cd0ffba2263f13f376e19b1e4fa32aec4efddc3"},
    {file = "regex-2025.9.18-cp314-cp314-win_arm64.whl", hash = "sha256:6a4b44df31d34fa51aa5c995d3aa3c999cec4d69b9bd414a8be51984d859f06d"},
    {file = "regex-2025.9.18-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:b176326bcd544b5e9b17d6943f807697c0cb7351f6cfb45bf5637c95ff7e6306"},
    {file = "regex-2025.9.18-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:0ffd9e230b826b15b369391bec167baed57c7ce39efc35835448618860995946"},
    {file = "regex-2025.9.18-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:ec46332c41add73f2b57e2f5b642f991f6b15e50e9f86285e08ffe3a512ac39f"},
    {file = "regex-2025.9.18-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b80fa342ed1ea095168a3f116637bd1030d39c9ff38dc04e54ef7c521e01fc95"},
    {file = "regex-2025.9.18-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f4d97071c0ba40f0cf2a93ed76e660654c399a0a04ab7d85472239460f3da84b"},
    {file = "regex-2025.9.18-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0ac936537ad87cef9e0e66c5144484206c1354224ee811ab1519a32373e411f3"},
    {file = "regex-2025.9.18-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:dec57f96d4def58c422d212d414efe28218d58537b5445cf0c33afb1b4768571"},
    {file = "regex-2025.9.18-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:48317233294648bf7cd068857f248e3a57222259a5304d32c7552e2284a1b2ad"},
    {file = "regex-2025.9.18-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:274687e62ea3cf54846a9b25fc48a04459de50af30a7bd0b61a9e38015983494"},
    {file = "regex-2025.9.18-cp314-cp314t-musllinux_1_2_s390x.whl", hash = "sha256:a78722c86a3e7e6aadf9579e3b0ad78d955f2d1f1a8ca4f67d7ca258e8719d4b"},
    {file = "regex-2025.9.18-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:06104cd203cdef3ade989a1c45b6215bf42f8b9dd705ecc220c173233f7cba41"},
    {file = "regex-2025.9.18-cp314-cp314t-win32.whl", hash = "sha256:2e1eddc06eeaffd249c0adb6fafc19e2118e6308c60df9db27919e96b5656096"},
    {file = "regex-2025.9.18-cp314-cp314t-win_amd64.whl", hash = "sha256:8620d247fb8c0683ade51217b459cb4a1081c0405a3072235ba43a40d355c09a"},
    {file = "regex-2025.9.18-cp314-cp314t-win_arm64.whl", hash = "sha256:b7531a8ef61de2c647cdf68b3229b071e46ec326b3138b2180acb4275f470b01"},
    {file = "regex-2025.9.18-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:3dbcfcaa18e9480669030d07371713c10b4f1a41f791ffa5cb1a99f24e777f40"},
    {file = "regex-2025.9.18-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1e85f73ef7095f0380208269055ae20524bfde3f27c5384126ddccf20382a638"},
    {file = "regex-2025.9.18-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9098e29b3ea4ffffeade423f6779665e2a4f8db64e699c0ed737ef0db6ba7b12"},
    {file = "regex-2025.9.18-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:90b6b7a2d0f45b7ecaaee1aec6b362184d6596ba2092dd583ffba1b78dd0231c"},
    {file = "regex-2025.9.18-cp39-cp39-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:c81b892af4a38286101502eae7aec69f7cd749a893d9987a92776954f3943408"},
    {file = "regex-2025.9.18-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:3b524d010973f2e1929aeb635418d468d869a5f77b52084d9f74c272189c251d"},
    {file = "regex-2025.9.18-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6b498437c026a3d5d0be0020023ff76d70ae4d77118e92f6f26c9d0423452446"},
    {file = "regex-2025.9.18-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:0716e4d6e58853d83f6563f3cf25c281ff46cf7107e5f11879e32cb0b59797d9"},
    {file = "regex-2025.9.18-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:065b6956749379d41db2625f880b637d4acc14c0a4de0d25d609a62850e96d36"},
    {file = "regex-2025.9.18-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:d4a691494439287c08ddb9b5793da605ee80299dd31e95fa3f323fac3c33d9d4"},
    {file = "regex-2025.9.18-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:ef8d10cc0989565bcbe45fb4439f044594d5c2b8919d3d229ea2c4238f1d55b0"},
    {file = "regex-2025.9.18-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:4baeb1b16735ac969a7eeecc216f1f8b7caf60431f38a2671ae601f716a32d25"},
    {file = "regex-2025.9.18-cp39-cp39-win32.whl", hash = "sha256:8e5f41ad24a1e0b5dfcf4c4e5d9f5bd54c895feb5708dd0c1d0d35693b24d478"},
    {file = "regex-2025.9.18-cp39-cp39-win_amd64.whl", hash = "sha256:50e8290707f2fb8e314ab3831e594da71e062f1d623b05266f8cfe4db4949afd"},
    {file = "regex-2025.9.18-cp39-cp39-win_arm64.whl", hash = "sha256:039a9d7195fd88c943d7c777d4941e8ef736731947becce773c31a1009cb3c35"},
    {file = "regex-2025.9.18.tar.gz", hash = "sha256:c5ba23274c61c6fef447ba6a39333297d0c247f53059dba0bca415cac511edc4"},
]

[[package]]

@@ -0,0 +1,366 @@
# Prowler SDK Agent Guide

**Complete guide for AI agents and developers working on the Prowler SDK - the core Python security scanning engine.**

## Project Overview

The Prowler SDK is the core Python engine that powers Prowler's cloud security assessment capabilities. It provides:

- **Multi-cloud Security Scanning**: AWS, Azure, GCP, Kubernetes, GitHub, M365, Oracle Cloud, MongoDB Atlas, and more
- **Compliance Frameworks**: 30+ frameworks including CIS, NIST, PCI-DSS, SOC2, GDPR
- **1000+ Security Checks**: Comprehensive coverage across all supported providers
- **Multiple Output Formats**: JSON, CSV, HTML, ASFF, OCSF, and compliance-specific formats

## Mission & Scope

- Maintain and enhance the core Prowler SDK functionality with security and stability as top priorities
- Follow best practices for Python patterns, code style, security, and comprehensive testing
- For more information about development guidelines, refer to the Prowler Developer Guide in `docs/developer-guide/`

---

## Architecture Rules

### 1. Provider Architecture Pattern

All Prowler providers MUST follow the established pattern:
```
|
||||
prowler/providers/{provider}/
|
||||
├── {provider}_provider.py # Main provider class
|
||||
├── models.py # Provider-specific models
|
||||
├── config.py # Provider configuration
|
||||
├── exceptions/ # Provider-specific exceptions
|
||||
├── lib/                    # Provider libraries (at a minimum it must contain the following folders: service, arguments, mutelist)
│   ├── service/            # Provider-specific service class to be inherited by all services of the provider
│   ├── arguments/          # Provider-specific CLI arguments parser
│   └── mutelist/           # Provider-specific mutelist functionality
└── services/               # All provider services to be audited
    └── {service}/          # Individual service
        ├── {service}_service.py        # Class to fetch the needed resources from the API and store them to be used by the checks
        ├── {service}_client.py         # Python instance of the service class to be used by the checks
        └── {check_name}/               # Individual check folder
            ├── {check_name}.py         # Python class to implement the check logic
            └── {check_name}.metadata.json  # JSON file to store the check metadata
        └── {check_name_2}/             # Other checks can be added to the same service folder
            ├── {check_name_2}.py
            └── {check_name_2}.metadata.json
        ...
    └── {service_2}/        # Other services can be added to the same provider folder
    ...
```

### 2. Check Implementation Standards

Every security check MUST implement:

```python
from prowler.lib.check.models import Check, CheckReport<Provider>
from prowler.providers.<provider>.services.<service>.<service>_client import <service>_client

class check_name(Check):
    """Ensure that <resource> meets <security_requirement>."""

    def execute(self) -> list[CheckReport<Provider>]:
        """Execute the check logic.

        Returns:
            A list of reports containing the result of the check.
        """
        findings = []
        # Check implementation here
        for resource in <service>_client.<resources>:
            # Security validation logic
            report = CheckReport<Provider>(metadata=self.metadata(), resource=resource)
            report.status = "PASS"  # or "FAIL"
            report.status_extended = "Detailed explanation"
            findings.append(report)  # Add the report to the list of findings
        return findings
```

### 3. Compliance Framework Integration

All compliance frameworks must be defined in:
- `prowler/compliance/{provider}/{framework}.json`
- Follow the established Compliance model structure
- Include proper requirement mappings and metadata

---

## Tech Stack

- **Language**: Python 3.9+
- **Dependency Management**: Poetry 2+
- **CLI Framework**: Custom argument parser with provider-specific subcommands
- **Testing**: Pytest with extensive unit and integration tests
- **Code Quality**: Pre-commit hooks running Black, Flake8, and Pylint, plus Bandit for security scanning

## Commands

### Development Environment

```bash
# Core development setup
poetry install --with dev      # Install all dependencies
poetry run pre-commit install  # Install pre-commit hooks

# Code quality
poetry run pre-commit run --all-files

# Run tests
poetry run pytest -n auto -vvv -s -x tests/
```

### Running Prowler CLI

```bash
# Run Prowler
poetry run python prowler-cli.py --help

# Run Prowler with a specific provider
poetry run python prowler-cli.py <provider>

# Run Prowler with error logging
poetry run python prowler-cli.py <provider> --log-level ERROR --verbose

# Run specific checks
poetry run python prowler-cli.py <provider> --checks <check_name_1> <check_name_2>
```

## Project Structure

```
prowler/
├── __main__.py              # Main CLI entry point
├── config/                  # Global configuration
│   ├── config.py            # Core configuration settings
│   └── __init__.py
├── lib/                     # Core library functions
│   ├── check/               # Check execution engine
│   │   ├── check.py         # Check execution logic
│   │   ├── checks_loader.py # Dynamic check loading
│   │   ├── compliance.py    # Compliance framework handling
│   │   └── models.py        # Check and report models
│   ├── cli/                 # Command-line interface
│   │   └── parser.py        # Argument parsing
│   ├── outputs/             # Output format handlers
│   │   ├── csv/             # CSV output
│   │   ├── html/            # HTML reports
│   │   ├── json/            # JSON formats
│   │   └── compliance/      # Compliance reports
│   ├── scan/                # Scan orchestration
│   ├── utils/               # Utility functions
│   └── mutelist/            # Mute list functionality
├── providers/               # Cloud provider implementations
│   ├── aws/                 # AWS provider
│   ├── azure/               # Azure provider
│   ├── gcp/                 # Google Cloud provider
│   ├── kubernetes/          # Kubernetes provider
│   ├── github/              # GitHub provider
│   ├── m365/                # Microsoft 365 provider
│   ├── mongodbatlas/        # MongoDB Atlas provider
│   ├── oci/                 # Oracle Cloud provider
│   ├── ...
│   └── common/              # Shared provider utilities
├── compliance/              # Compliance framework definitions
│   ├── aws/                 # AWS compliance frameworks
│   ├── azure/               # Azure compliance frameworks
│   ├── gcp/                 # GCP compliance frameworks
│   ├── ...
└── exceptions/              # Global exception definitions
```

## Key Components

### 1. Provider System

Each cloud provider implements:

```python
class Provider:
    """Base provider class"""

    def __init__(self, arguments):
        self.session = self._setup_session(arguments)
        self.regions = self._get_regions()
        # Initialize all services

    def _setup_session(self, arguments):
        """Provider-specific authentication"""
        pass

    def _get_regions(self):
        """Get available regions for provider"""
        pass
```

### 2. Check Engine

The check execution system:

- **Dynamic Loading**: Automatically discovers and loads checks
- **Parallel Execution**: Runs checks in parallel for performance
- **Error Isolation**: Individual check failures don't affect others
- **Comprehensive Reporting**: Detailed findings with remediation guidance

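As a rough, self-contained sketch of the loading and isolation pattern (the module path convention matches the architecture above, but the function names and threading details are illustrative, not Prowler's actual loader in `prowler/lib/check/checks_loader.py`):

```python
import importlib
from concurrent.futures import ThreadPoolExecutor


def load_check_class(provider: str, service: str, check_name: str):
    """Import a check module by its conventional path and return the check class.

    By convention, the module and the class share the check's name (see the
    check implementation template above).
    """
    module = importlib.import_module(
        f"prowler.providers.{provider}.services.{service}.{check_name}.{check_name}"
    )
    return getattr(module, check_name)


def run_checks(check_classes: list) -> list:
    """Run checks in parallel; a failing check must never abort the scan."""
    findings = []

    def safe_execute(check_class):
        try:
            return check_class().execute()
        except Exception as error:  # error isolation: log and keep scanning
            print(f"{check_class.__name__} failed: {error}")
            return []

    with ThreadPoolExecutor() as executor:
        for reports in executor.map(safe_execute, check_classes):
            findings.extend(reports)
    return findings
```
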
### 3. Compliance Framework Engine

Compliance frameworks are defined as JSON files mapping checks to requirements:

```json
{
  "Framework": "CIS",
  "Name": "CIS Amazon Web Services Foundations Benchmark v2.0.0",
  "Version": "2.0",
  "Provider": "AWS",
  "Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services with an emphasis on foundational, testable, and architecture agnostic settings.",
  "Requirements": [
    {
      "Id": "1.1",
      "Description": "Maintain current contact details",
      "Checks": ["account_contact_details_configured"]
    }
  ]
}
```

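Conceptually, producing a compliance report is a join between the executed findings and these requirement mappings. A minimal sketch of that idea (not Prowler's actual compliance output classes; finding objects are assumed to expose `check_metadata.CheckID` and `status`):

```python
def requirement_statuses(framework: dict, findings: list) -> dict:
    """Mark a requirement FAIL if any mapped check failed, PASS if all passed."""
    status_by_check: dict[str, str] = {}
    for finding in findings:
        check_id = finding.check_metadata.CheckID
        if finding.status == "FAIL":
            status_by_check[check_id] = "FAIL"
        else:
            status_by_check.setdefault(check_id, "PASS")

    results = {}
    for requirement in framework["Requirements"]:
        statuses = [status_by_check.get(check) for check in requirement["Checks"]]
        if "FAIL" in statuses:
            results[requirement["Id"]] = "FAIL"
        elif any(status == "PASS" for status in statuses):
            results[requirement["Id"]] = "PASS"
        else:
            # No mapped check produced findings in this scan.
            results[requirement["Id"]] = "MANUAL"
    return results
```
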
### 4. Output System

Multiple output formats supported:

- **JSON**: Machine-readable findings
- **CSV**: Spreadsheet-compatible format
- **HTML**: Interactive web reports
- **ASFF**: AWS Security Finding Format
- **OCSF**: Open Cybersecurity Schema Framework

## Development Patterns

### Adding New Cloud Providers

1. **Create Provider Structure**:
```bash
mkdir -p prowler/providers/{provider}
mkdir -p prowler/providers/{provider}/services
mkdir -p prowler/providers/{provider}/lib/{service,arguments,mutelist}
mkdir -p prowler/providers/{provider}/exceptions
```

2. **Implement Provider Class**:
```python
from prowler.providers.common.provider import Provider

class NewProvider(Provider):
    def __init__(self, arguments):
        super().__init__(arguments)
        # Provider-specific initialization
```

3. **Add Provider to CLI**:
Update `prowler/lib/cli/parser.py` to include new provider arguments.

### Adding New Security Checks

The typical high-level steps to create a new check are:

1. Prerequisites:
    - Verify the check does not already exist by searching the service folder: `prowler/providers/<provider>/services/<service>/<check_name_want_to_implement>/`.
    - Ensure the required provider and service exist; if not, create them first.
    - Confirm the service implements all methods and attributes the check needs (in most cases you will need to add or modify service methods to fetch the required data).
2. Navigate to the service directory. The path should be as follows: `prowler/providers/<provider>/services/<service>`.
3. Create a check-specific folder. The path should follow this pattern: `prowler/providers/<provider>/services/<service>/<check_name_want_to_implement>`. Adhere to the [Naming Format for Checks](/developer-guide/checks#naming-format-for-checks).
4. Create the check files. You can use the following commands:
```bash
mkdir -p prowler/providers/<provider>/services/<service>/<check_name_want_to_implement>
touch prowler/providers/<provider>/services/<service>/<check_name_want_to_implement>/__init__.py
touch prowler/providers/<provider>/services/<service>/<check_name_want_to_implement>/<check_name_want_to_implement>.py
touch prowler/providers/<provider>/services/<service>/<check_name_want_to_implement>/<check_name_want_to_implement>.metadata.json
```
5. Run the check locally to ensure it works as expected. You can use the CLI as follows:
    - To confirm the check has been detected by Prowler: `poetry run python prowler-cli.py <provider> --list-checks | grep <check_name>`.
    - To run the check and surface possible issues: `poetry run python prowler-cli.py <provider> --log-level ERROR --verbose --check <check_name>`.
6. Create comprehensive tests for the check that cover multiple scenarios, including both PASS (compliant) and FAIL (non-compliant) cases. For detailed information about test structure and implementation guidelines, refer to the [Testing](/developer-guide/unit-testing) documentation.
7. Once the check and its corresponding tests work as expected, you can submit a PR to Prowler. A filled-in example check is sketched below.

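For orientation, a filled-in check following the template from the Check Implementation Standards section might look like the sketch below (the check name, service attributes, and status messages are illustrative; mirror an existing check in the same service for the authoritative pattern):

```python
from prowler.lib.check.models import Check, CheckReportAWS
from prowler.providers.aws.services.s3.s3_client import s3_client


class s3_bucket_versioning_enabled(Check):
    """Ensure S3 buckets have versioning enabled."""

    def execute(self) -> list[CheckReportAWS]:
        findings = []
        for bucket in s3_client.buckets.values():
            report = CheckReportAWS(metadata=self.metadata(), resource=bucket)
            if bucket.versioning:
                report.status = "PASS"
                report.status_extended = f"S3 bucket {bucket.name} has versioning enabled."
            else:
                report.status = "FAIL"
                report.status_extended = f"S3 bucket {bucket.name} does not have versioning enabled."
            findings.append(report)
        return findings
```
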
### Adding Compliance Frameworks

1. **Create Framework File**:
```bash
# Create prowler/compliance/{provider}/{framework}.json
```

2. **Define Requirements**:
Map framework requirements to existing checks.

3. **Test Compliance**:
```bash
poetry run python -m prowler {provider} --compliance {framework}
```

## Code Quality Standards

### 1. Python Style

- **PEP 8 Compliance**: Enforced by Black and Flake8
- **Type Hints**: Required for all public functions
- **Docstrings**: Required for all classes and methods
- **Import Organization**: Use isort for consistent import ordering

```python
import standard_library

from third_party import library

from prowler.lib import internal_module


class ExampleClass:
    """Class docstring."""

    def method(self, param: str) -> dict | list | None:
        """Method docstring.

        Args:
            param: Description of parameter

        Returns:
            Description of return value
        """
        return None
```

### 2. Error Handling

```python
from prowler.lib.logger import logger

try:
    # Risky operation
    result = api_call()
except ProviderSpecificException as e:
    logger.error(f"Provider error: {e}")
    # Graceful handling
except Exception as e:
    logger.error(f"Unexpected error: {e}")
    # Never let checks crash the entire scan
```

### 3. Security Practices

- **No Hardcoded Secrets**: Use environment variables or secure credential management
- **Input Validation**: Validate all external inputs
- **Principle of Least Privilege**: Request minimal necessary permissions
- **Secure Defaults**: Default to secure configurations

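For example, credentials should come from the environment (or, better, the cloud's native credential chain) rather than source code, and anything user-supplied should be validated before use. A minimal sketch (the variable names and allowed values below are hypothetical):

```python
import os

# Hypothetical variable name; real providers prefer each cloud's credential chain.
api_token = os.environ.get("PROVIDER_API_TOKEN")
if not api_token:
    raise SystemExit("PROVIDER_API_TOKEN is not set; refusing to run without credentials.")

# Validate external input before using it (e.g., a region passed on the CLI).
ALLOWED_REGIONS = {"us-east-1", "eu-west-1"}
region = os.environ.get("PROVIDER_REGION", "us-east-1")
if region not in ALLOWED_REGIONS:
    raise ValueError(f"Unsupported region: {region}")
```
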
## Testing Guidelines

### Unit Tests

- **100% Coverage Goal**: Aim for complete test coverage
- **Mock External Services**: Use mock objects to simulate external services
- **Test Edge Cases**: Include error conditions and boundary cases

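As a self-contained illustration of the mocking pattern (real Prowler tests patch the actual service client, e.g. with `unittest.mock.patch`; every name below is a toy stand-in):

```python
from unittest import mock


class FakeBucket:
    """Toy resource standing in for a real service model."""

    def __init__(self, name: str, encrypted: bool):
        self.name = name
        self.encrypted = encrypted


def execute_example_check(client) -> list[str]:
    """Toy stand-in for a check's execute(): one status per resource."""
    return ["PASS" if bucket.encrypted else "FAIL" for bucket in client.buckets]


def test_example_check_pass_and_fail():
    # Simulate the external service instead of calling a real cloud API.
    client = mock.MagicMock()
    client.buckets = [
        FakeBucket("bucket-a", encrypted=True),   # compliant resource
        FakeBucket("bucket-b", encrypted=False),  # non-compliant resource
    ]
    assert execute_example_check(client) == ["PASS", "FAIL"]
```
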
## References

- **Root Project Guide**: `../AGENTS.md` (takes priority for cross-component guidance)
- **Provider Examples**: Reference existing providers for implementation patterns
- **Check Examples**: Study existing checks for proper implementation patterns
- **Compliance Framework Examples**: Review existing frameworks for structure

@@ -2,18 +2,57 @@

All notable changes to the **Prowler SDK** are documented in this file.

## [v5.14.0] (Prowler UNRELEASED)

### Added
- GitHub provider check `organization_default_repository_permission_strict` [(#8785)](https://github.com/prowler-cloud/prowler/pull/8785)
- Add OCI mapping to scan and check classes [(#8927)](https://github.com/prowler-cloud/prowler/pull/8927)
- `codepipeline_project_repo_private` check for AWS provider [(#5915)](https://github.com/prowler-cloud/prowler/pull/5915)
- `cloudstorage_bucket_versioning_enabled` check for GCP provider [(#9014)](https://github.com/prowler-cloud/prowler/pull/9014)
- `cloudstorage_bucket_soft_delete_enabled` check for GCP provider [(#9028)](https://github.com/prowler-cloud/prowler/pull/9028)
- `cloudstorage_bucket_logging_enabled` check for GCP provider [(#9091)](https://github.com/prowler-cloud/prowler/pull/9091)
- C5 compliance framework for Azure provider [(#9081)](https://github.com/prowler-cloud/prowler/pull/9081)
- C5 compliance framework for the GCP provider [(#9097)](https://github.com/prowler-cloud/prowler/pull/9097)
- `organization_repository_creation_limited` check for GitHub provider [(#8844)](https://github.com/prowler-cloud/prowler/pull/8844)
- HIPAA compliance framework for the GCP provider [(#8955)](https://github.com/prowler-cloud/prowler/pull/8955)
- Add organization ID parameter for MongoDB Atlas provider [(#9167)](https://github.com/prowler-cloud/prowler/pull/9167)
- Add multiple compliance improvements [(#9145)](https://github.com/prowler-cloud/prowler/pull/9145)
- Added validation for invalid checks, services, and categories in `load_checks_to_execute` function [(#8971)](https://github.com/prowler-cloud/prowler/pull/8971)
- NIST CSF 2.0 compliance framework for the AWS provider [(#9185)](https://github.com/prowler-cloud/prowler/pull/9185)

### Changed
- Update AWS Direct Connect service metadata to new format [(#8855)](https://github.com/prowler-cloud/prowler/pull/8855)
- Update AWS DRS service metadata to new format [(#8870)](https://github.com/prowler-cloud/prowler/pull/8870)
- Update AWS DynamoDB service metadata to new format [(#8871)](https://github.com/prowler-cloud/prowler/pull/8871)
- Update AWS CloudWatch service metadata to new format [(#8848)](https://github.com/prowler-cloud/prowler/pull/8848)
- Update AWS EMR service metadata to new format [(#9002)](https://github.com/prowler-cloud/prowler/pull/9002)
- Update AWS EKS service metadata to new format [(#8890)](https://github.com/prowler-cloud/prowler/pull/8890)
- Update AWS Elastic Beanstalk service metadata to new format [(#8934)](https://github.com/prowler-cloud/prowler/pull/8934)
- Update AWS ElastiCache service metadata to new format [(#8933)](https://github.com/prowler-cloud/prowler/pull/8933)
- Update AWS CodeBuild service metadata to new format [(#8851)](https://github.com/prowler-cloud/prowler/pull/8851)
- Update GCP Artifact Registry service metadata to new format [(#9088)](https://github.com/prowler-cloud/prowler/pull/9088)
- Update AWS EFS service metadata to new format [(#8889)](https://github.com/prowler-cloud/prowler/pull/8889)
- Update AWS EventBridge service metadata to new format [(#9003)](https://github.com/prowler-cloud/prowler/pull/9003)
- Update AWS Firehose service metadata to new format [(#9004)](https://github.com/prowler-cloud/prowler/pull/9004)
- Update AWS FMS service metadata to new format [(#9005)](https://github.com/prowler-cloud/prowler/pull/9005)
- Update AWS FSx service metadata to new format [(#9006)](https://github.com/prowler-cloud/prowler/pull/9006)
- Update AWS Glacier service metadata to new format [(#9007)](https://github.com/prowler-cloud/prowler/pull/9007)
- Update oraclecloud analytics service metadata to new format [(#9114)](https://github.com/prowler-cloud/prowler/pull/9114)

- Update AWS CodeArtifact service metadata to new format [(#8850)](https://github.com/prowler-cloud/prowler/pull/8850)
- Rename OCI provider to oraclecloud with oci alias [(#9126)](https://github.com/prowler-cloud/prowler/pull/9126)

---

## [v5.13.2] (Prowler UNRELEASED)

### Fixed
- Check `check_name` has no `resource_name` error for GCP provider [(#9169)](https://github.com/prowler-cloud/prowler/pull/9169)
- Depth Truncation and parsing error in PowerShell queries [(#9181)](https://github.com/prowler-cloud/prowler/pull/9181)
- False negative in `iam_role_cross_service_confused_deputy_prevention` check [(#9213)](https://github.com/prowler-cloud/prowler/pull/9213)
- Fix M365 Teams `--sp-env-auth` connection error and enhanced timeout logging [(#9191)](https://github.com/prowler-cloud/prowler/pull/9191)
- Rename `get_oci_assessment_summary` to `get_oraclecloud_assessment_summary` in HTML output [(#9200)](https://github.com/prowler-cloud/prowler/pull/9200)
- Fix Validation and other errors in Azure provider [(#8915)](https://github.com/prowler-cloud/prowler/pull/8915)
- Update documentation URLs from docs.prowler.cloud to docs.prowler.com [(#9240)](https://github.com/prowler-cloud/prowler/pull/9240)
- Fix file name parsing for checks on Windows [(#9268)](https://github.com/prowler-cloud/prowler/pull/9268)
- Remove typo for Prowler ThreatScore - M365 [(#9274)](https://github.com/prowler-cloud/prowler/pull/9274)
- Fix M365 Teams connection error and enhanced timeout logging [(#9197)](https://github.com/prowler-cloud/prowler/pull/9197)

---

## [v5.13.1] (Prowler v5.13.1)

@@ -22,6 +61,12 @@ All notable changes to the **Prowler SDK** are documented in this file.
- Fix `ec2_instance_with_outdated_ami` check to handle None AMIs [(#9046)](https://github.com/prowler-cloud/prowler/pull/9046)
- Handle timestamp when transforming compliance findings in CCC [(#9042)](https://github.com/prowler-cloud/prowler/pull/9042)
- Update `resource_id` for admincenter service and avoid unnecessary msgraph requests [(#9019)](https://github.com/prowler-cloud/prowler/pull/9019)
- Fix `firehose_stream_encrypted_at_rest` description and findings clarity [(#9142)](https://github.com/prowler-cloud/prowler/pull/9142)

---

### Changed
- Adapt IaC provider to be used in the Prowler App [(#8751)](https://github.com/prowler-cloud/prowler/pull/8751)

---

@@ -67,7 +112,6 @@ All notable changes to the **Prowler SDK** are documented in this file.
- Update AWS Directory Service service metadata to new format [(#8859)](https://github.com/prowler-cloud/prowler/pull/8859)
- Update AWS CloudFront service metadata to new format [(#8829)](https://github.com/prowler-cloud/prowler/pull/8829)
- Deprecate user authentication for M365 provider [(#8865)](https://github.com/prowler-cloud/prowler/pull/8865)
- Update AWS EFS service metadata to new format [(#8889)](https://github.com/prowler-cloud/prowler/pull/8889)

### Fixed
- Fix SNS topics showing empty AWS_ResourceID in Quick Inventory output [(#8762)](https://github.com/prowler-cloud/prowler/issues/8762)

@@ -49,17 +49,19 @@ from prowler.lib.outputs.asff.asff import ASFF
from prowler.lib.outputs.compliance.aws_well_architected.aws_well_architected import (
AWSWellArchitected,
)
from prowler.lib.outputs.compliance.c5.c5_aws import AWSC5
from prowler.lib.outputs.compliance.c5.c5_azure import AzureC5
from prowler.lib.outputs.compliance.c5.c5_gcp import GCPC5
from prowler.lib.outputs.compliance.ccc.ccc_aws import CCC_AWS
from prowler.lib.outputs.compliance.ccc.ccc_azure import CCC_Azure
from prowler.lib.outputs.compliance.ccc.ccc_gcp import CCC_GCP
from prowler.lib.outputs.compliance.c5.c5_aws import AWSC5
from prowler.lib.outputs.compliance.cis.cis_aws import AWSCIS
from prowler.lib.outputs.compliance.cis.cis_azure import AzureCIS
from prowler.lib.outputs.compliance.cis.cis_gcp import GCPCIS
from prowler.lib.outputs.compliance.cis.cis_github import GithubCIS
from prowler.lib.outputs.compliance.cis.cis_kubernetes import KubernetesCIS
from prowler.lib.outputs.compliance.cis.cis_m365 import M365CIS
from prowler.lib.outputs.compliance.cis.cis_oci import OCICIS
from prowler.lib.outputs.compliance.cis.cis_oraclecloud import OracleCloudCIS
from prowler.lib.outputs.compliance.compliance import display_compliance_table
from prowler.lib.outputs.compliance.ens.ens_aws import AWSENS
from prowler.lib.outputs.compliance.ens.ens_azure import AzureENS
@@ -332,7 +334,7 @@ def prowler():
output_options = IACOutputOptions(args, bulk_checks_metadata)
elif provider == "llm":
output_options = LLMOutputOptions(args, bulk_checks_metadata)
elif provider == "oci":
elif provider == "oraclecloud":
output_options = OCIOutputOptions(
args, bulk_checks_metadata, global_provider.identity
)
@@ -357,6 +359,12 @@ def prowler():
else:
# Original behavior for IAC or non-verbose LLM
findings = global_provider.run()
# Note: IaC doesn't support granular progress tracking since Trivy runs as a black box
# and returns all findings at once. Progress tracking would just be 0% → 100%.

# Filter findings by status if specified
if hasattr(args, "status") and args.status:
findings = [f for f in findings if f.status in args.status]
# Report findings for verbose output
report(findings, global_provider, output_options)
elif len(checks_to_execute):
@@ -422,7 +430,7 @@ def prowler():
else:
# Refactor(CLI)
logger.critical(
"Slack integration needs SLACK_API_TOKEN and SLACK_CHANNEL_NAME environment variables (see more in https://docs.prowler.com/user-guide/cli/tutorials/integrations#configuration-of-the-integration-with-slack)."
"Slack integration needs SLACK_API_TOKEN and SLACK_CHANNEL_NAME environment variables (see more in https://docs.prowler.cloud/en/latest/tutorials/integrations/#slack)."
)
sys.exit(1)

@@ -682,6 +690,18 @@ def prowler():
)
generated_outputs["compliance"].append(ccc_azure)
ccc_azure.batch_write_data_to_file()
elif compliance_name == "c5_azure":
filename = (
f"{output_options.output_directory}/compliance/"
f"{output_options.output_filename}_{compliance_name}.csv"
)
c5_azure = AzureC5(
findings=finding_outputs,
compliance=bulk_compliance_frameworks[compliance_name],
file_path=filename,
)
generated_outputs["compliance"].append(c5_azure)
c5_azure.batch_write_data_to_file()
else:
filename = (
f"{output_options.output_directory}/compliance/"
@@ -773,6 +793,18 @@ def prowler():
)
generated_outputs["compliance"].append(ccc_gcp)
ccc_gcp.batch_write_data_to_file()
elif compliance_name == "c5_gcp":
filename = (
f"{output_options.output_directory}/compliance/"
f"{output_options.output_filename}_{compliance_name}.csv"
)
c5_gcp = GCPC5(
findings=finding_outputs,
compliance=bulk_compliance_frameworks[compliance_name],
file_path=filename,
)
generated_outputs["compliance"].append(c5_gcp)
c5_gcp.batch_write_data_to_file()
else:
filename = (
f"{output_options.output_directory}/compliance/"
@@ -937,7 +969,7 @@ def prowler():
generated_outputs["compliance"].append(generic_compliance)
generic_compliance.batch_write_data_to_file()

elif provider == "oci":
elif provider == "oraclecloud":
for compliance_name in input_compliance_frameworks:
if compliance_name.startswith("cis_"):
# Generate CIS Finding Object
@@ -945,7 +977,7 @@ def prowler():
f"{output_options.output_directory}/compliance/"
f"{output_options.output_filename}_{compliance_name}.csv"
)
cis = OCICIS(
cis = OracleCloudCIS(
findings=finding_outputs,
compliance=bulk_compliance_frameworks[compliance_name],
file_path=filename,

@@ -1,6 +1,6 @@
{
"Framework": "ENS",
"Name": "ENS RD 311/2022",
"Name": "ENS RD 311/2022 - Categoría Alta",
"Version": "RD2022",
"Provider": "AWS",
"Description": "The accreditation scheme of the ENS (National Security Scheme) has been developed by the Ministry of Finance and Public Administrations and the CCN (National Cryptological Center). This includes the basic principles and minimum requirements necessary for the adequate protection of information.",

@@ -1,6 +1,6 @@
{
"Framework": "NIS2",
"Name": "Network and Information Security Directive (Directive (EU) 2022/2555)",
"Name": "NIS2 - Network and Information Security Directive (Directive (EU) 2022/2555)",
"Version": "",
"Provider": "AWS",
"Description": "ANNEX to the Commission Implementing Regulation laying down rules for the application of Directive (EU) 2022/2555 as regards technical and methodological requirements of cybersecurity risk-management measures and further specification of the cases in which an incident is considered to be significant with regard to DNS service providers, TLD name registries, cloud computing service providers, data centre service providers, content delivery network providers, managed service providers, managed security service providers, providers of online market places, of online search engines and of social networking services platforms, and trust service providers",

File diff suppressed because it is too large
File diff suppressed because it is too large

@@ -1,6 +1,6 @@
{
"Framework": "ENS",
"Name": "ENS RD 311/2022",
"Name": "ENS RD 311/2022 - Categoría Alta",
"Version": "RD2022",
"Provider": "AZURE",
"Description": "The accreditation scheme of the ENS (National Security Scheme) has been developed by the Ministry of Finance and Public Administrations and the CCN (National Cryptological Center). This includes the basic principles and minimum requirements necessary for the adequate protection of information.",

@@ -1,6 +1,6 @@
{
"Framework": "NIS2",
"Name": "Network and Information Security Directive (Directive (EU) 2022/2555)",
"Name": "NIS2 - Network and Information Security Directive (Directive (EU) 2022/2555)",
"Version": "",
"Provider": "Azure",
"Description": "ANNEX to the Commission Implementing Regulation laying down rules for the application of Directive (EU) 2022/2555 as regards technical and methodological requirements of cybersecurity risk-management measures and further specification of the cases in which an incident is considered to be significant with regard to DNS service providers, TLD name registries, cloud computing service providers, data centre service providers, content delivery network providers, managed service providers, managed security service providers, providers of online market places, of online search engines and of social networking services platforms, and trust service providers",

File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff.