Compare commits

1 commit

| Author | SHA1 | Date |
|---|---|---|
| | 5e488e2ee6 | |
@@ -145,7 +145,7 @@ SENTRY_RELEASE=local
NEXT_PUBLIC_SENTRY_ENVIRONMENT=${SENTRY_ENVIRONMENT}

#### Prowler release version ####
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.27.0
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.26.0

# Social login credentials
SOCIAL_GOOGLE_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/google"

@@ -1,15 +0,0 @@
# These are supported funding model platforms

github: [prowler-cloud]
# patreon: # Replace with a single Patreon username
# open_collective: # Replace with a single Open Collective username
# ko_fi: # Replace with a single Ko-fi username
# tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
# community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
# liberapay: # Replace with a single Liberapay username
# issuehunt: # Replace with a single IssueHunt username
# lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
# polar: # Replace with a single Polar username
# buy_me_a_coffee: # Replace with a single Buy Me a Coffee username
# thanks_dev: # Replace with a single thanks.dev username
# custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

@@ -1,143 +0,0 @@
name: "🔎 New Check Request"
description: Request a new Prowler security check
title: "[New Check]: "
labels: ["feature-request", "status/needs-triage"]

body:
  - type: checkboxes
    id: search
    attributes:
      label: Existing check search
      description: Confirm this check does not already exist before opening a new request.
      options:
        - label: I have searched existing issues, Prowler Hub, and the public roadmap, and this check does not already exist.
          required: true

  - type: markdown
    attributes:
      value: |
        Use this form to describe the security condition that Prowler should evaluate.

        The most useful inputs for [Prowler Studio](https://github.com/prowler-cloud/prowler-studio) are:
        - What should be detected
        - What PASS and FAIL mean
        - Vendor docs, API references, SDK methods, CLI commands, or reference code

  - type: dropdown
    id: provider
    attributes:
      label: Provider
      description: Cloud or platform this check targets.
      options:
        - AWS
        - Azure
        - GCP
        - Kubernetes
        - GitHub
        - Microsoft 365
        - OCI
        - Alibaba Cloud
        - Cloudflare
        - MongoDB Atlas
        - Google Workspace
        - OpenStack
        - Vercel
        - NHN
        - Other / New provider
    validations:
      required: true

  - type: input
    id: other_provider_name
    attributes:
      label: New provider name
      description: Only fill this if you selected "Other / New provider" above.
      placeholder: "NewProviderName"
    validations:
      required: false

  - type: input
    id: service_name
    attributes:
      label: Service or product area
      description: Optional. Main service, product, or feature to audit.
      placeholder: "s3, bedrock, entra, repository, apiserver"
    validations:
      required: false

  - type: input
    id: suggested_check_name
    attributes:
      label: Suggested check name
      description: Optional. Use `snake_case` following `<service>_<resource>_<best_practice>`, with lowercase letters and underscores only.
      placeholder: "bedrock_guardrail_sensitive_information_filter_enabled"
    validations:
      required: false

  - type: textarea
    id: context
    attributes:
      label: Context and goal
      description: Describe the security problem, why it matters, and what this new check should help detect.
      placeholder: |-
        - Security condition to validate:
        - Why it matters:
        - Resource, feature, or configuration involved:
    validations:
      required: true

  - type: textarea
    id: expected_behavior
    attributes:
      label: Expected behavior
      description: Explain what the check should evaluate and what PASS, FAIL, or MANUAL should mean.
      placeholder: |-
        - Resource or scope to evaluate:
        - PASS when:
        - FAIL when:
        - MANUAL when (if applicable):
        - Exclusions, thresholds, or edge cases:
    validations:
      required: true

  - type: textarea
    id: references
    attributes:
      label: References
      description: Add vendor docs, API references, SDK methods, CLI commands, endpoint docs, sample payloads, or similar reference material.
      placeholder: |-
        - Product or service documentation:
        - API or SDK reference:
        - CLI command or endpoint documentation:
        - Sample payload or response:
        - Security advisory or benchmark:
    validations:
      required: true

  - type: dropdown
    id: severity
    attributes:
      label: Suggested severity
      description: Your best estimate. Reviewers will confirm during triage.
      options:
        - Critical
        - High
        - Medium
        - Low
        - Informational
        - Not sure
    validations:
      required: true

  - type: textarea
    id: implementation_notes
    attributes:
      label: Additional implementation notes
      description: Optional. Add permissions, unsupported regions, config knobs, product limitations, or anything else that may affect implementation.
      placeholder: |-
        - Required permissions or scopes:
        - Region, tenant, or subscription limitations:
        - Configurable behavior or thresholds:
        - Other constraints:
    validations:
      required: false

@@ -158,7 +158,7 @@ jobs:
          tags: |
            ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-${{ matrix.arch }}
          cache-from: type=gha,scope=${{ matrix.arch }}
          cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }},scope=${{ matrix.arch }}
          cache-to: type=gha,mode=max,scope=${{ matrix.arch }}

  # Create and push multi-architecture manifest
  create-manifest:

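A note on the expression in the replaced `cache-to` line: GitHub Actions has no ternary operator, so `cond && 'a' || 'b'` is the conventional stand-in. A minimal sketch of the idiom in isolation (the `CACHE_MODE` variable is hypothetical, introduced only for illustration):

```yaml
# Illustrative sketch, not part of the diff: the `&& ||` selection idiom.
env:
  # 'min' on pull_request events, 'max' otherwise: `&&` yields its right-hand
  # value when the condition is true, and `||` supplies the fallback.
  CACHE_MODE: ${{ github.event_name == 'pull_request' && 'min' || 'max' }}
```

One caveat of the idiom: if the value after `&&` is itself falsy (an empty string, for example), evaluation falls through to the `||` branch instead.
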
@@ -5,16 +5,10 @@ on:
    branches:
      - 'master'
      - 'v5.*'
    paths:
      - 'api/**'
      - '.github/workflows/api-container-checks.yml'
  pull_request:
    branches:
      - 'master'
      - 'v5.*'
    paths:
      - 'api/**'
      - '.github/workflows/api-container-checks.yml'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}

@@ -63,7 +57,16 @@ jobs:

  api-container-build-and-scan:
    if: github.repository == 'prowler-cloud/prowler'
    runs-on: ubuntu-latest
    runs-on: ${{ matrix.runner }}
    strategy:
      matrix:
        include:
          - platform: linux/amd64
            runner: ubuntu-latest
            arch: amd64
          - platform: linux/arm64
            runner: ubuntu-24.04-arm
            arch: arm64
    timeout-minutes: 30
    permissions:
      contents: read

@@ -116,22 +119,23 @@ jobs:
        if: steps.check-changes.outputs.any_changed == 'true'
        uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0

      - name: Build container
      - name: Build container for ${{ matrix.arch }}
        if: steps.check-changes.outputs.any_changed == 'true'
        uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
        with:
          context: ${{ env.API_WORKING_DIR }}
          push: false
          load: true
          tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }}
          platforms: ${{ matrix.platform }}
          tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}-${{ matrix.arch }}
          cache-from: type=gha,scope=${{ matrix.arch }}
          cache-to: type=gha,mode=max,scope=${{ matrix.arch }}

      - name: Scan container with Trivy
      - name: Scan container with Trivy for ${{ matrix.arch }}
        if: steps.check-changes.outputs.any_changed == 'true'
        uses: ./.github/actions/trivy-scan
        with:
          image-name: ${{ env.IMAGE_NAME }}
          image-tag: ${{ github.sha }}
          image-tag: ${{ github.sha }}-${{ matrix.arch }}
          fail-on-critical: 'false'
          severity: 'CRITICAL'

|
||||
@@ -5,20 +5,10 @@ on:
    branches:
      - "master"
      - "v5.*"
    paths:
      - 'api/**'
      - '.github/workflows/api-tests.yml'
      - '.github/workflows/api-security.yml'
      - '.github/actions/setup-python-poetry/**'
  pull_request:
    branches:
      - "master"
      - "v5.*"
    paths:
      - 'api/**'
      - '.github/workflows/api-tests.yml'
      - '.github/workflows/api-security.yml'
      - '.github/actions/setup-python-poetry/**'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}

@@ -4,6 +4,8 @@ on:
  pull_request:
    branches:
      - 'master'
      - 'v3'
      - 'v4.*'
      - 'v5.*'
    types:
      - 'opened'

@@ -43,11 +43,14 @@ jobs:

          echo "Processing release tag: $RELEASE_TAG"

          # Remove 'v' prefix if present (e.g., v3.2.0 -> 3.2.0)
          VERSION_ONLY="${RELEASE_TAG#v}"

          # Check if it's a minor version (X.Y.0)
          if [[ "$VERSION_ONLY" =~ ^([0-9]+)\.([0-9]+)\.0$ ]]; then
            echo "Release $RELEASE_TAG (version $VERSION_ONLY) is a minor version. Proceeding to create backport label."

            # Extract X.Y from X.Y.0 (e.g., 5.6 from 5.6.0)
            MAJOR="${BASH_REMATCH[1]}"
            MINOR="${BASH_REMATCH[2]}"
            TWO_DIGIT_VERSION="${MAJOR}.${MINOR}"

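A self-contained sketch of the tag parsing used in this step, runnable on its own; the `v5.6.0` input and the echoed output format are hypothetical examples, not values taken from this workflow:

```yaml
- name: Tag-parsing sketch (illustrative, not part of the diff)
  run: |
    RELEASE_TAG="v5.6.0"             # hypothetical input tag
    VERSION_ONLY="${RELEASE_TAG#v}"  # '#v' strips a leading 'v' -> 5.6.0
    # The regex only matches X.Y.0, i.e. minor releases.
    if [[ "$VERSION_ONLY" =~ ^([0-9]+)\.([0-9]+)\.0$ ]]; then
      # BASH_REMATCH[1]/[2] hold the capture groups of the last =~ match.
      echo "two-digit version: ${BASH_REMATCH[1]}.${BASH_REMATCH[2]}"   # 5.6
    fi
```
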
@@ -59,6 +62,7 @@ jobs:
          echo "Label name: $LABEL_NAME"
          echo "Label description: $LABEL_DESC"

          # Check if label already exists
          if gh label list --repo ${{ github.repository }} --limit 1000 | grep -q "^${LABEL_NAME}[[:space:]]"; then
            echo "Label '$LABEL_NAME' already exists."
          else

@@ -37,13 +37,10 @@ jobs:
      - name: Checkout repository
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          # PRs only need the diff range; push to master/release walks the new range from event.before.
          # 50 is enough headroom for the longest realistic PR/push chain without paying for a full clone.
          fetch-depth: 50
          fetch-depth: 0
          persist-credentials: false

      - name: Scan diff for secrets with TruffleHog
      # Action auto-injects --since-commit/--branch from event payload; passing them in extra_args produces duplicate flags.
      - name: Scan for secrets with TruffleHog
        uses: trufflesecurity/trufflehog@ef6e76c3c4023279497fab4721ffa071a722fd05 # v3.92.4
        with:
          extra_args: --results=verified,unknown
          extra_args: '--results=verified,unknown'

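For context on the auto-injection comment above: the action derives the commit range from the event payload, so on a pull request the effective invocation is roughly the following. This is a hedged reconstruction; the exact command is assembled inside the action, and `BASE_SHA`/`HEAD_REF` are placeholder names:

```yaml
- name: TruffleHog invocation sketch (illustrative, not part of the diff)
  run: |
    # BASE_SHA and HEAD_REF stand in for values the action reads from the
    # pull_request event payload; only the flags shown are taken from above.
    trufflehog git file://. \
      --since-commit "$BASE_SHA" \
      --branch "$HEAD_REF" \
      --results=verified,unknown
```
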
@@ -152,7 +152,7 @@ jobs:
            org.opencontainers.image.created=${{ github.event_name == 'release' && github.event.release.published_at || github.event.head_commit.timestamp }}
            ${{ github.event_name == 'release' && format('org.opencontainers.image.version={0}', env.RELEASE_TAG) || '' }}
          cache-from: type=gha,scope=${{ matrix.arch }}
          cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }},scope=${{ matrix.arch }}
          cache-to: type=gha,mode=max,scope=${{ matrix.arch }}

  # Create and push multi-architecture manifest
  create-manifest:

@@ -5,16 +5,10 @@ on:
    branches:
      - 'master'
      - 'v5.*'
    paths:
      - 'mcp_server/**'
      - '.github/workflows/mcp-container-checks.yml'
  pull_request:
    branches:
      - 'master'
      - 'v5.*'
    paths:
      - 'mcp_server/**'
      - '.github/workflows/mcp-container-checks.yml'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}

@@ -62,7 +56,16 @@ jobs:

  mcp-container-build-and-scan:
    if: github.repository == 'prowler-cloud/prowler'
    runs-on: ubuntu-latest
    runs-on: ${{ matrix.runner }}
    strategy:
      matrix:
        include:
          - platform: linux/amd64
            runner: ubuntu-latest
            arch: amd64
          - platform: linux/arm64
            runner: ubuntu-24.04-arm
            arch: arm64
    timeout-minutes: 30
    permissions:
      contents: read

@@ -109,22 +112,23 @@ jobs:
        if: steps.check-changes.outputs.any_changed == 'true'
        uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0

      - name: Build MCP container
      - name: Build MCP container for ${{ matrix.arch }}
        if: steps.check-changes.outputs.any_changed == 'true'
        uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
        with:
          context: ${{ env.MCP_WORKING_DIR }}
          push: false
          load: true
          tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }}
          platforms: ${{ matrix.platform }}
          tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}-${{ matrix.arch }}
          cache-from: type=gha,scope=${{ matrix.arch }}
          cache-to: type=gha,mode=max,scope=${{ matrix.arch }}

      - name: Scan MCP container with Trivy
      - name: Scan MCP container with Trivy for ${{ matrix.arch }}
        if: steps.check-changes.outputs.any_changed == 'true'
        uses: ./.github/actions/trivy-scan
        with:
          image-name: ${{ env.IMAGE_NAME }}
          image-tag: ${{ github.sha }}
          image-tag: ${{ github.sha }}-${{ matrix.arch }}
          fail-on-critical: 'false'
          severity: 'CRITICAL'

@@ -86,32 +86,11 @@ jobs:
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      # The MCP server version (mcp_server/pyproject.toml) is decoupled from the Prowler release
      # version: it only changes when MCP code changes. mcp-bump-version.yml normally keeps it in
      # sync with mcp_server/CHANGELOG.md, but this publish workflow still runs on every release.
      # Pre-flight PyPI check covers the legitimate "no MCP changes for this release" case (and any
      # workflow_dispatch re-runs) without failing with HTTP 400 (version exists).
      - name: Check if prowler-mcp version already exists on PyPI
        id: pypi-check
        working-directory: ${{ env.WORKING_DIRECTORY }}
        run: |
          MCP_VERSION=$(grep '^version' pyproject.toml | head -1 | sed -E 's/^version[[:space:]]*=[[:space:]]*"([^"]+)".*/\1/')
          echo "mcp_version=${MCP_VERSION}" >> "$GITHUB_OUTPUT"
          if curl -fsS "https://pypi.org/pypi/prowler-mcp/${MCP_VERSION}/json" >/dev/null 2>&1; then
            echo "skip=true" >> "$GITHUB_OUTPUT"
            echo "::notice title=Skipping prowler-mcp publish::Version ${MCP_VERSION} already exists on PyPI; bump mcp_server/pyproject.toml to publish a new release."
          else
            echo "skip=false" >> "$GITHUB_OUTPUT"
            echo "::notice title=Publishing prowler-mcp::Version ${MCP_VERSION} not on PyPI yet; proceeding."
          fi

      - name: Build prowler-mcp package
        if: steps.pypi-check.outputs.skip != 'true'
        working-directory: ${{ env.WORKING_DIRECTORY }}
        run: uv build

      - name: Publish prowler-mcp package to PyPI
        if: steps.pypi-check.outputs.skip != 'true'
        uses: pypa/gh-action-pypi-publish@ed0c53931b1dc9bd32cbe73a98c7f6766f8a527e # v1.13.0
        with:
          packages-dir: ${{ env.WORKING_DIRECTORY }}/dist/

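A quick worked example of the version-extraction pipeline in the pre-flight step; the pyproject.toml contents are hypothetical:

```yaml
- name: Version-extraction sketch (illustrative, not part of the diff)
  run: |
    printf 'version = "0.4.2"\n' > /tmp/pyproject.toml   # hypothetical file
    grep '^version' /tmp/pyproject.toml | head -1 \
      | sed -E 's/^version[[:space:]]*=[[:space:]]*"([^"]+)".*/\1/'
    # prints 0.4.2; https://pypi.org/pypi/<name>/<version>/json then answers
    # HTTP 200 for a published version and 404 otherwise, which is all the
    # curl probe above relies on.
```
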
@@ -1,98 +0,0 @@
name: 'Nightly: ARM64 Container Builds'

# Mitigation for amd64-only PR container-checks: build amd64+arm64 nightly against
# master to keep arm-specific Dockerfile regressions caught quickly. Build only —
# no push, no Trivy (weekly checks already cover that).

on:
  schedule:
    - cron: '0 4 * * *'
  workflow_dispatch: {}

concurrency:
  group: ${{ github.workflow }}
  cancel-in-progress: false

permissions: {}

jobs:
  build-arm64:
    if: github.repository == 'prowler-cloud/prowler'
    runs-on: ubuntu-24.04-arm
    timeout-minutes: 60
    permissions:
      contents: read
    strategy:
      fail-fast: false
      matrix:
        include:
          - component: sdk
            context: .
            dockerfile: ./Dockerfile
            image_name: prowler
          - component: api
            context: ./api
            dockerfile: ./api/Dockerfile
            image_name: prowler-api
          - component: ui
            context: ./ui
            dockerfile: ./ui/Dockerfile
            image_name: prowler-ui
            target: prod
            build_args: |
              NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_51LwpXXXX
          - component: mcp
            context: ./mcp_server
            dockerfile: ./mcp_server/Dockerfile
            image_name: prowler-mcp

    steps:
      - name: Harden the runner (Audit all outbound calls)
        uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
        with:
          egress-policy: audit

      - name: Checkout repository
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0

      - name: Build ${{ matrix.component }} container (linux/arm64)
        uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
        with:
          context: ${{ matrix.context }}
          file: ${{ matrix.dockerfile }}
          target: ${{ matrix.target }}
          push: false
          load: false
          platforms: linux/arm64
          tags: ${{ matrix.image_name }}:nightly-arm64
          build-args: ${{ matrix.build_args }}
          cache-from: type=gha,scope=arm64
          cache-to: type=gha,mode=min,scope=arm64

  notify-failure:
    needs: build-arm64
    if: failure() && github.event_name == 'schedule'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
    steps:
      - name: Harden the runner (Audit all outbound calls)
        uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
        with:
          egress-policy: audit

      - name: Notify Slack on failure
        uses: slackapi/slack-github-action@91efab103c0de0a537f72a35f6b8cda0ee76bf0a # v2.1.1
        with:
          method: chat.postMessage
          token: ${{ secrets.SLACK_BOT_TOKEN }}
          payload: |
            channel: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
            text: ":rotating_light: Nightly arm64 container build failed for prowler — <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|view run>"
          errors: true

@@ -41,15 +41,10 @@ jobs:
      - name: Checkout repository
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 1
          fetch-depth: 0
          # zizmor: ignore[artipacked]
          persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch

      - name: Fetch PR base ref for tj-actions/changed-files
        env:
          BASE_REF: ${{ github.event.pull_request.base.ref }}
        run: git fetch --depth=1 origin "${BASE_REF}"

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5

@@ -45,15 +45,10 @@ jobs:
      - name: Checkout repository
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 1
          fetch-depth: 0
          # zizmor: ignore[artipacked]
          persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch

      - name: Fetch PR base ref for tj-actions/changed-files
        env:
          BASE_REF: ${{ github.event.pull_request.base.ref }}
        run: git fetch --depth=1 origin "${BASE_REF}"

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5

@@ -36,14 +36,8 @@ jobs:
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          fetch-depth: 1
          # zizmor: ignore[artipacked]
          persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch

      - name: Fetch PR base ref for tj-actions/changed-files
        env:
          BASE_REF: ${{ github.event.pull_request.base.ref }}
        run: git fetch --depth=1 origin "${BASE_REF}"
          fetch-depth: 0
          persist-credentials: false

      - name: Get changed files
        id: changed-files

@@ -5,9 +5,6 @@ on:
    branches:
      - 'master'
      - 'v5.*'
    paths:
      - 'tests/providers/**/*_test.py'
      - '.github/workflows/sdk-check-duplicate-test-names.yml'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}

@@ -3,7 +3,9 @@ name: 'SDK: Container Build and Push'
on:
  push:
    branches:
      - 'master'
      - 'v3' # For v3-latest
      - 'v4.6' # For v4-latest
      - 'master' # For latest
    paths-ignore:
      - '.github/**'
      - '!.github/workflows/sdk-container-build-push.yml'

@@ -54,6 +56,7 @@ jobs:
    timeout-minutes: 5
    outputs:
      prowler_version: ${{ steps.get-prowler-version.outputs.prowler_version }}
      prowler_version_major: ${{ steps.get-prowler-version.outputs.prowler_version_major }}
      latest_tag: ${{ steps.get-prowler-version.outputs.latest_tag }}
      stable_tag: ${{ steps.get-prowler-version.outputs.stable_tag }}
    permissions:

@@ -89,13 +92,32 @@ jobs:
          PROWLER_VERSION="$(poetry version -s 2>/dev/null)"
          echo "prowler_version=${PROWLER_VERSION}" >> "${GITHUB_OUTPUT}"

          # Extract major version
          PROWLER_VERSION_MAJOR="${PROWLER_VERSION%%.*}"
          if [[ "${PROWLER_VERSION_MAJOR}" != "5" ]]; then
            echo "::error::Unsupported Prowler major version: ${PROWLER_VERSION_MAJOR}"
            exit 1
          fi
          echo "latest_tag=latest" >> "${GITHUB_OUTPUT}"
          echo "stable_tag=stable" >> "${GITHUB_OUTPUT}"
          echo "prowler_version_major=${PROWLER_VERSION_MAJOR}" >> "${GITHUB_OUTPUT}"

          # Set version-specific tags
          case ${PROWLER_VERSION_MAJOR} in
            3)
              echo "latest_tag=v3-latest" >> "${GITHUB_OUTPUT}"
              echo "stable_tag=v3-stable" >> "${GITHUB_OUTPUT}"
              echo "✓ Prowler v3 detected - tags: v3-latest, v3-stable"
              ;;
            4)
              echo "latest_tag=v4-latest" >> "${GITHUB_OUTPUT}"
              echo "stable_tag=v4-stable" >> "${GITHUB_OUTPUT}"
              echo "✓ Prowler v4 detected - tags: v4-latest, v4-stable"
              ;;
            5)
              echo "latest_tag=latest" >> "${GITHUB_OUTPUT}"
              echo "stable_tag=stable" >> "${GITHUB_OUTPUT}"
              echo "✓ Prowler v5 detected - tags: latest, stable"
              ;;
            *)
              echo "::error::Unsupported Prowler major version: ${PROWLER_VERSION_MAJOR}"
              exit 1
              ;;
          esac

  notify-release-started:
    if: github.repository == 'prowler-cloud/prowler' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')

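The `%%.*` expansion at the top of that script carries the major-version split; a tiny worked example with a hypothetical value:

```yaml
- name: Parameter-expansion sketch (illustrative, not part of the diff)
  run: |
    PROWLER_VERSION="5.26.0"        # hypothetical value
    # '%%.*' deletes the longest suffix matching '.*', leaving the major version.
    echo "${PROWLER_VERSION%%.*}"   # prints: 5
```
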
@@ -206,7 +228,7 @@ jobs:
          tags: |
            ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-${{ matrix.arch }}
          cache-from: type=gha,scope=${{ matrix.arch }}
          cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }},scope=${{ matrix.arch }}
          cache-to: type=gha,mode=max,scope=${{ matrix.arch }}

  # Create and push multi-architecture manifest
  create-manifest:

@@ -364,3 +386,39 @@ jobs:
          payload-file-path: "./.github/scripts/slack-messages/container-release-completed.json"
          step-outcome: ${{ steps.outcome.outputs.outcome }}
          update-ts: ${{ needs.notify-release-started.outputs.message-ts }}

  dispatch-v3-deployment:
    needs: [setup, container-build-push]
    if: always() && needs.setup.outputs.prowler_version_major == '3' && needs.setup.result == 'success' && needs.container-build-push.result == 'success'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read

    steps:
      - name: Harden the runner (Audit all outbound calls)
        uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
        with:
          egress-policy: audit

      - name: Calculate short SHA
        id: short-sha
        run: echo "short_sha=${GITHUB_SHA::7}" >> $GITHUB_OUTPUT

      - name: Dispatch v3 deployment (latest)
        if: github.event_name == 'push'
        uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
        with:
          token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
          repository: ${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}
          event-type: dispatch
          client-payload: '{"version":"v3-latest","tag":"${{ steps.short-sha.outputs.short_sha }}"}'

      - name: Dispatch v3 deployment (release)
        if: github.event_name == 'release'
        uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
        with:
          token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
          repository: ${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}
          event-type: dispatch
          client-payload: '{"version":"release","tag":"${{ needs.setup.outputs.prowler_version }}"}'

@@ -5,22 +5,10 @@ on:
    branches:
      - 'master'
      - 'v5.*'
    paths:
      - 'prowler/**'
      - 'Dockerfile*'
      - 'pyproject.toml'
      - 'poetry.lock'
      - '.github/workflows/sdk-container-checks.yml'
  pull_request:
    branches:
      - 'master'
      - 'v5.*'
    paths:
      - 'prowler/**'
      - 'Dockerfile*'
      - 'pyproject.toml'
      - 'poetry.lock'
      - '.github/workflows/sdk-container-checks.yml'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}

@@ -68,7 +56,16 @@ jobs:

  sdk-container-build-and-scan:
    if: github.repository == 'prowler-cloud/prowler'
    runs-on: ubuntu-latest
    runs-on: ${{ matrix.runner }}
    strategy:
      matrix:
        include:
          - platform: linux/amd64
            runner: ubuntu-latest
            arch: amd64
          - platform: linux/arm64
            runner: ubuntu-24.04-arm
            arch: arm64
    timeout-minutes: 30
    permissions:
      contents: read

@@ -135,22 +132,23 @@ jobs:
        if: steps.check-changes.outputs.any_changed == 'true'
        uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0

      - name: Build SDK container
      - name: Build SDK container for ${{ matrix.arch }}
        if: steps.check-changes.outputs.any_changed == 'true'
        uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
        with:
          context: .
          push: false
          load: true
          tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }}
          platforms: ${{ matrix.platform }}
          tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}-${{ matrix.arch }}
          cache-from: type=gha,scope=${{ matrix.arch }}
          cache-to: type=gha,mode=max,scope=${{ matrix.arch }}

      - name: Scan SDK container with Trivy
      - name: Scan SDK container with Trivy for ${{ matrix.arch }}
        if: steps.check-changes.outputs.any_changed == 'true'
        uses: ./.github/actions/trivy-scan
        with:
          image-name: ${{ env.IMAGE_NAME }}
          image-tag: ${{ github.sha }}
          image-tag: ${{ github.sha }}-${{ matrix.arch }}
          fail-on-critical: 'false'
          severity: 'CRITICAL'

@@ -5,26 +5,10 @@ on:
    branches:
      - 'master'
      - 'v5.*'
    paths:
      - 'prowler/**'
      - 'tests/**'
      - 'pyproject.toml'
      - 'poetry.lock'
      - '.github/workflows/sdk-tests.yml'
      - '.github/workflows/sdk-security.yml'
      - '.github/actions/setup-python-poetry/**'
  pull_request:
    branches:
      - 'master'
      - 'v5.*'
    paths:
      - 'prowler/**'
      - 'tests/**'
      - 'pyproject.toml'
      - 'poetry.lock'
      - '.github/workflows/sdk-tests.yml'
      - '.github/workflows/sdk-security.yml'
      - '.github/actions/setup-python-poetry/**'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}

@@ -151,7 +151,7 @@ jobs:
          tags: |
            ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-${{ matrix.arch }}
          cache-from: type=gha,scope=${{ matrix.arch }}
          cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }},scope=${{ matrix.arch }}
          cache-to: type=gha,mode=max,scope=${{ matrix.arch }}

  # Create and push multi-architecture manifest
  create-manifest:

@@ -5,16 +5,10 @@ on:
    branches:
      - 'master'
      - 'v5.*'
    paths:
      - 'ui/**'
      - '.github/workflows/ui-container-checks.yml'
  pull_request:
    branches:
      - 'master'
      - 'v5.*'
    paths:
      - 'ui/**'
      - '.github/workflows/ui-container-checks.yml'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}

@@ -63,7 +57,16 @@ jobs:

  ui-container-build-and-scan:
    if: github.repository == 'prowler-cloud/prowler'
    runs-on: ubuntu-latest
    runs-on: ${{ matrix.runner }}
    strategy:
      matrix:
        include:
          - platform: linux/amd64
            runner: ubuntu-latest
            arch: amd64
          - platform: linux/arm64
            runner: ubuntu-24.04-arm
            arch: arm64
    timeout-minutes: 30
    permissions:
      contents: read

@@ -111,7 +114,7 @@ jobs:
        if: steps.check-changes.outputs.any_changed == 'true'
        uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0

      - name: Build UI container
      - name: Build UI container for ${{ matrix.arch }}
        if: steps.check-changes.outputs.any_changed == 'true'
        uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
        with:

@@ -119,17 +122,18 @@ jobs:
          target: prod
          push: false
          load: true
          tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }}
          platforms: ${{ matrix.platform }}
          tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}-${{ matrix.arch }}
          cache-from: type=gha,scope=${{ matrix.arch }}
          cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
          build-args: |
            NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_51LwpXXXX

      - name: Scan UI container with Trivy
      - name: Scan UI container with Trivy for ${{ matrix.arch }}
        if: steps.check-changes.outputs.any_changed == 'true'
        uses: ./.github/actions/trivy-scan
        with:
          image-name: ${{ env.IMAGE_NAME }}
          image-tag: ${{ github.sha }}
          image-tag: ${{ github.sha }}-${{ matrix.arch }}
          fail-on-critical: 'false'
          severity: 'CRITICAL'

@@ -15,10 +15,6 @@ on:
      - 'ui/**'
      - 'api/**' # API changes can affect UI E2E

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

permissions: {}

jobs:

@@ -270,7 +266,7 @@ jobs:
        with:
          name: playwright-report
          path: ui/playwright-report/
          retention-days: 7
          retention-days: 30

      - name: Cleanup services
        if: always()

@@ -1,14 +1,14 @@
name: "UI: Tests"
name: 'UI: Tests'

on:
  push:
    branches:
      - "master"
      - "v5.*"
      - 'master'
      - 'v5.*'
  pull_request:
    branches:
      - "master"
      - "v5.*"
      - 'master'
      - 'v5.*'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}

@@ -16,7 +16,7 @@ concurrency:

env:
  UI_WORKING_DIR: ./ui
  NODE_VERSION: "24.13.0"
  NODE_VERSION: '24.13.0'

permissions: {}

@@ -42,9 +42,6 @@ jobs:
            fonts.gstatic.com:443
            api.github.com:443
            release-assets.githubusercontent.com:443
            cdn.playwright.dev:443
            objects.githubusercontent.com:443
            playwright.download.prss.microsoft.com:443

      - name: Checkout repository
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

@@ -136,7 +133,7 @@ jobs:
        if: steps.check-changes.outputs.any_changed == 'true' && steps.critical-changes.outputs.any_changed == 'true'
        run: |
          echo "Critical paths changed - running ALL unit tests"
          pnpm run test:unit
          pnpm run test:run

      - name: Run unit tests (related to changes only)
        if: steps.check-changes.outputs.any_changed == 'true' && steps.critical-changes.outputs.any_changed != 'true' && steps.changed-source.outputs.all_changed_files != ''

@@ -145,7 +142,7 @@ jobs:
          echo "${STEPS_CHANGED_SOURCE_OUTPUTS_ALL_CHANGED_FILES}"
          # Convert space-separated to vitest related format (remove ui/ prefix for relative paths)
          CHANGED_FILES=$(echo "${STEPS_CHANGED_SOURCE_OUTPUTS_ALL_CHANGED_FILES}" | tr ' ' '\n' | sed 's|^ui/||' | tr '\n' ' ')
          pnpm exec vitest related $CHANGED_FILES --run --project unit
          pnpm exec vitest related $CHANGED_FILES --run
        env:
          STEPS_CHANGED_SOURCE_OUTPUTS_ALL_CHANGED_FILES: ${{ steps.changed-source.outputs.all_changed_files }}

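A worked example of the path rewrite in that pipeline, with hypothetical file names:

```yaml
- name: Path-rewrite sketch (illustrative, not part of the diff)
  run: |
    # Same tr/sed pipeline as above, applied to two made-up paths.
    echo "ui/app/page.tsx ui/lib/utils.ts" | tr ' ' '\n' | sed 's|^ui/||' | tr '\n' ' '
    # prints: app/page.tsx lib/utils.ts
```
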
@@ -153,25 +150,7 @@ jobs:
        if: steps.check-changes.outputs.any_changed == 'true' && steps.critical-changes.outputs.any_changed != 'true' && steps.changed-source.outputs.all_changed_files == ''
        run: |
          echo "Only test files changed - running ALL unit tests"
          pnpm run test:unit

      - name: Cache Playwright browsers
        if: steps.check-changes.outputs.any_changed == 'true'
        id: playwright-cache
        uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
        with:
          path: ~/.cache/ms-playwright
          key: ${{ runner.os }}-playwright-chromium-${{ hashFiles('ui/pnpm-lock.yaml') }}
          restore-keys: |
            ${{ runner.os }}-playwright-chromium-

      - name: Install Playwright Chromium browser
        if: steps.check-changes.outputs.any_changed == 'true' && steps.playwright-cache.outputs.cache-hit != 'true'
        run: pnpm exec playwright install chromium

      - name: Run browser tests
        if: steps.check-changes.outputs.any_changed == 'true'
        run: pnpm run test:browser
          pnpm run test:run

      - name: Build application
        if: steps.check-changes.outputs.any_changed == 'true'

@@ -8,7 +8,6 @@ rules:
  - docs-bump-version.yml
  - issue-triage.lock.yml
  - mcp-container-build-push.yml
  - nightly-arm64-container-builds.yml
  - pr-merged.yml
  - prepare-release.yml
  - sdk-bump-version.yml

@@ -6,7 +6,7 @@
# P40 — security scanners
# P50 — dependency validation

default_install_hook_types: [pre-commit]
default_install_hook_types: [pre-commit, pre-push]

repos:
  ## GENERAL (prek built-in — no external repo needed)

@@ -44,9 +44,7 @@ repos:
    rev: v1.24.1
    hooks:
      - id: zizmor
        # zizmor only audits workflows, composite actions and dependabot
        # config; broader paths trip exit 3 ("no audit was performed").
        files: ^\.github/(workflows|actions)/.+\.ya?ml$|^\.github/dependabot\.ya?ml$
        files: ^\.github/
        priority: 30

  ## BASH

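One way to sanity-check which paths the tightened `files` pattern accepts, since it is plain extended regex; the sample paths are hypothetical:

```yaml
- name: Pattern sketch (illustrative, not part of the diff)
  run: |
    pat='^\.github/(workflows|actions)/.+\.ya?ml$|^\.github/dependabot\.ya?ml$'
    echo ".github/workflows/ci.yml" | grep -E "$pat"                 # audited
    echo ".github/dependabot.yml"   | grep -E "$pat"                 # audited
    echo ".github/CODEOWNERS"       | grep -E "$pat" || echo "skip"  # not audited
```
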
@@ -64,7 +62,12 @@ repos:
      - id: autoflake
        name: "SDK - autoflake"
        files: { glob: ["{prowler,tests,dashboard,util,scripts}/**/*.py"] }
        args: ["--in-place", "--remove-all-unused-imports", "--remove-unused-variable"]
        args:
          [
            "--in-place",
            "--remove-all-unused-imports",
            "--remove-unused-variable",
          ]
        priority: 20

  - repo: https://github.com/pycqa/isort

@@ -95,7 +98,7 @@ repos:

  ## PYTHON — API + MCP Server (ruff)
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.15.11
    rev: v0.15.12
    hooks:
      - id: ruff
        name: "API + MCP - ruff check"

@@ -176,7 +179,8 @@ repos:
        language: system
        types: [python]
        files: '.*\.py'
        exclude: { glob: ["{contrib,skills}/**", "**/.venv/**", "**/*_test.py"] }
        exclude:
          { glob: ["{contrib,skills}/**", "**/.venv/**", "**/*_test.py"] }
        priority: 40

  - id: safety

@@ -186,7 +190,16 @@ repos:
        entry: safety check --policy-file .safety-policy.yml
        language: system
        pass_filenames: false
        files: { glob: ["**/pyproject.toml", "**/poetry.lock", "**/requirements*.txt", ".safety-policy.yml"] }
        files:
          {
            glob:
              [
                "**/pyproject.toml",
                "**/poetry.lock",
                "**/requirements*.txt",
                ".safety-policy.yml",
              ],
          }
        priority: 40

      - id: vulture

@@ -15,7 +15,7 @@ Use these skills for detailed patterns on-demand:
|-------|-------------|-----|
| `typescript` | Const types, flat interfaces, utility types | [SKILL.md](skills/typescript/SKILL.md) |
| `react-19` | No useMemo/useCallback, React Compiler | [SKILL.md](skills/react-19/SKILL.md) |
| `nextjs-16` | App Router, Server Actions, proxy.ts, streaming | [SKILL.md](skills/nextjs-16/SKILL.md) |
| `nextjs-15` | App Router, Server Actions, streaming | [SKILL.md](skills/nextjs-15/SKILL.md) |
| `tailwind-4` | cn() utility, no var() in className | [SKILL.md](skills/tailwind-4/SKILL.md) |
| `playwright` | Page Object Model, MCP workflow, selectors | [SKILL.md](skills/playwright/SKILL.md) |
| `pytest` | Fixtures, mocking, markers, parametrize | [SKILL.md](skills/pytest/SKILL.md) |

@@ -60,14 +60,11 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
|--------|-------|
| Add changelog entry for a PR or feature | `prowler-changelog` |
| Adding DRF pagination or permissions | `django-drf` |
| Adding a compliance output formatter (per-provider class + table dispatcher) | `prowler-compliance` |
| Adding indexes or constraints to database tables | `django-migration-psql` |
| Adding new providers | `prowler-provider` |
| Adding privilege escalation detection queries | `prowler-attack-paths-query` |
| Adding services to existing providers | `prowler-provider` |
| After creating/modifying a skill | `skill-sync` |
| App Router / Server Actions | `nextjs-16` |
| Auditing check-to-requirement mappings as a cloud auditor | `prowler-compliance` |
| App Router / Server Actions | `nextjs-15` |
| Building AI chat features | `ai-sdk-5` |
| Committing changes | `prowler-commit` |
| Configuring MCP servers in agentic workflows | `gh-aw` |

@@ -81,7 +78,6 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Creating a git commit | `prowler-commit` |
| Creating new checks | `prowler-sdk-check` |
| Creating new skills | `skill-creator` |
| Creating or reviewing Django migrations | `django-migration-psql` |
| Creating/modifying Prowler UI components | `prowler-ui` |
| Creating/modifying models, views, serializers | `prowler-api` |
| Creating/updating compliance frameworks | `prowler-compliance` |

@@ -89,7 +85,6 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Debugging gh-aw compilation errors | `gh-aw` |
| Fill .github/pull_request_template.md (Context/Description/Steps to review/Checklist) | `prowler-pr` |
| Fixing bug | `tdd` |
| Fixing compliance JSON bugs (duplicate IDs, empty Section, stale refs) | `prowler-compliance` |
| General Prowler development questions | `prowler` |
| Implementing JSON:API endpoints | `django-drf` |
| Implementing feature | `tdd` |

@@ -107,8 +102,6 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Review changelog format and conventions | `prowler-changelog` |
| Reviewing JSON:API compliance | `jsonapi` |
| Reviewing compliance framework PRs | `prowler-compliance-review` |
| Running makemigrations or pgmakemigrations | `django-migration-psql` |
| Syncing compliance framework with upstream catalog | `prowler-compliance` |
| Testing RLS tenant isolation | `prowler-test-api` |
| Testing hooks or utilities | `vitest` |
| Troubleshoot why a skill is missing from AGENTS.md auto-invoke | `skill-sync` |

@@ -136,7 +129,6 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Writing React components | `react-19` |
| Writing TypeScript types/interfaces | `typescript` |
| Writing Vitest tests | `vitest` |
| Writing data backfill or data migration | `django-migration-psql` |
| Writing documentation | `prowler-docs` |
| Writing unit tests for UI | `vitest` |

@@ -150,7 +142,7 @@ Prowler is an open-source cloud security assessment tool supporting AWS, Azure,
|-----------|----------|------------|
| SDK | `prowler/` | Python 3.10+, Poetry 2.3+ |
| API | `api/` | Django 5.1, DRF, Celery |
| UI | `ui/` | Next.js 16, React 19, Tailwind 4 |
| UI | `ui/` | Next.js 15, React 19, Tailwind 4 |
| MCP Server | `mcp_server/` | FastMCP, Python 3.12+ |
| Dashboard | `dashboard/` | Dash, Plotly |

@@ -1,34 +1,11 @@
# Do you want to learn on how to...

- [Contribute with your code or fixes to Prowler](https://docs.prowler.com/developer-guide/introduction)
- [Create a new provider](https://docs.prowler.com/developer-guide/provider)
- [Create a new service](https://docs.prowler.com/developer-guide/services)
- [Create a new check for a provider](https://docs.prowler.com/developer-guide/checks)
- [Create a new security compliance framework](https://docs.prowler.com/developer-guide/security-compliance-framework)
- [Add a custom output format](https://docs.prowler.com/developer-guide/outputs)
- [Add a new integration](https://docs.prowler.com/developer-guide/integrations)
- [Contribute with documentation](https://docs.prowler.com/developer-guide/documentation)
- [Write unit tests](https://docs.prowler.com/developer-guide/unit-testing)
- [Write integration tests](https://docs.prowler.com/developer-guide/integration-testing)
- [Write end-to-end tests](https://docs.prowler.com/developer-guide/end2end-testing)
- [Debug Prowler](https://docs.prowler.com/developer-guide/debugging)
- [Configure checks](https://docs.prowler.com/developer-guide/configurable-checks)
- [Rename checks](https://docs.prowler.com/developer-guide/renaming-checks)
- [Follow the check metadata guidelines](https://docs.prowler.com/developer-guide/check-metadata-guidelines)
- [Extend the MCP server](https://docs.prowler.com/developer-guide/mcp-server)
- [Extend Lighthouse AI](https://docs.prowler.com/developer-guide/lighthouse-architecture)
- [Add AI skills](https://docs.prowler.com/developer-guide/ai-skills)

Provider-specific developer notes:

- [AWS](https://docs.prowler.com/developer-guide/aws-details)
- [Azure](https://docs.prowler.com/developer-guide/azure-details)
- [Google Cloud](https://docs.prowler.com/developer-guide/gcp-details)
- [Alibaba Cloud](https://docs.prowler.com/developer-guide/alibabacloud-details)
- [Kubernetes](https://docs.prowler.com/developer-guide/kubernetes-details)
- [Microsoft 365](https://docs.prowler.com/developer-guide/m365-details)
- [GitHub](https://docs.prowler.com/developer-guide/github-details)
- [LLM](https://docs.prowler.com/developer-guide/llm-details)
- Contribute with your code or fixes to Prowler
- Create a new check for a provider
- Create a new security compliance framework
- Add a custom output format
- Add a new integration
- Contribute with documentation

Want some swag as appreciation for your contribution?

@@ -6,7 +6,7 @@ LABEL org.opencontainers.image.source="https://github.com/prowler-cloud/prowler"
ARG POWERSHELL_VERSION=7.5.0
ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}

ARG TRIVY_VERSION=0.70.0
ARG TRIVY_VERSION=0.69.2
ENV TRIVY_VERSION=${TRIVY_VERSION}

ARG ZIZMOR_VERSION=1.24.1

@@ -2,48 +2,11 @@

All notable changes to the **Prowler API** are documented in this file.

## [1.28.0] (Prowler UNRELEASED)
## [1.27.0] (Prowler UNRELEASED)

### 🚀 Added

- GIN index on `findings(categories, resource_services, resource_regions, resource_types)` to speed up `/api/v1/finding-groups` array filters [(#11001)](https://github.com/prowler-cloud/prowler/pull/11001)

### 🔄 Changed

- Remove orphaned `gin_resources_search_idx` declaration from `Resource.Meta.indexes` (DB index dropped in `0072_drop_unused_indexes`) [(#11001)](https://github.com/prowler-cloud/prowler/pull/11001)
- PDF compliance reports cap detail tables at 100 failed findings per check (configurable via `DJANGO_PDF_MAX_FINDINGS_PER_CHECK`) to bound worker memory on large scans [(#11160)](https://github.com/prowler-cloud/prowler/pull/11160)

---

## [1.27.2] (Prowler UNRELEASED)

### 🐞 Fixed

- Attack Paths: BEDROCK-001 and BEDROCK-002 now target roles trusting `bedrock-agentcore.amazonaws.com` instead of `bedrock.amazonaws.com`, eliminating false positives against regular Bedrock service roles (Agents, Knowledge Bases, model invocation) [(#11141)](https://github.com/prowler-cloud/prowler/pull/11141)

---

## [1.27.1] (Prowler v5.26.1)

### 🐞 Fixed

- `POST /api/v1/scans` was intermittently failing with `Scan matching query does not exist` in the `scan-perform` worker; the Celery task is now published via `transaction.on_commit` so the worker cannot read the Scan before the dispatch-wide transaction commits [(#11122)](https://github.com/prowler-cloud/prowler/pull/11122)

---

## [1.27.0] (Prowler v5.26.0)

### 🚀 Added

- `scan-reset-ephemeral-resources` post-scan task zeroes `failed_findings_count` for resources missing from the latest full-scope scan, keeping ephemeral resources from polluting the Resources page sort [(#10929)](https://github.com/prowler-cloud/prowler/pull/10929)

### 🔄 Changed

- ASD Essential Eight (AWS) compliance framework support [(#10982)](https://github.com/prowler-cloud/prowler/pull/10982)

### 🔐 Security

- `trivy` binary from 0.69.2 to 0.70.0 and `cryptography` from 46.0.6 to 46.0.7 (transitive via prowler SDK) in the API image for CVE-2026-33186 and CVE-2026-39892 [(#10978)](https://github.com/prowler-cloud/prowler/pull/10978)
- New `scan-reset-ephemeral-resources` post-scan task zeroes `failed_findings_count` for resources missing from the latest full-scope scan, keeping ephemeral resources from polluting the Resources page sort [(#10929)](https://github.com/prowler-cloud/prowler/pull/10929)

---

@@ -5,7 +5,7 @@ LABEL maintainer="https://github.com/prowler-cloud/api"
ARG POWERSHELL_VERSION=7.5.0
ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}

ARG TRIVY_VERSION=0.70.0
ARG TRIVY_VERSION=0.69.2
ENV TRIVY_VERSION=${TRIVY_VERSION}

ARG ZIZMOR_VERSION=1.24.1

@@ -2504,61 +2504,61 @@ dev = ["bandit", "coverage", "flake8", "pydocstyle", "pylint", "pytest", "pytest

[[package]]
name = "cryptography"
version = "46.0.7"
version = "46.0.6"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = "!=3.9.0,!=3.9.1,>=3.8"
groups = ["main", "dev"]
files = [
    {file = "cryptography-46.0.7-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:ea42cbe97209df307fdc3b155f1b6fa2577c0defa8f1f7d3be7d31d189108ad4"},
    {file = "cryptography-46.0.7-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b36a4695e29fe69215d75960b22577197aca3f7a25b9cf9d165dcfe9d80bc325"},
    {file = "cryptography-46.0.7-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:5ad9ef796328c5e3c4ceed237a183f5d41d21150f972455a9d926593a1dcb308"},
    {file = "cryptography-46.0.7-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:73510b83623e080a2c35c62c15298096e2a5dc8d51c3b4e1740211839d0dea77"},
    {file = "cryptography-46.0.7-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:cbd5fb06b62bd0721e1170273d3f4d5a277044c47ca27ee257025146c34cbdd1"},
    {file = "cryptography-46.0.7-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:420b1e4109cc95f0e5700eed79908cef9268265c773d3a66f7af1eef53d409ef"},
    {file = "cryptography-46.0.7-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:24402210aa54baae71d99441d15bb5a1919c195398a87b563df84468160a65de"},
    {file = "cryptography-46.0.7-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:8a469028a86f12eb7d2fe97162d0634026d92a21f3ae0ac87ed1c4a447886c83"},
    {file = "cryptography-46.0.7-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:9694078c5d44c157ef3162e3bf3946510b857df5a3955458381d1c7cfc143ddb"},
    {file = "cryptography-46.0.7-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:42a1e5f98abb6391717978baf9f90dc28a743b7d9be7f0751a6f56a75d14065b"},
    {file = "cryptography-46.0.7-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:91bbcb08347344f810cbe49065914fe048949648f6bd5c2519f34619142bbe85"},
    {file = "cryptography-46.0.7-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:5d1c02a14ceb9148cc7816249f64f623fbfee39e8c03b3650d842ad3f34d637e"},
    {file = "cryptography-46.0.7-cp311-abi3-win32.whl", hash = "sha256:d23c8ca48e44ee015cd0a54aeccdf9f09004eba9fc96f38c911011d9ff1bd457"},
    {file = "cryptography-46.0.7-cp311-abi3-win_amd64.whl", hash = "sha256:397655da831414d165029da9bc483bed2fe0e75dde6a1523ec2fe63f3c46046b"},
    {file = "cryptography-46.0.7-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:d151173275e1728cf7839aaa80c34fe550c04ddb27b34f48c232193df8db5842"},
    {file = "cryptography-46.0.7-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:db0f493b9181c7820c8134437eb8b0b4792085d37dbb24da050476ccb664e59c"},
    {file = "cryptography-46.0.7-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ebd6daf519b9f189f85c479427bbd6e9c9037862cf8fe89ee35503bd209ed902"},
    {file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:b7b412817be92117ec5ed95f880defe9cf18a832e8cafacf0a22337dc1981b4d"},
    {file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:fbfd0e5f273877695cb93baf14b185f4878128b250cc9f8e617ea0c025dfb022"},
    {file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:ffca7aa1d00cf7d6469b988c581598f2259e46215e0140af408966a24cf086ce"},
    {file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:60627cf07e0d9274338521205899337c5d18249db56865f943cbe753aa96f40f"},
    {file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:80406c3065e2c55d7f49a9550fe0c49b3f12e5bfff5dedb727e319e1afb9bf99"},
    {file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:c5b1ccd1239f48b7151a65bc6dd54bcfcc15e028c8ac126d3fada09db0e07ef1"},
    {file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:d5f7520159cd9c2154eb61eb67548ca05c5774d39e9c2c4339fd793fe7d097b2"},
    {file = "cryptography-46.0.7-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:fcd8eac50d9138c1d7fc53a653ba60a2bee81a505f9f8850b6b2888555a45d0e"},
    {file = "cryptography-46.0.7-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:65814c60f8cc400c63131584e3e1fad01235edba2614b61fbfbfa954082db0ee"},
    {file = "cryptography-46.0.7-cp314-cp314t-win32.whl", hash = "sha256:fdd1736fed309b4300346f88f74cd120c27c56852c3838cab416e7a166f67298"},
    {file = "cryptography-46.0.7-cp314-cp314t-win_amd64.whl", hash = "sha256:e06acf3c99be55aa3b516397fe42f5855597f430add9c17fa46bf2e0fb34c9bb"},
    {file = "cryptography-46.0.7-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:462ad5cb1c148a22b2e3bcc5ad52504dff325d17daf5df8d88c17dda1f75f2a4"},
    {file = "cryptography-46.0.7-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:84d4cced91f0f159a7ddacad249cc077e63195c36aac40b4150e7a57e84fffe7"},
    {file = "cryptography-46.0.7-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:128c5edfe5e5938b86b03941e94fac9ee793a94452ad1365c9fc3f4f62216832"},
    {file = "cryptography-46.0.7-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5e51be372b26ef4ba3de3c167cd3d1022934bc838ae9eaad7e644986d2a3d163"},
    {file = "cryptography-46.0.7-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:cdf1a610ef82abb396451862739e3fc93b071c844399e15b90726ef7470eeaf2"},
    {file = "cryptography-46.0.7-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1d25aee46d0c6f1a501adcddb2d2fee4b979381346a78558ed13e50aa8a59067"},
    {file = "cryptography-46.0.7-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:cdfbe22376065ffcf8be74dc9a909f032df19bc58a699456a21712d6e5eabfd0"},
    {file = "cryptography-46.0.7-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:abad9dac36cbf55de6eb49badd4016806b3165d396f64925bf2999bcb67837ba"},
    {file = "cryptography-46.0.7-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:935ce7e3cfdb53e3536119a542b839bb94ec1ad081013e9ab9b7cfd478b05006"},
    {file = "cryptography-46.0.7-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:35719dc79d4730d30f1c2b6474bd6acda36ae2dfae1e3c16f2051f215df33ce0"},
    {file = "cryptography-46.0.7-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:7bbc6ccf49d05ac8f7d7b5e2e2c33830d4fe2061def88210a126d130d7f71a85"},
    {file = "cryptography-46.0.7-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a1529d614f44b863a7b480c6d000fe93b59acee9c82ffa027cfadc77521a9f5e"},
    {file = "cryptography-46.0.7-cp38-abi3-win32.whl", hash = "sha256:f247c8c1a1fb45e12586afbb436ef21ff1e80670b2861a90353d9b025583d246"},
    {file = "cryptography-46.0.7-cp38-abi3-win_amd64.whl", hash = "sha256:506c4ff91eff4f82bdac7633318a526b1d1309fc07ca76a3ad182cb5b686d6d3"},
    {file = "cryptography-46.0.7-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:fc9ab8856ae6cf7c9358430e49b368f3108f050031442eaeb6b9d87e4dcf4e4f"},
    {file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:d3b99c535a9de0adced13d159c5a9cf65c325601aa30f4be08afd680643e9c15"},
    {file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:d02c738dacda7dc2a74d1b2b3177042009d5cab7c7079db74afc19e56ca1b455"},
    {file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:04959522f938493042d595a736e7dbdff6eb6cc2339c11465b3ff89343b65f65"},
    {file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:3986ac1dee6def53797289999eabe84798ad7817f3e97779b5061a95b0ee4968"},
    {file = "cryptography-46.0.7-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:258514877e15963bd43b558917bc9f54cf7cf866c38aa576ebf47a77ddbc43a4"},
    {file = "cryptography-46.0.7.tar.gz", hash = "sha256:e4cfd68c5f3e0bfdad0d38e023239b96a2fe84146481852dffbcca442c245aa5"},
    {file = "cryptography-46.0.6-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:64235194bad039a10bb6d2d930ab3323baaec67e2ce36215fd0952fad0930ca8"},
    {file = "cryptography-46.0.6-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:26031f1e5ca62fcb9d1fcb34b2b60b390d1aacaa15dc8b895a9ed00968b97b30"},
    {file = "cryptography-46.0.6-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:9a693028b9cbe51b5a1136232ee8f2bc242e4e19d456ded3fa7c86e43c713b4a"},
    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:67177e8a9f421aa2d3a170c3e56eca4e0128883cf52a071a7cbf53297f18b175"},
    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:d9528b535a6c4f8ff37847144b8986a9a143585f0540fbcb1a98115b543aa463"},
    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:22259338084d6ae497a19bae5d4c66b7ca1387d3264d1c2c0e72d9e9b6a77b97"},
    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:760997a4b950ff00d418398ad73fbc91aa2894b5c1db7ccb45b4f68b42a63b3c"},
    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:3dfa6567f2e9e4c5dceb8ccb5a708158a2a871052fa75c8b78cb0977063f1507"},
    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:cdcd3edcbc5d55757e5f5f3d330dd00007ae463a7e7aa5bf132d1f22a4b62b19"},
    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:d4e4aadb7fc1f88687f47ca20bb7227981b03afaae69287029da08096853b738"},
    {file = "cryptography-46.0.6-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2b417edbe8877cda9022dde3a008e2deb50be9c407eef034aeeb3a8b11d9db3c"},
    {file = "cryptography-46.0.6-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:380343e0653b1c9d7e1f55b52aaa2dbb2fdf2730088d48c43ca1c7c0abb7cc2f"},
    {file = "cryptography-46.0.6-cp311-abi3-win32.whl", hash = "sha256:bcb87663e1f7b075e48c3be3ecb5f0b46c8fc50b50a97cf264e7f60242dca3f2"},
    {file = "cryptography-46.0.6-cp311-abi3-win_amd64.whl", hash = "sha256:6739d56300662c468fddb0e5e291f9b4d084bead381667b9e654c7dd81705124"},
    {file = "cryptography-46.0.6-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:2ef9e69886cbb137c2aef9772c2e7138dc581fad4fcbcf13cc181eb5a3ab6275"},
    {file = "cryptography-46.0.6-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7f417f034f91dcec1cb6c5c35b07cdbb2ef262557f701b4ecd803ee8cefed4f4"},
    {file = "cryptography-46.0.6-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d24c13369e856b94892a89ddf70b332e0b70ad4a5c43cf3e9cb71d6d7ffa1f7b"},
|
||||
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:aad75154a7ac9039936d50cf431719a2f8d4ed3d3c277ac03f3339ded1a5e707"},
|
||||
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:3c21d92ed15e9cfc6eb64c1f5a0326db22ca9c2566ca46d845119b45b4400361"},
|
||||
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:4668298aef7cddeaf5c6ecc244c2302a2b8e40f384255505c22875eebb47888b"},
|
||||
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:8ce35b77aaf02f3b59c90b2c8a05c73bac12cea5b4e8f3fbece1f5fddea5f0ca"},
|
||||
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:c89eb37fae9216985d8734c1afd172ba4927f5a05cfd9bf0e4863c6d5465b013"},
|
||||
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:ed418c37d095aeddf5336898a132fba01091f0ac5844e3e8018506f014b6d2c4"},
|
||||
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:69cf0056d6947edc6e6760e5f17afe4bea06b56a9ac8a06de9d2bd6b532d4f3a"},
|
||||
{file = "cryptography-46.0.6-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:8e7304c4f4e9490e11efe56af6713983460ee0780f16c63f219984dab3af9d2d"},
|
||||
{file = "cryptography-46.0.6-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:b928a3ca837c77a10e81a814a693f2295200adb3352395fad024559b7be7a736"},
|
||||
{file = "cryptography-46.0.6-cp314-cp314t-win32.whl", hash = "sha256:97c8115b27e19e592a05c45d0dd89c57f81f841cc9880e353e0d3bf25b2139ed"},
|
||||
{file = "cryptography-46.0.6-cp314-cp314t-win_amd64.whl", hash = "sha256:c797e2517cb7880f8297e2c0f43bb910e91381339336f75d2c1c2cbf811b70b4"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:12cae594e9473bca1a7aceb90536060643128bb274fcea0fc459ab90f7d1ae7a"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:639301950939d844a9e1c4464d7e07f902fe9a7f6b215bb0d4f28584729935d8"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ed3775295fb91f70b4027aeba878d79b3e55c0b3e97eaa4de71f8f23a9f2eb77"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:8927ccfbe967c7df312ade694f987e7e9e22b2425976ddbf28271d7e58845290"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:b12c6b1e1651e42ab5de8b1e00dc3b6354fdfd778e7fa60541ddacc27cd21410"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:063b67749f338ca9c5a0b7fe438a52c25f9526b851e24e6c9310e7195aad3b4d"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:02fad249cb0e090b574e30b276a3da6a149e04ee2f049725b1f69e7b8351ec70"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:7e6142674f2a9291463e5e150090b95a8519b2fb6e6aaec8917dd8d094ce750d"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:456b3215172aeefb9284550b162801d62f5f264a081049a3e94307fe20792cfa"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:341359d6c9e68834e204ceaf25936dffeafea3829ab80e9503860dcc4f4dac58"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9a9c42a2723999a710445bc0d974e345c32adfd8d2fac6d8a251fa829ad31cfb"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6617f67b1606dfd9fe4dbfa354a9508d4a6d37afe30306fe6c101b7ce3274b72"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-win32.whl", hash = "sha256:7f6690b6c55e9c5332c0b59b9c8a3fb232ebf059094c17f9019a51e9827df91c"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-win_amd64.whl", hash = "sha256:79e865c642cfc5c0b3eb12af83c35c5aeff4fa5c672dc28c43721c2c9fdd2f0f"},
|
||||
{file = "cryptography-46.0.6-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:2ea0f37e9a9cf0df2952893ad145fd9627d326a59daec9b0802480fa3bcd2ead"},
|
||||
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:a3e84d5ec9ba01f8fd03802b2147ba77f0c8f2617b2aff254cedd551844209c8"},
|
||||
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:12f0fa16cc247b13c43d56d7b35287ff1569b5b1f4c5e87e92cc4fcc00cd10c0"},
|
||||
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:50575a76e2951fe7dbd1f56d181f8c5ceeeb075e9ff88e7ad997d2f42af06e7b"},
|
||||
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:90e5f0a7b3be5f40c3a0a0eafb32c681d8d2c181fc2a1bdabe9b3f611d9f6b1a"},
|
||||
{file = "cryptography-46.0.6-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:6728c49e3b2c180ef26f8e9f0a883a2c585638db64cf265b49c9ba10652d430e"},
|
||||
{file = "cryptography-46.0.6.tar.gz", hash = "sha256:27550628a518c5c6c903d84f637fbecf287f6cb9ced3804838a1295dc1fd0759"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -2571,7 +2571,7 @@ nox = ["nox[uv] (>=2024.4.15)"]
pep8test = ["check-sdist", "click (>=8.0.1)", "mypy (>=1.14)", "ruff (>=0.11.11)"]
sdist = ["build (>=1.0.0)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==46.0.7)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test = ["certifi (>=2024)", "cryptography-vectors (==46.0.6)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]

[[package]]
@@ -6665,7 +6665,7 @@ files = [

[[package]]
name = "prowler"
version = "5.26.0"
version = "5.25.0"
description = "Prowler is an Open Source security tool to perform AWS, GCP and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness. It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, FedRAMP, PCI-DSS, GDPR, HIPAA, FFIEC, SOC2, GXP, AWS Well-Architected Framework Security Pillar, AWS Foundational Technical Review (FTR), ENS (Spanish National Security Scheme) and your custom security frameworks."
optional = false
python-versions = ">=3.10,<3.13"
@@ -6720,7 +6720,7 @@ boto3 = "1.40.61"
botocore = "1.40.61"
cloudflare = "4.3.1"
colorama = "0.4.6"
cryptography = "46.0.7"
cryptography = "46.0.6"
dash = "3.1.1"
dash-bootstrap-components = "2.0.3"
defusedxml = "0.7.1"
@@ -6755,7 +6755,7 @@ uuid6 = "2024.7.10"
type = "git"
url = "https://github.com/prowler-cloud/prowler.git"
reference = "master"
resolved_reference = "16798e293da365965120961e6539e3a9756564f9"
resolved_reference = "ca29e354b622198ff6a70e2ea5eb04e4a44a0903"

[[package]]
name = "psutil"
@@ -7912,26 +7912,26 @@ shaping = ["uharfbuzz"]

[[package]]
name = "requests"
version = "2.33.1"
version = "2.32.5"
description = "Python HTTP for Humans."
optional = false
python-versions = ">=3.10"
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "requests-2.33.1-py3-none-any.whl", hash = "sha256:4e6d1ef462f3626a1f0a0a9c42dd93c63bad33f9f1c1937509b8c5c8718ab56a"},
{file = "requests-2.33.1.tar.gz", hash = "sha256:18817f8c57c6263968bc123d237e3b8b08ac046f5456bd1e307ee8f4250d3517"},
{file = "requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6"},
{file = "requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf"},
]

[package.dependencies]
certifi = ">=2023.5.7"
certifi = ">=2017.4.17"
charset_normalizer = ">=2,<4"
idna = ">=2.5,<4"
PySocks = {version = ">=1.5.6,<1.5.7 || >1.5.7", optional = true, markers = "extra == \"socks\""}
urllib3 = ">=1.26,<3"
urllib3 = ">=1.21.1,<3"

[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use-chardet-on-py3 = ["chardet (>=3.0.2,<8)"]
use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]

[[package]]
name = "requests-file"

@@ -50,7 +50,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.28.0"
version = "1.27.0"

[project.scripts]
celery = "src.backend.config.settings.celery"
@@ -63,7 +63,6 @@ docker = "7.1.0"
filelock = "3.20.3"
freezegun = "1.5.1"
mypy = "1.10.1"
prek = "0.3.9"
pylint = "3.2.5"
pytest = "9.0.3"
pytest-cov = "5.0.0"
@@ -75,3 +74,4 @@ ruff = "0.5.0"
safety = "3.7.0"
tqdm = "4.67.1"
vulture = "2.14"
prek = "0.3.9"

@@ -484,8 +484,8 @@ AWS_BEDROCK_PRIVESC_PASSROLE_CODE_INTERPRETER = AttackPathsQueryDefinition(
        OR action = '*'
    )

    // Find roles that trust the Bedrock AgentCore service (can be passed to a code interpreter)
    MATCH path_target = (aws)--(target_role:AWSRole)-[:TRUSTS_AWS_PRINCIPAL]->(:AWSPrincipal {{arn: 'bedrock-agentcore.amazonaws.com'}})
    // Find roles that trust Bedrock service (can be passed to Bedrock)
    MATCH path_target = (aws)--(target_role:AWSRole)-[:TRUSTS_AWS_PRINCIPAL]->(:AWSPrincipal {{arn: 'bedrock.amazonaws.com'}})
    WHERE any(resource IN stmt_passrole.resource WHERE
        resource = '*'
        OR target_role.arn CONTAINS resource
@@ -536,8 +536,8 @@ AWS_BEDROCK_PRIVESC_INVOKE_CODE_INTERPRETER = AttackPathsQueryDefinition(
        OR action = '*'
    )

    // Find roles that trust the Bedrock AgentCore service (already attached to existing code interpreters)
    MATCH path_target = (aws)--(target_role:AWSRole)-[:TRUSTS_AWS_PRINCIPAL]->(:AWSPrincipal {{arn: 'bedrock-agentcore.amazonaws.com'}})
    // Find roles that trust Bedrock service (already attached to existing code interpreters)
    MATCH path_target = (aws)--(target_role:AWSRole)-[:TRUSTS_AWS_PRINCIPAL]->(:AWSPrincipal {{arn: 'bedrock.amazonaws.com'}})

    WITH collect(path_principal) + collect(path_target) AS paths
    UNWIND paths AS p

@@ -1,31 +0,0 @@
from functools import partial

from django.db import migrations

from api.db_utils import create_index_on_partitions, drop_index_on_partitions


class Migration(migrations.Migration):
    atomic = False

    dependencies = [
        ("api", "0090_attack_paths_cleanup_priority"),
    ]

    operations = [
        migrations.RunPython(
            partial(
                create_index_on_partitions,
                parent_table="findings",
                index_name="gin_find_arrays_idx",
                columns="categories, resource_services, resource_regions, resource_types",
                method="GIN",
                all_partitions=True,
            ),
            reverse_code=partial(
                drop_index_on_partitions,
                parent_table="findings",
                index_name="gin_find_arrays_idx",
            ),
        )
    ]
@@ -1,73 +0,0 @@
import django.contrib.postgres.indexes
from django.db import migrations

INDEX_NAME = "gin_find_arrays_idx"
PARENT_TABLE = "findings"


def create_parent_and_attach(apps, schema_editor):
    with schema_editor.connection.cursor() as cursor:
        # Idempotent: the parent index may already exist if it was created
        # manually on an environment before this migration ran.
        cursor.execute(
            f"CREATE INDEX IF NOT EXISTS {INDEX_NAME} ON ONLY {PARENT_TABLE} "
            f"USING gin (categories, resource_services, resource_regions, resource_types)"
        )
        cursor.execute(
            "SELECT inhrelid::regclass::text "
            "FROM pg_inherits "
            "WHERE inhparent = %s::regclass",
            [PARENT_TABLE],
        )
        for (partition,) in cursor.fetchall():
            child_idx = f"{partition.replace('.', '_')}_{INDEX_NAME}"
            # ALTER INDEX ... ATTACH PARTITION has no IF NOT ATTACHED clause,
            # so check pg_inherits first to keep the migration re-runnable.
            cursor.execute(
                """
                SELECT 1
                FROM pg_inherits i
                JOIN pg_class p ON p.oid = i.inhparent
                JOIN pg_class c ON c.oid = i.inhrelid
                WHERE p.relname = %s AND c.relname = %s
                """,
                [INDEX_NAME, child_idx],
            )
            if cursor.fetchone() is None:
                cursor.execute(f"ALTER INDEX {INDEX_NAME} ATTACH PARTITION {child_idx}")


def drop_parent_index(apps, schema_editor):
    with schema_editor.connection.cursor() as cursor:
        cursor.execute(f"DROP INDEX IF EXISTS {INDEX_NAME}")


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0091_findings_arrays_gin_index_partitions"),
    ]

    operations = [
        migrations.SeparateDatabaseAndState(
            state_operations=[
                migrations.AddIndex(
                    model_name="finding",
                    index=django.contrib.postgres.indexes.GinIndex(
                        fields=[
                            "categories",
                            "resource_services",
                            "resource_regions",
                            "resource_types",
                        ],
                        name=INDEX_NAME,
                    ),
                ),
            ],
            database_operations=[
                migrations.RunPython(
                    create_parent_and_attach,
                    reverse_code=drop_parent_index,
                ),
            ],
        ),
    ]
@@ -946,6 +946,7 @@ class Resource(RowLevelSecurityProtectedModel):
                OpClass(Upper("name"), name="gin_trgm_ops"),
                name="res_name_trgm_idx",
            ),
            GinIndex(fields=["text_search"], name="gin_resources_search_idx"),
            models.Index(fields=["tenant_id", "id"], name="resources_tenant_id_idx"),
            models.Index(
                fields=["tenant_id", "provider_id"],
@@ -1151,15 +1152,6 @@ class Finding(PostgresPartitionedModel, RowLevelSecurityProtectedModel):
                fields=["tenant_id", "scan_id", "check_id"],
                name="find_tenant_scan_check_idx",
            ),
            GinIndex(
                fields=[
                    "categories",
                    "resource_services",
                    "resource_regions",
                    "resource_types",
                ],
                name="gin_find_arrays_idx",
            ),
        ]

    class JSONAPIMeta:

@@ -1,7 +1,7 @@
openapi: 3.0.3
info:
  title: Prowler API
  version: 1.28.0
  version: 1.27.0
  description: |-
    Prowler API specification.


@@ -4,7 +4,6 @@ import json
import logging
import os
import time
import uuid
from collections import defaultdict
from copy import deepcopy
from datetime import datetime, timedelta, timezone
@@ -17,7 +16,7 @@ from allauth.socialaccount.providers.github.views import GitHubOAuth2Adapter
from allauth.socialaccount.providers.google.views import GoogleOAuth2Adapter
from allauth.socialaccount.providers.saml.views import FinishACSView, LoginView
from botocore.exceptions import ClientError, NoCredentialsError, ParamValidationError
from celery import chain, states
from celery import chain
from celery.result import AsyncResult
from config.custom_logging import BackendLogger
from config.env import env
@@ -61,7 +60,6 @@ from django.utils.dateparse import parse_date
from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_control
from django_celery_beat.models import PeriodicTask
from django_celery_results.models import TaskResult
from drf_spectacular.settings import spectacular_settings
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import (
@@ -424,7 +422,7 @@ class SchemaView(SpectacularAPIView):

    def get(self, request, *args, **kwargs):
        spectacular_settings.TITLE = "Prowler API"
        spectacular_settings.VERSION = "1.28.0"
        spectacular_settings.VERSION = "1.27.0"
        spectacular_settings.DESCRIPTION = (
            "Prowler API specification.\n\nThis file is auto-generated."
        )
@@ -2536,45 +2534,28 @@ class ScanViewSet(BaseRLSViewSet):
    def create(self, request, *args, **kwargs):
        input_serializer = self.get_serializer(data=request.data)
        input_serializer.is_valid(raise_exception=True)

        # Broker publish is deferred to on_commit so the worker cannot read
        # Scan before BaseRLSViewSet's dispatch-wide atomic commits.
        pre_task_id = str(uuid.uuid4())

        with transaction.atomic():
            scan = input_serializer.save()
            scan.task_id = pre_task_id
            scan.save(update_fields=["task_id"])

            attack_paths_db_utils.create_attack_paths_scan(
                tenant_id=self.request.tenant_id,
                scan_id=str(scan.id),
                provider_id=str(scan.provider_id),
        with transaction.atomic():
            task = perform_scan_task.apply_async(
                kwargs={
                    "tenant_id": self.request.tenant_id,
                    "scan_id": str(scan.id),
                    "provider_id": str(scan.provider_id),
                    # Disabled for now
                    # checks_to_execute=scan.scanner_args.get("checks_to_execute")
                },
            )

        task_result, _ = TaskResult.objects.get_or_create(
            task_id=pre_task_id,
            defaults={"status": states.PENDING, "task_name": "scan-perform"},
        )
        prowler_task, _ = Task.objects.update_or_create(
            id=pre_task_id,
            tenant_id=self.request.tenant_id,
            defaults={"task_runner_task": task_result},
        )
        attack_paths_db_utils.create_attack_paths_scan(
            tenant_id=self.request.tenant_id,
            scan_id=str(scan.id),
            provider_id=str(scan.provider_id),
        )

        scan_kwargs = {
            "tenant_id": self.request.tenant_id,
            "scan_id": str(scan.id),
            "provider_id": str(scan.provider_id),
            # Disabled for now
            # checks_to_execute=scan.scanner_args.get("checks_to_execute")
        }

        transaction.on_commit(
            lambda: perform_scan_task.apply_async(
                kwargs=scan_kwargs, task_id=pre_task_id
            )
        )
        prowler_task = Task.objects.get(id=task.id)
        scan.task_id = task.id
        scan.save(update_fields=["task_id"])

        self.response_serializer_class = TaskSerializer
        output_serializer = self.get_serializer(prowler_task)

@@ -47,9 +47,6 @@ from prowler.lib.outputs.compliance.csa.csa_oraclecloud import OracleCloudCSA
from prowler.lib.outputs.compliance.ens.ens_aws import AWSENS
from prowler.lib.outputs.compliance.ens.ens_azure import AzureENS
from prowler.lib.outputs.compliance.ens.ens_gcp import GCPENS
from prowler.lib.outputs.compliance.asd_essential_eight.asd_essential_eight_aws import (
    ASDEssentialEightAWS,
)
from prowler.lib.outputs.compliance.iso27001.iso27001_aws import AWSISO27001
from prowler.lib.outputs.compliance.iso27001.iso27001_azure import AzureISO27001
from prowler.lib.outputs.compliance.iso27001.iso27001_gcp import GCPISO27001
@@ -103,7 +100,6 @@ COMPLIANCE_CLASS_MAP = {
        (lambda name: name.startswith("ccc_"), CCC_AWS),
        (lambda name: name.startswith("c5_"), AWSC5),
        (lambda name: name.startswith("csa_"), AWSCSA),
        (lambda name: name == "asd_essential_eight_aws", ASDEssentialEightAWS),
    ],
    "azure": [
        (lambda name: name.startswith("cis_"), AzureCIS),

@@ -20,15 +20,11 @@ from tasks.jobs.reports import (
    ThreatScoreReportGenerator,
)
from tasks.jobs.threatscore import compute_threatscore_metrics
from tasks.jobs.threatscore_utils import (
    _aggregate_requirement_statistics_from_database,
    _get_compliance_check_ids,
)
from tasks.jobs.threatscore_utils import _aggregate_requirement_statistics_from_database

from api.db_router import READ_REPLICA_ALIAS, MainRouter
from api.db_utils import rls_transaction
from api.models import Provider, Scan, ScanSummary, StateChoices, ThreatScoreSnapshot
from api.utils import initialize_prowler_provider
from prowler.lib.check.compliance_models import Compliance
from prowler.lib.outputs.finding import Finding as FindingOutput

@@ -431,7 +427,6 @@ def generate_threatscore_report(
    provider_obj: Provider | None = None,
    requirement_statistics: dict[str, dict[str, int]] | None = None,
    findings_cache: dict[str, list[FindingOutput]] | None = None,
    prowler_provider=None,
) -> None:
    """
    Generate a PDF compliance report based on Prowler ThreatScore framework.
@@ -460,7 +455,6 @@ def generate_threatscore_report(
        provider_obj=provider_obj,
        requirement_statistics=requirement_statistics,
        findings_cache=findings_cache,
        prowler_provider=prowler_provider,
        only_failed=only_failed,
    )

@@ -475,7 +469,6 @@ def generate_ens_report(
    provider_obj: Provider | None = None,
    requirement_statistics: dict[str, dict[str, int]] | None = None,
    findings_cache: dict[str, list[FindingOutput]] | None = None,
    prowler_provider=None,
) -> None:
    """
    Generate a PDF compliance report for ENS RD2022 framework.
@@ -502,7 +495,6 @@ def generate_ens_report(
        provider_obj=provider_obj,
        requirement_statistics=requirement_statistics,
        findings_cache=findings_cache,
        prowler_provider=prowler_provider,
        include_manual=include_manual,
    )

@@ -518,7 +510,6 @@ def generate_nis2_report(
    provider_obj: Provider | None = None,
    requirement_statistics: dict[str, dict[str, int]] | None = None,
    findings_cache: dict[str, list[FindingOutput]] | None = None,
    prowler_provider=None,
) -> None:
    """
    Generate a PDF compliance report for NIS2 Directive (EU) 2022/2555.
@@ -546,7 +537,6 @@ def generate_nis2_report(
    provider_obj=provider_obj,
        requirement_statistics=requirement_statistics,
        findings_cache=findings_cache,
        prowler_provider=prowler_provider,
        only_failed=only_failed,
        include_manual=include_manual,
    )
@@ -563,7 +553,6 @@ def generate_csa_report(
    provider_obj: Provider | None = None,
    requirement_statistics: dict[str, dict[str, int]] | None = None,
    findings_cache: dict[str, list[FindingOutput]] | None = None,
    prowler_provider=None,
) -> None:
    """
    Generate a PDF compliance report for CSA Cloud Controls Matrix (CCM) v4.0.
@@ -591,7 +580,6 @@ def generate_csa_report(
        provider_obj=provider_obj,
        requirement_statistics=requirement_statistics,
        findings_cache=findings_cache,
        prowler_provider=prowler_provider,
        only_failed=only_failed,
        include_manual=include_manual,
    )
@@ -608,7 +596,6 @@ def generate_cis_report(
    provider_obj: Provider | None = None,
    requirement_statistics: dict[str, dict[str, int]] | None = None,
    findings_cache: dict[str, list[FindingOutput]] | None = None,
    prowler_provider=None,
) -> None:
    """
    Generate a PDF compliance report for a specific CIS Benchmark variant.
@@ -640,7 +627,6 @@ def generate_cis_report(
        provider_obj=provider_obj,
        requirement_statistics=requirement_statistics,
        findings_cache=findings_cache,
        prowler_provider=prowler_provider,
        only_failed=only_failed,
        include_manual=include_manual,
    )
@@ -785,17 +771,6 @@ def generate_compliance_reports(
        results["csa"] = {"upload": False, "path": ""}
        generate_csa = False

    # Load the framework definitions for this provider once. We use this map
    # both to pick the latest CIS variant and to precompute the set of
    # check_ids each framework consumes (for findings_cache eviction).
    frameworks_bulk: dict = {}
    try:
        frameworks_bulk = Compliance.get_bulk(provider_type)
    except Exception as e:
        logger.error("Error loading compliance frameworks for %s: %s", provider_type, e)
        # Fall through; individual frameworks will still try and fail
        # gracefully if their compliance_id is missing.

    # For CIS we do NOT pre-check the provider against a hard-coded whitelist
    # (that list drifts the moment a new CIS JSON ships). Instead, we inspect
    # the dynamically loaded framework map and pick the latest available CIS
@@ -803,6 +778,7 @@ def generate_compliance_reports(
    latest_cis: str | None = None
    if generate_cis:
        try:
            frameworks_bulk = Compliance.get_bulk(provider_type)
            latest_cis = _pick_latest_cis_variant(
                name for name in frameworks_bulk.keys() if name.startswith("cis_")
            )
@@ -839,84 +815,10 @@ def generate_compliance_reports(
            tenant_id, scan_id
        )

    # Initialize the Prowler provider once for the whole report batch. Each
    # generator used to re-init this in _load_compliance_data, paying the
    # boto3/Azure-SDK construction cost 5 times per scan. The instance is
    # only used by FindingOutput.transform_api_finding to enrich findings,
    # so a single shared instance is correct.
    logger.info("Initializing prowler_provider once for all reports (scan %s)", scan_id)
    try:
        with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
            prowler_provider = initialize_prowler_provider(provider_obj)
    except Exception as init_error:
        # If init fails the generators will fall back to lazy init in
        # _load_compliance_data; we just log and continue.
        logger.warning(
            "Could not pre-initialize prowler_provider for scan %s: %s",
            scan_id,
            init_error,
        )
        prowler_provider = None

    # Create shared findings cache up front so the eviction closure below
    # can reference it. Defined BEFORE the closure to avoid the UnboundLocalError
    # trap if an early-return is later inserted between the closure and its
    # first use.
    findings_cache: dict[str, list[FindingOutput]] = {}
    # Create shared findings cache
    findings_cache = {}
    logger.info("Created shared findings cache for all reports")

    # Precompute the set of check_ids each framework consumes. After a
    # framework finishes, every check_id that no remaining framework still
    # needs is evicted from findings_cache so the dict does not keep
    # growing through the batch (PROWLER-1733).
    pending_checks_by_framework: dict[str, set[str]] = {}
    if generate_threatscore:
        pending_checks_by_framework["threatscore"] = _get_compliance_check_ids(
            frameworks_bulk.get(f"prowler_threatscore_{provider_type}")
        )
    if generate_ens:
        pending_checks_by_framework["ens"] = _get_compliance_check_ids(
            frameworks_bulk.get(f"ens_rd2022_{provider_type}")
        )
    if generate_nis2:
        pending_checks_by_framework["nis2"] = _get_compliance_check_ids(
            frameworks_bulk.get(f"nis2_{provider_type}")
        )
    if generate_csa:
        pending_checks_by_framework["csa"] = _get_compliance_check_ids(
            frameworks_bulk.get(f"csa_ccm_4.0_{provider_type}")
        )
    if generate_cis and latest_cis:
        pending_checks_by_framework["cis"] = _get_compliance_check_ids(
            frameworks_bulk.get(latest_cis)
        )

    def _evict_after_framework(done_key: str) -> int:
        """Drop from findings_cache every check_id no pending framework still needs."""
        done = pending_checks_by_framework.pop(done_key, set())
        still_needed: set[str] = (
            set().union(*pending_checks_by_framework.values())
            if pending_checks_by_framework
            else set()
        )
        exclusive = done - still_needed
        evicted = 0
        for cid in exclusive:
            if findings_cache.pop(cid, None) is not None:
                evicted += 1
        if evicted:
            logger.info(
                "Evicted %d exclusive check entries from findings_cache after %s "
                "(remaining cache size: %d)",
                evicted,
                done_key,
                len(findings_cache),
            )
            # Release the lists' memory now instead of waiting for the next
            # gc cycle; FindingOutput instances retain quite a bit of state.
            gc.collect()
        return evicted

    generated_report_keys: list[str] = []
    output_paths: dict[str, str] = {}
    out_dir: str | None = None
@@ -1005,7 +907,6 @@ def generate_compliance_reports(
                provider_obj=provider_obj,
                requirement_statistics=requirement_statistics,
                findings_cache=findings_cache,
                prowler_provider=prowler_provider,
            )

            # Compute and store ThreatScore metrics snapshot
@@ -1083,15 +984,9 @@ def generate_compliance_reports(
                logger.warning("ThreatScore report saved locally at %s", out_dir)

        except Exception as e:
            logger.exception(
                "compliance_report_failed framework=threatscore scan_id=%s tenant_id=%s",
                scan_id,
                tenant_id,
            )
            logger.error("Error generating ThreatScore report: %s", e)
            results["threatscore"] = {"upload": False, "path": "", "error": str(e)}

        _evict_after_framework("threatscore")

    # Generate ENS report
    if generate_ens:
        generated_report_keys.append("ens")
@@ -1111,7 +1006,6 @@ def generate_compliance_reports(
                provider_obj=provider_obj,
                requirement_statistics=requirement_statistics,
                findings_cache=findings_cache,
                prowler_provider=prowler_provider,
            )

            upload_uri_ens = _upload_to_s3(
@@ -1126,15 +1020,9 @@ def generate_compliance_reports(
                logger.warning("ENS report saved locally at %s", out_dir)

        except Exception as e:
            logger.exception(
                "compliance_report_failed framework=ens scan_id=%s tenant_id=%s",
                scan_id,
                tenant_id,
            )
            logger.error("Error generating ENS report: %s", e)
            results["ens"] = {"upload": False, "path": "", "error": str(e)}

        _evict_after_framework("ens")

    # Generate NIS2 report
    if generate_nis2:
        generated_report_keys.append("nis2")
@@ -1155,7 +1043,6 @@ def generate_compliance_reports(
                provider_obj=provider_obj,
                requirement_statistics=requirement_statistics,
                findings_cache=findings_cache,
                prowler_provider=prowler_provider,
            )

            upload_uri_nis2 = _upload_to_s3(
@@ -1170,15 +1057,9 @@ def generate_compliance_reports(
                logger.warning("NIS2 report saved locally at %s", out_dir)

        except Exception as e:
            logger.exception(
                "compliance_report_failed framework=nis2 scan_id=%s tenant_id=%s",
                scan_id,
                tenant_id,
            )
            logger.error("Error generating NIS2 report: %s", e)
            results["nis2"] = {"upload": False, "path": "", "error": str(e)}

        _evict_after_framework("nis2")

    # Generate CSA CCM report
    if generate_csa:
        generated_report_keys.append("csa")
@@ -1199,7 +1080,6 @@ def generate_compliance_reports(
                provider_obj=provider_obj,
                requirement_statistics=requirement_statistics,
                findings_cache=findings_cache,
                prowler_provider=prowler_provider,
            )

            upload_uri_csa = _upload_to_s3(
@@ -1214,15 +1094,9 @@ def generate_compliance_reports(
                logger.warning("CSA CCM report saved locally at %s", out_dir)

        except Exception as e:
            logger.exception(
                "compliance_report_failed framework=csa scan_id=%s tenant_id=%s",
                scan_id,
                tenant_id,
            )
            logger.error("Error generating CSA CCM report: %s", e)
            results["csa"] = {"upload": False, "path": "", "error": str(e)}

        _evict_after_framework("csa")

    # Generate CIS Benchmark report for the latest available version only.
    # CIS ships multiple versions per provider (e.g. cis_1.4_aws, cis_5.0_aws,
    # cis_6.0_aws); we dynamically pick the highest semantic version at run
@@ -1245,7 +1119,6 @@ def generate_compliance_reports(
                provider_obj=provider_obj,
                requirement_statistics=requirement_statistics,
                findings_cache=findings_cache,
                prowler_provider=prowler_provider,
            )

            upload_uri_cis = _upload_to_s3(
@@ -1274,22 +1147,14 @@ def generate_compliance_reports(
            )

        except Exception as e:
            logger.exception(
                "compliance_report_failed framework=cis variant=%s scan_id=%s tenant_id=%s",
                latest_cis,
                scan_id,
                tenant_id,
            )
            logger.error("Error generating CIS report %s: %s", latest_cis, e)
            results["cis"] = {
                "upload": False,
                "path": "",
                "error": str(e),
            }
        finally:
            # Free ReportLab/matplotlib memory before moving on. CIS is
            # always the last framework, so evicting its entries clears the
            # cache entirely (subject to its check_ids set).
            _evict_after_framework("cis")
            # Free ReportLab/matplotlib memory before moving on.
            gc.collect()

    # Clean up temporary files only if all generated reports were

@@ -1,9 +1,6 @@
import gc
import os
import resource as _resource_module
import time
from abc import ABC, abstractmethod
from contextlib import contextmanager
from dataclasses import dataclass, field
from typing import Any

@@ -44,7 +41,6 @@ from .config import (
    COLOR_LIGHT_BLUE,
    COLOR_LIGHTER_BLUE,
    COLOR_PROWLER_DARK_GREEN,
    FINDINGS_TABLE_CHUNK_SIZE,
    PADDING_LARGE,
    PADDING_SMALL,
    FrameworkConfig,
@@ -52,53 +48,6 @@ from .config import (

logger = get_task_logger(__name__)


@contextmanager
def _log_phase(phase: str, *, scan_id: str, framework: str):
    """Log start/end timing and RSS deltas around a report-building section.

    Emits structured key=value logs so Grafana/Datadog/CloudWatch queries
    can pivot by ``phase``, ``framework`` and ``scan_id`` to find the
    slow/heavy section on any given scan. ``getrusage`` returns KB on
    Linux and bytes on macOS; the values are still useful in relative
    terms even though units differ across platforms.
    """
    start = time.perf_counter()
    rss_before = _resource_module.getrusage(_resource_module.RUSAGE_SELF).ru_maxrss
    logger.info(
        "phase_start phase=%s scan_id=%s framework=%s rss_kb=%d",
        phase,
        scan_id,
        framework,
        rss_before,
    )
    try:
        yield
    except Exception:
        elapsed = time.perf_counter() - start
        logger.exception(
            "phase_failed phase=%s scan_id=%s framework=%s elapsed_s=%.2f",
            phase,
            scan_id,
            framework,
            elapsed,
        )
        raise
    else:
        elapsed = time.perf_counter() - start
        rss_after = _resource_module.getrusage(_resource_module.RUSAGE_SELF).ru_maxrss
        logger.info(
            "phase_end phase=%s scan_id=%s framework=%s elapsed_s=%.2f "
            "rss_kb=%d delta_rss_kb=%d",
            phase,
            scan_id,
            framework,
            elapsed,
            rss_after,
            rss_after - rss_before,
        )


# Register fonts (done once at module load)
_fonts_registered: bool = False

@@ -386,7 +335,6 @@ class BaseComplianceReportGenerator(ABC):
        provider_obj: Provider | None = None,
        requirement_statistics: dict[str, dict[str, int]] | None = None,
        findings_cache: dict[str, list[FindingOutput]] | None = None,
        prowler_provider: Any | None = None,
        **kwargs,
    ) -> None:
        """Generate the PDF compliance report.
@@ -403,35 +351,23 @@ class BaseComplianceReportGenerator(ABC):
            provider_obj: Optional pre-fetched Provider object
            requirement_statistics: Optional pre-aggregated statistics
            findings_cache: Optional pre-loaded findings cache
            prowler_provider: Optional pre-initialized Prowler provider. When
                generating multiple reports for the same scan the master
                function initializes this once and passes it in to avoid
                re-running boto3/Azure-SDK setup per framework.
            **kwargs: Additional framework-specific arguments
        """
        framework = self.config.display_name
        logger.info(
            "report_generation_start framework=%s scan_id=%s compliance_id=%s",
            framework,
            scan_id,
            compliance_id,
            "Generating %s report for scan %s", self.config.display_name, scan_id
        )

        try:
            # 1. Load compliance data
            with _log_phase(
                "load_compliance_data", scan_id=scan_id, framework=framework
            ):
                data = self._load_compliance_data(
                    tenant_id=tenant_id,
                    scan_id=scan_id,
                    compliance_id=compliance_id,
                    provider_id=provider_id,
                    provider_obj=provider_obj,
                    requirement_statistics=requirement_statistics,
                    findings_cache=findings_cache,
                    prowler_provider=prowler_provider,
                )
            data = self._load_compliance_data(
                tenant_id=tenant_id,
                scan_id=scan_id,
                compliance_id=compliance_id,
                provider_id=provider_id,
                provider_obj=provider_obj,
                requirement_statistics=requirement_statistics,
                findings_cache=findings_cache,
            )

            # 2. Create PDF document
            doc = self._create_document(output_path, data)
@@ -441,54 +377,37 @@ class BaseComplianceReportGenerator(ABC):
            elements = []

            # Cover page (lightweight)
            with _log_phase("cover_page", scan_id=scan_id, framework=framework):
                elements.extend(self.create_cover_page(data))
                elements.append(PageBreak())
            elements.extend(self.create_cover_page(data))
            elements.append(PageBreak())

            # Executive summary (framework-specific)
            with _log_phase("executive_summary", scan_id=scan_id, framework=framework):
                elements.extend(self.create_executive_summary(data))
            elements.extend(self.create_executive_summary(data))

            # Body sections (charts + requirements index)
            # Override _build_body_sections() in subclasses to change section order
            with _log_phase("body_sections", scan_id=scan_id, framework=framework):
                elements.extend(self._build_body_sections(data))
            elements.extend(self._build_body_sections(data))

            # Detailed findings - heaviest section, loads findings on-demand
            with _log_phase("detailed_findings", scan_id=scan_id, framework=framework):
                elements.extend(self.create_detailed_findings(data, **kwargs))
                gc.collect()  # Free findings data after processing
            logger.info("Building detailed findings section...")
            elements.extend(self.create_detailed_findings(data, **kwargs))
            gc.collect()  # Free findings data after processing

            # 4. Build the PDF
            logger.info(
                "doc_build_about_to_run framework=%s scan_id=%s elements=%d",
                framework,
                scan_id,
                len(elements),
            )
            with _log_phase("doc_build", scan_id=scan_id, framework=framework):
                self._build_pdf(doc, elements, data)
            logger.info("Building PDF document with %d elements...", len(elements))
            self._build_pdf(doc, elements, data)

            # Final cleanup
            del elements
            gc.collect()

            logger.info(
                "report_generation_end framework=%s scan_id=%s output_path=%s",
                framework,
                scan_id,
                output_path,
            )
            logger.info("Successfully generated report at %s", output_path)

        except Exception:
            # logger.exception captures the full traceback; the contextual
            # keys keep production search-by-scan-id viable.
            logger.exception(
                "report_generation_failed framework=%s scan_id=%s compliance_id=%s",
                framework,
                scan_id,
                compliance_id,
            )
        except Exception as e:
            import traceback

            tb_lineno = e.__traceback__.tb_lineno if e.__traceback__ else "unknown"
            logger.error("Error generating report, line %s -- %s", tb_lineno, e)
            logger.error("Full traceback:\n%s", traceback.format_exc())
            raise

    def _build_body_sections(self, data: ComplianceData) -> list:
@@ -719,25 +638,15 @@ class BaseComplianceReportGenerator(ABC):
        for req in requirements:
            check_ids_to_load.extend(req.checks)

        # Load findings on-demand only for the checks that will be displayed.
        # When ``only_failed`` is active at requirement level, also push the
        # FAIL filter down to the finding level: a requirement marked FAIL
        # because 1/1000 findings failed must not render a table dominated by
        # 999 PASS rows. That hides the actual failure under noise and
        # makes the per-check cap truncate the wrong rows.
        # ``total_counts`` is populated with the pre-cap total per check_id
        # (FAIL-only when only_failed is active) so the "Showing first N of
        # M" banner uses the same denominator the reader cares about.
        # Load findings on-demand only for the checks that will be displayed
        # Uses the shared findings cache to avoid duplicate queries across reports
        logger.info("Loading findings on-demand for %d requirements", len(requirements))
        total_counts: dict[str, int] = {}
        findings_by_check_id = _load_findings_for_requirement_checks(
            data.tenant_id,
            data.scan_id,
            check_ids_to_load,
            data.prowler_provider,
            data.findings_by_check_id,  # Pass the cache to update it
            total_counts_out=total_counts,
            only_failed_findings=only_failed,
        )

        for req in requirements:
@@ -769,31 +678,9 @@ class BaseComplianceReportGenerator(ABC):
                        )
                    )
                else:
                    # Surface truncation BEFORE the tables so readers see it
                    # at the same scroll position as the data itself, not
                    # after thousands of rendered rows.
                    loaded = len(findings)
                    total = total_counts.get(check_id, loaded)
                    if total > loaded:
                        kind = "failed findings" if only_failed else "findings"
                        elements.append(
                            Paragraph(
                                f"<b>⚠ Showing first {loaded:,} of "
                                f"{total:,} {kind} for this check.</b> "
                                f"Use the CSV or JSON export for the full "
                                f"list. The PDF caps detail rows to keep "
                                f"the report readable and bounded in size.",
                                self.styles["normal"],
                            )
                        )
                        elements.append(Spacer(1, 0.05 * inch))

                    # Create chunked findings tables to prevent OOM when a
                    # single check has thousands of findings (ReportLab
                    # resolves layout per Flowable, so many small tables
                    # render contiguously with a bounded memory peak).
                    findings_tables = self._create_findings_tables(findings)
                    elements.extend(findings_tables)
                    # Create findings table
                    findings_table = self._create_findings_table(findings)
                    elements.append(findings_table)

                    elements.append(Spacer(1, 0.1 * inch))

@@ -848,7 +735,6 @@ class BaseComplianceReportGenerator(ABC):
        provider_obj: Provider | None,
        requirement_statistics: dict | None,
        findings_cache: dict | None,
        prowler_provider: Any | None = None,
    ) -> ComplianceData:
        """Load and aggregate compliance data from the database.

@@ -860,9 +746,6 @@ class BaseComplianceReportGenerator(ABC):
            provider_obj: Optional pre-fetched Provider
            requirement_statistics: Optional pre-aggregated statistics
            findings_cache: Optional pre-loaded findings
            prowler_provider: Optional pre-initialized Prowler provider. When
                the master function initializes it once and passes it in,
                we skip the per-report ``initialize_prowler_provider`` call.

        Returns:
            Aggregated ComplianceData object
@@ -872,8 +755,7 @@ class BaseComplianceReportGenerator(ABC):
        if provider_obj is None:
            provider_obj = Provider.objects.get(id=provider_id)

        if prowler_provider is None:
            prowler_provider = initialize_prowler_provider(provider_obj)
        prowler_provider = initialize_prowler_provider(provider_obj)
        provider_type = provider_obj.provider

        # Load compliance framework
@@ -941,32 +823,13 @@ class BaseComplianceReportGenerator(ABC):
    ) -> SimpleDocTemplate:
        """Create the PDF document template.

        Validates that ``output_path`` is a filesystem path string with an
        existing parent directory. SimpleDocTemplate technically accepts a
        BytesIO too, but we want every report to land on disk so the
        Celery worker doesn't hold the full PDF in memory while uploading
        to S3.

        Args:
            output_path: Path for the output PDF
            data: Compliance data for metadata

        Returns:
            Configured SimpleDocTemplate

        Raises:
            TypeError: ``output_path`` is not a string.
            FileNotFoundError: The parent directory does not exist.
        """
        if not isinstance(output_path, str):
            raise TypeError(
                "output_path must be a filesystem path string; "
                f"got {type(output_path).__name__}"
            )
        parent_dir = os.path.dirname(output_path)
        if parent_dir and not os.path.isdir(parent_dir):
            raise FileNotFoundError(f"Output directory does not exist: {parent_dir}")

        return SimpleDocTemplate(
            output_path,
            pagesize=letter,
@@ -1013,10 +876,47 @@ class BaseComplianceReportGenerator(ABC):
            onLaterPages=add_footer,
        )

    # Column layout shared by all findings sub-tables. Defined as a method so
    # subclasses can override it without re-implementing the chunking logic.
    def _findings_table_columns(self) -> list[ColumnConfig]:
        return [
    def _create_findings_table(self, findings: list[FindingOutput]) -> Any:
        """Create a findings table.

        Args:
            findings: List of finding objects

        Returns:
            ReportLab Table element
        """

        def get_finding_title(f):
            metadata = getattr(f, "metadata", None)
            if metadata:
                return getattr(metadata, "CheckTitle", getattr(f, "check_id", ""))
            return getattr(f, "check_id", "")

        def get_resource_name(f):
            name = getattr(f, "resource_name", "")
            if not name:
                name = getattr(f, "resource_uid", "")
            return name

        def get_severity(f):
            metadata = getattr(f, "metadata", None)
            if metadata:
                return getattr(metadata, "Severity", "").capitalize()
            return ""

        # Convert findings to dicts for the table
        data = []
        for f in findings:
            item = {
                "title": get_finding_title(f),
                "resource_name": get_resource_name(f),
                "severity": get_severity(f),
                "status": getattr(f, "status", "").upper(),
                "region": getattr(f, "region", "global"),
            }
            data.append(item)

        columns = [
            ColumnConfig("Finding", 2.5 * inch, "title"),
            ColumnConfig("Resource", 3 * inch, "resource_name"),
            ColumnConfig("Severity", 0.9 * inch, "severity"),
@@ -1024,122 +924,9 @@ class BaseComplianceReportGenerator(ABC):
            ColumnConfig("Region", 0.9 * inch, "region"),
        ]

    @staticmethod
    def _finding_to_row(f: FindingOutput) -> dict[str, str]:
        """Project a FindingOutput onto the row dict the table expects.

        Kept defensive: missing metadata or attributes return empty strings
        rather than raising, so a single malformed finding never breaks the
        whole report.
        """
        metadata = getattr(f, "metadata", None)
        title = (
            getattr(metadata, "CheckTitle", getattr(f, "check_id", ""))
            if metadata
            else getattr(f, "check_id", "")
        )
        resource_name = getattr(f, "resource_name", "") or getattr(
            f, "resource_uid", ""
        )
        severity = getattr(metadata, "Severity", "").capitalize() if metadata else ""
        return {
            "title": title,
            "resource_name": resource_name,
            "severity": severity,
            "status": getattr(f, "status", "").upper(),
            "region": getattr(f, "region", "global"),
        }

    def _create_findings_tables(
        self,
        findings: list[FindingOutput],
        chunk_size: int | None = None,
    ) -> list[Any]:
        """Build a list of small findings tables to keep ``doc.build()`` memory bounded.

        ReportLab resolves layout (column widths, row heights, page-breaks)
        per Flowable. A single ``LongTable`` of 15k rows forces all of that
        to be computed at once and reliably OOMs the worker on large scans.
        Splitting into chunks of ``chunk_size`` rows produces an equivalent-
        looking PDF (LongTable repeats headers; chunks render contiguously)
        with a bounded memory peak per chunk.

        Args:
            findings: List of finding objects for a single check.
            chunk_size: Rows per sub-table. ``None`` uses
                ``FINDINGS_TABLE_CHUNK_SIZE`` from config.

        Returns:
            List of ReportLab flowables (interleaved ``Table``/``LongTable``
            and small ``Spacer`` between chunks). Empty list when there are
            no findings.
        """
        if not findings:
            return []

        chunk_size = chunk_size or FINDINGS_TABLE_CHUNK_SIZE

        # Build all rows first so we can chunk without re-walking the
        # FindingOutput list. Malformed findings are skipped with a logged
        # exception, never enough to abort the entire report.
        rows: list[dict[str, str]] = []
        for f in findings:
            try:
                rows.append(self._finding_to_row(f))
            except Exception:
                logger.exception(
                    "Skipping malformed finding while building table for check %s",
                    getattr(f, "check_id", "unknown"),
                )

        if not rows:
            return []

        columns = self._findings_table_columns()

        flowables: list = []
        total = len(rows)
        for start in range(0, total, chunk_size):
            chunk = rows[start : start + chunk_size]
            flowables.append(
                create_data_table(
                    data=chunk,
                    columns=columns,
                    header_color=self.config.primary_color,
                    normal_style=self.styles["normal_center"],
                )
            )
            # A tiny spacer between chunks keeps them visually contiguous
            # without forcing a page-break (KeepTogether would negate the
            # memory benefit of chunking).
            if start + chunk_size < total:
                flowables.append(Spacer(1, 0.05 * inch))

        if total > chunk_size:
            logger.debug(
                "Built %d findings sub-tables (chunk_size=%d, total_findings=%d)",
                (total + chunk_size - 1) // chunk_size,
                chunk_size,
                total,
            )

        return flowables

    def _create_findings_table(self, findings: list[FindingOutput]) -> Any:
        """Deprecated alias kept for backwards compatibility.

        Returns the first chunk produced by ``_create_findings_tables``.
        New callers MUST use ``_create_findings_tables``, which returns a
        list of flowables and is what ``create_detailed_findings`` invokes.
        """
        flowables = self._create_findings_tables(findings)
        if flowables:
            return flowables[0]
        # Empty input → return an empty (header-only) table so callers that
        # used to receive a Table never get None.
        return create_data_table(
            data=[],
            columns=self._findings_table_columns(),
            data=data,
            columns=columns,
            header_color=self.config.primary_color,
            normal_style=self.styles["normal_center"],
        )

@@ -1,11 +1,9 @@
import gc
import io
import math
import time
from typing import Callable

import matplotlib
from celery.utils.log import get_task_logger

# Use non-interactive Agg backend for memory efficiency in server environments
# This MUST be set before importing pyplot
@@ -22,26 +20,6 @@ from .config import (  # noqa: E402
    CHART_DPI_DEFAULT,
)

logger = get_task_logger(__name__)


def _log_chart_built(name: str, dpi: int, buffer: io.BytesIO, started: float) -> None:
    """Emit a structured DEBUG line summarising a chart render.

    Centralised so the formatting stays consistent across all chart helpers
    and so we never accidentally pay for buffer.getbuffer().nbytes when
    debug logging is disabled.
    """
    if logger.isEnabledFor(10):  # logging.DEBUG
        logger.debug(
            "chart_built name=%s dpi=%d bytes=%d elapsed_s=%.2f",
            name,
            dpi,
            buffer.getbuffer().nbytes,
            time.perf_counter() - started,
        )


# Use centralized DPI setting from config
DEFAULT_CHART_DPI = CHART_DPI_DEFAULT

@@ -99,7 +77,6 @@ def create_vertical_bar_chart(
    Returns:
        BytesIO buffer containing the PNG image
    """
    _started = time.perf_counter()
    if color_func is None:
        color_func = get_chart_color_for_percentage

@@ -145,7 +122,6 @@ def create_vertical_bar_chart(
    plt.close(fig)
    gc.collect()  # Force garbage collection after heavy matplotlib operation

    _log_chart_built("vertical_bar", dpi, buffer, _started)
    return buffer


@@ -180,7 +156,6 @@ def create_horizontal_bar_chart(
    Returns:
        BytesIO buffer containing the PNG image
    """
    _started = time.perf_counter()
    if color_func is None:
        color_func = get_chart_color_for_percentage

@@ -232,7 +207,6 @@ def create_horizontal_bar_chart(
    plt.close(fig)
    gc.collect()  # Force garbage collection after heavy matplotlib operation

    _log_chart_built("horizontal_bar", dpi, buffer, _started)
    return buffer


@@ -265,7 +239,6 @@ def create_radar_chart(
    Returns:
        BytesIO buffer containing the PNG image
    """
    _started = time.perf_counter()
    num_vars = len(labels)
    angles = [n / float(num_vars) * 2 * math.pi for n in range(num_vars)]

@@ -302,7 +275,6 @@ def create_radar_chart(
    plt.close(fig)
    gc.collect()  # Force garbage collection after heavy matplotlib operation

    _log_chart_built("radar", dpi, buffer, _started)
    return buffer


@@ -331,7 +303,6 @@ def create_pie_chart(
    Returns:
        BytesIO buffer containing the PNG image
    """
    _started = time.perf_counter()
    fig, ax = plt.subplots(figsize=figsize)

    _, _, autotexts = ax.pie(
@@ -359,7 +330,6 @@ def create_pie_chart(
    plt.close(fig)
    gc.collect()  # Force garbage collection after heavy matplotlib operation

    _log_chart_built("pie", dpi, buffer, _started)
    return buffer


@@ -392,7 +362,6 @@ def create_stacked_bar_chart(
    Returns:
        BytesIO buffer containing the PNG image
    """
    _started = time.perf_counter()
    fig, ax = plt.subplots(figsize=figsize)

    # Default colors if not provided
@@ -432,5 +401,4 @@ def create_stacked_bar_chart(
    plt.close(fig)
    gc.collect()  # Force garbage collection after heavy matplotlib operation

    _log_chart_built("stacked_bar", dpi, buffer, _started)
    return buffer

@@ -475,15 +475,8 @@ def create_data_table(
    else:
        value = item.get(col.field, "")
|
||||
|
||||
# Wrap every string cell in Paragraph so the data rows keep the
|
||||
# caller-supplied font/colour/alignment. Skipping Paragraph for
|
||||
# short cells (a tempting micro-optimisation) breaks visual
|
||||
# consistency: ReportLab Table falls back to Helvetica/black for
|
||||
# raw strings, mixing fonts within the same table.
|
||||
# ``escape_html`` keeps ``<``/``>``/``&`` in resource names from
|
||||
# breaking Paragraph's mini-HTML parser.
|
||||
if normal_style and isinstance(value, str):
|
||||
value = Paragraph(escape_html(value), normal_style)
|
||||
|
||||
row.append(value)
|
||||
table_data.append(row)
|
||||
|
||||
@@ -515,26 +508,17 @@ def create_data_table(
|
||||
for idx, col in enumerate(columns):
|
||||
styles.append(("ALIGN", (idx, 0), (idx, -1), col.align))
|
||||
|
||||
# Alternate row backgrounds: single O(1) ROWBACKGROUNDS style entry.
|
||||
# The previous implementation appended N per-row BACKGROUND commands,
|
||||
# which scaled the TableStyle list linearly with row count. ReportLab
|
||||
# cycles through the colour list row-by-row so the visual is identical.
|
||||
# The ALTERNATE_ROWS_MAX_SIZE cap is preserved to mirror legacy
|
||||
# behaviour (very large tables stay plain), but the memory cost of the
|
||||
# styles list is now constant regardless of row count.
|
||||
|
||||
if (
|
||||
alternate_rows
|
||||
and len(table_data) > 1
|
||||
and len(table_data) <= ALTERNATE_ROWS_MAX_SIZE
|
||||
):
|
||||
styles.append(
|
||||
(
|
||||
"ROWBACKGROUNDS",
|
||||
(0, 1),
|
||||
(-1, -1),
|
||||
[colors.white, colors.Color(0.98, 0.98, 0.98)],
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
table.setStyle(TableStyle(styles))
|
||||
return table
|
||||
|
||||
@@ -1,4 +1,3 @@
|
||||
import os
|
||||
from dataclasses import dataclass, field
|
||||
|
||||
from reportlab.lib import colors
|
||||
@@ -24,47 +23,6 @@ ALTERNATE_ROWS_MAX_SIZE = 200
|
||||
# Larger = fewer queries but more memory per batch
|
||||
FINDINGS_BATCH_SIZE = 2000
|
||||
|
||||
# Maximum rows per findings sub-table. ReportLab resolves layout per Flowable;
|
||||
# splitting a huge findings list into multiple smaller tables keeps the peak
|
||||
# memory of doc.build() bounded. A single 15k-row LongTable would force
|
||||
# ReportLab to compute all column widths/row heights/page-breaks at once and
|
||||
# OOM the worker; 300-row chunks are rendered contiguously with negligible
|
||||
# visual impact.
|
||||
FINDINGS_TABLE_CHUNK_SIZE = 300
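# e.g. a 15,000-finding check renders as ceil(15000 / 300) = 50 sub-tables.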
|
||||
|
||||
# Maximum findings rendered per check in the detailed-findings section.
|
||||
#
|
||||
# Product behaviour: compliance PDFs render at most ``MAX_FINDINGS_PER_CHECK``
|
||||
# **failed** findings per check (PASS rows are excluded at SQL level by the
|
||||
# ``only_failed`` flag that all four list-rendering frameworks default to:
|
||||
# ThreatScore, NIS2, CSA, CIS; ENS does not render finding tables). Above
|
||||
# this cap each affected check renders an in-PDF banner
|
||||
# ("Showing first 100 of N failed findings for this check. Use the CSV
|
||||
# or JSON export for the full list") so the reader knows the table is
|
||||
# truncated and where to find the full data.
|
||||
#
|
||||
# Why a cap exists at all:
|
||||
# * ``FindingOutput.transform_api_finding`` is O(N) per finding (Pydantic
|
||||
# v1 validation + nested model construction).
|
||||
# * ReportLab resolves layout per Flowable; thousands of sub-tables make
|
||||
# ``doc.build()`` very slow and grow the PDF unboundedly.
|
||||
# * A human-readable executive/auditor PDF does not need 12,000 rows for
|
||||
# one check; that is forensic data and lives in the CSV/JSON exports.
|
||||
#
|
||||
# Why 100 specifically:
|
||||
# * Covers ~99% of real scans without truncation (most checks emit far
|
||||
# fewer than 100 findings even in enterprise estates).
|
||||
# * Worst-case rendered rows = 100 × ~500 checks = 50k rows across all
|
||||
# frameworks, which keeps RSS bounded and a 5-framework run completes
|
||||
# in minutes instead of hours.
|
||||
#
|
||||
# Override at runtime via ``DJANGO_PDF_MAX_FINDINGS_PER_CHECK``:
|
||||
# * Set to ``0`` to disable the cap entirely (load every finding; only
|
||||
# advisable for small scans).
|
||||
# * Set to a larger value (e.g. ``500``) for forensic detail in big runs;
|
||||
# watch RSS in the Celery worker.
|
||||
MAX_FINDINGS_PER_CHECK = int(os.environ.get("DJANGO_PDF_MAX_FINDINGS_PER_CHECK", "100"))
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# Base colors
|
||||
|
||||
@@ -202,9 +202,8 @@ def _get_attack_surface_mapping_from_provider(provider_type: str) -> dict:
|
||||
"iam_inline_policy_allows_privilege_escalation",
|
||||
},
|
||||
"ec2-imdsv1": {
|
||||
"ec2_instance_imdsv2_enabled",
|
||||
"ec2_instance_account_imdsv2_enabled",
|
||||
}, # AWS only - instance-level IMDSv1 exposure and account IMDS defaults
|
||||
"ec2_instance_imdsv2_enabled"
|
||||
}, # AWS only - IMDSv1 enabled findings
|
||||
}
|
||||
for category_name, check_ids in attack_surface_check_mappings.items():
|
||||
if check_ids is None:
|
||||
|
||||
@@ -1,8 +1,6 @@
|
||||
from celery.utils.log import get_task_logger
|
||||
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE
|
||||
from django.db.models import Count, F, Q, Window
|
||||
from django.db.models.functions import RowNumber
|
||||
from tasks.jobs.reports.config import MAX_FINDINGS_PER_CHECK
|
||||
|
||||
|
||||
from api.db_router import READ_REPLICA_ALIAS
|
||||
from api.db_utils import rls_transaction
|
||||
@@ -156,8 +154,6 @@ def _load_findings_for_requirement_checks(
|
||||
check_ids: list[str],
|
||||
prowler_provider,
|
||||
findings_cache: dict[str, list[FindingOutput]] | None = None,
|
||||
total_counts_out: dict[str, int] | None = None,
|
||||
only_failed_findings: bool = False,
|
||||
) -> dict[str, list[FindingOutput]]:
|
||||
"""
|
||||
Load findings for specific check IDs on-demand with optional caching.
|
||||
@@ -182,23 +178,6 @@ def _load_findings_for_requirement_checks(
|
||||
prowler_provider: The initialized Prowler provider instance.
|
||||
findings_cache (dict, optional): Cache of already loaded findings.
|
||||
If provided, checks are first looked up in cache before querying database.
|
||||
total_counts_out (dict, optional): If provided, populated with
|
||||
``{check_id: total_findings_in_db}`` BEFORE any per-check cap is
|
||||
applied. Lets callers render a "Showing first N of M" banner for
|
||||
truncated checks. Only populated for ``check_ids`` actually
|
||||
queried (cache hits keep whatever value the caller already had).
|
||||
When ``only_failed_findings=True`` the total is FAIL-only.
|
||||
only_failed_findings (bool): When True, push the ``status=FAIL``
|
||||
filter down into the SQL query so PASS rows are never loaded
|
||||
from the DB nor pydantic-transformed. This matches the
|
||||
``only_failed`` requirement-level filter applied at PDF render
|
||||
time: a requirement marked FAIL because 1/1000 findings failed
|
||||
shouldn't render a table of 999 PASS rows. That hides the
|
||||
actual failure under noise and wastes the per-check cap on
|
||||
irrelevant data. NOTE: the findings cache stores whatever the
|
||||
first caller asked for, so all callers in a single
|
||||
``generate_compliance_reports`` run MUST pass the same flag
|
||||
(which they do: it threads from ``only_failed`` defaults).
|
||||
|
||||
Returns:
|
||||
dict[str, list[FindingOutput]]: Dictionary mapping check_id to list of FindingOutput objects.
|
||||
@@ -243,70 +222,17 @@ def _load_findings_for_requirement_checks(
|
||||
)
|
||||
|
||||
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
|
||||
base_qs = Finding.all_objects.filter(
|
||||
tenant_id=tenant_id,
|
||||
scan_id=scan_id,
|
||||
check_id__in=check_ids_to_load,
|
||||
)
|
||||
if only_failed_findings:
|
||||
# Push the FAIL filter down into SQL: DB returns ~N×FAIL
|
||||
# rows instead of N×ALL, and we never spend pydantic CPU on
|
||||
# PASS findings the PDF would never render.
|
||||
base_qs = base_qs.filter(status=StatusChoices.FAIL)
|
||||
|
||||
# Aggregate totals once so we (a) know which checks need capping
|
||||
# and (b) can surface "Showing first N of M" in the PDF banner.
|
||||
# Cheap: a single COUNT grouped by check_id.
|
||||
totals: dict[str, int] = {
|
||||
row["check_id"]: row["total"]
|
||||
for row in base_qs.values("check_id").annotate(total=Count("id"))
|
||||
}
|
||||
if total_counts_out is not None:
|
||||
total_counts_out.update(totals)
|
||||
|
||||
cap = MAX_FINDINGS_PER_CHECK
|
||||
checks_over_cap = (
|
||||
{cid for cid, n in totals.items() if n > cap} if cap > 0 else set()
|
||||
)
|
||||
|
||||
# Use iterator with chunk_size for memory-efficient streaming.
|
||||
# FindingOutput.transform_api_finding (prowler/lib/outputs/finding.py)
|
||||
# reads finding.resources.first() and resource.tags.all() per
|
||||
# finding, which without prefetch generates 2N queries per chunk.
|
||||
# prefetch_related runs once per iterator chunk (Django >=4.1) and
|
||||
# collapses that into a constant 2 extra queries per chunk.
|
||||
if checks_over_cap:
|
||||
# Top-N per check via a window function: PostgreSQL only
|
||||
# materialises ``cap * |checks_over_cap| + sum(uncapped)``
|
||||
# rows, vs the full table scan the previous path did.
|
||||
ranked = base_qs.annotate(
|
||||
rn=Window(
|
||||
expression=RowNumber(),
|
||||
partition_by=[F("check_id")],
|
||||
order_by=F("uid").asc(),
|
||||
)
|
||||
)
|
||||
findings_queryset = (
|
||||
Finding.all_objects.filter(
|
||||
id__in=ranked.filter(rn__lte=cap).values("id")
|
||||
)
|
||||
.prefetch_related("resources", "resources__tags")
|
||||
.order_by("check_id", "uid")
|
||||
.iterator(chunk_size=DJANGO_FINDINGS_BATCH_SIZE)
|
||||
)
|
||||
logger.info(
|
||||
"Per-check cap=%d active for %d checks (max %d each); "
|
||||
"skipping transform for surplus rows",
|
||||
cap,
|
||||
len(checks_over_cap),
|
||||
cap,
|
||||
)
|
||||
else:
|
||||
findings_queryset = (
|
||||
base_qs.prefetch_related("resources", "resources__tags")
|
||||
.order_by("check_id", "uid")
|
||||
.iterator(chunk_size=DJANGO_FINDINGS_BATCH_SIZE)
|
||||
)
|
||||
|
||||
# Pre-initialize empty lists for all check_ids to load
|
||||
# This avoids repeated dict lookups and 'if not in' checks
|
||||
@@ -322,11 +248,7 @@ def _load_findings_for_requirement_checks(
|
||||
findings_count += 1
|
||||
|
||||
logger.info(
|
||||
"Loaded %d findings for %d checks (truncated %d checks total=%d)",
|
||||
findings_count,
|
||||
len(check_ids_to_load),
|
||||
len(checks_over_cap),
|
||||
sum(totals.values()),
|
||||
f"Loaded {findings_count} findings for {len(check_ids_to_load)} checks"
|
||||
)
|
||||
|
||||
# Build result dict using cache references (no data duplication)
|
||||
@@ -336,40 +258,3 @@ def _load_findings_for_requirement_checks(
|
||||
}
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def _get_compliance_check_ids(compliance_obj) -> set[str]:
|
||||
"""Return the union of all check_ids referenced by a compliance framework.
|
||||
|
||||
Used by the master report orchestrator to know which checks each
|
||||
framework consumes from the shared ``findings_cache``, so that once a
framework finishes, cache entries that no other pending framework
needs can be evicted (PROWLER-1733).
|
||||
|
||||
Args:
|
||||
compliance_obj: A loaded Compliance framework object exposing a
|
||||
``Requirements`` iterable, each requirement carrying ``Checks``.
|
||||
``None`` is treated as "no checks" rather than raising, so the
|
||||
caller can pass ``frameworks_bulk.get(...)`` directly without
|
||||
an extra existence check.
|
||||
|
||||
Returns:
|
||||
Set of check_id strings (empty if ``compliance_obj`` is ``None``).
|
||||
"""
|
||||
if compliance_obj is None:
|
||||
return set()
|
||||
checks: set[str] = set()
|
||||
requirements = getattr(compliance_obj, "Requirements", None) or []
|
||||
try:
|
||||
# Defensive: Mock objects (used in unit tests) return another Mock
|
||||
# for any attribute access, which is truthy but not iterable. Treat
|
||||
# any non-iterable Requirements value as "no checks".
|
||||
for req in requirements:
|
||||
req_checks = getattr(req, "Checks", None) or []
|
||||
try:
|
||||
checks.update(req_checks)
|
||||
except TypeError:
|
||||
continue
|
||||
except TypeError:
|
||||
return set()
|
||||
return checks
|
||||
|
||||
@@ -44,8 +44,6 @@ from api.models import (
|
||||
Finding,
|
||||
Resource,
|
||||
ResourceFindingMapping,
|
||||
ResourceTag,
|
||||
ResourceTagMapping,
|
||||
StateChoices,
|
||||
StatusChoices,
|
||||
)
|
||||
@@ -369,317 +367,6 @@ class TestLoadFindingsForChecks:
|
||||
|
||||
assert result == {}
|
||||
|
||||
def test_prefetch_avoids_n_plus_one(self, tenants_fixture, scans_fixture):
|
||||
"""Loading N findings must NOT execute O(N) extra queries for resources/tags.
|
||||
|
||||
Regression test for PROWLER-1733. ``FindingOutput.transform_api_finding``
|
||||
reads ``finding.resources.first()`` and ``resource.tags.all()`` per
|
||||
finding. Without ``prefetch_related`` that's 2N additional queries;
|
||||
with prefetch it collapses to a small constant per iterator chunk.
|
||||
"""
|
||||
from django.test.utils import CaptureQueriesContext
|
||||
from django.db import connections
|
||||
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
|
||||
# Build N findings, each linked to one resource that owns 2 tags.
|
||||
N = 20
|
||||
for i in range(N):
|
||||
finding = Finding.objects.create(
|
||||
tenant_id=tenant.id,
|
||||
scan=scan,
|
||||
uid=f"f-prefetch-{i}",
|
||||
check_id="aws_check_prefetch",
|
||||
status=StatusChoices.FAIL,
|
||||
severity=Severity.high,
|
||||
impact=Severity.high,
|
||||
check_metadata={
|
||||
"provider": "aws",
|
||||
"checkid": "aws_check_prefetch",
|
||||
"checktitle": "t",
|
||||
"checktype": [],
|
||||
"servicename": "s",
|
||||
"subservicename": "",
|
||||
"severity": "high",
|
||||
"resourcetype": "r",
|
||||
"description": "",
|
||||
"risk": "",
|
||||
"relatedurl": "",
|
||||
"remediation": {
|
||||
"recommendation": {"text": "", "url": ""},
|
||||
"code": {
|
||||
"nativeiac": "",
|
||||
"terraform": "",
|
||||
"cli": "",
|
||||
"other": "",
|
||||
},
|
||||
},
|
||||
"resourceidtemplate": "",
|
||||
"categories": [],
|
||||
"dependson": [],
|
||||
"relatedto": [],
|
||||
"notes": "",
|
||||
},
|
||||
raw_result={},
|
||||
)
|
||||
resource = Resource.objects.create(
|
||||
tenant_id=tenant.id,
|
||||
provider=scan.provider,
|
||||
uid=f"r-prefetch-{i}",
|
||||
name=f"r-prefetch-{i}",
|
||||
metadata="{}",
|
||||
details="",
|
||||
region="us-east-1",
|
||||
service="s",
|
||||
type="t::r",
|
||||
)
|
||||
ResourceFindingMapping.objects.create(
|
||||
tenant_id=tenant.id, finding=finding, resource=resource
|
||||
)
|
||||
for k in ("env", "owner"):
|
||||
tag, _ = ResourceTag.objects.get_or_create(
|
||||
tenant_id=tenant.id, key=k, value=f"v-{i}-{k}"
|
||||
)
|
||||
ResourceTagMapping.objects.create(
|
||||
tenant_id=tenant.id, resource=resource, tag=tag
|
||||
)
|
||||
|
||||
mock_provider = Mock()
|
||||
mock_provider.type = "aws"
|
||||
mock_provider.identity.account = "test"
|
||||
|
||||
# Patch transform_api_finding to a no-op so the test isolates queries
|
||||
# to the queryset/prefetch path (transform itself is exercised by
|
||||
# the integration tests above and not by this regression check).
|
||||
with patch(
|
||||
"tasks.jobs.threatscore_utils.FindingOutput.transform_api_finding",
|
||||
side_effect=lambda model, provider: Mock(check_id=model.check_id),
|
||||
):
|
||||
with CaptureQueriesContext(
|
||||
connections["default_read_replica"]
|
||||
if "default_read_replica" in connections.databases
|
||||
else connections["default"]
|
||||
) as ctx:
|
||||
_load_findings_for_requirement_checks(
|
||||
str(tenant.id),
|
||||
str(scan.id),
|
||||
["aws_check_prefetch"],
|
||||
mock_provider,
|
||||
)
|
||||
|
||||
# Expected: a small constant number of queries irrespective of N.
|
||||
# Pre-fix this would be ~1 + 2*N. We give some slack for RLS SET
|
||||
# LOCAL statements that the rls_transaction emits.
|
||||
assert len(ctx.captured_queries) < N, (
|
||||
f"Expected O(1) queries with prefetch_related; got "
|
||||
f"{len(ctx.captured_queries)} for N={N} (N+1 regression?)"
|
||||
)
|
||||
|
||||
def test_max_findings_per_check_cap(self, tenants_fixture, scans_fixture):
|
||||
"""When a check exceeds ``MAX_FINDINGS_PER_CHECK``, only ``cap`` rows
|
||||
are loaded AND ``total_counts_out`` reports the pre-cap total.
|
||||
|
||||
Guards the PROWLER-1733 truncation knob: prevents both runaway memory
|
||||
and silent data loss in the PDF (the banner relies on knowing the
|
||||
real total).
|
||||
"""
|
||||
from unittest.mock import patch as _patch
|
||||
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
|
||||
# Create 12 findings for a single check; cap to 5.
|
||||
check_id = "aws_check_cap_test"
|
||||
for i in range(12):
|
||||
finding = Finding.objects.create(
|
||||
tenant_id=tenant.id,
|
||||
scan=scan,
|
||||
uid=f"f-cap-{i:02d}",
|
||||
check_id=check_id,
|
||||
status=StatusChoices.FAIL,
|
||||
severity=Severity.high,
|
||||
impact=Severity.high,
|
||||
check_metadata={},
|
||||
raw_result={},
|
||||
)
|
||||
resource = Resource.objects.create(
|
||||
tenant_id=tenant.id,
|
||||
provider=scan.provider,
|
||||
uid=f"r-cap-{i:02d}",
|
||||
name=f"r-cap-{i:02d}",
|
||||
metadata="{}",
|
||||
details="",
|
||||
region="us-east-1",
|
||||
service="s",
|
||||
type="t::r",
|
||||
)
|
||||
ResourceFindingMapping.objects.create(
|
||||
tenant_id=tenant.id, finding=finding, resource=resource
|
||||
)
|
||||
|
||||
mock_provider = Mock(type="aws")
|
||||
mock_provider.identity.account = "test"
|
||||
|
||||
totals: dict = {}
|
||||
# Patch the cap to a small value AND skip the heavy transform so we
|
||||
# only assert on row counts and totals.
|
||||
with (
|
||||
_patch("tasks.jobs.threatscore_utils.MAX_FINDINGS_PER_CHECK", 5),
|
||||
_patch(
|
||||
"tasks.jobs.threatscore_utils.FindingOutput.transform_api_finding",
|
||||
side_effect=lambda model, provider: Mock(check_id=model.check_id),
|
||||
),
|
||||
):
|
||||
result = _load_findings_for_requirement_checks(
|
||||
str(tenant.id),
|
||||
str(scan.id),
|
||||
[check_id],
|
||||
mock_provider,
|
||||
total_counts_out=totals,
|
||||
)
|
||||
|
||||
assert len(result[check_id]) == 5, (
|
||||
f"cap=5 should yield exactly 5 loaded findings, got {len(result[check_id])}"
|
||||
)
|
||||
assert totals[check_id] == 12, (
|
||||
f"total_counts_out should report the pre-cap total (12), got {totals[check_id]}"
|
||||
)
|
||||
|
||||
def test_only_failed_findings_pushes_down_to_sql(
|
||||
self, tenants_fixture, scans_fixture
|
||||
):
|
||||
"""When ``only_failed_findings=True``, PASS rows are excluded by the
|
||||
DB filter, not just visually hidden afterwards.
|
||||
|
||||
Regression for the consistency fix: previously the requirement-level
|
||||
``only_failed`` flag filtered which requirements appeared, but inside
|
||||
each rendered requirement the table still showed PASS rows mixed
|
||||
with FAIL, which combined with ``MAX_FINDINGS_PER_CHECK`` could
|
||||
truncate to 1000 PASS findings and hide the actual failure.
|
||||
"""
|
||||
from unittest.mock import patch as _patch
|
||||
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
check_id = "aws_check_only_failed_test"
|
||||
|
||||
# Mix PASS and FAIL so the filter has something to drop.
|
||||
for i in range(6):
|
||||
status = StatusChoices.FAIL if i % 2 == 0 else StatusChoices.PASS
|
||||
finding = Finding.objects.create(
|
||||
tenant_id=tenant.id,
|
||||
scan=scan,
|
||||
uid=f"f-of-{i:02d}",
|
||||
check_id=check_id,
|
||||
status=status,
|
||||
severity=Severity.high,
|
||||
impact=Severity.high,
|
||||
check_metadata={},
|
||||
raw_result={},
|
||||
)
|
||||
resource = Resource.objects.create(
|
||||
tenant_id=tenant.id,
|
||||
provider=scan.provider,
|
||||
uid=f"r-of-{i:02d}",
|
||||
name=f"r-of-{i:02d}",
|
||||
metadata="{}",
|
||||
details="",
|
||||
region="us-east-1",
|
||||
service="s",
|
||||
type="t::r",
|
||||
)
|
||||
ResourceFindingMapping.objects.create(
|
||||
tenant_id=tenant.id, finding=finding, resource=resource
|
||||
)
|
||||
|
||||
mock_provider = Mock(type="aws")
|
||||
mock_provider.identity.account = "test"
|
||||
|
||||
totals: dict = {}
|
||||
with _patch(
|
||||
"tasks.jobs.threatscore_utils.FindingOutput.transform_api_finding",
|
||||
side_effect=lambda model, provider: Mock(
|
||||
check_id=model.check_id, status=model.status
|
||||
),
|
||||
):
|
||||
result = _load_findings_for_requirement_checks(
|
||||
str(tenant.id),
|
||||
str(scan.id),
|
||||
[check_id],
|
||||
mock_provider,
|
||||
total_counts_out=totals,
|
||||
only_failed_findings=True,
|
||||
)
|
||||
|
||||
# 3 FAIL + 3 PASS in DB; FAIL-only filter should load just 3.
|
||||
loaded = result[check_id]
|
||||
assert len(loaded) == 3, f"expected 3 FAIL findings, got {len(loaded)}"
|
||||
statuses = {getattr(f, "status", None) for f in loaded}
|
||||
assert statuses == {StatusChoices.FAIL}, (
|
||||
f"expected all loaded findings to be FAIL; got statuses {statuses}"
|
||||
)
|
||||
# total_counts must reflect the FAIL-only total, not the global total.
|
||||
assert totals[check_id] == 3, (
|
||||
f"total_counts should be FAIL-only (3), got {totals[check_id]}"
|
||||
)
|
||||
|
||||
def test_max_findings_per_check_disabled(self, tenants_fixture, scans_fixture):
|
||||
"""``MAX_FINDINGS_PER_CHECK=0`` disables the cap; load all rows."""
|
||||
from unittest.mock import patch as _patch
|
||||
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
|
||||
check_id = "aws_check_uncapped"
|
||||
for i in range(8):
|
||||
f = Finding.objects.create(
|
||||
tenant_id=tenant.id,
|
||||
scan=scan,
|
||||
uid=f"f-unc-{i:02d}",
|
||||
check_id=check_id,
|
||||
status=StatusChoices.FAIL,
|
||||
severity=Severity.high,
|
||||
impact=Severity.high,
|
||||
check_metadata={},
|
||||
raw_result={},
|
||||
)
|
||||
r = Resource.objects.create(
|
||||
tenant_id=tenant.id,
|
||||
provider=scan.provider,
|
||||
uid=f"r-unc-{i:02d}",
|
||||
name=f"r-unc-{i:02d}",
|
||||
metadata="{}",
|
||||
details="",
|
||||
region="us-east-1",
|
||||
service="s",
|
||||
type="t::r",
|
||||
)
|
||||
ResourceFindingMapping.objects.create(
|
||||
tenant_id=tenant.id, finding=f, resource=r
|
||||
)
|
||||
|
||||
mock_provider = Mock(type="aws")
|
||||
mock_provider.identity.account = "test"
|
||||
totals: dict = {}
|
||||
with (
|
||||
_patch("tasks.jobs.threatscore_utils.MAX_FINDINGS_PER_CHECK", 0),
|
||||
_patch(
|
||||
"tasks.jobs.threatscore_utils.FindingOutput.transform_api_finding",
|
||||
side_effect=lambda model, provider: Mock(check_id=model.check_id),
|
||||
),
|
||||
):
|
||||
result = _load_findings_for_requirement_checks(
|
||||
str(tenant.id),
|
||||
str(scan.id),
|
||||
[check_id],
|
||||
mock_provider,
|
||||
total_counts_out=totals,
|
||||
)
|
||||
|
||||
assert len(result[check_id]) == 8
|
||||
assert totals[check_id] == 8
|
||||
|
||||
|
||||
class TestCleanupStaleTmpOutputDirectories:
|
||||
"""Unit tests for opportunistic stale cleanup under tmp output root."""
|
||||
@@ -1168,181 +855,6 @@ class TestGenerateComplianceReportsOptimized:
|
||||
assert result["cis"] == {"upload": False, "path": ""}
|
||||
mock_cis.assert_not_called()
|
||||
|
||||
@patch("api.utils.initialize_prowler_provider")
|
||||
@patch("tasks.jobs.report.rmtree")
|
||||
@patch("tasks.jobs.report._upload_to_s3")
|
||||
@patch("tasks.jobs.report.generate_cis_report")
|
||||
@patch("tasks.jobs.report.generate_csa_report")
|
||||
@patch("tasks.jobs.report.generate_nis2_report")
|
||||
@patch("tasks.jobs.report.generate_ens_report")
|
||||
@patch("tasks.jobs.report.generate_threatscore_report")
|
||||
@patch("tasks.jobs.report._generate_compliance_output_directory")
|
||||
@patch("tasks.jobs.report._aggregate_requirement_statistics_from_database")
|
||||
@patch("tasks.jobs.report.Compliance.get_bulk")
|
||||
@patch("tasks.jobs.report.Provider.objects.get")
|
||||
@patch("tasks.jobs.report.ScanSummary.objects.filter")
|
||||
def test_findings_cache_eviction_after_framework(
|
||||
self,
|
||||
mock_scan_summary_filter,
|
||||
mock_provider_get,
|
||||
mock_get_bulk,
|
||||
mock_aggregate_stats,
|
||||
mock_generate_output_dir,
|
||||
mock_threatscore,
|
||||
mock_ens,
|
||||
mock_nis2,
|
||||
mock_csa,
|
||||
mock_cis,
|
||||
mock_upload_to_s3,
|
||||
mock_rmtree,
|
||||
mock_init_provider,
|
||||
):
|
||||
"""After each framework finishes, exclusive entries are evicted.
|
||||
|
||||
Threat scenario for PROWLER-1733: the shared ``findings_cache`` used
|
||||
to grow monotonically through all 5 frameworks. With the new
|
||||
eviction logic, check_ids only used by ThreatScore are dropped when
|
||||
ThreatScore finishes, before ENS runs.
|
||||
"""
|
||||
from types import SimpleNamespace
|
||||
from tasks.jobs import report as report_mod
|
||||
|
||||
mock_scan_summary_filter.return_value.exists.return_value = True
|
||||
mock_provider_get.return_value = Mock(uid="provider-uid", provider="aws")
|
||||
# ThreatScore consumes {tsc_only, shared}; ENS consumes {ens_only,
|
||||
# shared}. After ThreatScore evicts, tsc_only must be gone but
|
||||
# shared and ens_only must remain.
|
||||
mock_get_bulk.return_value = {
|
||||
"prowler_threatscore_aws": SimpleNamespace(
|
||||
Requirements=[SimpleNamespace(Checks=["tsc_only", "shared"])]
|
||||
),
|
||||
"ens_rd2022_aws": SimpleNamespace(
|
||||
Requirements=[SimpleNamespace(Checks=["ens_only", "shared"])]
|
||||
),
|
||||
}
|
||||
mock_aggregate_stats.return_value = {}
|
||||
mock_generate_output_dir.return_value = "/tmp/tenant/scan/x/prowler-out"
|
||||
mock_upload_to_s3.return_value = "s3://bucket/tenant/scan/x/report.pdf"
|
||||
mock_init_provider.return_value = Mock(name="prowler_provider")
|
||||
|
||||
# Seed the cache as if both frameworks had already loaded their
|
||||
# findings. We mutate it indirectly: each generator wrapper is a
|
||||
# Mock: make ThreatScore populate the cache, and have ENS observe
|
||||
# the state at call time so we can introspect post-eviction.
|
||||
observed_state: dict = {}
|
||||
|
||||
def _threatscore_side_effect(**kwargs):
|
||||
cache = kwargs["findings_cache"]
|
||||
cache["tsc_only"] = ["tsc-finding"]
|
||||
cache["shared"] = ["shared-finding"]
|
||||
|
||||
def _ens_side_effect(**kwargs):
|
||||
# ENS runs AFTER threatscore's _evict_after_framework("threatscore").
|
||||
observed_state["cache_keys_when_ens_runs"] = set(
|
||||
kwargs["findings_cache"].keys()
|
||||
)
|
||||
kwargs["findings_cache"]["ens_only"] = ["ens-finding"]
|
||||
|
||||
mock_threatscore.side_effect = _threatscore_side_effect
|
||||
mock_ens.side_effect = _ens_side_effect
|
||||
|
||||
report_mod.generate_compliance_reports(
|
||||
tenant_id=str(uuid.uuid4()),
|
||||
scan_id=str(uuid.uuid4()),
|
||||
provider_id=str(uuid.uuid4()),
|
||||
generate_threatscore=True,
|
||||
generate_ens=True,
|
||||
generate_nis2=False,
|
||||
generate_csa=False,
|
||||
generate_cis=False,
|
||||
)
|
||||
|
||||
# ``tsc_only`` was exclusive to ThreatScore → evicted before ENS ran.
|
||||
# ``shared`` is still pending for ENS → must remain.
|
||||
assert "tsc_only" not in observed_state["cache_keys_when_ens_runs"], (
|
||||
"tsc_only should have been evicted before ENS ran"
|
||||
)
|
||||
assert "shared" in observed_state["cache_keys_when_ens_runs"], (
|
||||
"shared must remain in cache because ENS still needs it"
|
||||
)
|
||||
|
||||
@patch("api.utils.initialize_prowler_provider")
|
||||
@patch("tasks.jobs.report.rmtree")
|
||||
@patch("tasks.jobs.report._upload_to_s3")
|
||||
@patch("tasks.jobs.report.generate_cis_report")
|
||||
@patch("tasks.jobs.report.generate_csa_report")
|
||||
@patch("tasks.jobs.report.generate_nis2_report")
|
||||
@patch("tasks.jobs.report.generate_ens_report")
|
||||
@patch("tasks.jobs.report.generate_threatscore_report")
|
||||
@patch("tasks.jobs.report._generate_compliance_output_directory")
|
||||
@patch("tasks.jobs.report._aggregate_requirement_statistics_from_database")
|
||||
@patch("tasks.jobs.report.Compliance.get_bulk")
|
||||
@patch("tasks.jobs.report.Provider.objects.get")
|
||||
@patch("tasks.jobs.report.ScanSummary.objects.filter")
|
||||
def test_prowler_provider_initialized_once(
|
||||
self,
|
||||
mock_scan_summary_filter,
|
||||
mock_provider_get,
|
||||
mock_get_bulk,
|
||||
mock_aggregate_stats,
|
||||
mock_generate_output_dir,
|
||||
mock_threatscore,
|
||||
mock_ens,
|
||||
mock_nis2,
|
||||
mock_csa,
|
||||
mock_cis,
|
||||
mock_upload_to_s3,
|
||||
mock_rmtree,
|
||||
mock_init_provider,
|
||||
):
|
||||
"""``initialize_prowler_provider`` must be called exactly once for
|
||||
the whole batch (PROWLER-1733). Previously each generator re-init'd
|
||||
the SDK provider in ``_load_compliance_data`` → 5 inits per scan.
|
||||
"""
|
||||
mock_scan_summary_filter.return_value.exists.return_value = True
|
||||
mock_provider_get.return_value = Mock(uid="provider-uid", provider="aws")
|
||||
# CIS variant discovery needs at least one cis_* key.
|
||||
mock_get_bulk.return_value = {"cis_6.0_aws": Mock()}
|
||||
mock_aggregate_stats.return_value = {}
|
||||
mock_generate_output_dir.return_value = "/tmp/tenant/scan/x/prowler-out"
|
||||
mock_upload_to_s3.return_value = "s3://bucket/tenant/scan/x/report.pdf"
|
||||
mock_init_provider.return_value = Mock(name="prowler_provider")
|
||||
|
||||
generate_compliance_reports(
|
||||
tenant_id=str(uuid.uuid4()),
|
||||
scan_id=str(uuid.uuid4()),
|
||||
provider_id=str(uuid.uuid4()),
|
||||
generate_threatscore=True,
|
||||
generate_ens=True,
|
||||
generate_nis2=True,
|
||||
generate_csa=True,
|
||||
generate_cis=True,
|
||||
)
|
||||
|
||||
# All 5 wrappers were invoked once each…
|
||||
mock_threatscore.assert_called_once()
|
||||
mock_ens.assert_called_once()
|
||||
mock_nis2.assert_called_once()
|
||||
mock_csa.assert_called_once()
|
||||
mock_cis.assert_called_once()
|
||||
# …but the SDK provider was initialized only once.
|
||||
assert mock_init_provider.call_count == 1, (
|
||||
f"expected 1 init, got {mock_init_provider.call_count} "
|
||||
f"(prowler_provider must be shared across reports)"
|
||||
)
|
||||
|
||||
# The shared instance must reach every wrapper as kwargs.
|
||||
shared = mock_init_provider.return_value
|
||||
for mock_wrapper in (
|
||||
mock_threatscore,
|
||||
mock_ens,
|
||||
mock_nis2,
|
||||
mock_csa,
|
||||
mock_cis,
|
||||
):
|
||||
_, call_kwargs = mock_wrapper.call_args
|
||||
assert call_kwargs.get("prowler_provider") is shared
|
||||
|
||||
@patch("tasks.jobs.report.rmtree")
|
||||
@patch("tasks.jobs.report._upload_to_s3")
|
||||
@patch("tasks.jobs.report.generate_threatscore_report")
|
||||
|
||||
@@ -1269,48 +1269,6 @@ class TestComponentEdgeCases:
|
||||
# Should be a LongTable for large datasets
|
||||
assert isinstance(table, LongTable)
|
||||
|
||||
def test_zebra_uses_rowbackgrounds_not_per_row_background(self, monkeypatch):
|
||||
"""The styles list must contain exactly one ROWBACKGROUNDS entry
|
||||
regardless of row count, never N per-row BACKGROUND entries.
|
||||
"""
|
||||
captured: dict = {}
|
||||
|
||||
# Capture the list passed to TableStyle. create_data_table builds a
|
||||
# list of style tuples and wraps it in a TableStyle exactly once;
|
||||
# by patching TableStyle we intercept that list.
|
||||
import tasks.jobs.reports.components as comp_mod
|
||||
|
||||
original_table_style = comp_mod.TableStyle
|
||||
|
||||
def _capture_table_style(style_list):
|
||||
captured["styles"] = list(style_list)
|
||||
return original_table_style(style_list)
|
||||
|
||||
monkeypatch.setattr(comp_mod, "TableStyle", _capture_table_style)
|
||||
|
||||
data = [{"name": f"Item {i}"} for i in range(60)]
|
||||
columns = [ColumnConfig("Name", 2 * inch, "name")]
|
||||
comp_mod.create_data_table(data, columns, alternate_rows=True)
|
||||
|
||||
styles = captured["styles"]
|
||||
# Count by command name.
|
||||
names = [s[0] for s in styles if isinstance(s, tuple) and s]
|
||||
# Exactly one ROWBACKGROUNDS entry.
|
||||
assert names.count("ROWBACKGROUNDS") == 1
|
||||
# Zero per-row BACKGROUND entries on data rows. (The header row
|
||||
# BACKGROUND command is intentional and lives at coords (0,0)/(-1,0).)
|
||||
data_row_bg = [
|
||||
s
|
||||
for s in styles
|
||||
if isinstance(s, tuple)
|
||||
and s[0] == "BACKGROUND"
|
||||
and not (s[1] == (0, 0) and s[2] == (-1, 0))
|
||||
]
|
||||
assert data_row_bg == [], (
|
||||
f"expected no per-row BACKGROUND entries on data rows; "
|
||||
f"got {len(data_row_bg)}"
|
||||
)
|
||||
|
||||
def test_create_risk_component_zero_values(self):
|
||||
"""Test risk component with zero values."""
|
||||
component = create_risk_component(risk_level=0, weight=0, score=0)
|
||||
@@ -1386,194 +1344,3 @@ class TestFrameworkConfigEdgeCases:
|
||||
assert get_framework_config("my_custom_threatscore_compliance") is not None
|
||||
assert get_framework_config("ens_something_else") is not None
|
||||
assert get_framework_config("nis2_gcp") is not None
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# Findings Table Chunking Tests (PROWLER-1733)
|
||||
# =============================================================================
|
||||
#
|
||||
# These tests guard the OOM-prevention behaviour added in PROWLER-1733:
|
||||
# ``_create_findings_tables`` must split a list of findings into multiple
|
||||
# small sub-tables instead of producing one giant Table, which would force
|
||||
# ReportLab to resolve layout for all rows at once and OOM the worker on
|
||||
# scans with thousands of findings per check.
|
||||
|
||||
|
||||
class _DummyMetadata:
|
||||
"""Lightweight stand-in for FindingOutput.metadata used in chunking tests."""
|
||||
|
||||
def __init__(self, check_title: str = "Title", severity: str = "high"):
|
||||
self.CheckTitle = check_title
|
||||
self.Severity = severity
|
||||
|
||||
|
||||
class _DummyFinding:
|
||||
"""Lightweight stand-in for FindingOutput used in chunking tests.
|
||||
|
||||
The chunking code only reads a small set of attributes via ``getattr``,
|
||||
so a duck-typed object is enough and lets the tests run without touching
|
||||
the DB or pydantic deserialisation.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
check_id: str = "aws_check",
|
||||
resource_name: str = "res-1",
|
||||
resource_uid: str = "",
|
||||
status: str = "FAIL",
|
||||
region: str = "us-east-1",
|
||||
with_metadata: bool = True,
|
||||
):
|
||||
self.check_id = check_id
|
||||
self.resource_name = resource_name
|
||||
self.resource_uid = resource_uid
|
||||
self.status = status
|
||||
self.region = region
|
||||
if with_metadata:
|
||||
self.metadata = _DummyMetadata()
|
||||
else:
|
||||
self.metadata = None
|
||||
|
||||
|
||||
def _make_concrete_generator():
|
||||
"""Return a minimal concrete subclass of BaseComplianceReportGenerator."""
|
||||
|
||||
class _Concrete(BaseComplianceReportGenerator):
|
||||
def create_executive_summary(self, data):
|
||||
return []
|
||||
|
||||
def create_charts_section(self, data):
|
||||
return []
|
||||
|
||||
def create_requirements_index(self, data):
|
||||
return []
|
||||
|
||||
return _Concrete(FrameworkConfig(name="test", display_name="Test"))
|
||||
|
||||
|
||||
class TestFindingsTableChunking:
|
||||
"""Tests for ``_create_findings_tables`` (PROWLER-1733)."""
|
||||
|
||||
def test_chunking_produces_expected_number_of_subtables(self):
|
||||
"""5000 findings @ chunk_size=300 → 17 sub-tables + 16 spacers."""
|
||||
generator = _make_concrete_generator()
|
||||
findings = [_DummyFinding(check_id="c1") for _ in range(5000)]
|
||||
|
||||
flowables = generator._create_findings_tables(findings, chunk_size=300)
|
||||
|
||||
tables = [f for f in flowables if isinstance(f, (Table, LongTable))]
|
||||
spacers = [f for f in flowables if isinstance(f, Spacer)]
|
||||
# ceil(5000 / 300) == 17
|
||||
assert len(tables) == 17
|
||||
# Spacer between every pair of contiguous tables, not after the last
|
||||
assert len(spacers) == 16
|
||||
|
||||
def test_chunk_size_param_overrides_default(self):
|
||||
"""250 findings @ chunk_size=100 → 3 sub-tables."""
|
||||
generator = _make_concrete_generator()
|
||||
findings = [_DummyFinding(check_id="c2") for _ in range(250)]
|
||||
|
||||
flowables = generator._create_findings_tables(findings, chunk_size=100)
|
||||
tables = [f for f in flowables if isinstance(f, (Table, LongTable))]
|
||||
assert len(tables) == 3
|
||||
|
||||
def test_empty_findings_returns_empty_list(self):
|
||||
"""No findings → no flowables. Callers can extend(...) safely."""
|
||||
generator = _make_concrete_generator()
|
||||
assert generator._create_findings_tables([]) == []
|
||||
|
||||
def test_single_chunk_has_no_spacer(self):
|
||||
"""A single sub-table must not emit a trailing spacer."""
|
||||
generator = _make_concrete_generator()
|
||||
findings = [_DummyFinding(check_id="c3") for _ in range(10)]
|
||||
|
||||
flowables = generator._create_findings_tables(findings, chunk_size=300)
|
||||
assert len(flowables) == 1
|
||||
assert isinstance(flowables[0], (Table, LongTable))
|
||||
|
||||
def test_malformed_finding_is_skipped(self):
|
||||
"""A broken finding must not abort the report; it is logged and skipped."""
|
||||
generator = _make_concrete_generator()
|
||||
|
||||
class _Broken:
|
||||
# No attributes at all; getattr() defaults will mostly cope, but
|
||||
# we force an explicit error by making the metadata attribute
|
||||
# itself raise on access.
|
||||
@property
|
||||
def metadata(self):
|
||||
raise RuntimeError("boom")
|
||||
|
||||
check_id = "broken"
|
||||
|
||||
findings = [
|
||||
_DummyFinding(check_id="c4"),
|
||||
_Broken(),
|
||||
_DummyFinding(check_id="c4"),
|
||||
]
|
||||
flowables = generator._create_findings_tables(findings, chunk_size=300)
|
||||
# Two good rows → one sub-table containing them; the broken one is
|
||||
# logged and dropped, not propagated.
|
||||
tables = [f for f in flowables if isinstance(f, (Table, LongTable))]
|
||||
assert len(tables) == 1
|
||||
|
||||
def test_create_findings_table_alias_returns_first_chunk(self):
|
||||
"""The deprecated alias must keep returning a single Table flowable."""
|
||||
generator = _make_concrete_generator()
|
||||
findings = [_DummyFinding(check_id="c5") for _ in range(700)]
|
||||
|
||||
first = generator._create_findings_table(findings)
|
||||
assert isinstance(first, (Table, LongTable))
|
||||
|
||||
def test_create_findings_table_alias_empty(self):
|
||||
"""Alias on empty input returns an empty (header-only) Table, not None."""
|
||||
generator = _make_concrete_generator()
|
||||
result = generator._create_findings_table([])
|
||||
# The legacy alias never returned None; an empty header-only table
|
||||
# is a strict superset of that contract.
|
||||
assert isinstance(result, (Table, LongTable))
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# Logging Context Manager Tests (PROWLER-1733)
|
||||
# =============================================================================
|
||||
|
||||
|
||||
class TestLogPhaseContextManager:
|
||||
"""Tests for ``_log_phase`` (PROWLER-1733).
|
||||
|
||||
The context manager emits structured ``phase_start`` / ``phase_end``
|
||||
logs with ``scan_id``, ``framework`` and ``elapsed_s``, so Datadog/
|
||||
CloudWatch queries can pivot by scan and find the slow section.
|
||||
"""
|
||||
|
||||
def test_emits_start_and_end_with_elapsed_and_rss(self, caplog):
|
||||
from tasks.jobs.reports.base import _log_phase
|
||||
|
||||
caplog.set_level("INFO", logger="tasks.jobs.reports.base")
|
||||
with _log_phase("unit_test_phase", scan_id="s-1", framework="Test FW"):
|
||||
pass
|
||||
|
||||
messages = [r.getMessage() for r in caplog.records]
|
||||
starts = [m for m in messages if "phase_start" in m]
|
||||
ends = [m for m in messages if "phase_end" in m]
|
||||
|
||||
assert len(starts) == 1 and len(ends) == 1
|
||||
assert "phase=unit_test_phase" in starts[0]
|
||||
assert "scan_id=s-1" in starts[0]
|
||||
assert "framework=Test FW" in starts[0]
|
||||
assert "elapsed_s=" in ends[0]
|
||||
assert "rss_kb=" in ends[0]
|
||||
assert "delta_rss_kb=" in ends[0]
|
||||
|
||||
def test_failure_logs_phase_failed_and_reraises(self, caplog):
|
||||
from tasks.jobs.reports.base import _log_phase
|
||||
|
||||
caplog.set_level("INFO", logger="tasks.jobs.reports.base")
|
||||
with pytest.raises(RuntimeError, match="boom"):
|
||||
with _log_phase("failing_phase", scan_id="s-2", framework="FW"):
|
||||
raise RuntimeError("boom")
|
||||
|
||||
messages = [r.getMessage() for r in caplog.records]
|
||||
assert any("phase_failed" in m and "failing_phase" in m for m in messages)
|
||||
# No phase_end on the failure path.
|
||||
assert not any("phase_end" in m for m in messages)
|
||||
|
||||
@@ -3853,7 +3853,6 @@ class TestAggregateAttackSurface:
|
||||
in result["privilege-escalation"]
|
||||
)
|
||||
assert "ec2_instance_imdsv2_enabled" in result["ec2-imdsv1"]
|
||||
assert "ec2_instance_account_imdsv2_enabled" in result["ec2-imdsv1"]
|
||||
|
||||
@patch("tasks.jobs.scan.AttackSurfaceOverview.objects.bulk_create")
|
||||
@patch("tasks.jobs.scan.Finding.all_objects.filter")
|
||||
|
||||
@@ -1,335 +0,0 @@
|
||||
# AWS Inventory Connectivity Graph
|
||||
|
||||
A community-contributed tool that generates interactive connectivity graphs from Prowler AWS scans, visualizing relationships between AWS resources with zero additional API calls.
|
||||
|
||||
## Overview
|
||||
|
||||
This tool extends Prowler by producing two artifacts after a scan completes:
|
||||
|
||||
- **`<output>.inventory.json`** – Machine-readable graph (nodes + edges)
|
||||
- **`<output>.inventory.html`** – Interactive D3.js force-directed visualization
|
||||
|
||||
### Why?
|
||||
|
||||
Prowler's existing outputs (CSV, ASFF, OCSF, HTML) report individual check findings but provide no cross-service topology view. Security engineers need to understand **how** resources are connected (which Lambda functions sit inside which VPC, which IAM roles can be assumed by which services, which event sources trigger which functions) before they can reason about attack paths, blast radius, or lateral-movement risk.
|
||||
|
||||
This tool fills that gap by building a connectivity graph from the service clients that are already loaded during a Prowler scan.
|
||||
|
||||
## Features
|
||||
|
||||
### Supported AWS Services
|
||||
|
||||
The tool currently extracts connectivity information from:
|
||||
|
||||
- **Lambda** – Functions, VPC/subnet/SG edges, event source mappings, layers, DLQ, KMS
|
||||
- **EC2** – Instances, security groups, subnet/VPC edges
|
||||
- **VPC** – VPCs, subnets, peering connections
|
||||
- **RDS** – DB instances, VPC/SG/cluster/KMS edges
|
||||
- **ELBv2** – ALB/NLB load balancers, SG and VPC edges
|
||||
- **S3** – Buckets, replication targets, logging buckets, KMS keys
|
||||
- **IAM** – Roles, trust-relationship edges (who can assume what)
|
||||
|
||||
### Edge Semantic Types
|
||||
|
||||
Edges are typed for downstream filtering and attack-path analysis, as shown in the sketch after this list:
|
||||
|
||||
- `network` – Resources share a network path (VPC/subnet/SG)
|
||||
- `iam` – IAM trust or permission relationship
|
||||
- `triggers` – One resource can invoke another (event source → Lambda)
|
||||
- `data_flow` – Data is written/read (Lambda → SQS dead-letter queue)
|
||||
- `depends_on` – Soft dependency (Lambda layer, subnet belongs to VPC)
|
||||
- `routes_to` – Traffic routing (LB → target)
|
||||
- `replicates_to` – S3 replication
|
||||
- `encrypts` – KMS key encrypts the resource
|
||||
- `logs_to` – Logging relationship
|
||||
|
||||
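Since every edge carries an `edge_type`, downstream tooling can slice the graph by relationship class before importing it anywhere else. A minimal sketch that keeps only IAM and trigger edges from the JSON output (the file name is assumed from the example workflow below):

```python
import json

# Load the machine-readable graph produced next to the HTML file.
with open("output/my-aws-inventory.inventory.json") as f:
    graph = json.load(f)

# Keep only the relationship classes relevant to attack-path analysis.
interesting = {"iam", "triggers"}
edges = [e for e in graph["edges"] if e["edge_type"] in interesting]
print(f"{len(edges)} of {graph['stats']['edge_count']} edges are iam/triggers")
```
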
### Interactive HTML Graph Features
|
||||
|
||||
- Force-directed layout with drag-and-drop node pinning
|
||||
- Zoom / pan (mouse wheel + click-drag on background)
|
||||
- Per-service color-coded nodes with a legend
|
||||
- Hover tooltips showing ARN + all metadata properties
|
||||
- Service filter dropdown (show only Lambda, EC2, RDS, etc.)
|
||||
- Adjustable link-distance and charge-strength physics sliders
|
||||
- Edge labels on every arrow
|
||||
|
||||
## Installation
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- Python 3.9.1 or higher
|
||||
- Prowler installed and configured (see [Prowler documentation](https://docs.prowler.com/))
|
||||
|
||||
### Setup
|
||||
|
||||
1. Clone or download this directory to your local machine
|
||||
2. Ensure Prowler is installed and working
|
||||
3. No additional dependencies required beyond Prowler's existing requirements
|
||||
|
||||
## Usage
|
||||
|
||||
### Basic Usage
|
||||
|
||||
Run Prowler with your desired checks, then use the inventory graph script:
|
||||
|
||||
```bash
|
||||
# Run Prowler scan (example)
|
||||
prowler aws --output-formats csv
|
||||
|
||||
# Generate inventory graph from the scan
|
||||
python contrib/inventory-graph/inventory_graph.py --output-directory ./output
|
||||
```
|
||||
|
||||
### Command-Line Options
|
||||
|
||||
```bash
|
||||
python contrib/inventory-graph/inventory_graph.py [OPTIONS]
|
||||
|
||||
Options:
|
||||
--output-directory DIR Directory to save output files (default: ./output)
|
||||
--output-filename NAME Base filename without extension (default: prowler-inventory-<timestamp>)
|
||||
--help Show this help message and exit
|
||||
```
|
||||
|
||||
### Example Workflow
|
||||
|
||||
```bash
|
||||
# 1. Run a Prowler scan on your AWS account
|
||||
prowler aws --profile my-aws-profile --output-formats csv html
|
||||
|
||||
# 2. Generate the inventory graph
|
||||
python contrib/inventory-graph/inventory_graph.py \
|
||||
--output-directory ./output \
|
||||
--output-filename my-aws-inventory
|
||||
|
||||
# 3. Open the HTML file in your browser
|
||||
open output/my-aws-inventory.inventory.html
|
||||
```
|
||||
|
||||
### Integration with Prowler Scans
|
||||
|
||||
The tool reads from already-loaded AWS service clients in memory (via `sys.modules`; see the lookup sketch after this list). This means:
|
||||
|
||||
- **Zero extra AWS API calls** – Uses data already collected during the Prowler scan
|
||||
- **Graceful degradation** – Services not scanned are silently skipped
|
||||
- **Flexible** – Works with any subset of Prowler checks
|
||||
|
||||
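Conceptually, the builder probes `sys.modules` for each registered service and skips anything that was never imported. A simplified sketch (the real lookup lives in `lib/graph_builder.py`; the module path and attribute name match the smoke test below):

```python
import sys

def get_loaded_client(module_path: str, attr: str):
    """Return an already-loaded Prowler service client, or None when the
    service was not part of the scan (graceful degradation)."""
    module = sys.modules.get(module_path)  # look up only; never imports
    return getattr(module, attr, None) if module else None

awslambda = get_loaded_client(
    "prowler.providers.aws.services.awslambda.awslambda_client",
    "awslambda_client",
)
if awslambda is not None:
    print(f"{len(awslambda.functions)} Lambda functions already in memory")
```
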
## Output Files
|
||||
|
||||
### JSON Output (`*.inventory.json`)
|
||||
|
||||
Machine-readable graph structure:
|
||||
|
||||
```json
|
||||
{
|
||||
"generated_at": "2026-03-19T12:34:56Z",
|
||||
"nodes": [
|
||||
{
|
||||
"id": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
|
||||
"type": "lambda_function",
|
||||
"name": "my-function",
|
||||
"service": "lambda",
|
||||
"region": "us-east-1",
|
||||
"account_id": "123456789012",
|
||||
"properties": {
|
||||
"runtime": "python3.9",
|
||||
"vpc_id": "vpc-abc123"
|
||||
}
|
||||
}
|
||||
],
|
||||
"edges": [
|
||||
{
|
||||
"source_id": "arn:aws:lambda:...",
|
||||
"target_id": "arn:aws:ec2:...:vpc/vpc-abc123",
|
||||
"edge_type": "network",
|
||||
"label": "in-vpc"
|
||||
}
|
||||
],
|
||||
"stats": {
|
||||
"node_count": 42,
|
||||
"edge_count": 87
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### HTML Output (`*.inventory.html`)
|
||||
|
||||
Interactive visualization that opens in any modern browser. No server or build step required; D3.js is loaded from a CDN, so the first load needs internet access.
|
||||
|
||||
## Architecture
|
||||
|
||||
### Design Decisions
|
||||
|
||||
| Decision | Rationale |
|
||||
|----------|-----------|
|
||||
| **Read from sys.modules** | Zero extra AWS API calls; services not scanned are silently skipped |
|
||||
| **Self-contained HTML** | D3.js v7 via CDN; no server, no build step; opens in any browser |
|
||||
| **One extractor per service** | Each extractor is independently testable; adding a new service = one new file + one line in the registry |
|
||||
| **Typed edges** | Semantic types allow downstream consumers (attack-path tools, Neo4j import) to filter by relationship class |
|
||||
|
||||
### Project Structure
|
||||
|
||||
```
|
||||
contrib/inventory-graph/
|
||||
├── README.md # This file
|
||||
├── inventory_graph.py # Main entry point script
|
||||
├── lib/
|
||||
│ ├── __init__.py
|
||||
│ ├── models.py # ResourceNode, ResourceEdge, ConnectivityGraph dataclasses
|
||||
│ ├── graph_builder.py # Reads loaded service clients from sys.modules
|
||||
│ ├── inventory_output.py # write_json(), write_html()
|
||||
│ └── extractors/
|
||||
│ ├── __init__.py
|
||||
│ ├── lambda_extractor.py # Lambda functions → VPC/subnet/SG/event-sources/layers/DLQ/KMS
|
||||
│ ├── ec2_extractor.py # EC2 instances + security groups → subnet/VPC
|
||||
│ ├── vpc_extractor.py # VPCs, subnets, peering connections
|
||||
│ ├── rds_extractor.py # RDS instances → VPC/SG/cluster/KMS
|
||||
│ ├── elbv2_extractor.py # ALB/NLB load balancers → SG/VPC
|
||||
│ ├── s3_extractor.py # S3 buckets → replication targets/logging buckets/KMS keys
|
||||
│ └── iam_extractor.py # IAM roles + trust-relationship edges
|
||||
└── examples/
|
||||
└── sample_output.html # Example output (optional)
|
||||
```
|
||||
|
||||
## Testing
|
||||
|
||||
### Smoke Test (No AWS Credentials Needed)
|
||||
|
||||
```python
|
||||
import sys
|
||||
from unittest.mock import MagicMock
|
||||
|
||||
# Wire a fake Lambda client
|
||||
mock_module = MagicMock()
|
||||
mock_fn = MagicMock()
|
||||
mock_fn.arn = "arn:aws:lambda:us-east-1:123:function:test"
|
||||
mock_fn.name = "test"
|
||||
mock_fn.region = "us-east-1"
|
||||
mock_fn.vpc_id = "vpc-abc"
|
||||
mock_fn.security_groups = ["sg-111"]
|
||||
mock_fn.subnet_ids = {"subnet-aaa"}
|
||||
mock_fn.environment = None
|
||||
mock_fn.kms_key_arn = None
|
||||
mock_fn.layers = []
|
||||
mock_fn.dead_letter_config = None
|
||||
mock_fn.event_source_mappings = []
|
||||
mock_module.awslambda_client.functions = {mock_fn.arn: mock_fn}
|
||||
mock_module.awslambda_client.audited_account = "123"
|
||||
sys.modules["prowler.providers.aws.services.awslambda.awslambda_client"] = mock_module
|
||||
|
||||
sys.path.insert(0, "contrib/inventory-graph")  # hyphenated dir name is not importable as a package

from lib.graph_builder import build_graph
from lib.inventory_output import write_json, write_html
|
||||
|
||||
graph = build_graph()
|
||||
write_json(graph, "/tmp/test.inventory.json")
|
||||
write_html(graph, "/tmp/test.inventory.html")
|
||||
# Open /tmp/test.inventory.html in a browser
|
||||
```
|
||||
|
||||
## Extending
|
||||
|
||||
### Adding a New Service
|
||||
|
||||
1. Create a new extractor file in `lib/extractors/` (e.g., `dynamodb_extractor.py`)
|
||||
2. Implement the `extract(client)` function that returns `(nodes, edges)`
|
||||
3. Register it in `lib/graph_builder.py` in the `_SERVICE_REGISTRY` tuple (see the registration sketch below)
|
||||
|
||||
Example extractor template:
|
||||
|
||||
```python
|
||||
from typing import List, Tuple
|
||||
from lib.models import ResourceNode, ResourceEdge  # dataclasses defined in lib/models.py
|
||||
|
||||
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
|
||||
"""Extract DynamoDB tables and their relationships."""
|
||||
nodes = []
|
||||
edges = []
|
||||
|
||||
for table in client.tables:
|
||||
nodes.append(
|
||||
ResourceNode(
|
||||
id=table.arn,
|
||||
type="dynamodb_table",
|
||||
name=table.name,
|
||||
service="dynamodb",
|
||||
region=table.region,
|
||||
account_id=client.audited_account,
|
||||
properties={"billing_mode": table.billing_mode}
|
||||
)
|
||||
)
|
||||
|
||||
# Add edges for KMS encryption, streams, etc.
|
||||
if table.kms_key_arn:
|
||||
edges.append(
|
||||
ResourceEdge(
|
||||
source_id=table.kms_key_arn,
|
||||
target_id=table.arn,
|
||||
edge_type="encrypts",
|
||||
label="encrypts"
|
||||
)
|
||||
)
|
||||
|
||||
return nodes, edges
|
||||
```
|
||||
|
||||
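For step 3, the registration could look like the following. This is an illustrative sketch: the exact shape of `_SERVICE_REGISTRY` is defined in `lib/graph_builder.py`, and the module path, attribute name, and extractor below are assumptions for a hypothetical DynamoDB extractor:

```python
# lib/graph_builder.py (sketch): one entry per supported service.
from lib.extractors import dynamodb_extractor  # hypothetical new extractor

_SERVICE_REGISTRY = (
    # ... existing entries ...
    (
        "prowler.providers.aws.services.dynamodb.dynamodb_client",  # module probed in sys.modules
        "dynamodb_client",            # client attribute on that module
        dynamodb_extractor.extract,   # extractor returning (nodes, edges)
    ),
)
```
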
## Troubleshooting
|
||||
|
||||
### No nodes discovered
|
||||
|
||||
**Problem:** The tool reports "no nodes discovered" after running.
|
||||
|
||||
**Solution:** Ensure you've run a Prowler scan first. The tool reads from in-memory service clients loaded during the scan. If no services were scanned, no nodes will be discovered.

### Missing services in the graph

**Problem:** Some AWS services are not appearing in the graph.

**Solution:** The tool only includes services that have been scanned by Prowler. Run Prowler with the services you want to include, or run without service filters to scan all available services.

### HTML file doesn't display properly

**Problem:** The HTML visualization doesn't load or shows errors.

**Solution:**
- Ensure you're opening the file in a modern browser (Chrome, Firefox, Safari, Edge)
- Check your browser's console for JavaScript errors
- Verify the file was generated completely (check that the file size is > 0)
- The HTML requires internet access to load D3.js from a CDN

## Roadmap

Potential future enhancements:

- [ ] Support for additional AWS services (DynamoDB, SQS, SNS, etc.)
- [ ] Export to Neo4j / Cartography format
- [ ] Attack path analysis integration
- [ ] Multi-account/multi-region aggregation
- [ ] Custom edge type filtering in the HTML UI
- [ ] Graph diff between two scans

## Contributing

This is a community contribution. If you'd like to enhance it:

1. Fork the Prowler repository
2. Make your changes in `contrib/inventory-graph/`
3. Test thoroughly
4. Submit a pull request with a clear description

## License

This tool is part of the Prowler project and is licensed under the Apache License 2.0.

## Credits

- **Author:** [@sandiyochristan](https://github.com/sandiyochristan)
- **Related PR:** [#10382](https://github.com/prowler-cloud/prowler/pull/10382)
- **Prowler Project:** [prowler-cloud/prowler](https://github.com/prowler-cloud/prowler)

## Support

For issues or questions:

- Open an issue in the [Prowler repository](https://github.com/prowler-cloud/prowler/issues)
- Join the [Prowler Community Slack](https://goto.prowler.com/slack)
- Tag your issue with `contrib:inventory-graph`
@@ -1,181 +0,0 @@
#!/usr/bin/env python3
"""
Example: Generate AWS Inventory Graph with Mock Data

This example demonstrates how to use the inventory graph tool with mock AWS data.
No AWS credentials required.
"""

import sys
from pathlib import Path
from unittest.mock import MagicMock

# Add parent directory to path
sys.path.insert(0, str(Path(__file__).parent.parent))

from lib.graph_builder import build_graph
from lib.inventory_output import write_json, write_html


def create_mock_lambda_client():
    """Create a mock Lambda client with sample data."""
    mock_module = MagicMock()

    # Create a mock Lambda function
    mock_fn = MagicMock()
    mock_fn.arn = "arn:aws:lambda:us-east-1:123456789012:function:my-test-function"
    mock_fn.name = "my-test-function"
    mock_fn.region = "us-east-1"
    mock_fn.vpc_id = "vpc-abc123"
    mock_fn.security_groups = ["sg-111222"]
    mock_fn.subnet_ids = {"subnet-aaa111", "subnet-bbb222"}
    mock_fn.environment = {"Variables": {"ENV": "production"}}
    mock_fn.kms_key_arn = (
        "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
    )
    mock_fn.layers = []
    mock_fn.dead_letter_config = None
    mock_fn.event_source_mappings = []

    mock_module.awslambda_client.functions = {mock_fn.arn: mock_fn}
    mock_module.awslambda_client.audited_account = "123456789012"

    return mock_module


def create_mock_ec2_client():
    """Create a mock EC2 client with sample data."""
    mock_module = MagicMock()

    # Create a mock EC2 instance
    mock_instance = MagicMock()
    mock_instance.arn = (
        "arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0"
    )
    mock_instance.id = "i-1234567890abcdef0"
    mock_instance.region = "us-east-1"
    mock_instance.vpc_id = "vpc-abc123"
    mock_instance.subnet_id = "subnet-aaa111"
    # The EC2 extractor treats these entries as plain security-group IDs
    mock_instance.security_groups = ["sg-111222"]
    mock_instance.state = "running"
    mock_instance.type = "t3.micro"
    mock_instance.tags = [{"Key": "Name", "Value": "test-instance"}]

    # Create a mock security group
    mock_sg = MagicMock()
    mock_sg.arn = "arn:aws:ec2:us-east-1:123456789012:security-group/sg-111222"
    mock_sg.id = "sg-111222"
    mock_sg.name = "test-security-group"
    mock_sg.region = "us-east-1"
    mock_sg.vpc_id = "vpc-abc123"

    mock_module.ec2_client.instances = [mock_instance]
    # The extractor iterates security_groups.values(), so use a dict
    mock_module.ec2_client.security_groups = {mock_sg.id: mock_sg}
    mock_module.ec2_client.audited_account = "123456789012"

    return mock_module


def create_mock_vpc_client():
    """Create a mock VPC client with sample data."""
    mock_module = MagicMock()

    # Create a mock VPC
    mock_vpc = MagicMock()
    mock_vpc.arn = "arn:aws:ec2:us-east-1:123456789012:vpc/vpc-abc123"
    mock_vpc.id = "vpc-abc123"
    mock_vpc.region = "us-east-1"
    mock_vpc.cidr_block = "10.0.0.0/16"
    mock_vpc.tags = [{"Key": "Name", "Value": "test-vpc"}]

    # Create mock subnets
    mock_subnet1 = MagicMock()
    mock_subnet1.arn = "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-aaa111"
    mock_subnet1.id = "subnet-aaa111"
    mock_subnet1.region = "us-east-1"
    mock_subnet1.vpc_id = "vpc-abc123"
    mock_subnet1.cidr_block = "10.0.1.0/24"
    mock_subnet1.availability_zone = "us-east-1a"

    mock_subnet2 = MagicMock()
    mock_subnet2.arn = "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-bbb222"
    mock_subnet2.id = "subnet-bbb222"
    mock_subnet2.region = "us-east-1"
    mock_subnet2.vpc_id = "vpc-abc123"
    mock_subnet2.cidr_block = "10.0.2.0/24"
    mock_subnet2.availability_zone = "us-east-1b"

    # The VPC extractor iterates vpcs.values() and vpc_subnets.values()
    mock_module.vpc_client.vpcs = {mock_vpc.id: mock_vpc}
    mock_module.vpc_client.vpc_subnets = {
        mock_subnet1.id: mock_subnet1,
        mock_subnet2.id: mock_subnet2,
    }
    mock_module.vpc_client.vpc_peering_connections = {}
    mock_module.vpc_client.audited_account = "123456789012"

    return mock_module


def main():
    """Main function to demonstrate the inventory graph generation."""
    print("=" * 70)
    print("AWS Inventory Graph - Mock Data Example")
    print("=" * 70)
    print()

    # Create mock clients and inject them into sys.modules
    print("Creating mock AWS service clients...")
    sys.modules["prowler.providers.aws.services.awslambda.awslambda_client"] = (
        create_mock_lambda_client()
    )
    sys.modules["prowler.providers.aws.services.ec2.ec2_client"] = (
        create_mock_ec2_client()
    )
    sys.modules["prowler.providers.aws.services.vpc.vpc_client"] = (
        create_mock_vpc_client()
    )
    print("✓ Mock clients created")
    print()

    # Build the graph
    print("Building connectivity graph...")
    graph = build_graph()
    print(f"✓ Graph built: {len(graph.nodes)} nodes, {len(graph.edges)} edges")
    print()

    # Display discovered nodes
    print("Discovered nodes:")
    for node in graph.nodes:
        print(f"  - {node.type}: {node.name} ({node.region})")
    print()

    # Display discovered edges
    print("Discovered edges:")
    for edge in graph.edges:
        source_node = next((n for n in graph.nodes if n.id == edge.source_id), None)
        target_node = next((n for n in graph.nodes if n.id == edge.target_id), None)
        source_name = source_node.name if source_node else edge.source_id
        target_name = target_node.name if target_node else edge.target_id
        print(f"  - {source_name} --[{edge.edge_type}]--> {target_name}")
    print()

    # Write outputs
    output_dir = Path(__file__).parent
    json_path = output_dir / "example_output.inventory.json"
    html_path = output_dir / "example_output.inventory.html"

    print("Writing output files...")
    write_json(graph, str(json_path))
    write_html(graph, str(html_path))
    print(f"✓ JSON written to: {json_path}")
    print(f"✓ HTML written to: {html_path}")
    print()

    print("=" * 70)
    print("✓ Example complete!")
    print("=" * 70)
    print()
    print("Open the HTML file to view the interactive graph:")
    print(f"  open {html_path}")
    print()


if __name__ == "__main__":
    main()
@@ -1,158 +0,0 @@
#!/usr/bin/env python3
"""
AWS Inventory Connectivity Graph Generator

A standalone tool that generates interactive connectivity graphs from Prowler AWS scans.
This tool reads from already-loaded AWS service clients in memory and produces:
- JSON graph (nodes + edges)
- Interactive HTML visualization

Usage:
    python inventory_graph.py --output-directory ./output --output-filename my-inventory

For more information, see README.md
"""

import argparse
import sys
from datetime import datetime
from pathlib import Path

# Add the contrib directory to the path so we can import the lib modules
CONTRIB_DIR = Path(__file__).parent
sys.path.insert(0, str(CONTRIB_DIR))

from lib.graph_builder import build_graph
from lib.inventory_output import write_json, write_html


def parse_arguments():
    """Parse command-line arguments."""
    parser = argparse.ArgumentParser(
        description="Generate AWS inventory connectivity graph from Prowler scan data",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Generate graph with default settings
  python inventory_graph.py

  # Specify custom output directory and filename
  python inventory_graph.py --output-directory ./my-output --output-filename aws-inventory

  # After running a Prowler scan
  prowler aws --profile my-profile
  python inventory_graph.py --output-directory ./output

For more information, see README.md
""",
    )

    parser.add_argument(
        "--output-directory",
        "-o",
        default="./output",
        help="Directory to save output files (default: ./output)",
    )

    parser.add_argument(
        "--output-filename",
        "-f",
        default=None,
        help="Base filename without extension (default: prowler-inventory-<timestamp>)",
    )

    parser.add_argument(
        "--verbose",
        "-v",
        action="store_true",
        help="Enable verbose output",
    )

    return parser.parse_args()


def main():
    """Main entry point for the inventory graph generator."""
    args = parse_arguments()

    # Set up output paths
    output_dir = Path(args.output_directory)
    output_dir.mkdir(parents=True, exist_ok=True)

    # Generate filename with timestamp if not provided
    if args.output_filename:
        base_filename = args.output_filename
    else:
        timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
        base_filename = f"prowler-inventory-{timestamp}"

    json_path = output_dir / f"{base_filename}.inventory.json"
    html_path = output_dir / f"{base_filename}.inventory.html"

    print("=" * 70)
    print("AWS Inventory Connectivity Graph Generator")
    print("=" * 70)
    print()

    # Build the graph from loaded service clients
    if args.verbose:
        print("Building connectivity graph from loaded AWS service clients...")

    graph = build_graph()

    # Check if any nodes were discovered
    if not graph.nodes:
        print("⚠️  WARNING: No nodes discovered!")
        print()
        print("This usually means:")
        print("  1. No Prowler scan has been run yet in this Python session")
        print("  2. No AWS service clients are loaded in memory")
        print()
        print("To fix this:")
        print("  1. Run a Prowler scan first: prowler aws --output-formats csv")
        print("  2. Then run this script in the same session")
        print()
        print(
            "Alternatively, integrate this tool directly into Prowler's output pipeline."
        )
        sys.exit(1)

    print(f"✓ Discovered {len(graph.nodes)} nodes and {len(graph.edges)} edges")
    print()

    # Write outputs
    if args.verbose:
        print(f"Writing JSON output to: {json_path}")
    write_json(graph, str(json_path))

    if args.verbose:
        print(f"Writing HTML output to: {html_path}")
    write_html(graph, str(html_path))

    print()
    print("=" * 70)
    print("✓ Graph generation complete!")
    print("=" * 70)
    print()
    print(f"📄 JSON: {json_path}")
    print(f"🌐 HTML: {html_path}")
    print()
    print("Open the HTML file in your browser to explore the interactive graph:")
    print(f"  open {html_path}")
    print()


if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print("\n\nInterrupted by user. Exiting...")
        sys.exit(130)
    except Exception as e:
        print(f"\n❌ Error: {e}", file=sys.stderr)
        if "--verbose" in sys.argv or "-v" in sys.argv:
            import traceback

            traceback.print_exc()
        sys.exit(1)
@@ -1,94 +0,0 @@
from typing import List, Tuple

from lib.models import ResourceEdge, ResourceNode


def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
    """
    Extract EC2 instance and security-group nodes with their edges.

    Edges produced:
    - instance → security-group [network]
    - instance → subnet         [network]
    - security-group → VPC      [network]
    """
    nodes: List[ResourceNode] = []
    edges: List[ResourceEdge] = []

    # EC2 Instances
    for instance in client.instances:
        name = instance.id
        for tag in instance.tags or []:
            if tag.get("Key") == "Name":
                name = tag["Value"]
                break

        props = {
            "instance_type": getattr(instance, "type", None),
            "state": getattr(instance, "state", None),
            "vpc_id": getattr(instance, "vpc_id", None),
            "subnet_id": getattr(instance, "subnet_id", None),
            "public_ip": getattr(instance, "public_ip_address", None),
            "private_ip": getattr(instance, "private_ip_address", None),
        }

        nodes.append(
            ResourceNode(
                id=instance.arn,
                type="ec2_instance",
                name=name,
                service="ec2",
                region=instance.region,
                account_id=client.audited_account,
                properties={k: v for k, v in props.items() if v is not None},
            )
        )

        for sg_id in instance.security_groups or []:
            edges.append(
                ResourceEdge(
                    source_id=instance.arn,
                    target_id=sg_id,
                    edge_type="network",
                    label="sg",
                )
            )

        if instance.subnet_id:
            edges.append(
                ResourceEdge(
                    source_id=instance.arn,
                    target_id=instance.subnet_id,
                    edge_type="network",
                    label="subnet",
                )
            )

    # Security Groups
    for sg in client.security_groups.values():
        name = (
            sg.name if hasattr(sg, "name") else sg.id if hasattr(sg, "id") else sg.arn
        )
        nodes.append(
            ResourceNode(
                id=sg.arn,
                type="security_group",
                name=name,
                service="ec2",
                region=sg.region,
                account_id=client.audited_account,
                properties={"vpc_id": sg.vpc_id},
            )
        )

        if sg.vpc_id:
            edges.append(
                ResourceEdge(
                    source_id=sg.arn,
                    target_id=sg.vpc_id,
                    edge_type="network",
                    label="in-vpc",
                )
            )

    return nodes, edges
@@ -1,60 +0,0 @@
from typing import List, Tuple

from lib.models import ResourceEdge, ResourceNode


def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
    """
    Extract ELBv2 (ALB/NLB) load balancer nodes and their edges.

    Edges produced:
    - load_balancer → security-group [network]
    - load_balancer → VPC            [network]
    """
    nodes: List[ResourceNode] = []
    edges: List[ResourceEdge] = []

    for lb in client.loadbalancersv2.values():
        props = {
            "type": getattr(lb, "type", None),
            "scheme": getattr(lb, "scheme", None),
            "dns_name": getattr(lb, "dns", None),
            "vpc_id": getattr(lb, "vpc_id", None),
        }

        name = getattr(lb, "name", lb.arn.split("/")[-2] if "/" in lb.arn else lb.arn)

        nodes.append(
            ResourceNode(
                id=lb.arn,
                type="load_balancer",
                name=name,
                service="elbv2",
                region=lb.region,
                account_id=client.audited_account,
                properties={k: v for k, v in props.items() if v is not None},
            )
        )

        for sg_id in lb.security_groups or []:
            edges.append(
                ResourceEdge(
                    source_id=lb.arn,
                    target_id=sg_id,
                    edge_type="network",
                    label="sg",
                )
            )

        vpc_id = getattr(lb, "vpc_id", None)
        if vpc_id:
            edges.append(
                ResourceEdge(
                    source_id=lb.arn,
                    target_id=vpc_id,
                    edge_type="network",
                    label="in-vpc",
                )
            )

    return nodes, edges
@@ -1,84 +0,0 @@
import json
from typing import Any, Dict, List, Tuple

from prowler.lib.logger import logger
from lib.models import ResourceEdge, ResourceNode


def _parse_trust_principals(assume_role_policy: Any) -> List[str]:
    """
    Return a flat list of principal strings from an IAM assume-role policy document.
    The policy may be a dict already or a JSON string.
    """
    if not assume_role_policy:
        return []

    if isinstance(assume_role_policy, str):
        try:
            assume_role_policy = json.loads(assume_role_policy)
        except (json.JSONDecodeError, ValueError):
            return []

    principals = []
    for statement in assume_role_policy.get("Statement", []):
        principal = statement.get("Principal", {})
        if isinstance(principal, str):
            principals.append(principal)
        elif isinstance(principal, dict):
            for v in principal.values():
                if isinstance(v, list):
                    principals.extend(v)
                else:
                    principals.append(v)
        elif isinstance(principal, list):
            principals.extend(principal)

    return principals


def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
    """
    Extract IAM role nodes and their trust-relationship edges.

    Edges produced:
    - trusted-principal → role [iam] (who can assume this role)
    """
    nodes: List[ResourceNode] = []
    edges: List[ResourceEdge] = []

    for role in client.roles:
        props: Dict[str, Any] = {
            "path": getattr(role, "path", None),
            "create_date": str(getattr(role, "create_date", "") or ""),
        }

        nodes.append(
            ResourceNode(
                id=role.arn,
                type="iam_role",
                name=role.name,
                service="iam",
                region="global",
                account_id=client.audited_account,
                properties={k: v for k, v in props.items() if v},
            )
        )

        # Trust-relationship edges: principal → role (principal CAN assume role)
        try:
            for principal in _parse_trust_principals(role.assume_role_policy):
                if principal and principal != "*":
                    edges.append(
                        ResourceEdge(
                            source_id=principal,
                            target_id=role.arn,
                            edge_type="iam",
                            label="can-assume",
                        )
                    )
        except Exception as e:
            logger.debug(
                f"inventory iam_extractor: could not parse trust policy for {role.arn}: {e}"
            )

    return nodes, edges
@@ -1,118 +0,0 @@
from typing import List, Tuple

from lib.models import ResourceEdge, ResourceNode


def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
    """
    Extract Lambda function nodes and their edges from an awslambda_client.

    Edges produced:
    - lambda → VPC          [network]
    - lambda → subnet       [network]
    - lambda → sg           [network]
    - event-source → lambda [triggers]   (from EventSourceMapping)
    - lambda → layer ARN    [depends_on]
    - lambda → DLQ target   [data_flow]
    - KMS key → lambda      [encrypts]
    """
    nodes: List[ResourceNode] = []
    edges: List[ResourceEdge] = []

    for fn in client.functions.values():
        props = {
            "runtime": fn.runtime,
            "vpc_id": fn.vpc_id,
        }
        if fn.environment:
            props["has_env_vars"] = True
        if fn.kms_key_arn:
            props["kms_key_arn"] = fn.kms_key_arn

        nodes.append(
            ResourceNode(
                id=fn.arn,
                type="lambda_function",
                name=fn.name,
                service="lambda",
                region=fn.region,
                account_id=client.audited_account,
                properties=props,
            )
        )

        # Network edges → VPC, subnets, security groups
        if fn.vpc_id:
            edges.append(
                ResourceEdge(
                    source_id=fn.arn,
                    target_id=fn.vpc_id,
                    edge_type="network",
                    label="in-vpc",
                )
            )
        for sg_id in fn.security_groups or []:
            edges.append(
                ResourceEdge(
                    source_id=fn.arn,
                    target_id=sg_id,
                    edge_type="network",
                    label="sg",
                )
            )
        for subnet_id in fn.subnet_ids or set():
            edges.append(
                ResourceEdge(
                    source_id=fn.arn,
                    target_id=subnet_id,
                    edge_type="network",
                    label="subnet",
                )
            )

        # Trigger edges from event source mappings
        for esm in getattr(fn, "event_source_mappings", []):
            edges.append(
                ResourceEdge(
                    source_id=esm.event_source_arn,
                    target_id=fn.arn,
                    edge_type="triggers",
                    label=f"esm:{esm.state}",
                )
            )

        # Layer dependency edges
        for layer in getattr(fn, "layers", []):
            edges.append(
                ResourceEdge(
                    source_id=fn.arn,
                    target_id=layer.arn,
                    edge_type="depends_on",
                    label="layer",
                )
            )

        # Dead-letter queue data-flow edge
        dlq = getattr(fn, "dead_letter_config", None)
        if dlq and dlq.target_arn:
            edges.append(
                ResourceEdge(
                    source_id=fn.arn,
                    target_id=dlq.target_arn,
                    edge_type="data_flow",
                    label="dlq",
                )
            )

        # KMS encryption edge
        if fn.kms_key_arn:
            edges.append(
                ResourceEdge(
                    source_id=fn.kms_key_arn,
                    target_id=fn.arn,
                    edge_type="encrypts",
                    label="kms",
                )
            )

    return nodes, edges
@@ -1,86 +0,0 @@
from typing import List, Tuple

from lib.models import ResourceEdge, ResourceNode


def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
    """
    Extract RDS DB instance nodes and their edges.

    Edges produced:
    - db_instance → security-group [network]
    - db_instance → VPC            [network]
    - db_instance → cluster        [depends_on]
    - KMS key → db_instance        [encrypts]
    """
    nodes: List[ResourceNode] = []
    edges: List[ResourceEdge] = []

    for db in client.db_instances.values():
        props = {
            "engine": getattr(db, "engine", None),
            "engine_version": getattr(db, "engine_version", None),
            "instance_class": getattr(db, "db_instance_class", None),
            "vpc_id": getattr(db, "vpc_id", None),
            "multi_az": getattr(db, "multi_az", None),
            "publicly_accessible": getattr(db, "publicly_accessible", None),
            "storage_encrypted": getattr(db, "storage_encrypted", None),
        }

        nodes.append(
            ResourceNode(
                id=db.arn,
                type="rds_instance",
                name=db.id,
                service="rds",
                region=db.region,
                account_id=client.audited_account,
                properties={k: v for k, v in props.items() if v is not None},
            )
        )

        for sg in getattr(db, "security_groups", []):
            sg_id = sg if isinstance(sg, str) else getattr(sg, "id", str(sg))
            edges.append(
                ResourceEdge(
                    source_id=db.arn,
                    target_id=sg_id,
                    edge_type="network",
                    label="sg",
                )
            )

        vpc_id = getattr(db, "vpc_id", None)
        if vpc_id:
            edges.append(
                ResourceEdge(
                    source_id=db.arn,
                    target_id=vpc_id,
                    edge_type="network",
                    label="in-vpc",
                )
            )

        cluster_arn = getattr(db, "cluster_arn", None)
        if cluster_arn:
            edges.append(
                ResourceEdge(
                    source_id=db.arn,
                    target_id=cluster_arn,
                    edge_type="depends_on",
                    label="cluster-member",
                )
            )

        kms_key_id = getattr(db, "kms_key_id", None)
        if kms_key_id:
            edges.append(
                ResourceEdge(
                    source_id=kms_key_id,
                    target_id=db.arn,
                    edge_type="encrypts",
                    label="kms",
                )
            )

    return nodes, edges
@@ -1,92 +0,0 @@
from typing import List, Tuple

from lib.models import ResourceEdge, ResourceNode


def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
    """
    Extract S3 bucket nodes and their edges.

    Edges produced:
    - bucket → replication-target bucket [replicates_to]
    - KMS key → bucket                   [encrypts]
    - bucket → logging bucket            [logs_to]
    """
    nodes: List[ResourceNode] = []
    edges: List[ResourceEdge] = []

    for bucket in client.buckets.values():
        encryption = getattr(bucket, "encryption", None)
        versioning = getattr(bucket, "versioning_enabled", None)
        logging = getattr(bucket, "logging", None)
        public = getattr(bucket, "public_access_block", None)

        props = {}
        if versioning is not None:
            props["versioning"] = versioning
        if encryption:
            enc_type = getattr(encryption, "type", str(encryption))
            props["encryption"] = enc_type

        nodes.append(
            ResourceNode(
                id=bucket.arn,
                type="s3_bucket",
                name=bucket.name,
                service="s3",
                region=bucket.region,
                account_id=client.audited_account,
                properties=props,
            )
        )

        # Replication edges
        for rule in getattr(bucket, "replication_rules", None) or []:
            dest_bucket = getattr(rule, "destination_bucket", None)
            if dest_bucket:
                dest_arn = (
                    dest_bucket
                    if dest_bucket.startswith("arn:")
                    else f"arn:aws:s3:::{dest_bucket}"
                )
                edges.append(
                    ResourceEdge(
                        source_id=bucket.arn,
                        target_id=dest_arn,
                        edge_type="replicates_to",
                        label="s3-replication",
                    )
                )

        # Logging edges
        if logging:
            target_bucket = getattr(logging, "target_bucket", None)
            if target_bucket:
                target_arn = (
                    target_bucket
                    if target_bucket.startswith("arn:")
                    else f"arn:aws:s3:::{target_bucket}"
                )
                edges.append(
                    ResourceEdge(
                        source_id=bucket.arn,
                        target_id=target_arn,
                        edge_type="logs_to",
                        label="access-logs",
                    )
                )

        # KMS encryption edges
        if encryption:
            kms_arn = getattr(encryption, "kms_master_key_id", None)
            if kms_arn:
                edges.append(
                    ResourceEdge(
                        source_id=kms_arn,
                        target_id=bucket.arn,
                        edge_type="encrypts",
                        label="kms",
                    )
                )

    return nodes, edges
@@ -1,92 +0,0 @@
from typing import List, Tuple

from lib.models import ResourceEdge, ResourceNode


def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
    """
    Extract VPC and subnet nodes with their edges.

    Edges produced:
    - subnet → VPC                   [depends_on]
    - peering connection between VPCs [network]
    """
    nodes: List[ResourceNode] = []
    edges: List[ResourceEdge] = []

    # VPCs
    for vpc in client.vpcs.values():
        name = vpc.id if hasattr(vpc, "id") else vpc.arn
        for tag in vpc.tags or []:
            if isinstance(tag, dict) and tag.get("Key") == "Name":
                name = tag["Value"]
                break

        nodes.append(
            ResourceNode(
                id=vpc.arn,
                type="vpc",
                name=name,
                service="vpc",
                region=vpc.region,
                account_id=client.audited_account,
                properties={
                    "cidr_block": getattr(vpc, "cidr_block", None),
                    "is_default": getattr(vpc, "is_default", None),
                },
            )
        )

    # VPC Subnets
    for subnet in client.vpc_subnets.values():
        name = subnet.id if hasattr(subnet, "id") else subnet.arn
        for tag in getattr(subnet, "tags", None) or []:
            if isinstance(tag, dict) and tag.get("Key") == "Name":
                name = tag["Value"]
                break

        nodes.append(
            ResourceNode(
                id=subnet.arn,
                type="subnet",
                name=name,
                service="vpc",
                region=subnet.region,
                account_id=client.audited_account,
                properties={
                    "vpc_id": getattr(subnet, "vpc_id", None),
                    "cidr_block": getattr(subnet, "cidr_block", None),
                    "availability_zone": getattr(subnet, "availability_zone", None),
                    "public": getattr(subnet, "public", None),
                },
            )
        )

        vpc_id = getattr(subnet, "vpc_id", None)
        if vpc_id:
            # Find the VPC ARN for this vpc_id
            vpc_arn = next(
                (v.arn for v in client.vpcs.values() if v.id == vpc_id),
                vpc_id,
            )
            edges.append(
                ResourceEdge(
                    source_id=subnet.arn,
                    target_id=vpc_arn,
                    edge_type="depends_on",
                    label="subnet-of",
                )
            )

    # VPC Peering Connections
    for peering in getattr(client, "vpc_peering_connections", {}).values():
        edges.append(
            ResourceEdge(
                source_id=peering.arn,
                target_id=getattr(peering, "accepter_vpc_id", peering.arn),
                edge_type="network",
                label="vpc-peer",
            )
        )

    return nodes, edges
@@ -1,106 +0,0 @@
"""
graph_builder.py
----------------
Builds a ConnectivityGraph by reading already-loaded AWS service clients from
sys.modules. Only services that were actually scanned (i.e. whose client
module is already imported) contribute nodes and edges. Unknown / unloaded
services are silently skipped, so the output degrades gracefully when only a
subset of checks has been run.
"""

import importlib
import sys
from typing import Tuple

from prowler.lib.logger import logger
from lib.models import ConnectivityGraph

# Registry: (sys.modules key, attribute name inside that module, extractor module path)
_SERVICE_REGISTRY: Tuple[Tuple[str, str, str], ...] = (
    (
        "prowler.providers.aws.services.awslambda.awslambda_client",
        "awslambda_client",
        "lib.extractors.lambda_extractor",
    ),
    (
        "prowler.providers.aws.services.ec2.ec2_client",
        "ec2_client",
        "lib.extractors.ec2_extractor",
    ),
    (
        "prowler.providers.aws.services.vpc.vpc_client",
        "vpc_client",
        "lib.extractors.vpc_extractor",
    ),
    (
        "prowler.providers.aws.services.rds.rds_client",
        "rds_client",
        "lib.extractors.rds_extractor",
    ),
    (
        "prowler.providers.aws.services.elbv2.elbv2_client",
        "elbv2_client",
        "lib.extractors.elbv2_extractor",
    ),
    (
        "prowler.providers.aws.services.s3.s3_client",
        "s3_client",
        "lib.extractors.s3_extractor",
    ),
    (
        "prowler.providers.aws.services.iam.iam_client",
        "iam_client",
        "lib.extractors.iam_extractor",
    ),
)


def build_graph() -> ConnectivityGraph:
    """
    Iterate over every registered service, check whether its client module is
    already loaded, and call the corresponding extractor.

    Returns a ConnectivityGraph with all discovered nodes and edges.
    Duplicate node IDs are silently deduplicated (first occurrence wins).
    """
    graph = ConnectivityGraph()
    seen_node_ids: set = set()

    for client_module_key, client_attr, extractor_module_key in _SERVICE_REGISTRY:
        client_module = sys.modules.get(client_module_key)
        if client_module is None:
            continue

        service_client = getattr(client_module, client_attr, None)
        if service_client is None:
            continue

        extractor_module = sys.modules.get(extractor_module_key)
        if extractor_module is None:
            try:
                extractor_module = importlib.import_module(extractor_module_key)
            except ImportError as e:
                logger.debug(
                    f"inventory graph_builder: cannot import extractor {extractor_module_key}: {e}"
                )
                continue

        try:
            nodes, edges = extractor_module.extract(service_client)
        except Exception as e:
            logger.error(
                f"inventory graph_builder: extractor {extractor_module_key} failed: "
                f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}]: {e}"
            )
            continue

        for node in nodes:
            if node.id not in seen_node_ids:
                graph.add_node(node)
                seen_node_ids.add(node.id)

        for edge in edges:
            graph.add_edge(edge)

    return graph
@@ -1,502 +0,0 @@
"""
inventory_output.py
-------------------
Writes the ConnectivityGraph produced by graph_builder to two files:

    <output_path>.inventory.json  – machine-readable graph (nodes + edges)
    <output_path>.inventory.html  – interactive D3.js force-directed graph
"""

import json
import os
from dataclasses import asdict
from datetime import datetime

from prowler.lib.logger import logger
from lib.models import ConnectivityGraph


# ---------------------------------------------------------------------------
# JSON output
# ---------------------------------------------------------------------------


def write_json(graph: ConnectivityGraph, file_path: str) -> None:
    """Serialise the graph to a JSON file."""
    try:
        os.makedirs(os.path.dirname(file_path), exist_ok=True)
        data = {
            "generated_at": datetime.utcnow().isoformat() + "Z",
            "nodes": [asdict(n) for n in graph.nodes],
            "edges": [asdict(e) for e in graph.edges],
            "stats": {
                "node_count": len(graph.nodes),
                "edge_count": len(graph.edges),
            },
        }
        with open(file_path, "w", encoding="utf-8") as fh:
            json.dump(data, fh, indent=2, default=str)
        logger.info(f"Inventory graph JSON written to {file_path}")
    except Exception as e:
        logger.error(
            f"inventory_output.write_json: {e.__class__.__name__}[{e.__traceback__.tb_lineno}]: {e}"
        )


# ---------------------------------------------------------------------------
# HTML output (self-contained, D3.js CDN)
# ---------------------------------------------------------------------------

# Colour palette per node type
_NODE_COLOURS = {
    "lambda_function": "#f59e0b",
    "ec2_instance": "#3b82f6",
    "security_group": "#6366f1",
    "vpc": "#10b981",
    "subnet": "#34d399",
    "rds_instance": "#ef4444",
    "load_balancer": "#8b5cf6",
    "s3_bucket": "#06b6d4",
    "iam_role": "#f97316",
    "default": "#94a3b8",
}

# Edge stroke colours per edge type
_EDGE_COLOURS = {
    "network": "#64748b",
    "iam": "#f97316",
    "triggers": "#a855f7",
    "data_flow": "#0ea5e9",
    "depends_on": "#94a3b8",
    "routes_to": "#22c55e",
    "replicates_to": "#ec4899",
    "encrypts": "#eab308",
    "logs_to": "#78716c",
}

_HTML_TEMPLATE = """\
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<title>Prowler – AWS Connectivity Graph</title>
<script src="https://d3js.org/d3.v7.min.js"></script>
<style>
  *, *::before, *::after {{ box-sizing: border-box; }}
  body {{
    margin: 0;
    font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif;
    background: #0f172a;
    color: #e2e8f0;
  }}
  #header {{
    padding: 12px 20px;
    background: #1e293b;
    border-bottom: 1px solid #334155;
    display: flex;
    align-items: center;
    gap: 16px;
  }}
  #header h1 {{ margin: 0; font-size: 18px; font-weight: 700; }}
  #header .stats {{ font-size: 13px; color: #94a3b8; }}
  #controls {{
    padding: 8px 20px;
    background: #1e293b;
    border-bottom: 1px solid #334155;
    display: flex;
    gap: 12px;
    align-items: center;
    flex-wrap: wrap;
  }}
  #controls label {{ font-size: 12px; color: #94a3b8; }}
  #controls select, #controls input[type=range] {{
    background: #0f172a;
    color: #e2e8f0;
    border: 1px solid #334155;
    border-radius: 4px;
    padding: 3px 6px;
    font-size: 12px;
  }}
  #graph-container {{ width: 100%; height: calc(100vh - 100px); position: relative; }}
  svg {{ width: 100%; height: 100%; }}
  .node circle {{
    stroke: #1e293b;
    stroke-width: 1.5px;
    cursor: pointer;
    transition: r 0.15s;
  }}
  .node circle:hover {{ stroke-width: 3px; }}
  .node text {{
    font-size: 10px;
    fill: #e2e8f0;
    pointer-events: none;
    text-shadow: 0 0 4px #0f172a;
  }}
  .link {{
    stroke-opacity: 0.6;
    stroke-width: 1.5px;
  }}
  .link-label {{
    font-size: 8px;
    fill: #94a3b8;
    pointer-events: none;
  }}
  #tooltip {{
    position: fixed;
    background: #1e293b;
    border: 1px solid #334155;
    border-radius: 6px;
    padding: 10px 14px;
    font-size: 12px;
    pointer-events: none;
    max-width: 320px;
    word-break: break-all;
    z-index: 9999;
    display: none;
  }}
  #tooltip strong {{ color: #f8fafc; }}
  #tooltip .prop {{ color: #94a3b8; margin-top: 4px; }}
  #legend {{
    position: absolute;
    top: 10px;
    right: 10px;
    background: rgba(30,41,59,0.9);
    border: 1px solid #334155;
    border-radius: 6px;
    padding: 10px 14px;
    font-size: 11px;
  }}
  #legend h3 {{ margin: 0 0 6px; font-size: 12px; }}
  .legend-row {{ display: flex; align-items: center; gap: 6px; margin: 3px 0; }}
  .legend-dot {{ width: 12px; height: 12px; border-radius: 50%; flex-shrink: 0; }}
  .legend-line {{ width: 20px; height: 2px; flex-shrink: 0; }}
</style>
</head>
<body>
<div id="header">
  <h1>🔗 AWS Connectivity Graph</h1>
  <span class="stats" id="stat-label">Generated: {generated_at}</span>
</div>
<div id="controls">
  <label>Filter service:
    <select id="filter-service">
      <option value="">All services</option>
    </select>
  </label>
  <label>Link distance:
    <input type="range" id="link-distance" min="40" max="300" value="120"/>
  </label>
  <label>Charge strength:
    <input type="range" id="charge-strength" min="-800" max="-20" value="-250"/>
  </label>
  <span class="stats" id="visible-count"></span>
</div>
<div id="graph-container">
  <svg id="graph-svg"></svg>
  <div id="tooltip"></div>
  <div id="legend">
    <h3>Node types</h3>
    {legend_nodes_html}
    <h3 style="margin-top:8px">Edge types</h3>
    {legend_edges_html}
  </div>
</div>

<script>
const RAW_NODES = {nodes_json};
const RAW_EDGES = {edges_json};
const NODE_COLOURS = {node_colours_json};
const EDGE_COLOURS = {edge_colours_json};

// ── helpers ──────────────────────────────────────────────────────────────
function nodeColour(d) {{
  return NODE_COLOURS[d.type] || NODE_COLOURS["default"];
}}
function edgeColour(d) {{
  return EDGE_COLOURS[d.edge_type] || "#94a3b8";
}}
function nodeRadius(d) {{
  const base = {{
    lambda_function: 9, ec2_instance: 10, vpc: 14, subnet: 8,
    security_group: 7, rds_instance: 11, load_balancer: 12,
    s3_bucket: 9, iam_role: 9
  }};
  return base[d.type] || 8;
}}

// ── filter controls ───────────────────────────────────────────────────────
const services = [...new Set(RAW_NODES.map(n => n.service))].sort();
const sel = document.getElementById("filter-service");
services.forEach(s => {{
  const o = document.createElement("option");
  o.value = s; o.textContent = s;
  sel.appendChild(o);
}});

// ── D3 setup ──────────────────────────────────────────────────────────────
const svg = d3.select("#graph-svg");
const container = svg.append("g");

// zoom
svg.call(
  d3.zoom().scaleExtent([0.05, 8])
    .on("zoom", e => container.attr("transform", e.transform))
);

// arrowhead marker
const defs = svg.append("defs");
defs.append("marker")
  .attr("id", "arrow")
  .attr("viewBox", "0 -5 10 10")
  .attr("refX", 20).attr("refY", 0)
  .attr("markerWidth", 6).attr("markerHeight", 6)
  .attr("orient", "auto")
  .append("path")
  .attr("d", "M0,-5L10,0L0,5")
  .attr("fill", "#94a3b8");

// tooltip
const tooltip = document.getElementById("tooltip");

// ── simulation ────────────────────────────────────────────────────────────
let simulation, linkSel, nodeSel, labelSel;

function buildGraph(nodeFilter) {{
  // Determine which nodes to show
  const visibleNodes = nodeFilter
    ? RAW_NODES.filter(n => n.service === nodeFilter)
    : RAW_NODES;
  const visibleIds = new Set(visibleNodes.map(n => n.id));

  // Only show edges where BOTH endpoints are visible
  const visibleEdges = RAW_EDGES.filter(
    e => visibleIds.has(e.source_id) && visibleIds.has(e.target_id)
  );

  document.getElementById("visible-count").textContent =
    `Showing ${{visibleNodes.length}} nodes · ${{visibleEdges.length}} edges`;

  container.selectAll("*").remove();

  if (simulation) simulation.stop();

  const nodes = visibleNodes.map(n => ({{ ...n }}));
  const nodeIndex = Object.fromEntries(nodes.map(n => [n.id, n]));

  const links = visibleEdges.map(e => ({{
    ...e,
    source: nodeIndex[e.source_id] || e.source_id,
    target: nodeIndex[e.target_id] || e.target_id,
  }}));

  const dist = +document.getElementById("link-distance").value;
  const charge = +document.getElementById("charge-strength").value;

  simulation = d3.forceSimulation(nodes)
    .force("link", d3.forceLink(links).id(d => d.id).distance(dist))
    .force("charge", d3.forceManyBody().strength(charge))
    .force("center", d3.forceCenter(
      document.getElementById("graph-container").clientWidth / 2,
      document.getElementById("graph-container").clientHeight / 2
    ))
    .force("collision", d3.forceCollide().radius(d => nodeRadius(d) + 6));

  // Edges
  linkSel = container.append("g").attr("class", "links")
    .selectAll("line")
    .data(links)
    .join("line")
    .attr("class", "link")
    .attr("stroke", edgeColour)
    .attr("marker-end", "url(#arrow)");

  // Edge labels
  labelSel = container.append("g").attr("class", "link-labels")
    .selectAll("text")
    .data(links)
    .join("text")
    .attr("class", "link-label")
    .text(d => d.label || "");

  // Nodes
  nodeSel = container.append("g").attr("class", "nodes")
    .selectAll("g")
    .data(nodes)
    .join("g")
    .attr("class", "node")
    .call(
      d3.drag()
        .on("start", (event, d) => {{
          if (!event.active) simulation.alphaTarget(0.3).restart();
          d.fx = d.x; d.fy = d.y;
        }})
        .on("drag", (event, d) => {{ d.fx = event.x; d.fy = event.y; }})
        .on("end", (event, d) => {{
          if (!event.active) simulation.alphaTarget(0);
          d.fx = null; d.fy = null;
        }})
    )
    .on("mouseover", (event, d) => {{
      const props = Object.entries(d.properties || {{}})
        .map(([k, v]) => `<div class="prop"><b>${{k}}</b>: ${{v}}</div>`)
        .join("");
      tooltip.innerHTML = `
        <strong>${{d.name}}</strong>
        <div class="prop"><b>type</b>: ${{d.type}}</div>
        <div class="prop"><b>service</b>: ${{d.service}}</div>
        <div class="prop"><b>region</b>: ${{d.region}}</div>
        <div class="prop"><b>account</b>: ${{d.account_id}}</div>
        <div class="prop" style="word-break:break-all"><b>arn</b>: ${{d.id}}</div>
        ${{props}}
      `;
      tooltip.style.display = "block";
      tooltip.style.left = (event.clientX + 12) + "px";
      tooltip.style.top = (event.clientY - 10) + "px";
    }})
    .on("mousemove", event => {{
      tooltip.style.left = (event.clientX + 12) + "px";
      tooltip.style.top = (event.clientY - 10) + "px";
    }})
    .on("mouseout", () => {{ tooltip.style.display = "none"; }});

  nodeSel.append("circle")
    .attr("r", nodeRadius)
    .attr("fill", nodeColour);

  nodeSel.append("text")
    .attr("dx", d => nodeRadius(d) + 3)
    .attr("dy", "0.35em")
    .text(d => d.name.length > 24 ? d.name.slice(0, 22) + "…" : d.name);

  simulation.on("tick", () => {{
    linkSel
      .attr("x1", d => d.source.x)
      .attr("y1", d => d.source.y)
      .attr("x2", d => d.target.x)
      .attr("y2", d => d.target.y);

    labelSel
      .attr("x", d => (d.source.x + d.target.x) / 2)
      .attr("y", d => (d.source.y + d.target.y) / 2);

    nodeSel.attr("transform", d => `translate(${{d.x}},${{d.y}})`);
  }});
}}

// Initial render
buildGraph(null);

// Filter change
sel.addEventListener("change", () => buildGraph(sel.value || null));

// Simulation control sliders — restart on change
document.getElementById("link-distance").addEventListener("input", () => buildGraph(sel.value || null));
document.getElementById("charge-strength").addEventListener("input", () => buildGraph(sel.value || null));
</script>
</body>
</html>
"""


def _build_legend_html(colours: dict, shape: str) -> str:
    rows = []
    for key, colour in sorted(colours.items()):
        if shape == "dot":
            rows.append(
                f'<div class="legend-row">'
                f'<div class="legend-dot" style="background:{colour}"></div>'
                f"<span>{key}</span></div>"
            )
        else:
            rows.append(
                f'<div class="legend-row">'
                f'<div class="legend-line" style="background:{colour}"></div>'
                f"<span>{key}</span></div>"
            )
    return "\n".join(rows)


def write_html(graph: ConnectivityGraph, file_path: str) -> None:
    """Render the graph as a self-contained interactive HTML page."""
    try:
        os.makedirs(os.path.dirname(file_path), exist_ok=True)

        nodes_json = json.dumps(
            [
                {
                    "id": n.id,
                    "type": n.type,
                    "name": n.name,
                    "service": n.service,
                    "region": n.region,
                    "account_id": n.account_id,
                    "properties": n.properties,
                }
                for n in graph.nodes
            ],
            indent=None,
            default=str,
        )
        edges_json = json.dumps(
            [
                {
                    "source_id": e.source_id,
                    "target_id": e.target_id,
                    "edge_type": e.edge_type,
                    "label": e.label or "",
                }
                for e in graph.edges
            ],
            indent=None,
            default=str,
        )

        html = _HTML_TEMPLATE.format(
            generated_at=datetime.utcnow().strftime("%Y-%m-%d %H:%M UTC"),
            nodes_json=nodes_json,
            edges_json=edges_json,
            node_colours_json=json.dumps(_NODE_COLOURS),
            edge_colours_json=json.dumps(_EDGE_COLOURS),
            legend_nodes_html=_build_legend_html(_NODE_COLOURS, "dot"),
            legend_edges_html=_build_legend_html(_EDGE_COLOURS, "line"),
        )

        with open(file_path, "w", encoding="utf-8") as fh:
            fh.write(html)

        logger.info(f"Inventory graph HTML written to {file_path}")
    except Exception as e:
        logger.error(
            f"inventory_output.write_html: {e.__class__.__name__}[{e.__traceback__.tb_lineno}]: {e}"
        )


# ---------------------------------------------------------------------------
# Convenience entry-point called from __main__.py
# ---------------------------------------------------------------------------


def generate_inventory_outputs(output_path: str) -> None:
    """
    Build the connectivity graph from currently-loaded service clients and write
    both JSON and HTML outputs.

    Args:
        output_path: base file path WITHOUT extension, e.g.
            "output/prowler-output-20240101120000".
            The function appends .inventory.json and .inventory.html.
    """
    from lib.graph_builder import build_graph

    graph = build_graph()

    if not graph.nodes:
        logger.warning(
            "Inventory graph: no nodes discovered. "
            "Make sure at least one AWS service was scanned before generating the inventory."
        )

    write_json(graph, f"{output_path}.inventory.json")
    write_html(graph, f"{output_path}.inventory.html")
@@ -1,71 +0,0 @@
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional


@dataclass
class ResourceNode:
    """
    Represents a single AWS resource as a node in the connectivity graph.

    id        : globally unique identifier — always the resource ARN
    type      : coarse resource type used for grouping/colour, e.g. "lambda_function"
    name      : human-readable label shown on the graph
    service   : AWS service name, e.g. "lambda", "ec2", "rds"
    region    : AWS region the resource lives in
    account_id: AWS account ID
    properties: additional resource-specific metadata (runtime, vpc_id, etc.)
    """

    id: str
    type: str
    name: str
    service: str
    region: str
    account_id: str
    properties: Dict[str, Any] = field(default_factory=dict)


@dataclass
class ResourceEdge:
    """
    Represents a directional relationship between two resource nodes.

    source_id : ARN of the source node
    target_id : ARN of the target node
    edge_type : semantic type of the relationship, e.g.:
        "network"    – resources share a network path (VPC/subnet/SG)
        "iam"        – IAM trust or permission relationship
        "triggers"   – one resource can invoke another (event source → Lambda)
        "data_flow"  – data is written/read (Lambda → SQS dead-letter queue)
        "depends_on" – soft dependency (Lambda layer, subnet belongs to VPC)
        "routes_to"  – traffic routing (LB → target)
        "encrypts"   – KMS key encrypts the resource
    label     : optional short label rendered on the edge in the HTML graph
    """

    source_id: str
    target_id: str
    edge_type: str
    label: Optional[str] = None


@dataclass
class ConnectivityGraph:
    """
    Container for the full inventory connectivity graph.

    nodes: all discovered resource nodes
    edges: all discovered edges between nodes
    """

    nodes: List[ResourceNode] = field(default_factory=list)
    edges: List[ResourceEdge] = field(default_factory=list)

    def add_node(self, node: ResourceNode) -> None:
        self.nodes.append(node)

    def add_edge(self, edge: ResourceEdge) -> None:
        self.edges.append(edge)

    def node_ids(self) -> set:
        return {n.id for n in self.nodes}
@@ -73,58 +73,6 @@ The best reference to understand how to implement a new service is following the

- AWS API calls are wrapped in try/except blocks, with specific handling for `ClientError` and generic exceptions, always logging errors.
- If the ARN is not present for some resource, it can be constructed using string interpolation, always including partition, service, region, account, and resource ID.
- Tags and additional attributes that cannot be retrieved from the default call should be collected and stored for each resource using dedicated methods, threaded with the resource object list as the iterator.
- When accessing dictionary values from AWS API responses, always use `.get()` with a default value instead of direct dictionary access (e.g., `response.get("Policies", {})` instead of `response["Policies"]`). AWS API responses may not always include all keys, and direct access can cause `KeyError` exceptions that break the entire scan for that service.

### Extending an Existing Service with New Attributes

When adding a new check that requires data not yet collected by an existing service, you need to extend the service by adding new attributes to its resource models and updating the data collection methods. This is a common contributor task that follows a consistent pattern:

1. **Identify the missing data**: Determine which AWS API call provides the data you need and whether it's already being called by the service.

2. **Add new attributes to the resource model**: Extend the Pydantic `BaseModel` class for the resource with the new fields. Use `Optional` types with `None` as the default value to maintain backward compatibility with existing checks.

3. **Update the data collection method**: Modify the existing method that fetches resource details to also extract and store the new attributes. If no existing method fetches the data, add a new method and call it in the constructor using `self.__threading_call__` if possible (see the constructor sketch after the example below).

4. **Use safe dictionary access**: When extracting values from API responses, always use `.get()` with appropriate defaults to prevent `KeyError` exceptions when the API doesn't return certain fields.

#### Example: Adding DKIM Status to SES Identities

```python
# Step 1 & 2: Add new fields to the resource model
class Identity(BaseModel):
    name: str
    arn: str
    region: str
    type: Optional[str]
    policy: Optional[dict] = None
    tags: Optional[list] = []
    # New attributes for DKIM check
    dkim_status: Optional[str] = None
    dkim_signing_attributes_origin: Optional[str] = None


# Step 3: Update the data collection method
def _get_email_identities(self, identity):
    try:
        regional_client = self.regional_clients[identity.region]
        identity_attributes = regional_client.get_email_identity(
            EmailIdentity=identity.name
        )
        # Step 4: Use .get() for safe dictionary access
        for content_key, content_value in identity_attributes.get("Policies", {}).items():
            identity.policy = loads(content_value)
        identity.tags = identity_attributes.get("Tags", [])
        # Extract new DKIM attributes
        identity.dkim_status = identity_attributes.get("DkimStatus")
        identity.dkim_signing_attributes_origin = (
            identity_attributes.get("DkimSigningAttributesOrigin")
        )
    except Exception as error:
        logger.error(
            f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
        )
```
|
||||
|
||||
5. **Update the service tests**: Add the new attributes to the test mock data and assertions to verify correct data extraction.
|
||||
|
||||
## Specific Patterns in AWS Checks
|
||||
|
||||
|
||||
@@ -215,6 +215,3 @@ Also is important to keep all code examples as short as possible, including the
|
||||
| e5 | M365 and Azure Entra checks enabled by or dependent on an E5 license (e.g., advanced threat protection, audit, DLP, and eDiscovery) |
|
||||
| privilege-escalation | Detects IAM policies or permissions that allow identities to elevate their privileges beyond their intended scope, potentially gaining administrator or higher-level access through specific action combinations |
|
||||
| ec2-imdsv1 | Identifies EC2 instances using Instance Metadata Service version 1 (IMDSv1), which is vulnerable to SSRF attacks and should be replaced with IMDSv2 for enhanced security |
|
||||
| vercel-hobby-plan | Vercel checks whose audited feature is available on the Hobby plan (and therefore also on Pro and Enterprise plans) |
|
||||
| vercel-pro-plan | Vercel checks whose audited feature requires a Pro plan or higher, including features also available on Enterprise or via supported paid add-ons for Pro plans |
|
||||
| vercel-enterprise-plan | Vercel checks whose audited feature requires the Enterprise plan |
|
||||
|
||||
@@ -27,28 +27,14 @@ The most common high level steps to create a new check are:
|
||||
|
||||
### Naming Format for Checks
|
||||
|
||||
If you already know the check name when creating a request or implementing a check, use a descriptive identifier with lowercase letters and underscores only.
|
||||
|
||||
Recommended patterns:
|
||||
|
||||
- `<service>_<resource>_<best_practice>`
|
||||
Checks must be named following the format: `service_subservice_resource_action`.
|
||||
|
||||
The name components are:
|
||||
|
||||
- `service` – The main service or product area being audited (e.g., ec2, entra, iam, bedrock).
|
||||
- `resource` – The resource, feature, or configuration being evaluated. It can be a single word or a compound phrase joined with underscores (e.g., instance, policy, guardrail, sensitive_information_filter).
|
||||
- `best_practice` – The expected secure state or best practice being checked (e.g., enabled, encrypted, restricted, configured, not_publicly_accessible).
|
||||
|
||||
Additional guidance:
|
||||
|
||||
- Use underscores only. Do not use hyphens.
|
||||
- Keep the name specific enough to describe the behavior of the check.
|
||||
- The first segment should match the service or product area whenever possible.
|
||||
|
||||
Examples:
|
||||
|
||||
- `s3_bucket_versioning_enabled`
|
||||
- `bedrock_guardrail_sensitive_information_filter_enabled`
|
||||
- `service` – The main service being audited (e.g., ec2, entra, iam, etc.)
|
||||
- `subservice` – An individual component or subset of functionality within the service that is being audited. This may correspond to a shortened version of the class attribute accessed within the check. If there is no subservice, just omit it.
|
||||
- `resource` – The specific resource type being evaluated (e.g., instance, policy, role, etc.)
|
||||
- `action` – The security aspect or configuration being checked (e.g., public, encrypted, enabled, etc.)
|
||||
|
||||
### File Creation
|
||||
|
||||
@@ -401,7 +387,7 @@ Provides both code examples and best practice recommendations for addressing the
|
||||
|
||||
#### Categories
|
||||
|
||||
One or more functional groupings used for execution filtering (e.g., `internet-exposed`). Categories must match the predefined values enforced by `CheckMetadata`; adding a new category requires updating the validator and the metadata documentation.
|
||||
One or more functional groupings used for execution filtering (e.g., `internet-exposed`). You can define new categories just by adding to this field.
|
||||
|
||||
For the complete list of available categories, see [Categories Guidelines](/developer-guide/check-metadata-guidelines#categories-guidelines).
|
||||
|
||||
|
||||
@@ -134,22 +134,6 @@ prek installed at `.git/hooks/pre-commit`
|
||||
If pre-commit hooks were previously installed, run `prek install --overwrite` to replace the existing hook. Otherwise, both tools will run on each commit.
|
||||
</Warning>
|
||||
|
||||
#### Enable TruffleHog as a Pre-Push Hook
|
||||
|
||||
By default, only `pre-commit` hooks are installed. To enable [`TruffleHog`](https://github.com/trufflesecurity/trufflehog) secret scanning on every push, install the `pre-push` hook type explicitly:
|
||||
|
||||
```shell
|
||||
prek install --hook-type pre-push
|
||||
```
|
||||
|
||||
Successful installation should produce the following output:
|
||||
|
||||
```shell
|
||||
prek installed at `.git/hooks/pre-push`
|
||||
```
|
||||
|
||||
Once installed, TruffleHog runs before each push and blocks the operation when verified secrets are detected.
|
||||
|
||||
### Code Quality and Security Checks
|
||||
|
||||
Before merging pull requests, several automated checks and utilities ensure code security and updated dependencies:
|
||||
|
||||
@@ -1003,7 +1003,7 @@ class ProwlerArgumentParser:
|
||||
formatter_class=RawTextHelpFormatter,
|
||||
usage="prowler [-h] [--version] {aws,azure,gcp,kubernetes,m365,github,nhn,dashboard,iac,your_provider} ...",
|
||||
epilog="""
|
||||
Available Providers:
|
||||
Available Cloud Providers:
|
||||
{aws,azure,gcp,kubernetes,m365,github,iac,nhn,your_provider}
|
||||
aws AWS Provider
|
||||
azure Azure Provider
|
||||
|
||||
@@ -1,131 +0,0 @@
|
||||
---
|
||||
title: 'Prowler Studio'
|
||||
---
|
||||
|
||||
**Prowler Studio is an AI workflow that ensures Claude Code follows Prowler's skills, guardrails, and best practices when creating new security checks.** What lands in the resulting pull request is consistent, tested, and ready for human review — not half-correct boilerplate that needs to be rewritten.
|
||||
|
||||
<Info>
|
||||
**Contributor Tool**: Prowler Studio is a workflow for advanced contributors adding new Prowler security checks. It is not part of Prowler Cloud, Prowler App, or Prowler CLI.
|
||||
</Info>
|
||||
|
||||
<Warning>
|
||||
**Preview Feature**: Prowler Studio is under active development and breaking changes are expected. Please report issues or share feedback on [GitHub](https://github.com/prowler-cloud/prowler-studio/issues) or in the [Slack community](https://goto.prowler.com/slack).
|
||||
</Warning>
|
||||
|
||||
<Card title="Prowler Studio Repository" icon="github" href="https://github.com/prowler-cloud/prowler-studio" horizontal>
|
||||
Clone the source code, install Prowler Studio, and explore the agent workflow in detail.
|
||||
</Card>
|
||||
|
||||
## The Problem
|
||||
|
||||
Adding a new check to [Prowler](https://github.com/prowler-cloud/prowler) is more than writing detection logic. A correct check has to:
|
||||
|
||||
- Match Prowler's exact service and check folder structure and naming conventions
|
||||
- Wire up metadata, severity, remediation, tests, and compliance mappings
|
||||
- Mirror the patterns used by the hundreds of existing checks in the same provider
|
||||
- Actually load when Prowler scans for available checks — silent structural mistakes are easy to make
|
||||
|
||||
Asking a general-purpose AI assistant to do this usually means guessing. It misses conventions, skips tests, or invents structure that looks right but does not load. The result is a half-correct PR that needs to be reviewed line by line or rewritten.
|
||||
|
||||
## The Solution
|
||||
|
||||
Prowler Studio enforces the workflow end-to-end. Describe the check once — a markdown ticket, a Jira issue, or a GitHub issue — and the workflow:
|
||||
|
||||
1. **Loads Prowler-specific skills into every agent.** Every step starts with the same context an experienced Prowler engineer would have in mind. See [AI Skills System](/developer-guide/ai-skills) for how skills are structured.
|
||||
2. **Runs specialized agents in sequence.** Implementation → testing → compliance mapping → review → PR creation. Each agent has one job and a tight scope.
|
||||
3. **Verifies as it goes.** The check must load in Prowler. Tests must pass. If something fails, the agent fixes it and re-runs (up to a bounded number of attempts) before moving on.
|
||||
4. **Produces a complete pull request.** Branch, passing check, tests, compliance mappings, and a pull request waiting for human review.
|
||||
|
||||
The result is a consistent starting point, every time, on every supported provider.
|
||||
|
||||
## Quick Start
|
||||
|
||||
### Install
|
||||
|
||||
Prowler Studio requires [`uv`](https://docs.astral.sh/uv/getting-started/installation/); follow the official installation guide at that link to set it up.
|
||||
|
||||
```bash
|
||||
git clone https://github.com/prowler-cloud/prowler-studio
|
||||
cd prowler-studio
|
||||
uv sync
|
||||
source .venv/bin/activate
|
||||
```
|
||||
|
||||
### Describe the Check
|
||||
|
||||
A ticket is a structured markdown description of the check to create. It is the only input the workflow needs; every agent (implementation, testing, compliance mapping, review, PR creation) uses it as the source of truth, so the more concrete it is, the closer the first PR will land to the desired outcome.
|
||||
|
||||
The ticket can be supplied in three ways:
|
||||
|
||||
- **Local markdown file** → `--ticket path/to/ticket.md`
|
||||
- **Jira issue** → `--jira-url https://...` (uses the issue body)
|
||||
- **GitHub issue** → `--github-url https://...` (uses the issue body)
|
||||
|
||||
The content should follow the **New Check Request** template:
|
||||
|
||||
- The local copy at [`check_ticket_template.md`](https://github.com/prowler-cloud/prowler-studio/blob/main/check_ticket_template.md) covers `--ticket` and Jira tickets.
|
||||
- A prefilled GitHub form is also available: [Create a New Check Request issue](https://github.com/prowler-cloud/prowler/issues/new?template=new-check-request.yml).
|
||||
|
||||
Sections marked *Optional* can be skipped; everything else helps the agents make the right decisions.
|
||||
|
||||
### Run the Workflow
|
||||
|
||||
From a local markdown ticket:
|
||||
|
||||
```bash
|
||||
prowler-studio --ticket check_ticket.md
|
||||
```
|
||||
|
||||
From a Jira ticket:
|
||||
|
||||
```bash
|
||||
prowler-studio --jira-url https://mycompany.atlassian.net/browse/PROJ-123
|
||||
```
|
||||
|
||||
From a GitHub issue:
|
||||
|
||||
```bash
|
||||
prowler-studio --github-url https://github.com/owner/repo/issues/123
|
||||
```
|
||||
|
||||
<Note>
|
||||
Provide exactly one of `--ticket`, `--jira-url`, or `--github-url`.
|
||||
</Note>
|
||||
|
||||
Keep changes local (no push, no pull request):
|
||||
|
||||
```bash
|
||||
prowler-studio -b feat/my-check --ticket check_ticket.md --local
|
||||
```
|
||||
|
||||
### What You Get
|
||||
|
||||
After a successful run the working environment contains:
|
||||
|
||||
- A new branch on a clean Prowler worktree containing the check, metadata, tests, and compliance mappings
|
||||
- A pull request opened against Prowler (skipped with `--local`)
|
||||
- A timestamped log file under `logs/` capturing every step the agents took
|
||||
|
||||
## CLI Options
|
||||
|
||||
| Option | Short | Description |
|
||||
|--------|-------|-------------|
|
||||
| `--branch` | `-b` | Branch name (default: `feat/<ticket>-<check_name>` or `feat/<check_name>`) |
|
||||
| `--ticket` | `-t` | Path to a markdown check ticket file |
|
||||
| `--jira-url` | `-j` | Jira ticket URL (e.g., `https://mycompany.atlassian.net/browse/PROJ-123`) |
|
||||
| `--github-url` | `-g` | GitHub issue URL (e.g., `https://github.com/owner/repo/issues/123`) |
|
||||
| `--working-dir` | `-w` | Working directory for the Prowler clone (default: `./working`) |
|
||||
| `--no-worktree` | | Legacy mode — work directly on the main clone instead of using worktrees |
|
||||
| `--cleanup-worktree` | | Remove the worktree after a successful pull request is created |
|
||||
| `--local` | | Keep changes local — skip push and pull request creation |
|
||||
|
||||
## Configuration
|
||||
|
||||
Set these environment variables depending on the input source:
|
||||
|
||||
| Variable | When Needed | Purpose |
|
||||
|----------|-------------|---------|
|
||||
| `GITHUB_TOKEN` | `--github-url` (recommended) | Higher GitHub API rate limits and access to private issues |
|
||||
| `JIRA_SITE_URL` | `--jira-url` | Jira site, e.g. `https://mycompany.atlassian.net` |
|
||||
| `JIRA_EMAIL` | `--jira-url` | Email of the Jira account used to fetch the ticket |
|
||||
| `JIRA_API_TOKEN` | `--jira-url` | API token for the Jira account |
|
||||
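For example, before running with `--jira-url` (all values below are placeholders):

```bash
export JIRA_SITE_URL="https://mycompany.atlassian.net"
export JIRA_EMAIL="me@mycompany.com"
export JIRA_API_TOKEN="<your-api-token>"

prowler-studio --jira-url https://mycompany.atlassian.net/browse/PROJ-123
```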
@@ -2,378 +2,47 @@
|
||||
title: 'Creating a New Security Compliance Framework in Prowler'
|
||||
---
|
||||
|
||||
This guide explains how to add a new security compliance framework to Prowler, end to end. It covers directory layout, the JSON schema, check mapping conventions, the Pydantic models that validate each framework, the CSV output formatter, local validation, testing, and the pull request process.
|
||||
|
||||
## Introduction
|
||||
|
||||
A compliance framework in Prowler maps a public or custom control catalog (for example CIS, NIST 800-53, PCI DSS, HIPAA, ENS, CCC) to the security checks that Prowler already runs. Each requirement links to zero, one or more Prowler checks. When a scan executes, findings are aggregated per requirement to produce the compliance report rendered by Prowler CLI and Prowler Cloud.
|
||||
To create or contribute a custom security framework for Prowler—or to integrate a public framework—you must ensure the necessary checks are available. If they are missing, they must be implemented before proceeding.
|
||||
|
||||
Prowler ships with 85+ compliance frameworks across all providers. The catalog lives under `prowler/compliance/<provider>/` (or `prowler/compliance/` for universal compliance frameworks).
|
||||
Each framework is defined in a compliance file per provider. The file should follow the structure used in `prowler/compliance/<provider>/` and be named `<framework>_<version>_<provider>.json`. Follow the format below to create your own.
|
||||
|
||||
<Warning>
|
||||
A compliance framework must represent the **complete state** of the source catalog. Every requirement defined by the framework has to be present in the JSON file, even when none of the existing Prowler checks can automate it. In that case, leave `Checks` as an empty array, but do not omit the requirement.
|
||||
## Compliance Framework
|
||||
|
||||
Requirement coverage feeds the compliance percentage calculations and the metadata surfaces (dashboards, widgets, exports). Missing requirements skew those metrics and break the report as a faithful snapshot of the framework.
|
||||
</Warning>
|
||||
### Compliance Framework Structure
|
||||
|
||||
### Prerequisites
|
||||
Each compliance framework file consists of structured metadata that identifies the framework and maps security checks to requirements or controls. Please note that a single requirement can be linked to multiple Prowler checks:
|
||||
|
||||
Before adding a new framework, complete the following checks:
|
||||
|
||||
- **Verify the framework is not already supported.** Inspect `prowler/compliance/<provider>/` for an existing JSON file matching the name and version.
|
||||
- **Confirm the required checks exist.** Every requirement that can be automated must point to one or more existing Prowler checks. For each missing check, implement it first by following the [Prowler Checks](/developer-guide/checks) guide.
|
||||
- **Review a reference framework.** Use an existing framework with a similar structure as your template. `cis_2.0_aws.json` is the canonical reference for CIS-style frameworks. `ccc_aws.json`, `ens_rd2022_aws.json`, and `nist_800_53_revision_5_aws.json` illustrate other attribute shapes.
|
||||
|
||||
## Four-Layer Architecture
|
||||
|
||||
A compliance framework spans four layers. A complete contribution must touch each layer that applies.
|
||||
|
||||
- **Layer 1 – Schema validation:** The Pydantic models in `prowler/lib/check/compliance_models.py` define the canonical schema for each attribute shape (CIS, ENS, Mitre, CCC, C5, CSA CCM, ISO 27001, KISA ISMS-P, AWS Well-Architected, Prowler ThreatScore, and a generic fallback).
|
||||
- **Layer 2 – JSON catalog:** The framework JSON file in `prowler/compliance/<provider>/` lists every requirement and maps it to checks.
|
||||
- **Layer 3 – Output formatter:** The Python module in `prowler/lib/outputs/compliance/<framework>/` builds the CSV row model, the per-provider transformer, and the CLI summary table.
|
||||
- **Layer 4 – Output dispatchers:** The dispatchers in `prowler/lib/outputs/compliance/compliance.py` and `prowler/lib/outputs/compliance/compliance_output.py` route findings to the right formatter based on the framework identifier.
|
||||
|
||||
The rest of this guide walks each layer in order.
|
||||
|
||||
## Directory Structure and File Naming
|
||||
|
||||
Compliance frameworks live at:
|
||||
- `Framework`: string – The distinguished name of the framework (e.g., CIS).
|
||||
- `Provider`: string – The cloud provider where the framework applies (AWS, Azure, OCI).
|
||||
- `Version`: string – The framework version (e.g., 1.4 for CIS).
|
||||
- `Requirements`: array of objects – Defines security requirements and their mapping to Prowler checks. All requirements or controls must be included with their mapping to Prowler.
|
||||
- `Requirements_Id`: string – A unique identifier for each requirement within the framework.
|
||||
- `Requirements_Description`: string – The requirement description as specified in the framework.
|
||||
- `Requirements_Attributes`: array of objects – Contains relevant metadata such as security levels, sections, and any additional data needed for reporting the findings. Attributes should be derived directly from the framework's own terminology, ensuring consistency with its established definitions.
|
||||
- `Requirements_Checks`: array – The Prowler checks needed to prove this requirement. It can be one or multiple checks. If automation is not feasible, this can be empty.
|
||||
|
||||
```
|
||||
prowler/compliance/<provider>/<framework>_<version>_<provider>.json
|
||||
```
|
||||
|
||||
The filename conventions are:
|
||||
|
||||
- All lowercase, words separated with underscores.
|
||||
- `<provider>` is a supported provider identifier: `aws`, `azure`, `gcp`, `kubernetes`, `m365`, `github`, `googleworkspace`, `alibabacloud`, `oraclecloud`, `cloudflare`, `mongodbatlas`, `nhn`, `openstack`, `iac`, `llm`.
|
||||
- `<version>` is optional. Omit it when the framework has no versioning, as in `ccc_aws.json`.
|
||||
- The file basename (without `.json`) is the framework key that Prowler CLI accepts via `--compliance`.
|
||||
|
||||
Examples:
|
||||
|
||||
- `prowler/compliance/aws/cis_2.0_aws.json`
|
||||
- `prowler/compliance/aws/nist_800_53_revision_5_aws.json`
|
||||
- `prowler/compliance/azure/ens_rd2022_azure.json`
|
||||
- `prowler/compliance/kubernetes/cis_1.10_kubernetes.json`
|
||||
- `prowler/compliance/aws/ccc_aws.json`
|
||||
|
||||
The output formatter directory mirrors the framework name:
|
||||
|
||||
```
|
||||
prowler/lib/outputs/compliance/<framework>/
|
||||
├── <framework>.py # CLI summary-table dispatcher
|
||||
├── <framework>_<provider>.py # Per-provider transformer class
|
||||
├── models.py # Pydantic CSV row model
|
||||
└── __init__.py
|
||||
```
|
||||
|
||||
## JSON Schema Reference
|
||||
|
||||
Every compliance file is a JSON document with the following top-level keys.
|
||||
|
||||
| Field | Type | Required | Description |
|
||||
|---|---|---|---|
|
||||
| `Framework` | string | Yes | Canonical framework identifier, for example `CIS`, `NIST-800-53-Revision-5`, `ENS`, `CCC`. |
|
||||
| `Name` | string | Yes | Human-readable framework name displayed by Prowler App. |
|
||||
| `Version` | string | Yes | Framework version, for example `2.0`. Use an empty string only for frameworks without versioning. See [Version Handling](#version-handling). |
|
||||
| `Provider` | string | Yes | Upper-cased provider identifier: `AWS`, `AZURE`, `GCP`, `KUBERNETES`, `M365`, `GITHUB`, `GOOGLEWORKSPACE`, and so on. |
|
||||
| `Description` | string | Yes | Short description of the framework's scope and purpose. |
|
||||
| `Requirements` | array | Yes | List of [requirement objects](#requirement-object). |
|
||||
|
||||
### Requirement Object
|
||||
|
||||
Each entry in `Requirements` describes one control or requirement.
|
||||
|
||||
| Field | Type | Required | Description |
|
||||
|---|---|---|---|
|
||||
| `Id` | string | Yes | Unique identifier within the framework, for example `1.10` or `CCC.Core.CN01.AR01`. |
|
||||
| `Name` | string | No | Optional human-readable name used by frameworks that distinguish control name from description, such as NIST. |
|
||||
| `Description` | string | Yes | Verbatim description from the source framework. |
|
||||
| `Attributes` | array | Yes | List of [attribute objects](#attribute-objects). The shape depends on the framework. |
|
||||
| `Checks` | array of strings | Yes | Prowler check identifiers that automate the requirement. Leave the list empty when the control cannot be automated. |
|
||||
|
||||
### Attribute Objects
|
||||
|
||||
Attributes carry the metadata that Prowler App and the CSV output display for each requirement. The object shape is framework-specific and is validated by a dedicated Pydantic model in `prowler/lib/check/compliance_models.py`. The most common shapes are summarized below.
|
||||
|
||||
#### CIS_Requirement_Attribute
|
||||
|
||||
Used by every CIS benchmark.
|
||||
|
||||
| Field | Type | Required | Notes |
|
||||
|---|---|---|---|
|
||||
| `Section` | string | Yes | Top-level section, for example `1 Identity and Access Management`. |
|
||||
| `SubSection` | string | No | Optional second-level grouping. |
|
||||
| `Profile` | enum | Yes | One of `Level 1`, `Level 2`, `E3 Level 1`, `E3 Level 2`, `E5 Level 1`, `E5 Level 2`. |
|
||||
| `AssessmentStatus` | enum | Yes | `Manual` or `Automated`. |
|
||||
| `Description` | string | Yes | Control description. |
|
||||
| `RationaleStatement` | string | Yes | Reason the control exists. |
|
||||
| `ImpactStatement` | string | Yes | Impact of non-compliance. |
|
||||
| `RemediationProcedure` | string | Yes | Remediation steps. |
|
||||
| `AuditProcedure` | string | Yes | Audit steps. |
|
||||
| `AdditionalInformation` | string | Yes | Free-form notes. |
|
||||
| `DefaultValue` | string | No | Default configuration value, when relevant. |
|
||||
| `References` | string | Yes | Colon-separated list of reference URLs. |
|
||||
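As an illustration, a single CIS attribute object might look like the following; the values are invented for a hypothetical control, not taken from a real benchmark:

```json
{
  "Section": "1 Identity and Access Management",
  "Profile": "Level 1",
  "AssessmentStatus": "Automated",
  "Description": "Ensure MFA is enabled for the 'root' user account",
  "RationaleStatement": "MFA adds an extra layer of protection on top of a password.",
  "ImpactStatement": "",
  "RemediationProcedure": "Enable a virtual or hardware MFA device for the root user.",
  "AuditProcedure": "Check the root user MFA status in the IAM credential report.",
  "AdditionalInformation": "",
  "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html"
}
```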
|
||||
#### ENS_Requirement_Attribute
|
||||
|
||||
Used by the Spanish ENS (Esquema Nacional de Seguridad) frameworks.
|
||||
|
||||
| Field | Type | Required | Notes |
|
||||
|---|---|---|---|
|
||||
| `IdGrupoControl` | string | Yes | Control group identifier. |
|
||||
| `Marco` | string | Yes | Framework block (`operacional`, `organizativo`, `proteccion`). |
|
||||
| `Categoria` | string | Yes | Control category. |
|
||||
| `DescripcionControl` | string | Yes | Control description in Spanish. |
|
||||
| `Tipo` | enum | Yes | `refuerzo`, `requisito`, `recomendacion`, `medida`. |
|
||||
| `Nivel` | enum | Yes | `opcional`, `bajo`, `medio`, `alto`. |
|
||||
| `Dimensiones` | array of enum | Yes | Subset of `confidencialidad`, `integridad`, `trazabilidad`, `autenticidad`, `disponibilidad`. |
|
||||
| `ModoEjecucion` | string | Yes | Execution mode (`manual`, `automático`, `híbrido`). |
|
||||
| `Dependencias` | array of strings | Yes | Ids of prerequisite controls. Empty list when none. |
|
||||
|
||||
#### CCC_Requirement_Attribute
|
||||
|
||||
Used by the Common Cloud Controls Catalog.
|
||||
|
||||
| Field | Type | Required | Notes |
|
||||
|---|---|---|---|
|
||||
| `FamilyName` | string | Yes | Control family, for example `Data`. |
|
||||
| `FamilyDescription` | string | Yes | Description of the family. |
|
||||
| `Section` | string | Yes | Section title. |
|
||||
| `SubSection` | string | Yes | Subsection title, or empty string. |
|
||||
| `SubSectionObjective` | string | Yes | Stated objective for the subsection. |
|
||||
| `Applicability` | array of strings | Yes | Applicability tags such as `tlp-green`, `tlp-amber`, `tlp-red`. |
|
||||
| `Recommendation` | string | Yes | Implementation recommendation. |
|
||||
| `SectionThreatMappings` | array of objects | Yes | Each entry has `ReferenceId` and `Identifiers`. |
|
||||
| `SectionGuidelineMappings` | array of objects | Yes | Each entry has `ReferenceId` and `Identifiers`. |
|
||||
|
||||
#### Generic_Compliance_Requirement_Attribute
|
||||
|
||||
The fallback attribute model used when no framework-specific schema applies (for example NIST 800-53, PCI DSS, GDPR, HIPAA).
|
||||
|
||||
| Field | Type | Required | Notes |
|
||||
|---|---|---|---|
|
||||
| `ItemId` | string | No | Item identifier. |
|
||||
| `Section` | string | No | Section name. |
|
||||
| `SubSection` | string | No | Subsection name. |
|
||||
| `SubGroup` | string | No | Subgroup name. |
|
||||
| `Service` | string | No | Affected service, for example `aws`, `iam`. |
|
||||
| `Type` | string | No | Control type. |
|
||||
| `Comment` | string | No | Free-form comment. |
|
||||
|
||||
Additional per-framework attribute models exist for `AWS_Well_Architected_Requirement_Attribute`, `ISO27001_2013_Requirement_Attribute`, `Mitre_Requirement_Attribute_<Provider>`, `KISA_ISMSP_Requirement_Attribute`, `Prowler_ThreatScore_Requirement_Attribute`, `C5Germany_Requirement_Attribute`, and `CSA_CCM_Requirement_Attribute`. Consult `prowler/lib/check/compliance_models.py` for their full field sets.
|
||||
|
||||
<Note>
|
||||
The `Attributes` field is a Pydantic `Union`. The generic attribute model must remain the last element of that Union, otherwise Pydantic v1 silently coerces every framework into the generic shape and your specialized fields are dropped.
|
||||
</Note>
|
||||
|
||||
## Minimal Working Example
|
||||
|
||||
The following snippet is a complete, valid framework file named `my_framework_1.0_aws.json`, saved at `prowler/compliance/aws/my_framework_1.0_aws.json`. It uses the generic attribute shape for simplicity.
|
||||
|
||||
```json title="prowler/compliance/aws/my_framework_1.0_aws.json"
|
||||
{
  "Framework": "My-Framework",
  "Name": "My Framework 1.0 for AWS",
  "Version": "1.0",
  "Provider": "AWS",
  "Description": "Internal baseline for AWS accounts.",
  "Requirements": [
    {
      "Id": "MF-1.1",
      "Description": "Root account must have multi-factor authentication enabled.",
      "Attributes": [
        {
          "ItemId": "MF-1.1",
          "Section": "Identity and Access Management",
          "SubSection": "Root Account",
          "Service": "iam"
        }
      ],
      "Checks": [
        "iam_root_mfa_enabled",
        "iam_root_hardware_mfa_enabled"
      ]
    },
    {
      "Id": "MF-2.1",
      "Description": "S3 buckets must block public access at the account level.",
      "Attributes": [
        {
          "ItemId": "MF-2.1",
          "Section": "Data Protection",
          "Service": "s3"
        }
      ],
      "Checks": [
        "s3_account_level_public_access_blocks"
      ]
    }
  ]
}
|
||||
```
|
||||
|
||||
## Mapping Checks to Requirements
|
||||
|
||||
Each requirement links to the Prowler checks that, together, produce a PASS or FAIL verdict for that control.
|
||||
|
||||
- **Include every requirement from the source catalog.** The framework file must mirror the full control list, one-to-one. Compliance percentages, dashboards, and exported metadata are computed against the total requirement count, so omitting an unmappable control inflates coverage and misrepresents the framework.
|
||||
- List every check by its canonical identifier, the value of `CheckID` inside the check's `.metadata.json` file.
|
||||
- One requirement can reference multiple checks. The requirement is evaluated as FAIL when any referenced check produces a FAIL finding for a resource in scope.
|
||||
- Leave `Checks` as an empty array when the requirement cannot be automated (see the fragment after this list). The requirement still appears in the report, contributes to the total, and resolves to `MANUAL`. An empty mapping is valid; a missing requirement is not.
|
||||
- Reuse checks across requirements when the same control applies in multiple places. Do not duplicate check logic to match framework structure.
|
||||
- Avoid referencing checks from a different provider. A compliance file is bound to one provider, and cross-provider checks will never match findings in the scan.
|
||||
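For example, a requirement that can only be verified manually might look like this (the identifiers are hypothetical):

```json
{
  "Id": "MF-3.1",
  "Description": "Maintain a documented incident response plan.",
  "Attributes": [
    {
      "ItemId": "MF-3.1",
      "Section": "Governance"
    }
  ],
  "Checks": []
}
```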
|
||||
To discover available checks, run:
|
||||
|
||||
```bash
|
||||
poetry run python prowler-cli.py <provider> --list-checks
|
||||
```
|
||||
|
||||
## Supporting Multiple Providers
|
||||
|
||||
Each compliance file targets a single provider. To cover several providers with the same framework (for example CIS across AWS, Azure, and GCP), ship one JSON file per provider:
|
||||
|
||||
```
|
||||
prowler/compliance/aws/cis_2.0_aws.json
|
||||
prowler/compliance/azure/cis_2.0_azure.json
|
||||
prowler/compliance/gcp/cis_2.0_gcp.json
|
||||
```
|
||||
|
||||
Keep the `Framework` and `Version` values identical across the files so the dispatcher matches them, and change only the `Provider`, `Checks`, and provider-specific metadata.
|
||||
|
||||
The CIS output formatter already supports every provider listed above. For a brand-new framework that spans several providers, add one transformer per provider in `prowler/lib/outputs/compliance/<framework>/` and extend the summary-table dispatcher accordingly. See [Output Formatter](#output-formatter).
|
||||
|
||||
## Output Formatter
|
||||
|
||||
Prowler renders every compliance framework in two forms: a detailed CSV report written to disk, and a summary table printed in the CLI. Both are produced by the output formatter package for the framework.
|
||||
|
||||
For a new framework named `my_framework`, create:
|
||||
|
||||
```
|
||||
prowler/lib/outputs/compliance/my_framework/
|
||||
├── __init__.py
|
||||
├── my_framework.py # CLI summary table dispatcher
|
||||
├── my_framework_aws.py # Per-provider transformer
|
||||
└── models.py # CSV row Pydantic model
|
||||
```
|
||||
|
||||
### Step 1 – Define the CSV Row Model
|
||||
|
||||
In `models.py`, declare a Pydantic v1 model with one field per CSV column. Use existing models such as `AWSCISModel` in `prowler/lib/outputs/compliance/cis/models.py` as the reference. Fields typically include `Provider`, `Description`, `AccountId`, `Region`, `AssessmentDate`, `Requirements_Id`, `Requirements_Description`, one `Requirements_Attributes_*` field per attribute key, plus the finding fields `Status`, `StatusExtended`, `ResourceId`, `ResourceName`, `CheckId`, `Muted`, `Framework`, `Name`.
|
||||
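A minimal sketch of such a model with an abbreviated column set; the class and field names below are illustrative, so mirror `AWSCISModel` for the full list:

```python
from pydantic import BaseModel  # Prowler pins Pydantic v1 for these models


class AWSMyFrameworkModel(BaseModel):
    """One CSV row for the hypothetical My-Framework report."""

    Provider: str
    Description: str
    AccountId: str
    Region: str
    AssessmentDate: str
    Requirements_Id: str
    Requirements_Description: str
    Requirements_Attributes_Section: str
    Status: str
    StatusExtended: str
    ResourceId: str
    ResourceName: str
    CheckId: str
    Muted: bool
```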
|
||||
### Step 2 – Implement the Transformer Class
|
||||
|
||||
In `my_framework_aws.py`, subclass `ComplianceOutput` from `prowler.lib.outputs.compliance.compliance_output` and implement `transform(findings, compliance, compliance_name)`. Iterate over `findings`, match each finding to the requirements it satisfies through `finding.compliance.get(compliance_name, [])`, and append one row per attribute to `self._data`.
|
||||
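A skeleton of the transformer under the assumption that finding fields follow the CIS reference implementation; the attribute names on `finding`, `requirement`, and `attribute` are assumptions to verify against `prowler/lib/outputs/compliance/cis/`:

```python
from prowler.lib.outputs.compliance.compliance_output import ComplianceOutput

# Hypothetical row model from Step 1 (models.py in this package)
from prowler.lib.outputs.compliance.my_framework.models import AWSMyFrameworkModel


class AWSMyFramework(ComplianceOutput):
    def transform(self, findings, compliance, compliance_name):
        for finding in findings:
            # Requirement IDs this finding is mapped to for this framework
            requirement_ids = finding.compliance.get(compliance_name, [])
            for requirement in compliance.Requirements:
                if requirement.Id not in requirement_ids:
                    continue
                # One CSV row per attribute of each matched requirement
                for attribute in requirement.Attributes:
                    self._data.append(
                        AWSMyFrameworkModel(
                            Provider=finding.provider,
                            Description=compliance.Description,
                            AccountId=finding.account_uid,
                            Region=finding.region,
                            AssessmentDate=str(finding.timestamp),
                            Requirements_Id=requirement.Id,
                            Requirements_Description=requirement.Description,
                            Requirements_Attributes_Section=attribute.Section,
                            Status=finding.status,
                            StatusExtended=finding.status_extended,
                            ResourceId=finding.resource_uid,
                            ResourceName=finding.resource_name,
                            CheckId=finding.check_id,
                            Muted=finding.muted,
                        )
                    )
```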
|
||||
### Step 3 – Add the Summary-Table Dispatcher
|
||||
|
||||
In `my_framework.py`, implement `get_my_framework_table(findings, bulk_checks_metadata, compliance_framework, output_filename, output_directory, compliance_overview)` following the pattern in `prowler/lib/outputs/compliance/cis/cis.py`.
|
||||
|
||||
### Step 4 – Register the Framework in the Dispatchers
|
||||
|
||||
- Add the dispatcher call in `prowler/lib/outputs/compliance/compliance.py`, inside `display_compliance_table`, with a branch such as `elif "my_framework" in compliance_framework:` (sketched after this list).
|
||||
- Register the CSV model and transformer in `prowler/lib/outputs/compliance/compliance_output.py` so the CSV file is emitted during the scan.
|
||||
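A sketch of the new branch as it would sit inside `display_compliance_table`; this is a fragment, with the existing branches elided, and `get_my_framework_table` is the hypothetical function from Step 3:

```python
# Inside display_compliance_table in prowler/lib/outputs/compliance/compliance.py
if "cis_" in compliance_framework:
    ...  # existing branches (shape illustrative)
elif "my_framework" in compliance_framework:
    get_my_framework_table(
        findings,
        bulk_checks_metadata,
        compliance_framework,
        output_filename,
        output_directory,
        compliance_overview,
    )
```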
|
||||
<Note>
|
||||
For NIST-style catalogs that use `Generic_Compliance_Requirement_Attribute`, no custom formatter is needed. The generic formatter in `prowler/lib/outputs/compliance/generic/` handles them automatically, provided the JSON validates against the generic attribute schema.
|
||||
</Note>
|
||||
|
||||
## Version Handling
|
||||
|
||||
Prowler matches frameworks by concatenating `Framework` and `Version`. A missing or empty `Version` collapses several frameworks to the same key and breaks CLI filtering with `--compliance`.
|
||||
|
||||
- Always set `Version` to a non-empty string, even for frameworks that rename editions rather than version them. Use the edition identifier (for example `RD2022`, `v2025.10`, `4.0`).
|
||||
- When the source catalog has no version, use the first year of adoption or the release date.
|
||||
- Make sure the version substring embedded in the filename matches `Version`, because the CLI dispatcher reads `compliance_framework.split("_")[1]` to select the correct version, as illustrated below.
|
||||
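For instance, under this convention (a quick illustration, runnable as-is):

```python
compliance_framework = "cis_2.0_aws"
framework, version, provider = compliance_framework.split("_")
# framework == "cis", version == "2.0", provider == "aws"
assert compliance_framework.split("_")[1] == "2.0"
```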
|
||||
## Validating the Framework Locally
|
||||
|
||||
Follow the steps below before opening a pull request.
|
||||
|
||||
### 1. Run the Compliance Model Validator
|
||||
|
||||
```bash
|
||||
poetry run python prowler-cli.py <provider> --list-compliance
|
||||
```
|
||||
|
||||
The framework must appear in the output. A validation error indicates a schema mismatch between the JSON file and the attribute model.
|
||||
|
||||
### 2. Run a Scan Filtered by the Framework
|
||||
|
||||
```bash
|
||||
poetry run python prowler-cli.py <provider> \
|
||||
--compliance <framework>_<version>_<provider> \
|
||||
--log-level ERROR
|
||||
```
|
||||
|
||||
Verify that:
|
||||
|
||||
- Prowler produces a CSV file under `output/compliance/` with the expected name.
|
||||
- The CLI summary table lists every section in the framework.
|
||||
- Findings roll up under the expected requirements.
|
||||
|
||||
### 3. Inspect the CSV Output
|
||||
|
||||
Open the generated CSV and confirm:
|
||||
|
||||
- All columns defined in `models.py` appear.
|
||||
- Every requirement has at least one row per scanned resource.
|
||||
- Values such as `Requirements_Attributes_Section` reflect the JSON content.
|
||||
|
||||
### 4. Verify the Framework in Prowler App
|
||||
|
||||
Launch Prowler App locally (`docker compose up` from the repository root) and run a scan with the new compliance framework. Confirm the compliance page renders the requirements, sections, and status widgets correctly.
|
||||
|
||||
## Testing
|
||||
|
||||
Compliance contributions require two layers of tests.
|
||||
|
||||
- **Schema tests** exercise the Pydantic models. Extend `tests/lib/check/universal_compliance_models_test.py` with a case that loads the new JSON file and asserts the attribute type matches the expected model (see the sketch after this list).
|
||||
- **Output tests** exercise the transformer. Mirror the structure under `tests/lib/outputs/compliance/<framework>/` with fixtures that feed synthetic findings through the transformer and assert the resulting CSV rows.
|
||||
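A hedged sketch of such a schema test; the `Compliance` model name is an assumption to confirm against `prowler/lib/check/compliance_models.py`, and the framework file is the hypothetical example from earlier in this guide:

```python
import json

from prowler.lib.check.compliance_models import (
    Compliance,
    Generic_Compliance_Requirement_Attribute,
)


def test_my_framework_uses_generic_attributes():
    with open("prowler/compliance/aws/my_framework_1.0_aws.json") as f:
        framework = Compliance(**json.load(f))

    assert framework.Framework == "My-Framework"
    for requirement in framework.Requirements:
        for attribute in requirement.Attributes:
            assert isinstance(attribute, Generic_Compliance_Requirement_Attribute)
```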
|
||||
Run the suite with:
|
||||
|
||||
```bash
|
||||
poetry run pytest -n auto tests/lib/check/universal_compliance_models_test.py \
|
||||
tests/lib/outputs/compliance/
|
||||
```
|
||||
|
||||
For guidance on writing Prowler SDK tests, refer to [Unit Testing](/developer-guide/unit-testing).
|
||||
|
||||
## Submitting the Pull Request
|
||||
|
||||
Before opening the pull request:
|
||||
|
||||
1. Run the complete QA pipeline:
|
||||
```bash
|
||||
poetry run pre-commit run --all-files
|
||||
poetry run pytest -n auto
|
||||
```
|
||||
2. Add a changelog entry under the `### 🚀 Added` section of `prowler/CHANGELOG.md`, describing the new framework and the providers it covers.
|
||||
3. Follow the [Pull Request Template](https://github.com/prowler-cloud/prowler/blob/master/.github/pull_request_template.md) and set the PR title using Conventional Commits, for example `feat(compliance): add My Framework 1.0 for AWS`.
|
||||
4. Request review from the compliance codeowners listed in `.github/CODEOWNERS`.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
The following issues are the most common when contributing a compliance framework.
|
||||
|
||||
- **`ValidationError: field required` during scan.** The JSON is missing a required attribute field. Re-check the matching Pydantic model in `prowler/lib/check/compliance_models.py`.
|
||||
- **All attributes collapse to `Generic_Compliance_Requirement_Attribute` values.** The Pydantic `Union` is ordered incorrectly, or the JSON matches only the generic shape. Move the generic model to the last Union position and ensure every required field is present in the JSON.
|
||||
- **`--compliance` filter does not find the framework.** The filename does not match the expected pattern `<framework>_<version>_<provider>.json`, the version is empty, or the file lives outside `prowler/compliance/<provider>/`.
|
||||
- **CLI summary table is empty but the CSV is populated.** The dispatcher branch in `prowler/lib/outputs/compliance/compliance.py` is missing or its substring match does not catch the framework key.
|
||||
- **CSV file is missing after the scan.** The transformer class is not registered in `prowler/lib/outputs/compliance/compliance_output.py`, or `transform()` raises silently. Run the scan with `--log-level DEBUG`.
|
||||
- **Findings do not roll up under a requirement.** A check listed in `Checks` either does not exist for that provider or is spelled incorrectly. Run `--list-checks | grep <check_name>` to confirm.
|
||||
|
||||
## Reference Examples
|
||||
|
||||
Use the following files as templates when modeling a new contribution.
|
||||
|
||||
- `prowler/compliance/aws/cis_2.0_aws.json` – CIS attribute shape.
|
||||
- `prowler/compliance/aws/nist_800_53_revision_5_aws.json` – Generic attribute shape.
|
||||
- `prowler/compliance/aws/ccc_aws.json` – CCC attribute shape.
|
||||
- `prowler/compliance/azure/ens_rd2022_azure.json` – ENS attribute shape.
|
||||
- `prowler/lib/check/compliance_models.py` – Canonical Pydantic schemas.
|
||||
- `prowler/lib/outputs/compliance/cis/` – Reference implementation of a multi-provider output formatter.
|
||||
- `prowler/lib/outputs/compliance/generic/` – Reference implementation of a generic output formatter.
|
||||
Finally, to produce a proper output file for your reports, the framework data model has to be created in `prowler/lib/outputs/models.py`, along with the CLI table output in `prowler/lib/outputs/compliance.py`. If you create a new CSV model, you also need to add a new conditional in `prowler/lib/outputs/file_descriptors.py`.
|
||||
|
||||
@@ -119,7 +119,6 @@
|
||||
"user-guide/tutorials/prowler-app-multi-tenant",
|
||||
"user-guide/tutorials/prowler-app-api-keys",
|
||||
"user-guide/tutorials/prowler-app-import-findings",
|
||||
"user-guide/tutorials/prowler-app-alerts",
|
||||
{
|
||||
"group": "Mutelist",
|
||||
"expanded": true,
|
||||
@@ -177,6 +176,7 @@
|
||||
"pages": [
|
||||
"user-guide/cli/tutorials/misc",
|
||||
"user-guide/cli/tutorials/reporting",
|
||||
"user-guide/cli/tutorials/compliance",
|
||||
"user-guide/cli/tutorials/dashboard",
|
||||
"user-guide/cli/tutorials/configuration_file",
|
||||
"user-guide/cli/tutorials/logging",
|
||||
@@ -338,7 +338,6 @@
|
||||
{
|
||||
"group": "Compliance",
|
||||
"pages": [
|
||||
"user-guide/compliance/tutorials/compliance",
|
||||
"user-guide/compliance/tutorials/threatscore"
|
||||
]
|
||||
},
|
||||
@@ -366,8 +365,7 @@
|
||||
"developer-guide/security-compliance-framework",
|
||||
"developer-guide/lighthouse-architecture",
|
||||
"developer-guide/mcp-server",
|
||||
"developer-guide/ai-skills",
|
||||
"developer-guide/prowler-studio"
|
||||
"developer-guide/ai-skills"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -504,10 +502,6 @@
|
||||
}
|
||||
},
|
||||
"redirects": [
|
||||
{
|
||||
"source": "/user-guide/cli/tutorials/compliance",
|
||||
"destination": "/user-guide/compliance/tutorials/compliance"
|
||||
},
|
||||
{
|
||||
"source": "/projects/prowler-open-source/en/latest/tutorials/prowler-app-lighthouse",
|
||||
"destination": "/user-guide/tutorials/prowler-app-lighthouse"
|
||||
|
||||
@@ -32,11 +32,11 @@ Access Prowler App by logging in with **email and password**.
|
||||
|
||||
<img src="/images/log-in.png" alt="Log In" width="285" />
|
||||
|
||||
## Add Provider
|
||||
## Add Cloud Provider
|
||||
|
||||
Configure a provider for scanning:
|
||||
Configure a cloud provider for scanning:
|
||||
|
||||
1. Navigate to `Settings > Providers` and click `Add Provider`.
|
||||
1. Navigate to `Settings > Cloud Providers` and click `Add Account`.
|
||||
2. Select the cloud provider.
|
||||
3. Enter the provider's identifier (Optional: Add an alias):
|
||||
- **AWS**: Account ID
|
||||
|
||||
@@ -121,8 +121,8 @@ To update the environment file:
|
||||
Edit the `.env` file and change version values:
|
||||
|
||||
```env
|
||||
PROWLER_UI_VERSION="5.26.1"
|
||||
PROWLER_API_VERSION="5.26.1"
|
||||
PROWLER_UI_VERSION="5.25.1"
|
||||
PROWLER_API_VERSION="5.25.1"
|
||||
```
|
||||
|
||||
<Note>
|
||||
|
||||
|
(12 image files removed in this commit)
@@ -1,17 +1,12 @@
|
||||
export const VersionBadge = ({ version }) => {
|
||||
return (
|
||||
<a
|
||||
href={`https://github.com/prowler-cloud/prowler/releases/tag/${version}`}
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
className="version-badge-link"
|
||||
>
|
||||
<span className="version-badge-container">
|
||||
<span className="version-badge">
|
||||
<span className="version-badge-label">Added in:</span>
|
||||
<span className="version-badge-version">{version}</span>
|
||||
</span>
|
||||
</span>
|
||||
</a>
|
||||
<code className="version-badge-container">
|
||||
<p className="version-badge">
|
||||
<span className="version-badge-label">Added in:</span>
|
||||
<code className="version-badge-version">{version}</code>
|
||||
</p>
|
||||
</code>
|
||||
|
||||
|
||||
);
|
||||
};
|
||||
|
||||
@@ -1,21 +1,4 @@
|
||||
/* Version Badge Styling */
|
||||
.version-badge-link,
|
||||
.version-badge-link:hover,
|
||||
.version-badge-link:focus,
|
||||
.version-badge-link:active,
|
||||
.version-badge-link:visited {
|
||||
display: inline-block;
|
||||
text-decoration: none !important;
|
||||
background-image: none !important;
|
||||
border-bottom: none !important;
|
||||
color: inherit;
|
||||
transition: opacity 0.15s ease-in-out;
|
||||
}
|
||||
|
||||
.version-badge-link:hover {
|
||||
opacity: 0.85;
|
||||
}
|
||||
|
||||
.version-badge-container {
|
||||
display: inline-block;
|
||||
margin: 0 0 1rem 0;
|
||||
|
||||
@@ -159,40 +159,6 @@ When these environment variables are set, the API will use them directly instead
|
||||
A fix addressing this permission issue is being evaluated in [PR #9953](https://github.com/prowler-cloud/prowler/pull/9953).
|
||||
</Note>
|
||||
|
||||
### Scan Stuck in Executing State After Worker Crash
|
||||
|
||||
When running Prowler App via Docker Compose, a scan may remain indefinitely in the `executing` state if the worker process crashes (for example, due to an Out of Memory condition) before it can update the scan status. Since it is not currently possible to cancel a scan in `executing` state through the UI, the workaround is to manually update the scan record in the database.
|
||||
|
||||
**Root Cause:**
|
||||
|
||||
The Celery worker process terminates unexpectedly (OOM, node failure, etc.) before transitioning the scan state to `completed` or `failed`. The scan record remains in `executing` with no active process to advance it.
|
||||
|
||||
**Solution:**
|
||||
|
||||
Connect to the database using the `prowler_admin` user. Due to Row-Level Security (RLS), the default database user cannot see scan records — you must use `prowler_admin`:
|
||||
|
||||
```bash
|
||||
psql -U prowler_admin -d prowler_db
|
||||
```
|
||||
|
||||
Identify the stuck scan by filtering for scans in `executing` state:
|
||||
|
||||
```sql
|
||||
SELECT id, name, state, started_at FROM scans WHERE state = 'executing';
|
||||
```
|
||||
|
||||
Update the scan state to `failed` using the scan ID:
|
||||
|
||||
```sql
|
||||
UPDATE scans SET state = 'failed' WHERE id = '<scan-id>';
|
||||
```
|
||||
|
||||
After this change, the scan will appear as failed in the UI and you can launch a new scan.
|
||||
|
||||
<Note>
|
||||
A feature to cancel executing scans directly from the UI is being tracked in [GitHub Issue #6893](https://github.com/prowler-cloud/prowler/issues/6893).
|
||||
</Note>
|
||||
|
||||
### SAML/OAuth ACS URL Incorrect When Running Behind a Proxy or Load Balancer
|
||||
|
||||
See [GitHub Issue #9724](https://github.com/prowler-cloud/prowler/issues/9724) for more details.
|
||||
|
||||
@@ -0,0 +1,80 @@
|
||||
---
|
||||
title: 'Compliance'
|
||||
---
|
||||
|
||||
Prowler allows you to execute checks based on requirements defined in compliance frameworks. By default, it runs all checks and gives you an overview of the status of each compliance framework:
|
||||
|
||||
<img src="/images/cli/compliance/compliance.png" />
|
||||
|
||||
You can find CSVs containing detailed compliance results in the compliance folder within Prowler's output folder.
|
||||
|
||||
## Execute Prowler based on Compliance Frameworks
|
||||
|
||||
Prowler can analyze your environment against a specific compliance framework and provide more detail. To do so, use the `--compliance` option:
|
||||
|
||||
```sh
|
||||
prowler <provider> --compliance <compliance_framework>
|
||||
```
|
||||
|
||||
Standard results are shown, along with the framework information, as in the sample below for CIS AWS 2.0. A CSV file with the details is generated as well.
|
||||
|
||||
<img src="/images/cli/compliance/compliance-cis-sample1.png" />
|
||||
|
||||
<Note>
|
||||
**If Prowler can't find a resource related to a check from a compliance requirement, that requirement won't appear in the output.**
|
||||
</Note>
|
||||
|
||||
## List Available Compliance Frameworks
|
||||
|
||||
To see which compliance frameworks are covered by Prowler, use the `--list-compliance` option:
|
||||
|
||||
```sh
|
||||
prowler <provider> --list-compliance
|
||||
```
|
||||
|
||||
Or you can visit [Prowler Hub](https://hub.prowler.com/compliance).
|
||||
|
||||
## List Requirements of Compliance Frameworks
|
||||
To list requirements for a compliance framework, use the `--list-compliance-requirements` option:
|
||||
|
||||
```sh
|
||||
prowler <provider> --list-compliance-requirements <compliance_framework(s)>
|
||||
```
|
||||
|
||||
Example for the first requirements of CIS 1.5 for AWS:
|
||||
|
||||
```
|
||||
Listing CIS 1.5 AWS Compliance Requirements:
|
||||
|
||||
Requirement Id: 1.1
|
||||
- Description: Maintain current contact details
|
||||
- Checks:
|
||||
account_maintain_current_contact_details
|
||||
|
||||
Requirement Id: 1.2
|
||||
- Description: Ensure security contact information is registered
|
||||
- Checks:
|
||||
account_security_contact_information_is_registered
|
||||
|
||||
Requirement Id: 1.3
|
||||
- Description: Ensure security questions are registered in the AWS account
|
||||
- Checks:
|
||||
account_security_questions_are_registered_in_the_aws_account
|
||||
|
||||
Requirement Id: 1.4
|
||||
- Description: Ensure no 'root' user account access key exists
|
||||
- Checks:
|
||||
iam_no_root_access_key
|
||||
|
||||
Requirement Id: 1.5
|
||||
- Description: Ensure MFA is enabled for the 'root' user account
|
||||
- Checks:
|
||||
iam_root_mfa_enabled
|
||||
|
||||
[redacted]
|
||||
|
||||
```
|
||||
|
||||
## Create and Contribute Other Security Frameworks
|
||||
|
||||
This information is part of the Developer Guide and can be found [here](/developer-guide/security-compliance-framework).
|
||||
@@ -56,7 +56,6 @@ The following list includes all the AWS checks with configurable variables that
|
||||
| `elb_is_in_multiple_az` | `elb_min_azs` | Integer |
|
||||
| `elbv2_is_in_multiple_az` | `elbv2_min_azs` | Integer |
|
||||
| `guardduty_is_enabled` | `mute_non_default_regions` | Boolean |
|
||||
| `iam_user_access_not_stale_to_sagemaker` | `max_unused_sagemaker_access_days` | Integer |
|
||||
| `iam_user_accesskey_unused` | `max_unused_access_keys_days` | Integer |
|
||||
| `iam_user_console_access_unused` | `max_console_access_days` | Integer |
|
||||
| `organizations_delegated_administrators` | `organizations_trusted_delegated_administrators` | List of Strings |
|
||||
@@ -187,8 +186,6 @@ aws:
|
||||
max_unused_access_keys_days: 45
|
||||
# aws.iam_user_console_access_unused --> CIS recommends 45 days
|
||||
max_console_access_days: 45
|
||||
# aws.iam_user_access_not_stale_to_sagemaker --> default 90 days
|
||||
max_unused_sagemaker_access_days: 90
|
||||
|
||||
# AWS EC2 Configuration
|
||||
# aws.ec2_elastic_ip_shodan
|
||||
|
||||
@@ -0,0 +1,47 @@
|
||||
---
|
||||
title: 'Prowler Check Kreator'
|
||||
---
|
||||
|
||||
<Note>
|
||||
Currently, this tool is only available for creating checks for the AWS provider.
|
||||
|
||||
</Note>
|
||||
<Note>
|
||||
If you are looking for a way to create new checks for all the supported providers, you can use [Prowler Studio](https://github.com/prowler-cloud/prowler-studio), an AI-powered toolkit for generating and managing security checks for Prowler (an improved version of the Check Kreator).
|
||||
|
||||
</Note>
|
||||
## Introduction
|
||||
|
||||
**Prowler Check Kreator** is a utility designed to streamline the creation of new checks for Prowler. This tool generates all necessary files required to add a new check to the Prowler repository. Specifically, it creates:
|
||||
|
||||
- A dedicated folder for the check.
|
||||
- The main check script.
|
||||
- A metadata file with essential details.
|
||||
- A folder and file structure for testing the check.
|
||||
|
||||
## Usage
|
||||
|
||||
To use the tool, execute the main script with the following command:
|
||||
|
||||
```bash
|
||||
python util/prowler_check_kreator/prowler_check_kreator.py <prowler_provider> <check_name>
|
||||
```
|
||||
|
||||
Parameters:
|
||||
|
||||
- `<prowler_provider>`: Currently only AWS is supported.
|
||||
- `<check_name>`: The name you wish to assign to the new check.
|
||||
|
||||
## AI integration
|
||||
|
||||
This tool optionally integrates AI to assist in generating the check code and metadata file content. When AI assistance is chosen, the tool uses [Gemini](https://gemini.google.com/) to produce preliminary code and metadata.
|
||||
|
||||
<Note>
|
||||
For this feature to work, you must have the library `google-generativeai` installed in your Python environment.
|
||||
|
||||
</Note>
|
||||
<Warning>
|
||||
AI-generated code and metadata might contain errors or require adjustments to align with specific Prowler requirements. Carefully review all AI-generated content before committing.
|
||||
|
||||
</Warning>
|
||||
To enable AI assistance, simply confirm when prompted by the tool. Additionally, ensure that the `GEMINI_API_KEY` environment variable is set with a valid Gemini API key. For instructions on obtaining your API key, refer to the [Gemini documentation](https://ai.google.dev/gemini-api/docs/api-key).
|
||||
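For example (the check name below is a placeholder):

```bash
export GEMINI_API_KEY="<your-gemini-api-key>"
python util/prowler_check_kreator/prowler_check_kreator.py aws s3_bucket_example_check
```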
@@ -1,267 +0,0 @@
|
||||
---
|
||||
title: 'Compliance'
|
||||
description: 'Run security checks against compliance frameworks, review posture across providers, and download CSV or PDF reports from Prowler Cloud, Prowler App, and Prowler CLI.'
|
||||
---
|
||||
|
||||
Prowler maps every security check to one or more industry-standard compliance frameworks, so a single scan produces both technical findings and framework-aligned evidence. The same evaluation runs identically whether scans are launched from Prowler Cloud, Prowler App, or Prowler CLI.
|
||||
|
||||
Out of the box, Prowler covers frameworks such as CIS Benchmarks, NIST 800-53, NIST CSF, NIS2, ENS RD2022, ISO 27001, PCI-DSS, SOC 2, GDPR, HIPAA, AWS Well-Architected, BSI C5, CSA CCM, MITRE ATT&CK, KISA ISMS-P, FedRAMP, and Prowler ThreatScore. The full catalog is available at [Prowler Hub](https://hub.prowler.com/compliance).
|
||||
|
||||
<Note>
|
||||
For the unified compliance score methodology used across frameworks, see [Prowler ThreatScore Documentation](/user-guide/compliance/tutorials/threatscore).
|
||||
</Note>
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Prowler Cloud" icon="cloud" href="#prowler-cloud">
|
||||
Review compliance posture using Prowler Cloud
|
||||
</Card>
|
||||
<Card title="Prowler CLI" icon="terminal" href="#prowler-cli">
|
||||
Run compliance scans using Prowler CLI
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
## Prowler Cloud
|
||||
|
||||
The Compliance section in Prowler Cloud and Prowler App centralizes compliance posture across every connected provider. It aggregates scan results, surfaces Prowler ThreatScore, and exposes detailed requirement-level evidence for each supported framework.
|
||||
|
||||
### Accessing the Compliance Section
|
||||
|
||||
To open the compliance overview, follow these steps:
|
||||
|
||||
1. Sign in to Prowler Cloud at [cloud.prowler.com](https://cloud.prowler.com/sign-in) or to a self-hosted Prowler App instance.
|
||||
2. Select **Compliance** from the left navigation.
|
||||
|
||||
The page lists every framework evaluated by the most recent completed scan of the selected provider.
|
||||
|
||||
<img src="/images/compliance/prowler-app-compliance-overview.png" alt="Compliance overview page in Prowler Cloud and App showing filters, the Prowler ThreatScore card, and the framework grid" width="900" />
|
||||
|
||||
<Note>
|
||||
Compliance results require at least one completed scan. If no scan has finished yet, Prowler Cloud and App display a notice prompting to launch or wait for a scan to complete.
|
||||
</Note>
|
||||
|
||||
### Filtering Compliance Results
|
||||
|
||||
The filters bar at the top of the overview controls which scan and which regions feed every card on the page.
|
||||
|
||||
#### Scan Selector
|
||||
|
||||
The scan selector lists completed scans across all connected providers. Each entry includes the provider type, alias, and completion timestamp. Selecting a scan updates the entire page, including ThreatScore and every framework card.
|
||||
|
||||
#### Region Filter
|
||||
|
||||
The region multi-select narrows results to one or more regions detected in the selected scan. Use it to evaluate compliance posture for a specific geography or account boundary. The filter applies to:
|
||||
|
||||
* The framework grid scores and pass/fail counts.
|
||||
* The detailed requirement view inside each framework.
|
||||
|
||||
<Note>
|
||||
Region filters apply only to providers that report a region attribute (for example, AWS, Azure, and Google Cloud). Providers without regions ignore the filter.
|
||||
</Note>
|
||||
|
||||
#### Clearing Filters
|
||||
|
||||
Select **Clear filters** to reset both the region filter and any other applied filter to its default state. The scan selector is preserved.
|
||||
|
||||
### Reviewing the Prowler ThreatScore Card
When the selected scan includes Prowler ThreatScore data, a dedicated card appears at the top of the overview, showing:

* The overall ThreatScore (0–100) with a color-coded indicator.
* A progress bar reflecting current posture.
* Per-pillar bars for IAM, Attack Surface, and Logging and Monitoring.
<img src="/images/compliance/prowler-app-compliance-threatscore-card.png" alt="Prowler ThreatScore badge on the Compliance overview showing the overall score and per-pillar bars" width="900" />
Selecting the card opens the ThreatScore framework detail page, covered in [Working With the Framework Detail Page](#working-with-the-framework-detail-page).

For a complete explanation of the methodology, formula, and weighting, see [Prowler ThreatScore Documentation](/user-guide/compliance/tutorials/threatscore).
### Exploring the Framework Grid
Below ThreatScore, the framework grid shows one card per supported compliance framework. Each card includes:

* **Framework logo and name:** Identifies the standard (CIS, NIST, ENS, ISO 27001, PCI-DSS, SOC 2, NIS2, CSA CCM, MITRE ATT&CK, and more).
* **Version:** Indicates the framework version applied to the scan.
* **Score:** The percentage of passing requirements over the total evaluated.
* **Passing Requirements:** A `passed / total` counter for additional context.
* **Download dropdown:** Quick access to the CSV report and, when supported, the PDF report.
<img src="/images/compliance/prowler-app-compliance-card-download.png" alt="Download dropdown on a framework card showing CSV and PDF report options" width="500" />
Select any card to open the framework detail page.

<Note>
Score color coding follows three thresholds: red for severely low compliance, amber for partial compliance, and green for healthy posture. Hover over the score for the exact percentage.
</Note>
### Working With the Framework Detail Page
The detail page provides everything needed to evaluate a single framework: aggregate metrics, top failure sections, and a requirement-by-requirement view.

#### Header, Summary Cards, and Download Actions

The header shows the framework name, version, the provider scan being reviewed, and CSV / PDF download buttons. Below the header, summary cards condense the framework state at a glance:

* **Requirements Status:** Donut chart with `Pass`, `Fail`, and `Manual` counts plus the total number of requirements.
* **Top Failed Sections:** Ranks the sections or pillars with the highest number of failing requirements.
* **ThreatScore Breakdown:** Appears only on the ThreatScore framework. It shows the overall score and per-pillar scores aligned with the ThreatScore pillars (IAM, Attack Surface, Logging and Monitoring, Encryption).

The same layout applies to every compliance framework, with one exception: ThreatScore is the only framework that includes the extra Breakdown card on the left, while for any other framework the Requirements Status and Top Failed Sections cards span the full row.
<img src="/images/compliance/prowler-app-compliance-threatscore-detail.png" alt="Prowler ThreatScore detail page including the extra Breakdown card alongside Requirements Status and Top Failed Sections" width="900" />
<img src="/images/compliance/prowler-app-compliance-detail-header.png" alt="CIS framework detail page showing only the Requirements Status donut and the Top Failed Sections card, without the ThreatScore Breakdown" width="900" />
#### Requirements Accordion
Below the summary cards, an accordion organizes every requirement of the framework. Expand a section to see:

* **Requirement ID and title:** Reflect the official identifier from the framework.
* **Pass / Fail / Manual badges:** Indicate the status of each requirement based on the underlying checks.
* **Custom details panel:** Opens additional context tailored to the framework. For frameworks with custom layouts, the panel surfaces fields such as control objectives, severity, attack tactics, regulatory references, or required evidence.

Select a requirement to open the detail panel and review the failing checks, the resources affected, and remediation guidance.
<img src="/images/compliance/prowler-app-compliance-requirements-accordion.png" alt="Expanded CIS requirement showing description, rationale, remediation procedure, audit procedure, profile and assessment tags, references, and the underlying check" width="900" />
##### Frameworks With Custom Detail Layouts

Several frameworks include enriched detail panels that highlight fields specific to the standard:

* ASD Essential Eight
* AWS Well-Architected Framework
* BSI C5
* Cloud Controls Matrix (CSA CCM)
* CIS Benchmarks
* CCC (Common Cloud Controls)
* ENS RD2022
* ISO 27001
* KISA ISMS-P
* MITRE ATT&CK
* Prowler ThreatScore

Frameworks without a custom layout fall back to the generic details panel, which still exposes the official requirement metadata captured by Prowler.
### Downloading Compliance Reports
Prowler Cloud and App expose two formats:

* **CSV report:** Every requirement, every check, and every finding for the selected scan and filters. Available for all supported frameworks.
* **PDF report:** Curated executive-style report. Currently supported for Prowler ThreatScore, ENS RD2022, NIS2, and CSA CCM. Additional PDF reports will be added in subsequent Prowler releases.

<Note>
**The PDF detail section is capped at the first 100 failed findings per check.** The PDF is intended as an executive/auditor document, not a raw data dump: when a check produces more than 100 failed findings, the report renders the first 100 and shows a banner pointing the reader to the CSV or JSON export for the complete list. The CSV and the ZIP scan output are never truncated.

The cap is configurable per deployment via the `DJANGO_PDF_MAX_FINDINGS_PER_CHECK` environment variable on the Prowler API workers; set it to `0` to disable truncation entirely. The default value of `100` keeps the PDF readable and bounded in size on enterprise-scale scans (hundreds of thousands of findings) without affecting smaller scans, where the cap is rarely reached.

Only **failed** findings are rendered in the detail section. PASS findings for the same check are excluded at query time. The PDF surfaces what needs attention, and the CSV/JSON exports surface everything for forensic review.
</Note>
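As a minimal sketch, the cap can be adjusted by setting the variable in the environment of the API worker process. The deployment mechanism (compose file, Helm values, systemd unit) is an assumption here and varies per install:

```sh
# Sketch only: set in the environment of the Prowler API worker process.
# How that environment is injected depends on your deployment and is an
# assumption, not a documented Prowler layout.
export DJANGO_PDF_MAX_FINDINGS_PER_CHECK=250   # raise the per-check cap
# or disable truncation entirely:
export DJANGO_PDF_MAX_FINDINGS_PER_CHECK=0
```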
#### Downloading From the Detail Page
Inside any framework detail page, the **CSV** and **PDF** buttons in the header trigger the same downloads as the overview dropdown. The PDF button only appears for frameworks that support it.

<img src="/images/compliance/prowler-app-compliance-detail-download.png" alt="Top of a framework detail page showing the CSV and PDF download buttons in the header" width="900" />

<Note>
Region filters disable the per-card download dropdown to avoid generating partial reports. Open the framework detail page when downloads scoped to a region are required, or remove the region filter to download the full report.
</Note>
#### Downloading the Full Scan Output
To export every framework, finding, and resource at once, use the **Scan Jobs** section instead. The ZIP archive contains the CSV, JSON-OCSF, and HTML reports plus a `compliance/` subfolder with one CSV per framework. See [Prowler App — Getting Started](/user-guide/tutorials/prowler-app) for details.
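Once downloaded, the per-framework CSVs can be inspected straight from the archive; a small sketch (the archive filename is a placeholder):

```sh
# List the per-framework compliance CSVs inside the downloaded archive.
unzip -l scan-output.zip | grep 'compliance/'
# Extract only the compliance subfolder.
unzip scan-output.zip 'compliance/*' -d scan-output
```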
### API Access
Every report available in the UI is also reachable through the Prowler API. The following endpoints are the most relevant:

* [Retrieve a scan compliance report as CSV](https://api.prowler.com/api/v1/docs#tag/Scan/operation/scans_compliance_retrieve)
* [Download a complete scan output (ZIP)](https://api.prowler.com/api/v1/docs#tag/Scan/operation/scans_report_retrieve)

Use the API to integrate compliance evidence into ticketing systems, executive dashboards, or downstream pipelines.
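A minimal sketch of pulling both reports with `curl`; the endpoint paths and the bearer-token header shown here are assumptions inferred from the linked operation names, so treat the API reference above as authoritative:

```sh
# Sketch only: paths and auth header are assumptions; consult the linked
# API docs for the authoritative request format.
API="https://api.prowler.com/api/v1"
TOKEN="<your-api-token>"   # placeholder
SCAN_ID="<scan-uuid>"      # placeholder

# Compliance report (CSV) for one scan
curl -sL -H "Authorization: Bearer ${TOKEN}" \
  "${API}/scans/${SCAN_ID}/compliance" -o compliance.csv

# Complete scan output (ZIP)
curl -sL -H "Authorization: Bearer ${TOKEN}" \
  "${API}/scans/${SCAN_ID}/report" -o scan-output.zip
```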
## Prowler CLI
Prowler CLI evaluates the same compliance frameworks as Prowler Cloud and App, and produces detailed CSV outputs alongside the standard scan results. By default, it runs every supported framework and prints a status summary at the end of the scan:
<img src="/images/cli/compliance/compliance.png" />
Detailed compliance results are stored as CSV files under the `compliance/` subfolder of Prowler's output directory.
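For example, after a scan the folder might look like this; the directory location and file naming pattern shown are illustrative, not guaranteed:

```sh
# Illustrative listing; the default output directory and exact file
# names depend on the Prowler version and configuration.
ls output/compliance/
# prowler-output-123456789012-20240101120000_cis_2.0_aws.csv
# prowler-output-123456789012-20240101120000_nist_800_53_revision_5_aws.csv
# ...
```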
### Scan a Specific Compliance Framework
To scope a scan to a single framework and get the framework-specific summary, use the `--compliance` option:
```sh
prowler <provider> --compliance <compliance_framework>
```
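For example, to run only the CIS 2.0 benchmark against an AWS account; the framework ID `cis_2.0_aws` is shown as an assumption, so confirm the exact ID with `--list-compliance`:

```sh
# Assumed framework ID; verify with: prowler aws --list-compliance
prowler aws --compliance cis_2.0_aws
```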
Standard results plus the framework breakdown are printed to the terminal. A dedicated CSV is also generated under the `compliance/` output folder. Sample output for CIS AWS 2.0:
<img src="/images/cli/compliance/compliance-cis-sample1.png" />
<Note>
If Prowler cannot find a resource related to a check from a compliance requirement, that requirement is omitted from the output.
</Note>
### List Available Compliance Frameworks
To see which compliance frameworks are covered by a given provider, use the `--list-compliance` option:
```sh
prowler <provider> --list-compliance
```
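The list can be long, so piping it through standard tools helps narrow it down; a quick sketch:

```sh
# Show only CIS-related frameworks available for AWS.
prowler aws --list-compliance | grep -i cis
```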
The full catalog is also browsable at [Prowler Hub](https://hub.prowler.com/compliance).
### List Requirements of a Compliance Framework
To inspect the requirements that compose a specific framework, use the `--list-compliance-requirements` option:
```sh
prowler <provider> --list-compliance-requirements <compliance_framework(s)>
```
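For instance, to produce the CIS 1.5 AWS listing shown below; `cis_1.5_aws` is the assumed framework ID, so confirm it with `--list-compliance`:

```sh
# Assumed framework ID; verify with: prowler aws --list-compliance
prowler aws --list-compliance-requirements cis_1.5_aws
```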
Sample output for the first requirements of CIS 1.5 for AWS:
```
Listing CIS 1.5 AWS Compliance Requirements:

Requirement Id: 1.1
    - Description: Maintain current contact details
    - Checks:
        account_maintain_current_contact_details

Requirement Id: 1.2
    - Description: Ensure security contact information is registered
    - Checks:
        account_security_contact_information_is_registered

Requirement Id: 1.3
    - Description: Ensure security questions are registered in the AWS account
    - Checks:
        account_security_questions_are_registered_in_the_aws_account

Requirement Id: 1.4
    - Description: Ensure no 'root' user account access key exists
    - Checks:
        iam_no_root_access_key

Requirement Id: 1.5
    - Description: Ensure MFA is enabled for the 'root' user account
    - Checks:
        iam_root_mfa_enabled

[redacted]
```
## Contributing New Compliance Frameworks
To request a new framework or contribute one, see [Creating a New Security Compliance Framework in Prowler](/developer-guide/security-compliance-framework). The developer guide covers the Pydantic schema, JSON catalog, output formatter, and PR submission steps required to ship a new framework end to end.
## Related Documentation
* [Prowler ThreatScore Documentation](/user-guide/compliance/tutorials/threatscore)
* [Creating a New Security Compliance Framework in Prowler](/developer-guide/security-compliance-framework)
* [Prowler App — Getting Started](/user-guide/tutorials/prowler-app)
@@ -40,13 +40,13 @@ Before you begin, make sure you have:
### Step 2: Access Prowler Cloud

1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app)
2. Go to "Configuration" > "Providers"
2. Go to "Configuration" > "Cloud Providers"




3. Click "Add Provider"
3. Click "Add Cloud Provider"




4. Select "Alibaba Cloud"

@@ -19,13 +19,13 @@ title: 'Getting Started With AWS on Prowler'
### Step 2: Access Prowler Cloud

1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app)
2. Go to "Configuration" > "Providers"
2. Go to "Configuration" > "Cloud Providers"




3. Click "Add Provider"
3. Click "Add Cloud Provider"




4. Select "Amazon Web Services"

@@ -4,7 +4,7 @@ title: 'Check Mapping Prowler v4/v3 to v2'

Prowler v3 and v4 introduce distinct identifiers while preserving the checks originally implemented in v2. This change was made because, in previous versions, check names were primarily derived from the CIS Benchmark for AWS. Starting with v3 and v4, all checks are independent of any security framework and have unique names and IDs.

For more details on the updated compliance implementation in Prowler v4 and v3, refer to the [Compliance](/user-guide/compliance/tutorials/compliance) section.
For more details on the updated compliance implementation in Prowler v4 and v3, refer to the [Compliance](/user-guide/cli/tutorials/compliance) section.

```
checks_v4_v3_to_v2_mapping = {

@@ -35,13 +35,13 @@ For detailed instructions on how to create the Service Principal and configure p
### Step 2: Access Prowler Cloud

1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app)
2. Navigate to `Configuration` > `Providers`
2. Navigate to `Configuration` > `Cloud Providers`




3. Click on `Add Provider`
3. Click on `Add Cloud Provider`




4. Select `Microsoft Azure`

@@ -42,13 +42,13 @@ The Account ID is a 32-character hexadecimal string (e.g., `372e67954025e0ba6aaa
### Step 2: Open Prowler Cloud

1. Go to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app).
2. Navigate to "Configuration" > "Providers".
2. Navigate to "Configuration" > "Cloud Providers".




3. Click "Add Provider".
3. Click "Add Cloud Provider".




4. Select "Cloudflare".

@@ -14,13 +14,13 @@ title: 'Getting Started With GCP on Prowler'
### Step 2: Access Prowler Cloud

1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app)
2. Go to "Configuration" > "Providers"
2. Go to "Configuration" > "Cloud Providers"




3. Click "Add Provider"
3. Click "Add Cloud Provider"




4. Select "Google Cloud Platform"

@@ -275,7 +275,7 @@ For step-by-step setup instructions for Prowler Cloud, see the [Getting Started

### Using Personal Access Token

1. In Prowler Cloud, navigate to **Configuration** > **Providers** > **Add Provider** > **GitHub**.
1. In Prowler Cloud, navigate to **Configuration** > **Cloud Providers** > **Add Cloud Provider** > **GitHub**.

2. Enter your GitHub Account ID (username or organization name).

@@ -49,13 +49,13 @@ Before adding GitHub to Prowler Cloud/App, ensure you have:
### Step 1: Access Prowler Cloud/App

1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app)
2. Go to **Configuration** → **Providers**
2. Go to **Configuration** → **Cloud Providers**




3. Click **Add Provider**
3. Click **Add Cloud Provider**




4. Select **GitHub**