Compare commits

...

46 Commits

Author SHA1 Message Date
prowler-bot 85e8f9e0c7 feat(aws): update regions for AWS services 2026-04-27 09:42:31 +00:00
Adrián Peña fb6da427f8 fix(api): prevent /tmp saturation from compliance report generation (#10874) 2026-04-27 11:05:34 +02:00
Adrián Peña 65fd3335d3 fix(api): reaggregate resource inventory and attack surface after muting findings (#10843) 2026-04-27 11:03:28 +02:00
César Arroba d6288be472 chore(ci): align sdk-bump-version PR titles with other bump workflows (#10897) 2026-04-27 10:20:56 +02:00
César Arroba 0cddb71d1c fix(ci): simplify docs-bump-version to a single master-only PR (#10896) 2026-04-27 10:20:47 +02:00
Andoni Alonso af2930130c fix(check): break circular import between config and check.utils (#10895) 2026-04-27 10:11:50 +02:00
Andoni Alonso b668770480 feat(github): add zizmor GitHub Actions scanning as a service of the GitHub provider (#10607) 2026-04-27 08:55:07 +02:00
Andoni Alonso f31c5717e9 chore(devex): add worktrunk worktree bootstrap config (#10867) 2026-04-27 08:45:04 +02:00
Alejandro Bailo 4788dcade2 fix(ui): polish shared table pagination and provider spacing (#10891) 2026-04-24 15:40:40 +02:00
Alejandro Bailo 22a6cc9e73 fix(ui): align resources filters and resource drawer behavior (#10861) 2026-04-24 15:03:47 +02:00
Pablo Fernandez Guerra (PFE) 06bb382f8e chore(ui): add knip for dead code detection (#10654)
Co-authored-by: Pablo F.G <pablo.fernandez@prowler.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-24 14:45:59 +02:00
Pedro Martín d4ece2b43e feat(sdk): add multi-provider compliance framework JSONs (#10300)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2026-04-24 13:27:31 +02:00
César Arroba b97d68fbd5 fix(ci): also gate cache-dependency-path on enable-cache in setup-python-poetry (#10885) 2026-04-24 12:38:13 +02:00
César Arroba ca79300440 fix(ci): poetry cache post-step failure on release workflows (#10881) 2026-04-24 11:57:30 +02:00
Pepe Fagoaga 7a0e107617 chore(api): changelog for v5.24.4 (#10882) 2026-04-24 11:57:02 +02:00
César Arroba 6d3fcec5da ci: bump docs version against master on patch releases (#10879) 2026-04-24 11:49:14 +02:00
César Arroba ce1cf51d37 fix(ci): allow github.com egress in backport workflow (#10876) 2026-04-24 10:00:55 +02:00
Pablo Fernandez Guerra (PFE) 3554859a5c fix(ui): load every Attack Paths scan before displaying the selector (#10864)
Co-authored-by: Pablo F.G <pablo.fernandez@prowler.com>
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-24 09:41:47 +02:00
Daniel Barranquero 80d62f355f fix(alibabacloud): fix CS service SDK compatibility and harden Alibaba provider (#10871) 2026-04-24 09:26:09 +02:00
Josema Camacho 0df24eeff6 fix(api): make Neo4j connection acquisition timeout configurable and enable Sentry tracing (#10873) 2026-04-23 17:52:14 +02:00
Alejandro Bailo d1fc482832 feat(ui): improve Mutelist UX and mute modal (#10846) 2026-04-23 17:36:32 +02:00
Andoni Alonso ffb1bb89e1 feat(ci): add official Prowler GitHub Action (#10872) 2026-04-23 16:15:13 +02:00
Alejandro Bailo d877bea0e3 chore(ui): unify filter search and batch patterns (#10859) 2026-04-23 16:03:33 +02:00
Pedro Martín 2304bf0093 feat(compliance): add CIS pdf reporting (#10650) 2026-04-23 13:28:30 +02:00
Pepe Fagoaga 2ca74102a9 chore(poetry): lock poetry with 2.3.4 and install git as required (#10868) 2026-04-23 12:30:14 +02:00
Pablo Fernandez Guerra (PFE) 6ae129fcc0 chore: remove unused submodule (#10869)
Co-authored-by: Pablo F.G <pablo.fernandez@prowler.com>
2026-04-23 12:13:35 +02:00
Alejandro Bailo e9731f53ad chore(ui): reorganize changelog and open 1.24.4 section (#10866) 2026-04-23 11:22:32 +02:00
Pablo Fernandez Guerra (PFE) db2f92e6d5 chore: add prowler-openspec-opensource as git submodule (#10680)
Co-authored-by: Pablo F.G <pablo.fernandez@prowler.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 10:52:17 +02:00
Alejandro Bailo f4b0f8fa22 fix(ui): prevent rescheduling scans during credential update (#10851) 2026-04-23 09:45:16 +02:00
Pedro Martín dff5541e11 fix(ci): improve compliance check action (#10850) 2026-04-22 16:31:05 +02:00
Mathisdjango 927be17fb7 feat(github): add check for dismissing stale PR approvals on default branch (CIS 1.1.4) (#10569)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-04-22 16:14:10 +02:00
Pepe Fagoaga c27cb28a2a chore(safety): define policy for high and critical (#10845) 2026-04-22 13:28:59 +02:00
Pepe Fagoaga 94ee24071a refactor: unify filtering and sorting for finding (#10803)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2026-04-22 13:11:50 +02:00
Josema Camacho 1093f6c99b fix(api): merge Attack Paths findings on short UIDs for AWS resources (#10839) 2026-04-22 12:19:03 +02:00
Hugo Pereira Brito 48060c47ba fix(ui): improve Resource Inventory cards light mode (#10757)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2026-04-22 12:05:09 +02:00
Pedro Martín 72acc2119d fix(aws): disallow me-south-1 & me-central-1 avoid stuck scans (#10837) 2026-04-22 11:16:41 +02:00
Rubén De la Torre Vico b1ebea4a7e chore(pre-commit): scope hooks per monorepo component (#10815) 2026-04-22 11:04:31 +02:00
dependabot[bot] 001057644e chore(deps): bump pyasn1 from 0.6.2 to 0.6.3 (#10365)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: pedrooot <pedromarting3@gmail.com>
Co-authored-by: Hugo P.Brito <hugopbrit@gmail.com>
2026-04-22 10:53:39 +02:00
Adrián Peña 1456def7d4 fix(api): reaggregate overview summaries after muting findings (#10827) 2026-04-22 10:44:21 +02:00
dependabot[bot] 12d475e7af chore(deps-dev): bump pygments from 2.19.2 to 2.20.0 (#10521)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pedro Martín <pedromarting3@gmail.com>
2026-04-22 10:09:06 +02:00
Andoni Alonso 43bd1083e0 feat(sdk): add SARIF output format for IaC provider (#10626)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-22 09:32:20 +02:00
dependabot[bot] bbd4ce7565 chore(deps): bump pygments from 2.19.2 to 2.20.0 in /mcp_server (#10523)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 09:31:04 +02:00
Davidm4r 97a085bf21 feat(ui): Add user expulsion from tenants with JWT authentication fix (#10787)
Co-authored-by: Pablo F.G <pablo.fernandez@prowler.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-04-22 09:28:39 +02:00
Pablo Fernandez Guerra (PFE) 29a2f8fac8 chore: remove legacy ui-checks hook from root pre-commit config (#10834)
Co-authored-by: Pablo F.G <pablo.fernandez@prowler.com>
2026-04-22 09:18:39 +02:00
Pedro Martín a24869fc26 feat(sdk): add universal compliance output modules (CSV, OCSF, table) (#10299) 2026-04-22 09:01:45 +02:00
dependabot[bot] 72c94db1cf chore(deps): bump pygments from 2.19.2 to 2.20.0 in /api (#10522)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 08:59:21 +02:00
245 changed files with 25512 additions and 1864 deletions
+23
@@ -0,0 +1,23 @@
# Prowler worktree automation for worktrunk (wt CLI).
# Runs automatically on `wt switch --create`.
# Block 1: setup + copy gitignored env files (.envrc, ui/.env.local)
# from the primary worktree — patterns selected via .worktreeinclude.
[[pre-start]]
skills = "./skills/setup.sh --claude"
python = "poetry env use python3.12"
envs = "wt step copy-ignored"
# Block 2: install Python deps (requires `poetry env use` from block 1).
[[pre-start]]
deps = "poetry install --with dev"
# Block 3: reminder — last visible output before `wt switch` returns.
# Hooks can't mutate the parent shell, so venv activation is manual.
[[pre-start]]
reminder = "echo '>> Reminder: activate the venv in this shell with: eval $(poetry env activate)'"
# Background: pnpm install runs while you start working.
# Tail logs via `wt config state logs`.
[post-start]
ui = "cd ui && pnpm install"
@@ -22,6 +22,10 @@ inputs:
description: 'Run `poetry lock` during setup. Only enable when a prior step mutates pyproject.toml (e.g. API `@master` VCS rewrite). Default: false.'
required: false
default: 'false'
enable-cache:
description: 'Whether to enable Poetry dependency caching via actions/setup-python'
required: false
default: 'true'
runs:
using: 'composite'
@@ -74,8 +78,10 @@ runs:
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ inputs.python-version }}
cache: 'poetry'
cache-dependency-path: ${{ inputs.working-directory }}/poetry.lock
# Disable cache when callers skip dependency install: Poetry 2.3.4 creates
# the venv in a path setup-python can't hash, breaking the post-step save-cache.
cache: ${{ inputs.enable-cache == 'true' && 'poetry' || '' }}
cache-dependency-path: ${{ inputs.enable-cache == 'true' && format('{0}/poetry.lock', inputs.working-directory) || '' }}
- name: Install Python dependencies
if: inputs.install-dependencies == 'true'
+3 -4
@@ -60,6 +60,7 @@ jobs:
files: |
api/**
.github/workflows/api-security.yml
.safety-policy.yml
files_ignore: |
api/docs/**
api/README.md
@@ -80,10 +81,8 @@ jobs:
- name: Safety
if: steps.check-changes.outputs.any_changed == 'true'
run: poetry run safety check --ignore 79023,79027,86217,71600
# TODO: 79023 & 79027 knack ReDoS until `azure-cli-core` (via `cartography`) allows `knack` >=0.13.0
# TODO: 86217 because `alibabacloud-tea-openapi == 0.4.3` don't let us upgrade `cryptography >= 46.0.0`
# TODO: 71600 CVE-2024-1135 false positive - fixed in gunicorn 22.0.0, project uses 23.0.0
# Accepted CVEs, severity threshold, and ignore expirations live in ../.safety-policy.yml
run: poetry run safety check --policy-file ../.safety-policy.yml
- name: Vulture
if: steps.check-changes.outputs.any_changed == 'true'
+1
@@ -35,6 +35,7 @@ jobs:
egress-policy: block
allowed-endpoints: >
api.github.com:443
github.com:443
- name: Check labels
id: label_check
+38 -225
@@ -12,74 +12,12 @@ concurrency:
env:
PROWLER_VERSION: ${{ github.event.release.tag_name }}
BASE_BRANCH: master
DOCS_FILE: docs/getting-started/installation/prowler-app.mdx
permissions: {}
jobs:
detect-release-type:
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
outputs:
is_minor: ${{ steps.detect.outputs.is_minor }}
is_patch: ${{ steps.detect.outputs.is_patch }}
major_version: ${{ steps.detect.outputs.major_version }}
minor_version: ${{ steps.detect.outputs.minor_version }}
patch_version: ${{ steps.detect.outputs.patch_version }}
current_docs_version: ${{ steps.get_docs_version.outputs.current_docs_version }}
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Get current documentation version
id: get_docs_version
run: |
CURRENT_DOCS_VERSION=$(grep -oP 'PROWLER_UI_VERSION="\K[^"]+' docs/getting-started/installation/prowler-app.mdx)
echo "current_docs_version=${CURRENT_DOCS_VERSION}" >> "${GITHUB_OUTPUT}"
echo "Current documentation version: $CURRENT_DOCS_VERSION"
- name: Detect release type and parse version
id: detect
run: |
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
MAJOR_VERSION=${BASH_REMATCH[1]}
MINOR_VERSION=${BASH_REMATCH[2]}
PATCH_VERSION=${BASH_REMATCH[3]}
echo "major_version=${MAJOR_VERSION}" >> "${GITHUB_OUTPUT}"
echo "minor_version=${MINOR_VERSION}" >> "${GITHUB_OUTPUT}"
echo "patch_version=${PATCH_VERSION}" >> "${GITHUB_OUTPUT}"
if (( MAJOR_VERSION != 5 )); then
echo "::error::Releasing another Prowler major version, aborting..."
exit 1
fi
if (( PATCH_VERSION == 0 )); then
echo "is_minor=true" >> "${GITHUB_OUTPUT}"
echo "is_patch=false" >> "${GITHUB_OUTPUT}"
echo "✓ Minor release detected: $PROWLER_VERSION"
else
echo "is_minor=false" >> "${GITHUB_OUTPUT}"
echo "is_patch=true" >> "${GITHUB_OUTPUT}"
echo "✓ Patch release detected: $PROWLER_VERSION"
fi
else
echo "::error::Invalid version syntax: '$PROWLER_VERSION' (must be X.Y.Z)"
exit 1
fi
bump-minor-version:
needs: detect-release-type
if: needs.detect-release-type.outputs.is_minor == 'true'
bump-version:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
@@ -91,185 +29,60 @@ jobs:
with:
egress-policy: audit
- name: Checkout repository
- name: Validate release version
run: |
if [[ ! $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
echo "::error::Invalid version syntax: '$PROWLER_VERSION' (must be X.Y.Z)"
exit 1
fi
if (( ${BASH_REMATCH[1]} != 5 )); then
echo "::error::Releasing another Prowler major version, aborting..."
exit 1
fi
- name: Checkout master branch
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ env.BASE_BRANCH }}
persist-credentials: false
- name: Calculate next minor version
- name: Read current docs version on master
id: docs_version
run: |
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
CURRENT_DOCS_VERSION="${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION}"
NEXT_MINOR_VERSION=${MAJOR_VERSION}.$((MINOR_VERSION + 1)).0
CURRENT_DOCS_VERSION=$(grep -oP 'PROWLER_UI_VERSION="\K[^"]+' "${DOCS_FILE}")
echo "CURRENT_DOCS_VERSION=${CURRENT_DOCS_VERSION}" >> "${GITHUB_ENV}"
echo "NEXT_MINOR_VERSION=${NEXT_MINOR_VERSION}" >> "${GITHUB_ENV}"
echo "Current docs version on master: $CURRENT_DOCS_VERSION"
echo "Target release version: $PROWLER_VERSION"
echo "Current documentation version: $CURRENT_DOCS_VERSION"
echo "Current release version: $PROWLER_VERSION"
echo "Next minor version: $NEXT_MINOR_VERSION"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION: ${{ needs.detect-release-type.outputs.current_docs_version }}
# Skip if master is already at or ahead of the release version
# (re-run, or patch shipped against an older minor line)
HIGHEST=$(printf '%s\n%s\n' "${CURRENT_DOCS_VERSION}" "${PROWLER_VERSION}" | sort -V | tail -n1)
if [[ "${CURRENT_DOCS_VERSION}" == "${PROWLER_VERSION}" || "${HIGHEST}" != "${PROWLER_VERSION}" ]]; then
echo "skip=true" >> "${GITHUB_OUTPUT}"
echo "Skipping bump: current ($CURRENT_DOCS_VERSION) >= release ($PROWLER_VERSION)"
else
echo "skip=false" >> "${GITHUB_OUTPUT}"
fi
- name: Bump versions in documentation for master
- name: Bump versions in documentation
if: steps.docs_version.outputs.skip == 'false'
run: |
set -e
# Update prowler-app.mdx with current release version
sed -i "s|PROWLER_UI_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_UI_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
sed -i "s|PROWLER_API_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_API_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
sed -i "s|PROWLER_UI_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_UI_VERSION=\"${PROWLER_VERSION}\"|" "${DOCS_FILE}"
sed -i "s|PROWLER_API_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_API_VERSION=\"${PROWLER_VERSION}\"|" "${DOCS_FILE}"
echo "Files modified:"
git --no-pager diff
- name: Create PR for documentation update to master
if: steps.docs_version.outputs.skip == 'false'
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8.1.0
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: master
commit-message: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
branch: docs-version-update-to-v${{ env.PROWLER_VERSION }}
title: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Update Prowler documentation version references to v${{ env.PROWLER_VERSION }} after releasing Prowler v${{ env.PROWLER_VERSION }}.
### Files Updated
- `docs/getting-started/installation/prowler-app.mdx`: `PROWLER_UI_VERSION` and `PROWLER_API_VERSION`
- All `*.mdx` files with `<VersionBadge>` components
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
- name: Checkout version branch
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: v${{ needs.detect-release-type.outputs.major_version }}.${{ needs.detect-release-type.outputs.minor_version }}
persist-credentials: false
- name: Calculate first patch version
run: |
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
CURRENT_DOCS_VERSION="${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION}"
FIRST_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.1
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
echo "CURRENT_DOCS_VERSION=${CURRENT_DOCS_VERSION}" >> "${GITHUB_ENV}"
echo "FIRST_PATCH_VERSION=${FIRST_PATCH_VERSION}" >> "${GITHUB_ENV}"
echo "VERSION_BRANCH=${VERSION_BRANCH}" >> "${GITHUB_ENV}"
echo "First patch version: $FIRST_PATCH_VERSION"
echo "Version branch: $VERSION_BRANCH"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION: ${{ needs.detect-release-type.outputs.current_docs_version }}
- name: Bump versions in documentation for version branch
run: |
set -e
# Update prowler-app.mdx with current release version
sed -i "s|PROWLER_UI_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_UI_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
sed -i "s|PROWLER_API_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_API_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
echo "Files modified:"
git --no-pager diff
- name: Create PR for documentation update to version branch
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8.1.0
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
branch: docs-version-update-to-v${{ env.PROWLER_VERSION }}-branch
title: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Update Prowler documentation version references to v${{ env.PROWLER_VERSION }} in version branch after releasing Prowler v${{ env.PROWLER_VERSION }}.
### Files Updated
- `docs/getting-started/installation/prowler-app.mdx`: `PROWLER_UI_VERSION` and `PROWLER_API_VERSION`
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
bump-patch-version:
needs: detect-release-type
if: needs.detect-release-type.outputs.is_patch == 'true'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Calculate next patch version
run: |
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
PATCH_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_PATCH_VERSION}
CURRENT_DOCS_VERSION="${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION}"
NEXT_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.$((PATCH_VERSION + 1))
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
echo "CURRENT_DOCS_VERSION=${CURRENT_DOCS_VERSION}" >> "${GITHUB_ENV}"
echo "NEXT_PATCH_VERSION=${NEXT_PATCH_VERSION}" >> "${GITHUB_ENV}"
echo "VERSION_BRANCH=${VERSION_BRANCH}" >> "${GITHUB_ENV}"
echo "Current documentation version: $CURRENT_DOCS_VERSION"
echo "Current release version: $PROWLER_VERSION"
echo "Next patch version: $NEXT_PATCH_VERSION"
echo "Target branch: $VERSION_BRANCH"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_PATCH_VERSION: ${{ needs.detect-release-type.outputs.patch_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION: ${{ needs.detect-release-type.outputs.current_docs_version }}
- name: Bump versions in documentation for patch version
run: |
set -e
# Update prowler-app.mdx with current release version
sed -i "s|PROWLER_UI_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_UI_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
sed -i "s|PROWLER_API_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_API_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
echo "Files modified:"
git --no-pager diff
- name: Create PR for documentation update to version branch
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8.1.0
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
branch: docs-version-update-to-v${{ env.PROWLER_VERSION }}
title: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
base: ${{ env.BASE_BRANCH }}
commit-message: 'chore(docs): Bump version to v${{ env.PROWLER_VERSION }}'
branch: docs-version-bump-to-v${{ env.PROWLER_VERSION }}
title: 'chore(docs): Bump version to v${{ env.PROWLER_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
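The skip check in the "Read current docs version on master" step above leans on `sort -V`; a standalone sketch of that comparison with hypothetical version numbers:

```bash
# Hypothetical values: master docs are already ahead of the release being processed.
CURRENT_DOCS_VERSION="5.25.1"
PROWLER_VERSION="5.25.0"
# sort -V keeps the semver-highest of the two; if that is not the release version
# (or the two are equal), master needs no bump and the job skips.
HIGHEST=$(printf '%s\n%s\n' "${CURRENT_DOCS_VERSION}" "${PROWLER_VERSION}" | sort -V | tail -n1)
if [[ "${CURRENT_DOCS_VERSION}" == "${PROWLER_VERSION}" || "${HIGHEST}" != "${PROWLER_VERSION}" ]]; then
  echo "skip=true"    # the workflow writes this to $GITHUB_OUTPUT
else
  echo "skip=false"
fi
```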
@@ -20,7 +20,13 @@ permissions: {}
jobs:
check-compliance-mapping:
if: contains(github.event.pull_request.labels.*.name, 'no-compliance-check') == false
if: >-
github.event.pull_request.state == 'open' &&
contains(github.event.pull_request.labels.*.name, 'no-compliance-check') == false &&
(
(github.event.action != 'labeled' && github.event.action != 'unlabeled')
|| github.event.label.name == 'no-compliance-check'
)
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
+1
@@ -45,6 +45,7 @@ jobs:
with:
python-version: '3.12'
install-dependencies: 'false'
enable-cache: 'false'
- name: Configure Git
run: |
+9 -9
@@ -113,9 +113,9 @@ jobs:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: master
commit-message: 'chore(release): Bump version to v${{ env.NEXT_MINOR_VERSION }}'
branch: version-bump-to-v${{ env.NEXT_MINOR_VERSION }}
title: 'chore(release): Bump version to v${{ env.NEXT_MINOR_VERSION }}'
commit-message: 'chore(sdk): Bump version to v${{ env.NEXT_MINOR_VERSION }}'
branch: sdk-version-bump-to-v${{ env.NEXT_MINOR_VERSION }}
title: 'chore(sdk): Bump version to v${{ env.NEXT_MINOR_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
@@ -165,9 +165,9 @@ jobs:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'chore(release): Bump version to v${{ env.FIRST_PATCH_VERSION }}'
branch: version-bump-to-v${{ env.FIRST_PATCH_VERSION }}
title: 'chore(release): Bump version to v${{ env.FIRST_PATCH_VERSION }}'
commit-message: 'chore(sdk): Bump version to v${{ env.FIRST_PATCH_VERSION }}'
branch: sdk-version-bump-to-v${{ env.FIRST_PATCH_VERSION }}
title: 'chore(sdk): Bump version to v${{ env.FIRST_PATCH_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
@@ -233,9 +233,9 @@ jobs:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'chore(release): Bump version to v${{ env.NEXT_PATCH_VERSION }}'
branch: version-bump-to-v${{ env.NEXT_PATCH_VERSION }}
title: 'chore(release): Bump version to v${{ env.NEXT_PATCH_VERSION }}'
commit-message: 'chore(sdk): Bump version to v${{ env.NEXT_PATCH_VERSION }}'
branch: sdk-version-bump-to-v${{ env.NEXT_PATCH_VERSION }}
title: 'chore(sdk): Bump version to v${{ env.NEXT_PATCH_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
@@ -81,6 +81,7 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
install-dependencies: 'false'
enable-cache: 'false'
- name: Inject poetry-bumpversion plugin
run: pipx inject poetry poetry-bumpversion
+2
@@ -80,6 +80,7 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
install-dependencies: 'false'
enable-cache: 'false'
- name: Build Prowler package
run: poetry build
@@ -116,6 +117,7 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
install-dependencies: 'false'
enable-cache: 'false'
- name: Install toml package
run: pip install toml
+2 -1
@@ -83,7 +83,8 @@ jobs:
- name: Security scan with Safety
if: steps.check-changes.outputs.any_changed == 'true'
run: poetry run safety check -r pyproject.toml
# Accepted CVEs, severity threshold, and ignore expirations live in .safety-policy.yml
run: poetry run safety check -r pyproject.toml --policy-file .safety-policy.yml
- name: Dead code detection with Vulture
if: steps.check-changes.outputs.any_changed == 'true'
+2
@@ -151,6 +151,8 @@ node_modules
# Persistent data
_data/
/openspec/
/.gitmodules
# AI Instructions (generated by skills/setup.sh from AGENTS.md)
CLAUDE.md
+57 -30
@@ -1,12 +1,11 @@
repos:
## GENERAL
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v6.0.0
## GENERAL (prek built-in — no external repo needed)
- repo: builtin
hooks:
- id: check-merge-conflict
- id: check-yaml
args: ["--unsafe"]
exclude: prowler/config/llm_config.yaml
args: ["--allow-multiple-documents"]
exclude: (prowler/config/llm_config.yaml|contrib/)
- id: check-json
- id: end-of-file-fixer
- id: trailing-whitespace
@@ -36,12 +35,13 @@ repos:
- id: shellcheck
exclude: contrib
## PYTHON
## PYTHON — SDK (prowler/, tests/, dashboard/, util/, scripts/)
- repo: https://github.com/myint/autoflake
rev: v2.3.3
hooks:
- id: autoflake
exclude: ^skills/
name: "SDK - autoflake"
files: { glob: ["{prowler,tests,dashboard,util,scripts}/**/*.py"] }
args:
[
"--in-place",
@@ -53,97 +53,124 @@ repos:
rev: 8.0.1
hooks:
- id: isort
exclude: ^skills/
name: "SDK - isort"
files: { glob: ["{prowler,tests,dashboard,util,scripts}/**/*.py"] }
args: ["--profile", "black"]
- repo: https://github.com/psf/black
rev: 26.3.1
hooks:
- id: black
exclude: ^skills/
name: "SDK - black"
files: { glob: ["{prowler,tests,dashboard,util,scripts}/**/*.py"] }
- repo: https://github.com/pycqa/flake8
rev: 7.3.0
hooks:
- id: flake8
exclude: (contrib|^skills/)
name: "SDK - flake8"
files: { glob: ["{prowler,tests,dashboard,util,scripts}/**/*.py"] }
args: ["--ignore=E266,W503,E203,E501,W605"]
## PYTHON — API + MCP Server (ruff)
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.15.11
hooks:
- id: ruff
name: "API + MCP - ruff check"
files: { glob: ["{api,mcp_server}/**/*.py"] }
args: ["--fix"]
- id: ruff-format
name: "API + MCP - ruff format"
files: { glob: ["{api,mcp_server}/**/*.py"] }
## PYTHON — Poetry
- repo: https://github.com/python-poetry/poetry
rev: 2.3.4
hooks:
- id: poetry-check
name: API - poetry-check
args: ["--directory=./api"]
files: { glob: ["api/{pyproject.toml,poetry.lock}"] }
pass_filenames: false
- id: poetry-lock
name: API - poetry-lock
args: ["--directory=./api"]
files: { glob: ["api/{pyproject.toml,poetry.lock}"] }
pass_filenames: false
- id: poetry-check
name: SDK - poetry-check
args: ["--directory=./"]
files: { glob: ["{pyproject.toml,poetry.lock}"] }
pass_filenames: false
- id: poetry-lock
name: SDK - poetry-lock
args: ["--directory=./"]
files: { glob: ["{pyproject.toml,poetry.lock}"] }
pass_filenames: false
## CONTAINERS
- repo: https://github.com/hadolint/hadolint
rev: v2.14.0
hooks:
- id: hadolint
args: ["--ignore=DL3013"]
## LOCAL HOOKS
- repo: local
hooks:
- id: pylint
name: pylint
entry: bash -c 'pylint --disable=W,C,R,E -j 0 -rn -sn prowler/'
name: "SDK - pylint"
entry: pylint --disable=W,C,R,E -j 0 -rn -sn
language: system
files: '.*\.py'
types: [python]
files: { glob: ["{prowler,tests,dashboard,util,scripts}/**/*.py"] }
- id: trufflehog
name: TruffleHog
description: Detect secrets in your data.
entry: bash -c 'trufflehog --no-update git file://. --only-verified --fail'
entry: bash -c 'trufflehog --no-update git file://. --since-commit HEAD --only-verified --fail'
# For running trufflehog in docker, use the following entry instead:
# entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
language: system
pass_filenames: false
stages: ["pre-commit", "pre-push"]
- id: bandit
name: bandit
description: "Bandit is a tool for finding common security issues in Python code"
entry: bash -c 'bandit -q -lll -x '*_test.py,./contrib/,./.venv/,./skills/' -r .'
entry: bandit -q -lll
language: system
types: [python]
files: '.*\.py'
exclude:
{ glob: ["{contrib,skills}/**", "**/.venv/**", "**/*_test.py"] }
- id: safety
name: safety
description: "Safety is a tool that checks your installed dependencies for known security vulnerabilities"
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
# TODO: 79023 & 79027 knack ReDoS until `azure-cli-core` (via `cartography`) allows `knack` >=0.13.0
# TODO: 86217 because `alibabacloud-tea-openapi == 0.4.3` don't let us upgrade `cryptography >= 46.0.0`
# TODO: 71600 CVE-2024-1135 false positive - fixed in gunicorn 22.0.0, project uses 23.0.0
entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353,77744,77745,79023,79027,86217,71600'
# Accepted CVEs, severity threshold, and ignore expirations live in .safety-policy.yml
entry: safety check --policy-file .safety-policy.yml
language: system
pass_filenames: false
files:
{
glob:
[
"**/pyproject.toml",
"**/poetry.lock",
"**/requirements*.txt",
".safety-policy.yml",
],
}
- id: vulture
name: vulture
description: "Vulture finds unused code in Python programs."
entry: bash -c 'vulture --exclude "contrib,.venv,api/src/backend/api/tests/,api/src/backend/conftest.py,api/src/backend/tasks/tests/,skills/" --min-confidence 100 .'
entry: vulture --min-confidence 100
language: system
types: [python]
files: '.*\.py'
- id: ui-checks
name: UI - Husky Pre-commit
description: "Run UI pre-commit checks (Claude Code validation + healthcheck)"
entry: bash -c 'cd ui && .husky/pre-commit'
language: system
files: '^ui/.*\.(ts|tsx|js|jsx|json|css)$'
pass_filenames: false
verbose: true
+58
@@ -0,0 +1,58 @@
# Safety policy for `safety check` (Safety CLI 3.x, v2 schema).
# Applied in: .pre-commit-config.yaml, .github/workflows/api-security.yml,
# .github/workflows/sdk-security.yml via `--policy-file`.
#
# Validate: poetry run safety validate policy_file --path .safety-policy.yml
security:
# Scan unpinned requirements too. Prowler pins via poetry.lock, so this is
# defensive against accidental unpinned entries.
ignore-unpinned-requirements: False
# CVSS severity filter. 7 = report only HIGH (7.0-8.9) and CRITICAL (9.0-10.0).
# Reference: 9=CRITICAL only, 7=CRITICAL+HIGH, 4=CRITICAL+HIGH+MEDIUM.
ignore-cvss-severity-below: 7
# Unknown severity is unrated, not safe. Keep False so unrated CVEs still fail
# the build and get a human eye. Flip to True only if noise is unmanageable.
ignore-cvss-unknown-severity: False
# Fail the build when a non-ignored vulnerability is found.
continue-on-vulnerability-error: False
# Explicit accepted vulnerabilities. Each entry MUST have a reason and an
# expiry. Expired entries fail the scan, forcing re-audit.
ignore-vulnerabilities:
77744:
reason: "Botocore requires urllib3 1.X. Remove once upgraded to urllib3 2.X."
expires: '2026-10-22'
77745:
reason: "Botocore requires urllib3 1.X. Remove once upgraded to urllib3 2.X."
expires: '2026-10-22'
79023:
reason: "knack ReDoS; blocked until azure-cli-core (via cartography) allows knack >=0.13.0."
expires: '2026-10-22'
79027:
reason: "knack ReDoS; blocked until azure-cli-core (via cartography) allows knack >=0.13.0."
expires: '2026-10-22'
86217:
reason: "alibabacloud-tea-openapi==0.4.3 blocks upgrade to cryptography >=46.0.0."
expires: '2026-10-22'
71600:
reason: "CVE-2024-1135 false positive. Fixed in gunicorn 22.0.0; project uses 23.0.0."
expires: '2026-10-22'
70612:
reason: "TBD - audit required. Reason not documented in prior --ignore list."
expires: '2026-07-22'
66963:
reason: "TBD - audit required. Reason not documented in prior --ignore list."
expires: '2026-07-22'
74429:
reason: "TBD - audit required. Reason not documented in prior --ignore list."
expires: '2026-07-22'
76352:
reason: "TBD - audit required. Reason not documented in prior --ignore list."
expires: '2026-07-22'
76353:
reason: "TBD - audit required. Reason not documented in prior --ignore list."
expires: '2026-07-22'
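The commands that consume this policy are the ones already shown in the header comment and in the workflow diffs above; collected in one place, the local invocations are:

```bash
# Validate the policy file itself (command from the policy header comment).
poetry run safety validate policy_file --path .safety-policy.yml

# SDK scan from the repository root, as wired in sdk-security.yml:
poetry run safety check -r pyproject.toml --policy-file .safety-policy.yml

# API scan runs from api/, hence the ../ prefix used in api-security.yml:
poetry run safety check --policy-file ../.safety-policy.yml
```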
+2
@@ -0,0 +1,2 @@
.envrc
ui/.env.local
+19
@@ -9,6 +9,9 @@ ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}
ARG TRIVY_VERSION=0.69.2
ENV TRIVY_VERSION=${TRIVY_VERSION}
ARG ZIZMOR_VERSION=1.24.1
ENV ZIZMOR_VERSION=${ZIZMOR_VERSION}
# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y --no-install-recommends \
wget libicu72 libunwind8 libssl3 libcurl4 ca-certificates apt-transport-https gnupg \
@@ -48,6 +51,22 @@ RUN ARCH=$(uname -m) && \
mkdir -p /tmp/.cache/trivy && \
chmod 777 /tmp/.cache/trivy
# Install zizmor for GitHub Actions workflow scanning
RUN ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
ZIZMOR_ARCH="x86_64-unknown-linux-gnu" ; \
elif [ "$ARCH" = "aarch64" ]; then \
ZIZMOR_ARCH="aarch64-unknown-linux-gnu" ; \
else \
echo "Unsupported architecture for zizmor: $ARCH" && exit 1 ; \
fi && \
wget --progress=dot:giga "https://github.com/zizmorcore/zizmor/releases/download/v${ZIZMOR_VERSION}/zizmor-${ZIZMOR_ARCH}.tar.gz" -O /tmp/zizmor.tar.gz && \
mkdir -p /tmp/zizmor-extract && \
tar zxf /tmp/zizmor.tar.gz -C /tmp/zizmor-extract && \
mv /tmp/zizmor-extract/zizmor /usr/local/bin/zizmor && \
chmod +x /usr/local/bin/zizmor && \
rm -rf /tmp/zizmor.tar.gz /tmp/zizmor-extract
# Add prowler user
RUN addgroup --gid 1000 prowler && \
adduser --uid 1000 --gid 1000 --disabled-password --gecos "" prowler
+30
@@ -300,6 +300,36 @@ python prowler-cli.py -v
> If your Poetry version is below v2.0.0, continue using `poetry shell` to activate your environment.
> For further guidance, refer to the Poetry Environment Activation Guide https://python-poetry.org/docs/managing-environments/#activating-the-environment.
# 🛡️ GitHub Action
The official **Prowler GitHub Action** runs Prowler scans in your GitHub workflows using the official [`prowlercloud/prowler`](https://hub.docker.com/r/prowlercloud/prowler) Docker image. Scans run on any [supported provider](https://docs.prowler.com/user-guide/providers/), with optional [`--push-to-cloud`](https://docs.prowler.com/user-guide/tutorials/prowler-app-import-findings) to send findings to Prowler Cloud and optional SARIF upload so findings show up in the repo's **Security → Code scanning** tab and as inline PR annotations.
```yaml
name: Prowler IaC Scan
on:
pull_request:
permissions:
contents: read
security-events: write
actions: read
jobs:
prowler:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: prowler-cloud/prowler@5.25
with:
provider: iac
output-formats: sarif json-ocsf
upload-sarif: true
flags: --severity critical high
```
Full configuration, per-provider authentication, and SARIF examples: [Prowler GitHub Action tutorial](docs/user-guide/tutorials/prowler-app-github-action.mdx). Marketplace listing: [Prowler Security Scan](https://github.com/marketplace/actions/prowler-security-scan).
# ✏️ High level architecture
## Prowler App
+307
@@ -0,0 +1,307 @@
name: Prowler Security Scan
description: Run Prowler cloud security scanner using the official Docker image
branding:
icon: cloud
color: green
inputs:
provider:
description: Cloud provider to scan (e.g. aws, azure, gcp, github, kubernetes, iac). See https://docs.prowler.com for supported providers.
required: true
image-tag:
description: >
Docker image tag for prowlercloud/prowler.
Default is "stable" (latest release). Available tags:
"stable" (latest release), "latest" (master branch, not stable),
"<x.y.z>" (pinned release version).
See all tags at https://hub.docker.com/r/prowlercloud/prowler/tags
required: false
default: stable
output-formats:
description: Output format(s) for scan results (e.g. "json-ocsf", "sarif json-ocsf")
required: false
default: json-ocsf
push-to-cloud:
description: Push scan findings to Prowler Cloud. Requires the PROWLER_CLOUD_API_KEY environment variable. See https://docs.prowler.com/user-guide/tutorials/prowler-app-import-findings#using-the-cli
required: false
default: "false"
flags:
description: 'Additional CLI flags passed to the Prowler scan (e.g. "--severity critical high --compliance cis_aws"). Values containing spaces can be quoted, e.g. "--resource-tag ''Environment=My Server''".'
required: false
default: ""
extra-env:
description: >
Space-, newline-, or comma-separated list of host environment variable NAMES to forward to the Prowler container
(e.g. "AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN" for AWS,
"GITHUB_PERSONAL_ACCESS_TOKEN" for GitHub, "CLOUDFLARE_API_TOKEN" for Cloudflare).
List names only; set the values via `env:` at the workflow or job level (typically from `secrets.*`).
See the README for per-provider examples.
required: false
default: ""
upload-sarif:
description: 'Upload SARIF results to GitHub Code Scanning (requires "sarif" in output-formats and both `security-events: write` and `actions: read` permissions)'
required: false
default: "false"
sarif-file:
description: Path to the SARIF file to upload (auto-detected from output/ if not set)
required: false
default: ""
sarif-category:
description: Category for the SARIF upload (used to distinguish multiple analyses)
required: false
default: prowler
fail-on-findings:
description: Fail the workflow step when Prowler detects findings (exit code 3). By default the action tolerates findings and succeeds.
required: false
default: "false"
runs:
using: composite
steps:
- name: Validate inputs
shell: bash
env:
INPUT_IMAGE_TAG: ${{ inputs.image-tag }}
INPUT_UPLOAD_SARIF: ${{ inputs.upload-sarif }}
INPUT_OUTPUT_FORMATS: ${{ inputs.output-formats }}
run: |
# Validate image tag format (alphanumeric, dots, hyphens, underscores only)
if [[ ! "$INPUT_IMAGE_TAG" =~ ^[a-zA-Z0-9._-]+$ ]]; then
echo "::error::Invalid image-tag '${INPUT_IMAGE_TAG}'. Must contain only alphanumeric characters, dots, hyphens, and underscores."
exit 1
fi
# Warn if upload-sarif is enabled but sarif not in output-formats
if [ "$INPUT_UPLOAD_SARIF" = "true" ]; then
if [[ ! "$INPUT_OUTPUT_FORMATS" =~ (^|[[:space:]])sarif($|[[:space:]]) ]]; then
echo "::warning::upload-sarif is enabled but 'sarif' is not included in output-formats ('${INPUT_OUTPUT_FORMATS}'). SARIF upload will fail unless you add 'sarif' to output-formats."
fi
fi
- name: Run Prowler scan
shell: bash
env:
INPUT_PROVIDER: ${{ inputs.provider }}
INPUT_IMAGE_TAG: ${{ inputs.image-tag }}
INPUT_OUTPUT_FORMATS: ${{ inputs.output-formats }}
INPUT_PUSH_TO_CLOUD: ${{ inputs.push-to-cloud }}
INPUT_FLAGS: ${{ inputs.flags }}
INPUT_EXTRA_ENV: ${{ inputs.extra-env }}
INPUT_FAIL_ON_FINDINGS: ${{ inputs.fail-on-findings }}
run: |
set -e
# Parse space-separated inputs with shlex so values with spaces can be quoted
# (e.g. `--resource-tag 'Environment=My Server'`).
mapfile -t OUTPUT_FORMATS < <(python3 -c 'import shlex, os; [print(t) for t in shlex.split(os.environ.get("INPUT_OUTPUT_FORMATS", ""))]')
mapfile -t EXTRA_FLAGS < <(python3 -c 'import shlex, os; [print(t) for t in shlex.split(os.environ.get("INPUT_FLAGS", ""))]')
mapfile -t EXTRA_ENV_NAMES < <(python3 -c 'import shlex, os; [print(t) for t in shlex.split(os.environ.get("INPUT_EXTRA_ENV", "").replace(",", " "))]')
env_args=()
for var in "${EXTRA_ENV_NAMES[@]}"; do
[ -z "$var" ] && continue
if [[ ! "$var" =~ ^[A-Za-z_][A-Za-z0-9_]*$ ]]; then
echo "::error::Invalid env var name '${var}' in extra-env. Names must match ^[A-Za-z_][A-Za-z0-9_]*$."
exit 1
fi
env_args+=("-e" "$var")
done
push_args=()
if [ "$INPUT_PUSH_TO_CLOUD" = "true" ]; then
push_args=("--push-to-cloud")
env_args+=("-e" "PROWLER_CLOUD_API_KEY")
fi
mkdir -p "$GITHUB_WORKSPACE/output"
chmod 777 "$GITHUB_WORKSPACE/output"
set +e
docker run --rm \
"${env_args[@]}" \
-v "$GITHUB_WORKSPACE:/home/prowler/workspace" \
-v "$GITHUB_WORKSPACE/output:/home/prowler/workspace/output" \
-w /home/prowler/workspace \
"prowlercloud/prowler:${INPUT_IMAGE_TAG}" \
"$INPUT_PROVIDER" \
--output-formats "${OUTPUT_FORMATS[@]}" \
"${push_args[@]}" \
"${EXTRA_FLAGS[@]}"
exit_code=$?
set -e
# Exit code 3 = findings detected
if [ "$exit_code" -eq 3 ] && [ "$INPUT_FAIL_ON_FINDINGS" != "true" ]; then
echo "::notice::Prowler detected findings (exit code 3). Set fail-on-findings to 'true' to fail the workflow on findings."
exit 0
fi
exit $exit_code
- name: Upload scan results
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: prowler-${{ inputs.provider }}
path: output/
retention-days: 30
if-no-files-found: warn
- name: Find SARIF file
if: always() && inputs.upload-sarif == 'true'
id: find-sarif
shell: bash
env:
INPUT_SARIF_FILE: ${{ inputs.sarif-file }}
run: |
if [ -n "$INPUT_SARIF_FILE" ]; then
echo "sarif_path=$INPUT_SARIF_FILE" >> "$GITHUB_OUTPUT"
else
sarif_file=$(find output/ -name '*.sarif' -type f | head -1)
if [ -z "$sarif_file" ]; then
echo "::warning::No .sarif file found in output/. Ensure 'sarif' is included in output-formats."
echo "sarif_path=" >> "$GITHUB_OUTPUT"
else
echo "sarif_path=$sarif_file" >> "$GITHUB_OUTPUT"
fi
fi
- name: Upload SARIF to GitHub Code Scanning
if: always() && inputs.upload-sarif == 'true' && steps.find-sarif.outputs.sarif_path != ''
uses: github/codeql-action/upload-sarif@d4b3ca9fa7f69d38bfcd667bdc45bc373d16277e # v4
with:
sarif_file: ${{ steps.find-sarif.outputs.sarif_path }}
category: ${{ inputs.sarif-category }}
- name: Write scan summary
if: always()
shell: bash
env:
INPUT_PROVIDER: ${{ inputs.provider }}
INPUT_UPLOAD_SARIF: ${{ inputs.upload-sarif }}
INPUT_PUSH_TO_CLOUD: ${{ inputs.push-to-cloud }}
RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
REPO_URL: ${{ github.server_url }}/${{ github.repository }}
BRANCH: ${{ github.head_ref || github.ref_name }}
GH_TOKEN: ${{ github.token }}
run: |
set +e
# Build a link to the scan step in the workflow logs. Requires `actions: read`
# on the caller's GITHUB_TOKEN; silently skips the link if unavailable.
scan_step_url=""
if [ -n "${GH_TOKEN:-}" ] && command -v gh >/dev/null 2>&1; then
job_info=$(gh api \
"repos/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}/attempts/${GITHUB_RUN_ATTEMPT:-1}/jobs" \
--jq ".jobs[] | select(.runner_name == \"${RUNNER_NAME:-}\")" 2>/dev/null)
if [ -n "$job_info" ]; then
job_id=$(jq -r '.id // empty' <<<"$job_info")
step_number=$(jq -r '[.steps[]? | select((.name // "") | test("Run Prowler scan"; "i")) | .number] | first // empty' <<<"$job_info")
if [ -z "$step_number" ]; then
step_number=$(jq -r '[.steps[]? | select(.status == "in_progress") | .number] | first // empty' <<<"$job_info")
fi
if [ -n "$job_id" ] && [ -n "$step_number" ]; then
scan_step_url="${REPO_URL}/actions/runs/${GITHUB_RUN_ID}/job/${job_id}#step:${step_number}:1"
elif [ -n "$job_id" ]; then
scan_step_url="${REPO_URL}/actions/runs/${GITHUB_RUN_ID}/job/${job_id}"
fi
fi
fi
# Map provider code to a properly-cased display name.
case "$INPUT_PROVIDER" in
alibabacloud) provider_name="Alibaba Cloud" ;;
aws) provider_name="AWS" ;;
azure) provider_name="Azure" ;;
cloudflare) provider_name="Cloudflare" ;;
gcp) provider_name="GCP" ;;
github) provider_name="GitHub" ;;
googleworkspace) provider_name="Google Workspace" ;;
iac) provider_name="IaC" ;;
image) provider_name="Container Image" ;;
kubernetes) provider_name="Kubernetes" ;;
llm) provider_name="LLM" ;;
m365) provider_name="Microsoft 365" ;;
mongodbatlas) provider_name="MongoDB Atlas" ;;
nhn) provider_name="NHN" ;;
openstack) provider_name="OpenStack" ;;
oraclecloud) provider_name="Oracle Cloud" ;;
vercel) provider_name="Vercel" ;;
*) provider_name="${INPUT_PROVIDER^}" ;;
esac
ocsf_file=$(find output/ -name '*.ocsf.json' -type f 2>/dev/null | head -1)
{
echo "## Prowler ${provider_name} Scan Summary"
echo ""
counts=""
if [ -n "$ocsf_file" ] && [ -s "$ocsf_file" ]; then
counts=$(jq -r '[
length,
([.[] | select(.status_code == "FAIL")] | length),
([.[] | select(.status_code == "PASS")] | length),
([.[] | select(.status_code == "MUTED")] | length),
([.[] | select(.status_code == "FAIL" and .severity == "Critical")] | length),
([.[] | select(.status_code == "FAIL" and .severity == "High")] | length),
([.[] | select(.status_code == "FAIL" and .severity == "Medium")] | length),
([.[] | select(.status_code == "FAIL" and .severity == "Low")] | length),
([.[] | select(.status_code == "FAIL" and .severity == "Informational")] | length)
] | @tsv' "$ocsf_file" 2>/dev/null)
fi
if [ -n "$counts" ]; then
read -r total fail pass muted critical high medium low info <<<"$counts"
line="**${fail:-0} failing** · ${pass:-0} passing"
[ "${muted:-0}" -gt 0 ] && line="${line} · ${muted} muted"
echo "${line} — ${total:-0} checks total"
echo ""
echo "| Severity | Failing |"
echo "|----------|---------|"
echo "| ‼️ Critical | ${critical:-0} |"
echo "| 🔴 High | ${high:-0} |"
echo "| 🟠 Medium | ${medium:-0} |"
echo "| 🔵 Low | ${low:-0} |"
echo "| ⚪ Informational | ${info:-0} |"
echo ""
else
echo "_No findings report was produced. Check the scan logs above._"
echo ""
fi
if [ -n "$scan_step_url" ]; then
echo "**Scan logs:** [view in workflow run](${scan_step_url})"
echo ""
fi
echo "**Get the full report:** [\`prowler-${INPUT_PROVIDER}\` artifact](${RUN_URL}#artifacts)"
if [ "$INPUT_UPLOAD_SARIF" = "true" ] && [ -n "$BRANCH" ]; then
encoded_branch=$(jq -nr --arg b "$BRANCH" '$b|@uri')
echo ""
echo "**See results in GitHub Code Security:** [open alerts on \`${BRANCH}\`](${REPO_URL}/security/code-scanning?query=is%3Aopen+branch%3A${encoded_branch})"
fi
if [ "$INPUT_PUSH_TO_CLOUD" != "true" ]; then
echo ""
echo "---"
echo ""
echo "### Scale ${provider_name} security with Prowler Cloud ☁️"
echo ""
echo "Send this scan's findings to **[Prowler Cloud](https://cloud.prowler.com)** and get:"
echo ""
echo "- **Unified findings** across every cloud, SaaS provider (M365, Google Workspace, GitHub, MongoDB Atlas), IaC repo, Kubernetes cluster, and container image"
echo "- **Posture over time** with alerts, and notifications"
echo "- **Prowler Lighthouse AI**: agentic assistant that triages findings, explains root cause and helps with remediation"
echo "- **50+ Compliance frameworks** mapped automatically"
echo "- **Enterprise-ready platform**: SOC 2 Type 2, SSO/SAML, AWS Security Hub, S3 and Jira integrations"
echo ""
echo "**Get started in 3 steps:**"
echo "1. Create an account at [cloud.prowler.com](https://cloud.prowler.com)"
echo "2. Generate a Prowler Cloud API key ([docs](https://docs.prowler.com/user-guide/tutorials/prowler-app-import-findings#using-the-cli))"
echo "3. Add \`PROWLER_CLOUD_API_KEY\` to your GitHub secrets and set \`push-to-cloud: true\` on this action"
echo ""
echo "See [prowler.com/pricing](https://prowler.com/pricing) for plan details."
fi
} >> "$GITHUB_STEP_SUMMARY"
+39 -1
@@ -2,11 +2,49 @@
All notable changes to the **Prowler API** are documented in this file.
## [1.26.0] (Prowler UNRELEASED)
### 🚀 Added
- `/overviews/resource-groups` (resource inventory), `/overviews/categories` and `/overviews/attack-surfaces` now reflect newly-muted findings without waiting for the next scan. The post-mute `reaggregate-all-finding-group-summaries` task now also dispatches `aggregate_scan_resource_group_summaries_task`, `aggregate_scan_category_summaries_task` and `aggregate_attack_surface_task` per latest scan of every `(provider, day)` pair, rebuilding `ScanGroupSummary`, `ScanCategorySummary` and `AttackSurfaceOverview` alongside the tables already covered in #10827 [(#10843)](https://github.com/prowler-cloud/prowler/pull/10843)
- CIS Benchmark PDF report generation for scans, exposing the latest CIS version per provider via `GET /scans/{id}/cis/{name}/` and picking the variant dynamically via `_pick_latest_cis_variant` (no hard-coded provider → version mapping) [(#10650)](https://github.com/prowler-cloud/prowler/pull/10650)
- Install zizmor v1.24.1 in API Docker image for GitHub Actions workflow scanning [(#10607)](https://github.com/prowler-cloud/prowler/pull/10607)
### 🔄 Changed
- Allows tenant owners to expel users from their organizations [(#10787)](https://github.com/prowler-cloud/prowler/pull/10787)
- `aggregate_findings`, `aggregate_attack_surface`, `aggregate_scan_resource_group_summaries` and `aggregate_scan_category_summaries` now upsert via `bulk_create(update_conflicts=True, ...)` instead of the prior `ignore_conflicts=True` / plain INSERT / `already backfilled` short-circuit. Re-runs triggered by the post-mute reaggregation pipeline no longer trip the `unique_*_per_scan` constraints nor silently drop updates, and are race-safe under concurrent writers (e.g. scan completion overlapping with a fresh mute rule) [(#10843)](https://github.com/prowler-cloud/prowler/pull/10843)
- Rename the scan-category and scan-resource-group summary aggregators from `backfill_*` to `aggregate_*` (`backfill_scan_category_summaries` -> `aggregate_scan_category_summaries`, `backfill_scan_resource_group_summaries` -> `aggregate_scan_resource_group_summaries`; Celery task names `backfill-scan-category-summaries` -> `scan-category-summaries`, `backfill-scan-resource-group-summaries` -> `scan-resource-group-summaries`) and move them to the `overview` queue, matching the sibling per-scan aggregators (`perform_scan_summary_task`, `aggregate_daily_severity_task`, `aggregate_finding_group_summaries_task`, `aggregate_attack_surface_task`). The old names had no dispatchers outside the post-mute reaggregation chain, so no task-registry migration is required [(#10843)](https://github.com/prowler-cloud/prowler/pull/10843)
---
## [1.25.4] (Prowler v5.24.4)
### 🚀 Added
- `DJANGO_SENTRY_TRACES_SAMPLE_RATE` env var (default `0.02`) enables Sentry performance tracing for the API [(#10873)](https://github.com/prowler-cloud/prowler/pull/10873)
### 🔄 Changed
- Attack Paths: Neo4j driver `connection_acquisition_timeout` is now configurable via `NEO4J_CONN_ACQUISITION_TIMEOUT` (default lowered from 120 s to 15 s) [(#10873)](https://github.com/prowler-cloud/prowler/pull/10873)
### 🐞 Fixed
- `/tmp/prowler_api_output` saturation in compliance report workers: the final `rmtree` in `generate_compliance_reports` now only waits on frameworks actually generated for the provider (so unsupported frameworks no longer leave a placeholder `results` entry that blocks cleanup), output directories are created lazily per enabled framework, and both `generate_compliance_reports` and `generate_outputs_task` run an opportunistic stale cleanup at task start with a 48h age threshold, a per-host `fcntl` throttle, a 50-deletions-per-run cap, and guards that protect EXECUTING scans and scans whose `output_location` still points to a local path (metadata lookups routed through the admin DB so RLS does not hide those rows) [(#10874)](https://github.com/prowler-cloud/prowler/pull/10874)
---
## [1.25.3] (Prowler v5.24.3)
### 🚀 Added
- `/overviews/findings`, `/overviews/findings-severity` and `/overviews/services` now reflect newly-muted findings without waiting for the next scan. The post-mute `reaggregate-all-finding-group-summaries` task was extended to re-run the same per-scan pipeline that scan completion runs (`ScanSummary`, `DailySeveritySummary`, `FindingGroupDailySummary`) on the latest scan of every `(provider, day)` pair, keeping the pre-aggregated tables in sync with `Finding.muted` updates [(#10827)](https://github.com/prowler-cloud/prowler/pull/10827)
### 🐞 Fixed
- Finding groups aggregated `status` now treats muted findings as resolved: a group is `FAIL` only while at least one non-muted FAIL remains, otherwise it is `PASS` (including fully-muted groups). The `filter[status]` filter and the `sort=status` ordering share the same semantics, keeping `status` consistent with `fail_count` and the orthogonal `muted` flag [(#10825)](https://github.com/prowler-cloud/prowler/pull/10825)
- `aggregate_findings` is now idempotent: it deletes the scan's existing `ScanSummary` rows before `bulk_create`, so re-runs (such as the post-mute reaggregation pipeline) no longer violate the `unique_scan_summary` constraint and no longer abort the downstream `DailySeveritySummary` / `FindingGroupDailySummary` recomputation for the affected scan [(#10827)](https://github.com/prowler-cloud/prowler/pull/10827)
- Attack Paths: Findings on AWS were silently dropped during the Neo4j merge for resources whose Cartography node is keyed by a short identifier (e.g. EC2 instances) rather than the full ARN [(#10839)](https://github.com/prowler-cloud/prowler/pull/10839)
---
@@ -20,7 +58,6 @@ All notable changes to the **Prowler API** are documented in this file.
- `/finding-groups/latest/<check_id>/resources` now selects the latest completed scan per provider by `-completed_at` (then `-inserted_at`) instead of `-inserted_at`, matching the `/finding-groups/latest` summary path and the daily-summary upsert so overlapping scans no longer produce diverging `delta`/`new_count` between the two endpoints [(#10802)](https://github.com/prowler-cloud/prowler/pull/10802)
---
## [1.25.1] (Prowler v5.24.1)
@@ -34,6 +71,7 @@ All notable changes to the **Prowler API** are documented in this file.
- Attack Paths: Missing `tenant_id` filter while getting related findings after scan completes [(#10722)](https://github.com/prowler-cloud/prowler/pull/10722)
- Finding group counters `pass_count`, `fail_count` and `manual_count` now exclude muted findings [(#10753)](https://github.com/prowler-cloud/prowler/pull/10753)
- Silent data loss in `ResourceFindingMapping` bulk insert that left findings orphaned when `INSERT ... ON CONFLICT DO NOTHING` dropped rows without raising; added explicit `unique_fields` [(#10724)](https://github.com/prowler-cloud/prowler/pull/10724)
- `DELETE /tenants/{tenant_pk}/memberships/{id}` now deletes the expelled user's account when the removed membership was their last one, and blacklists every outstanding refresh token for that user so their existing sessions can no longer mint new access tokens [(#10787)](https://github.com/prowler-cloud/prowler/pull/10787)
---
+20
@@ -8,6 +8,9 @@ ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}
ARG TRIVY_VERSION=0.69.2
ENV TRIVY_VERSION=${TRIVY_VERSION}
ARG ZIZMOR_VERSION=1.24.1
ENV ZIZMOR_VERSION=${ZIZMOR_VERSION}
# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y --no-install-recommends \
wget \
@@ -22,6 +25,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
libtool \
libxslt1-dev \
python3-dev \
git \
&& rm -rf /var/lib/apt/lists/*
# Install PowerShell
@@ -57,6 +61,22 @@ RUN ARCH=$(uname -m) && \
mkdir -p /tmp/.cache/trivy && \
chmod 777 /tmp/.cache/trivy
# Install zizmor for GitHub Actions workflow scanning
RUN ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
ZIZMOR_ARCH="x86_64-unknown-linux-gnu" ; \
elif [ "$ARCH" = "aarch64" ]; then \
ZIZMOR_ARCH="aarch64-unknown-linux-gnu" ; \
else \
echo "Unsupported architecture for zizmor: $ARCH" && exit 1 ; \
fi && \
wget --progress=dot:giga "https://github.com/zizmorcore/zizmor/releases/download/v${ZIZMOR_VERSION}/zizmor-${ZIZMOR_ARCH}.tar.gz" -O /tmp/zizmor.tar.gz && \
mkdir -p /tmp/zizmor-extract && \
tar zxf /tmp/zizmor.tar.gz -C /tmp/zizmor-extract && \
mv /tmp/zizmor-extract/zizmor /usr/local/bin/zizmor && \
chmod +x /usr/local/bin/zizmor && \
rm -rf /tmp/zizmor.tar.gz /tmp/zizmor-extract
# Add prowler user
RUN addgroup --gid 1000 prowler && \
adduser --uid 1000 --gid 1000 --disabled-password --gecos "" prowler
+14 -14
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.3.4 and should not be changed by hand.
[[package]]
name = "about-time"
@@ -2974,7 +2974,7 @@ files = [
[package.dependencies]
autopep8 = "*"
Django = ">=4.2"
gprof2dot = ">=2017.09.19"
gprof2dot = ">=2017.9.19"
sqlparse = "*"
[[package]]
@@ -4582,7 +4582,7 @@ files = [
[package.dependencies]
attrs = ">=22.2.0"
jsonschema-specifications = ">=2023.03.6"
jsonschema-specifications = ">=2023.3.6"
referencing = ">=0.28.4"
rpds-py = ">=0.7.1"
@@ -4790,7 +4790,7 @@ librabbitmq = ["librabbitmq (>=2.0.0) ; python_version < \"3.11\""]
mongodb = ["pymongo (==4.15.3)"]
msgpack = ["msgpack (==1.1.2)"]
pyro = ["pyro4 (==4.82)"]
qpid = ["qpid-python (==1.36.0-1)", "qpid-tools (==1.36.0-1)"]
qpid = ["qpid-python (==1.36.0.post1)", "qpid-tools (==1.36.0.post1)"]
redis = ["redis (>=4.5.2,!=4.5.5,!=5.0.2,<6.5)"]
slmq = ["softlayer_messaging (>=1.0.3)"]
sqlalchemy = ["sqlalchemy (>=1.4.48,<2.1)"]
@@ -4811,7 +4811,7 @@ files = [
]
[package.dependencies]
certifi = ">=14.05.14"
certifi = ">=14.5.14"
durationpy = ">=0.7"
google-auth = ">=1.0.1"
oauthlib = ">=3.2.2"
@@ -6964,11 +6964,11 @@ description = "C parser in Python"
optional = false
python-versions = ">=3.10"
groups = ["main", "dev"]
markers = "platform_python_implementation != \"PyPy\" and implementation_name != \"PyPy\""
files = [
{file = "pycparser-3.0-py3-none-any.whl", hash = "sha256:b727414169a36b7d524c1c3e31839a521725078d7b2ff038656844266160a992"},
{file = "pycparser-3.0.tar.gz", hash = "sha256:600f49d217304a5902ac3c37e1281c9fe94e4d0489de643a9504c5cdfdfc6b29"},
]
markers = {main = "implementation_name != \"PyPy\" and platform_python_implementation != \"PyPy\"", dev = "platform_python_implementation != \"PyPy\" and implementation_name != \"PyPy\""}
[[package]]
name = "pydantic"
@@ -7147,14 +7147,14 @@ urllib3 = ">=1.26.0"
[[package]]
name = "pygments"
version = "2.19.2"
version = "2.20.0"
description = "Pygments is a syntax highlighting package written in Python."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"},
{file = "pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887"},
{file = "pygments-2.20.0-py3-none-any.whl", hash = "sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176"},
{file = "pygments-2.20.0.tar.gz", hash = "sha256:6757cd03768053ff99f3039c1a36d6c0aa0b263438fcab17520b30a303a82b5f"},
]
[package.extras]
@@ -7194,7 +7194,7 @@ files = [
]
[package.dependencies]
astroid = ">=3.2.2,<=3.3.0-dev0"
astroid = ">=3.2.2,<=3.3.0.dev0"
colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
dill = [
{version = ">=0.3.7", markers = "python_version >= \"3.12\""},
@@ -7216,7 +7216,7 @@ description = "The MSALRuntime Python Interop Package"
optional = false
python-versions = ">=3.6"
groups = ["main"]
markers = "(platform_system == \"Windows\" or platform_system == \"Darwin\" or platform_system == \"Linux\") and sys_platform == \"win32\""
markers = "sys_platform == \"win32\" and (platform_system == \"Windows\" or platform_system == \"Darwin\" or platform_system == \"Linux\")"
files = [
{file = "pymsalruntime-0.18.1-cp310-cp310-macosx_14_0_arm64.whl", hash = "sha256:0c22e2e83faa10de422bbfaacc1bb2887c9025ee8a53f0fc2e4f7db01c4a7b66"},
{file = "pymsalruntime-0.18.1-cp310-cp310-macosx_14_0_x86_64.whl", hash = "sha256:8ce2944a0f944833d047bb121396091e00287e2b6373716106da86ea99abf379"},
@@ -8209,10 +8209,10 @@ files = [
]
[package.dependencies]
botocore = ">=1.37.4,<2.0a.0"
botocore = ">=1.37.4,<2.0a0"
[package.extras]
crt = ["botocore[crt] (>=1.37.4,<2.0a.0)"]
crt = ["botocore[crt] (>=1.37.4,<2.0a0)"]
[[package]]
name = "safety"
+2 -1
@@ -28,6 +28,7 @@ READ_QUERY_TIMEOUT_SECONDS = env.int(
"ATTACK_PATHS_READ_QUERY_TIMEOUT_SECONDS", default=30
)
MAX_CUSTOM_QUERY_NODES = env.int("ATTACK_PATHS_MAX_CUSTOM_QUERY_NODES", default=250)
CONN_ACQUISITION_TIMEOUT = env.int("NEO4J_CONN_ACQUISITION_TIMEOUT", default=15)
READ_EXCEPTION_CODES = [
"Neo.ClientError.Statement.AccessMode",
"Neo.ClientError.Procedure.ProcedureNotFound",
@@ -62,7 +63,7 @@ def init_driver() -> neo4j.Driver:
auth=(config["USER"], config["PASSWORD"]),
keep_alive=True,
max_connection_lifetime=7200,
connection_acquisition_timeout=120,
connection_acquisition_timeout=CONN_ACQUISITION_TIMEOUT,
max_connection_pool_size=50,
)
_driver.verify_connectivity()
+1
@@ -330,6 +330,7 @@ class MembershipFilter(FilterSet):
model = Membership
fields = {
"tenant": ["exact"],
"user": ["exact"],
"role": ["exact"],
"date_joined": ["date", "gte", "lte"],
}
@@ -12,6 +12,8 @@ from unittest.mock import MagicMock, patch
import neo4j
import pytest
import api.attack_paths.database as db_module
class TestLazyInitialization:
"""Test that Neo4j driver is initialized lazily on first use."""
@@ -19,8 +21,6 @@ class TestLazyInitialization:
@pytest.fixture(autouse=True)
def reset_module_state(self):
"""Reset module-level singleton state before each test."""
import api.attack_paths.database as db_module
original_driver = db_module._driver
db_module._driver = None
@@ -31,8 +31,6 @@ class TestLazyInitialization:
def test_driver_not_initialized_at_import(self):
"""Driver should be None after module import (no eager connection)."""
import api.attack_paths.database as db_module
assert db_module._driver is None
@patch("api.attack_paths.database.settings")
@@ -41,8 +39,6 @@ class TestLazyInitialization:
self, mock_driver_factory, mock_settings
):
"""init_driver() should create connection only when called."""
import api.attack_paths.database as db_module
mock_driver = MagicMock()
mock_driver_factory.return_value = mock_driver
mock_settings.DATABASES = {
@@ -69,8 +65,6 @@ class TestLazyInitialization:
self, mock_driver_factory, mock_settings
):
"""Subsequent calls should return cached driver without reconnecting."""
import api.attack_paths.database as db_module
mock_driver = MagicMock()
mock_driver_factory.return_value = mock_driver
mock_settings.DATABASES = {
@@ -99,8 +93,6 @@ class TestLazyInitialization:
self, mock_driver_factory, mock_settings
):
"""get_driver() should use init_driver() for lazy initialization."""
import api.attack_paths.database as db_module
mock_driver = MagicMock()
mock_driver_factory.return_value = mock_driver
mock_settings.DATABASES = {
@@ -118,14 +110,50 @@ class TestLazyInitialization:
mock_driver_factory.assert_called_once()
class TestConnectionAcquisitionTimeout:
"""Test that the connection acquisition timeout is configurable."""
@pytest.fixture(autouse=True)
def reset_module_state(self):
original_driver = db_module._driver
original_timeout = db_module.CONN_ACQUISITION_TIMEOUT
db_module._driver = None
yield
db_module._driver = original_driver
db_module.CONN_ACQUISITION_TIMEOUT = original_timeout
@patch("api.attack_paths.database.settings")
@patch("api.attack_paths.database.neo4j.GraphDatabase.driver")
def test_driver_receives_configured_timeout(
self, mock_driver_factory, mock_settings
):
"""init_driver() should pass CONN_ACQUISITION_TIMEOUT to the neo4j driver."""
mock_driver_factory.return_value = MagicMock()
mock_settings.DATABASES = {
"neo4j": {
"HOST": "localhost",
"PORT": 7687,
"USER": "neo4j",
"PASSWORD": "password",
}
}
db_module.CONN_ACQUISITION_TIMEOUT = 42
db_module.init_driver()
_, kwargs = mock_driver_factory.call_args
assert kwargs["connection_acquisition_timeout"] == 42
class TestAtexitRegistration:
"""Test that atexit cleanup handler is registered correctly."""
@pytest.fixture(autouse=True)
def reset_module_state(self):
"""Reset module-level singleton state before each test."""
import api.attack_paths.database as db_module
original_driver = db_module._driver
db_module._driver = None
@@ -141,8 +169,6 @@ class TestAtexitRegistration:
self, mock_driver_factory, mock_atexit_register, mock_settings
):
"""atexit.register should be called on first initialization."""
import api.attack_paths.database as db_module
mock_driver_factory.return_value = MagicMock()
mock_settings.DATABASES = {
"neo4j": {
@@ -168,8 +194,6 @@ class TestAtexitRegistration:
The double-checked locking on _driver ensures the atexit registration
block only executes once (when _driver is first created).
"""
import api.attack_paths.database as db_module
mock_driver_factory.return_value = MagicMock()
mock_settings.DATABASES = {
"neo4j": {
@@ -194,8 +218,6 @@ class TestCloseDriver:
@pytest.fixture(autouse=True)
def reset_module_state(self):
"""Reset module-level singleton state before each test."""
import api.attack_paths.database as db_module
original_driver = db_module._driver
db_module._driver = None
@@ -206,8 +228,6 @@ class TestCloseDriver:
def test_close_driver_closes_and_clears_driver(self):
"""close_driver() should close the driver and set it to None."""
import api.attack_paths.database as db_module
mock_driver = MagicMock()
db_module._driver = mock_driver
@@ -218,8 +238,6 @@ class TestCloseDriver:
def test_close_driver_handles_none_driver(self):
"""close_driver() should handle case where driver is None."""
import api.attack_paths.database as db_module
db_module._driver = None
# Should not raise
@@ -229,8 +247,6 @@ class TestCloseDriver:
def test_close_driver_clears_driver_even_on_close_error(self):
"""Driver should be cleared even if close() raises an exception."""
import api.attack_paths.database as db_module
mock_driver = MagicMock()
mock_driver.close.side_effect = Exception("Connection error")
db_module._driver = mock_driver
@@ -246,8 +262,6 @@ class TestExecuteReadQuery:
"""Test read query execution helper."""
def test_execute_read_query_calls_read_session_and_returns_result(self):
import api.attack_paths.database as db_module
tx = MagicMock()
expected_graph = MagicMock()
run_result = MagicMock()
@@ -289,8 +303,6 @@ class TestExecuteReadQuery:
assert result is expected_graph
def test_execute_read_query_defaults_parameters_to_empty_dict(self):
import api.attack_paths.database as db_module
tx = MagicMock()
run_result = MagicMock()
run_result.graph.return_value = MagicMock()
@@ -325,8 +337,6 @@ class TestGetSessionReadOnly:
@pytest.fixture(autouse=True)
def reset_module_state(self):
import api.attack_paths.database as db_module
original_driver = db_module._driver
db_module._driver = None
yield
@@ -341,8 +351,6 @@ class TestGetSessionReadOnly:
)
def test_get_session_raises_write_query_not_allowed(self, neo4j_code):
"""Read-mode Neo4j errors should raise `WriteQueryNotAllowedException`."""
import api.attack_paths.database as db_module
mock_session = MagicMock()
neo4j_error = neo4j.exceptions.Neo4jError._hydrate_neo4j(
code=neo4j_code,
@@ -362,8 +370,6 @@ class TestGetSessionReadOnly:
def test_get_session_raises_generic_exception_for_other_errors(self):
"""Non-read-mode Neo4j errors should raise GraphDatabaseQueryException."""
import api.attack_paths.database as db_module
mock_session = MagicMock()
neo4j_error = neo4j.exceptions.Neo4jError._hydrate_neo4j(
code="Neo.ClientError.Statement.SyntaxError",
@@ -388,8 +394,6 @@ class TestThreadSafety:
@pytest.fixture(autouse=True)
def reset_module_state(self):
"""Reset module-level singleton state before each test."""
import api.attack_paths.database as db_module
original_driver = db_module._driver
db_module._driver = None
@@ -404,8 +408,6 @@ class TestThreadSafety:
self, mock_driver_factory, mock_settings
):
"""Multiple threads calling init_driver() should create only one driver."""
import api.attack_paths.database as db_module
mock_driver = MagicMock()
mock_driver_factory.return_value = mock_driver
mock_settings.DATABASES = {
@@ -448,8 +450,6 @@ class TestHasProviderData:
"""Test has_provider_data helper for checking provider nodes in Neo4j."""
def test_returns_true_when_nodes_exist(self):
import api.attack_paths.database as db_module
mock_session = MagicMock()
mock_result = MagicMock()
mock_result.single.return_value = MagicMock() # non-None record
@@ -468,8 +468,6 @@ class TestHasProviderData:
mock_session.run.assert_called_once()
def test_returns_false_when_no_nodes(self):
import api.attack_paths.database as db_module
mock_session = MagicMock()
mock_result = MagicMock()
mock_result.single.return_value = None
@@ -486,8 +484,6 @@ class TestHasProviderData:
assert db_module.has_provider_data("db-tenant-abc", "provider-123") is False
def test_returns_false_when_database_not_found(self):
import api.attack_paths.database as db_module
session_ctx = MagicMock()
session_ctx.__enter__.side_effect = db_module.GraphDatabaseQueryException(
message="Database does not exist",
@@ -503,8 +499,6 @@ class TestHasProviderData:
)
def test_raises_on_other_errors(self):
import api.attack_paths.database as db_module
session_ctx = MagicMock()
session_ctx.__enter__.side_effect = db_module.GraphDatabaseQueryException(
message="Connection refused",
+328
@@ -32,6 +32,11 @@ from django_celery_results.models import TaskResult
from rest_framework import status
from rest_framework.exceptions import PermissionDenied
from rest_framework.response import Response
from rest_framework_simplejwt.token_blacklist.models import (
BlacklistedToken,
OutstandingToken,
)
from rest_framework_simplejwt.tokens import RefreshToken
from api.attack_paths import (
AttackPathsQueryDefinition,
@@ -47,6 +52,7 @@ from api.models import (
Finding,
Integration,
Invitation,
InvitationRoleRelationship,
LighthouseProviderConfiguration,
LighthouseProviderModels,
LighthouseTenantConfiguration,
@@ -746,6 +752,39 @@ class TestTenantViewSet:
# Test user + 2 extra users for tenant 2
assert len(response.json()["data"]) == 3
def test_tenants_list_memberships_filter_by_user(
self, authenticated_client, tenants_fixture, extra_users
):
_, tenant2, _ = tenants_fixture
_, user3_membership = extra_users
user3, membership3 = user3_membership
response = authenticated_client.get(
reverse("tenant-membership-list", kwargs={"tenant_pk": tenant2.id}),
{"filter[user]": str(user3.id)},
)
assert response.status_code == status.HTTP_200_OK
data = response.json()["data"]
assert len(data) == 1
assert data[0]["id"] == str(membership3.id)
def test_tenants_list_memberships_filter_by_user_no_match(
self, authenticated_client, tenants_fixture, extra_users
):
_, tenant2, _ = tenants_fixture
unrelated_user = User.objects.create_user(
name="unrelated",
password=TEST_PASSWORD,
email="unrelated@gmail.com",
)
response = authenticated_client.get(
reverse("tenant-membership-list", kwargs={"tenant_pk": tenant2.id}),
{"filter[user]": str(unrelated_user.id)},
)
assert response.status_code == status.HTTP_200_OK
assert response.json()["data"] == []
def test_tenants_list_memberships_as_member(
self, authenticated_client, tenants_fixture, extra_users
):
@@ -803,6 +842,7 @@ class TestTenantViewSet:
):
_, tenant2, _ = tenants_fixture
user_membership = Membership.objects.get(tenant=tenant2, user__email=TEST_USER)
user_id = user_membership.user_id
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
@@ -811,6 +851,127 @@ class TestTenantViewSet:
)
assert response.status_code == status.HTTP_403_FORBIDDEN
assert Membership.objects.filter(id=user_membership.id).exists()
assert User.objects.filter(id=user_id).exists()
def test_expel_user_deletes_account_if_last_membership(
self, authenticated_client, tenants_fixture, extra_users
):
# TEST_USER is OWNER of tenant2; user3 is MEMBER only in tenant2
_, tenant2, _ = tenants_fixture
_, user3_membership = extra_users
user3, membership3 = user3_membership
assert Membership.objects.filter(user=user3).count() == 1
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
kwargs={"tenant_pk": tenant2.id, "pk": membership3.id},
)
)
assert response.status_code == status.HTTP_204_NO_CONTENT
assert not Membership.objects.filter(id=membership3.id).exists()
assert not User.objects.filter(id=user3.id).exists()
def test_expel_user_blacklists_refresh_tokens(
self, authenticated_client, tenants_fixture, extra_users
):
_, tenant2, _ = tenants_fixture
_, user3_membership = extra_users
user3, membership3 = user3_membership
# Issue two refresh tokens to simulate active sessions
RefreshToken.for_user(user3)
RefreshToken.for_user(user3)
outstanding_ids = list(
OutstandingToken.objects.filter(user=user3).values_list("id", flat=True)
)
assert len(outstanding_ids) == 2
assert not BlacklistedToken.objects.filter(
token_id__in=outstanding_ids
).exists()
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
kwargs={"tenant_pk": tenant2.id, "pk": membership3.id},
)
)
assert response.status_code == status.HTTP_204_NO_CONTENT
assert (
BlacklistedToken.objects.filter(token_id__in=outstanding_ids).count() == 2
)
def test_expel_user_blacklists_refresh_tokens_is_idempotent(
self, authenticated_client, tenants_fixture, extra_users
):
# Regression test for the bulk blacklisting path: if one of the
# user's refresh tokens is already blacklisted when the expel
# endpoint runs, the remaining tokens must still be blacklisted
# and the already-blacklisted one must not be duplicated.
tenant1, tenant2, _ = tenants_fixture
_, user3_membership = extra_users
user3, membership3 = user3_membership
# Keep the user alive after the expel so the assertions below can
# still query OutstandingToken by user_id.
Membership.objects.create(
user=user3,
tenant=tenant1,
role=Membership.RoleChoices.MEMBER,
)
RefreshToken.for_user(user3)
RefreshToken.for_user(user3)
outstanding_ids = list(
OutstandingToken.objects.filter(user=user3).values_list("id", flat=True)
)
assert len(outstanding_ids) == 2
# Pre-blacklist one of the two tokens to simulate a prior revocation.
BlacklistedToken.objects.create(token_id=outstanding_ids[0])
assert (
BlacklistedToken.objects.filter(token_id__in=outstanding_ids).count() == 1
)
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
kwargs={"tenant_pk": tenant2.id, "pk": membership3.id},
)
)
assert response.status_code == status.HTTP_204_NO_CONTENT
blacklisted = BlacklistedToken.objects.filter(token_id__in=outstanding_ids)
assert blacklisted.count() == 2
assert set(blacklisted.values_list("token_id", flat=True)) == set(
outstanding_ids
)
def test_expel_user_keeps_account_if_has_other_memberships(
self, authenticated_client, tenants_fixture, extra_users
):
tenant1, tenant2, _ = tenants_fixture
_, user3_membership = extra_users
user3, membership3 = user3_membership
# Give user3 an additional membership in tenant1 so they are not orphaned
other_membership = Membership.objects.create(
user=user3,
tenant=tenant1,
role=Membership.RoleChoices.MEMBER,
)
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
kwargs={"tenant_pk": tenant2.id, "pk": membership3.id},
)
)
assert response.status_code == status.HTTP_204_NO_CONTENT
assert not Membership.objects.filter(id=membership3.id).exists()
assert User.objects.filter(id=user3.id).exists()
assert Membership.objects.filter(id=other_membership.id).exists()
def test_tenants_delete_another_membership_as_owner(
self, authenticated_client, tenants_fixture, extra_users
@@ -882,6 +1043,128 @@ class TestTenantViewSet:
assert response.status_code == status.HTTP_404_NOT_FOUND
assert Membership.objects.filter(id=other_membership.id).exists()
def test_delete_membership_cleans_up_orphaned_role_grants(
self, authenticated_client, tenants_fixture
):
"""Test that deleting a membership removes UserRoleRelationship records
for that tenant while preserving grants in other tenants."""
tenant1, tenant2, _ = tenants_fixture
# Create a user with memberships in both tenants
user = User.objects.create_user(
name="Multi-tenant User",
password=TEST_PASSWORD,
email="multitenant@test.com",
)
# Create memberships in both tenants
Membership.objects.create(
user=user, tenant=tenant1, role=Membership.RoleChoices.MEMBER
)
membership2 = Membership.objects.create(
user=user, tenant=tenant2, role=Membership.RoleChoices.MEMBER
)
# Create roles in both tenants
role1 = Role.objects.create(
name="Test Role 1", tenant=tenant1, manage_providers=True
)
role2 = Role.objects.create(
name="Test Role 2", tenant=tenant2, manage_scans=True
)
# Create user role relationships for both tenants
UserRoleRelationship.objects.create(user=user, role=role1, tenant=tenant1)
UserRoleRelationship.objects.create(user=user, role=role2, tenant=tenant2)
# Verify initial state
assert UserRoleRelationship.objects.filter(user=user, tenant=tenant1).exists()
assert UserRoleRelationship.objects.filter(user=user, tenant=tenant2).exists()
assert Role.objects.filter(id=role1.id).exists()
assert Role.objects.filter(id=role2.id).exists()
# Delete membership from tenant2 (authenticated user is owner of tenant2)
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
kwargs={"tenant_pk": tenant2.id, "pk": membership2.id},
)
)
assert response.status_code == status.HTTP_204_NO_CONTENT
# Verify the membership was deleted
assert not Membership.objects.filter(id=membership2.id).exists()
# Verify UserRoleRelationship for tenant2 was deleted
assert not UserRoleRelationship.objects.filter(
user=user, tenant=tenant2
).exists()
# Verify UserRoleRelationship for tenant1 is preserved
assert UserRoleRelationship.objects.filter(user=user, tenant=tenant1).exists()
# Verify orphaned role2 was deleted (no more user or invitation relationships)
assert not Role.objects.filter(id=role2.id).exists()
# Verify role1 is preserved (still has user relationship)
assert Role.objects.filter(id=role1.id).exists()
# Verify the user still exists (has other memberships)
assert User.objects.filter(id=user.id).exists()
def test_delete_membership_preserves_role_with_invitation_relationship(
self, authenticated_client, tenants_fixture
):
"""Test that roles are not deleted if they have invitation relationships."""
_, tenant2, _ = tenants_fixture
# Create a user with membership
user = User.objects.create_user(
name="Test User", password=TEST_PASSWORD, email="testuser@test.com"
)
membership = Membership.objects.create(
user=user, tenant=tenant2, role=Membership.RoleChoices.MEMBER
)
# Create a role and user relationship
role = Role.objects.create(
name="Shared Role", tenant=tenant2, manage_providers=True
)
UserRoleRelationship.objects.create(user=user, role=role, tenant=tenant2)
# Create an invitation with the same role
invitation = Invitation.objects.create(email="pending@test.com", tenant=tenant2)
InvitationRoleRelationship.objects.create(
invitation=invitation, role=role, tenant=tenant2
)
# Verify initial state
assert UserRoleRelationship.objects.filter(user=user, role=role).exists()
assert InvitationRoleRelationship.objects.filter(
invitation=invitation, role=role
).exists()
assert Role.objects.filter(id=role.id).exists()
# Delete the membership
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
kwargs={"tenant_pk": tenant2.id, "pk": membership.id},
)
)
assert response.status_code == status.HTTP_204_NO_CONTENT
# Verify UserRoleRelationship was deleted
assert not UserRoleRelationship.objects.filter(user=user, role=role).exists()
# Verify role is preserved because invitation relationship exists
assert Role.objects.filter(id=role.id).exists()
assert InvitationRoleRelationship.objects.filter(
invitation=invitation, role=role
).exists()
def test_tenants_list_no_permissions(
self, authenticated_client_no_permissions_rbac, tenants_fixture
):
@@ -3830,6 +4113,51 @@ class TestScanViewSet:
assert cd.startswith('attachment; filename="')
assert cd.endswith(f'filename="{fname.name}"')
def test_cis_no_output(self, authenticated_client, scans_fixture):
"""CIS PDF endpoint must 404 when the scan has no output_location."""
scan = scans_fixture[0]
scan.state = StateChoices.COMPLETED
scan.output_location = ""
scan.save()
url = reverse("scan-cis", kwargs={"pk": scan.id})
resp = authenticated_client.get(url)
assert resp.status_code == status.HTTP_404_NOT_FOUND
assert (
resp.json()["errors"]["detail"]
== "The scan has no reports, or the CIS report generation task has not started yet."
)
def test_cis_local_file(self, authenticated_client, scans_fixture, monkeypatch):
"""CIS PDF endpoint must serve the latest generated PDF."""
scan = scans_fixture[0]
scan.state = StateChoices.COMPLETED
with tempfile.TemporaryDirectory() as tmp:
tmp_path = Path(tmp)
base = tmp_path / "reports"
cis_dir = base / "cis"
cis_dir.mkdir(parents=True, exist_ok=True)
fname = cis_dir / "prowler-output-aws-20260101000000_cis_report.pdf"
fname.write_bytes(b"%PDF-1.4 fake pdf")
scan.output_location = str(base / "scan.zip")
scan.save()
monkeypatch.setattr(
glob,
"glob",
lambda p: [str(fname)] if p.endswith("*_cis_report.pdf") else [],
)
url = reverse("scan-cis", kwargs={"pk": scan.id})
resp = authenticated_client.get(url)
assert resp.status_code == status.HTTP_200_OK
assert resp["Content-Type"] == "application/pdf"
cd = resp["Content-Disposition"]
assert cd.startswith('attachment; filename="')
assert cd.endswith(f'filename="{fname.name}"')
@patch("api.v1.views.Task.objects.get")
@patch("api.v1.views.TaskSerializer")
def test__get_task_status_returns_none_if_task_not_executing(
+152 -4
@@ -83,6 +83,10 @@ from rest_framework.permissions import SAFE_METHODS
from rest_framework_json_api import filters as jsonapi_filters
from rest_framework_json_api.views import RelationshipView, Response
from rest_framework_simplejwt.exceptions import InvalidToken, TokenError
from rest_framework_simplejwt.token_blacklist.models import (
BlacklistedToken,
OutstandingToken,
)
from tasks.beat import schedule_provider_scan
from tasks.jobs.attack_paths import db_utils as attack_paths_db_utils
from tasks.jobs.export import get_s3_client
@@ -169,6 +173,7 @@ from api.models import (
FindingGroupDailySummary,
Integration,
Invitation,
InvitationRoleRelationship,
LighthouseConfiguration,
LighthouseProviderConfiguration,
LighthouseProviderModels,
@@ -1330,9 +1335,11 @@ class MembershipViewSet(BaseTenantViewset):
),
destroy=extend_schema(
summary="Delete tenant memberships",
description="Delete the membership details of users in a tenant. You need to be one of the owners to delete a "
"membership that is not yours. If you are the last owner of a tenant, you cannot delete your own "
"membership.",
description="Delete a user's membership from a tenant. This action: (1) removes the membership, "
"(2) revokes all refresh tokens for the expelled user, (3) removes their role grants for this tenant, "
"(4) cleans up orphaned roles, and (5) deletes the user account if this was their last membership. "
"You must be a tenant owner to delete another user's membership. The last owner of a tenant cannot "
"delete their own membership.",
tags=["Tenant"],
),
)
@@ -1341,6 +1348,7 @@ class TenantMembersViewSet(BaseTenantViewset):
http_method_names = ["get", "delete"]
serializer_class = MembershipSerializer
queryset = Membership.objects.none()
filterset_class = MembershipFilter
# Authorization is handled by get_requesting_membership (owner/member checks),
# not by RBAC, since the target tenant differs from the JWT tenant.
required_permissions = []
@@ -1398,7 +1406,84 @@ class TenantMembersViewSet(BaseTenantViewset):
"You do not have permission to delete this membership."
)
membership_to_delete.delete()
user_to_check_id = membership_to_delete.user_id
tenant_id = membership_to_delete.tenant_id
# All writes run on the admin connection so that the uncommitted
# membership delete is visible to the subsequent "other memberships"
# check. Splitting the delete and the check across the default
# (prowler_user, RLS) and admin connections caused the admin side to
# miss the just-deleted row and leave the User row orphaned.
with transaction.atomic(using=MainRouter.admin_db):
Membership.objects.using(MainRouter.admin_db).filter(
id=membership_to_delete.id
).delete()
# Remove role grants for this user in this tenant to prevent
# orphaned permissions that could allow access after expulsion
deleted_role_relationships = UserRoleRelationship.objects.using(
MainRouter.admin_db
).filter(user_id=user_to_check_id, tenant_id=tenant_id)
# Collect role IDs that might become orphaned after deletion
role_ids_to_check = list(
deleted_role_relationships.values_list("role_id", flat=True)
)
# Delete the user role relationships for this tenant
deleted_role_relationships.delete()
# Clean up orphaned roles that have no remaining user or invitation relationships
if role_ids_to_check:
for role_id in role_ids_to_check:
has_user_relationships = (
UserRoleRelationship.objects.using(MainRouter.admin_db)
.filter(role_id=role_id)
.exists()
)
has_invitation_relationships = (
InvitationRoleRelationship.objects.using(MainRouter.admin_db)
.filter(role_id=role_id)
.exists()
)
if not has_user_relationships and not has_invitation_relationships:
Role.objects.using(MainRouter.admin_db).filter(
id=role_id
).delete()
# Revoke any refresh tokens the expelled user still holds so they
# cannot mint fresh access tokens. This must happen before the
# User row is deleted, because OutstandingToken.user is
# on_delete=SET_NULL in djangorestframework-simplejwt 5.5.1
# (see rest_framework_simplejwt/token_blacklist/models.py): once
# the user row is gone, user_id becomes NULL and we can no longer
# look up that user's outstanding tokens. Access tokens already
# issued remain valid until SIMPLE_JWT["ACCESS_TOKEN_LIFETIME"]
# expires.
outstanding_token_ids = list(
OutstandingToken.objects.using(MainRouter.admin_db)
.filter(user_id=user_to_check_id)
.values_list("id", flat=True)
)
if outstanding_token_ids:
BlacklistedToken.objects.using(MainRouter.admin_db).bulk_create(
[
BlacklistedToken(token_id=token_id)
for token_id in outstanding_token_ids
],
ignore_conflicts=True,
)
has_other_memberships = (
Membership.objects.using(MainRouter.admin_db)
.filter(user_id=user_to_check_id)
.exists()
)
if not has_other_memberships:
User.objects.using(MainRouter.admin_db).filter(
id=user_to_check_id
).delete()
return Response(status=status.HTTP_204_NO_CONTENT)
@@ -1841,6 +1926,27 @@ class ProviderViewSet(DisablePaginationMixin, BaseRLSViewSet):
),
},
),
cis=extend_schema(
tags=["Scan"],
summary="Retrieve CIS Benchmark compliance report",
description="Download the CIS Benchmark compliance report as a PDF file. "
"When a provider ships multiple CIS versions, the report is generated "
"for the highest available version.",
request=None,
responses={
200: OpenApiResponse(
description="PDF file containing the CIS compliance report"
),
202: OpenApiResponse(description="The task is in progress"),
401: OpenApiResponse(
description="API key missing or user not Authenticated"
),
403: OpenApiResponse(description="There is a problem with credentials"),
404: OpenApiResponse(
description="The scan has no CIS reports, or the CIS report generation task has not started yet"
),
},
),
)
@method_decorator(CACHE_DECORATOR, name="list")
@method_decorator(CACHE_DECORATOR, name="retrieve")
@@ -1909,6 +2015,9 @@ class ScanViewSet(BaseRLSViewSet):
elif self.action == "csa":
if hasattr(self, "response_serializer_class"):
return self.response_serializer_class
elif self.action == "cis":
if hasattr(self, "response_serializer_class"):
return self.response_serializer_class
return super().get_serializer_class()
def partial_update(self, request, *args, **kwargs):
@@ -2151,6 +2260,45 @@ class ScanViewSet(BaseRLSViewSet):
content, filename = loader
return self._serve_file(content, filename, "text/csv")
@action(
detail=True,
methods=["get"],
url_name="cis",
)
def cis(self, request, pk=None):
scan = self.get_object()
running_resp = self._get_task_status(scan)
if running_resp:
return running_resp
if not scan.output_location:
return Response(
{
"detail": "The scan has no reports, or the CIS report generation task has not started yet."
},
status=status.HTTP_404_NOT_FOUND,
)
if scan.output_location.startswith("s3://"):
bucket = env.str("DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET", "")
key_prefix = scan.output_location.removeprefix(f"s3://{bucket}/")
prefix = os.path.join(
os.path.dirname(key_prefix),
"cis",
"*_cis_report.pdf",
)
loader = self._load_file(prefix, s3=True, bucket=bucket, list_objects=True)
else:
base = os.path.dirname(scan.output_location)
pattern = os.path.join(base, "cis", "*_cis_report.pdf")
loader = self._load_file(pattern, s3=False)
if isinstance(loader, Response):
return loader
content, filename = loader
return self._serve_file(content, filename, "application/pdf")
@action(
detail=True,
methods=["get"],
@@ -120,6 +120,7 @@ sentry_sdk.init(
# see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
before_send=before_send,
send_default_pii=True,
traces_sample_rate=env.float("DJANGO_SENTRY_TRACES_SAMPLE_RATE", default=0.02),
_experiments={
# Set continuous_profiling_auto_start to True
# to automatically start the profiler on when
+4 -4
@@ -14,8 +14,8 @@ from rest_framework import status
from rest_framework.test import APIClient
from tasks.jobs.backfill import (
backfill_resource_scan_summaries,
backfill_scan_category_summaries,
backfill_scan_resource_group_summaries,
aggregate_scan_category_summaries,
aggregate_scan_resource_group_summaries,
)
from api.attack_paths import (
@@ -1445,8 +1445,8 @@ def latest_scan_finding_with_categories(
)
finding.add_resources([resource])
backfill_resource_scan_summaries(tenant_id, str(scan.id))
backfill_scan_category_summaries(tenant_id, str(scan.id))
backfill_scan_resource_group_summaries(tenant_id, str(scan.id))
aggregate_scan_category_summaries(tenant_id, str(scan.id))
aggregate_scan_resource_group_summaries(tenant_id, str(scan.id))
return finding
Binary image file not shown (After: 131 KiB).
@@ -313,3 +313,16 @@ def sync_aws_account(
)
return failed_syncs
def extract_short_uid(uid: str) -> str:
"""Return the short identifier from an AWS ARN or resource ID.
Supported inputs end in one of:
- `<type>/<id>` (e.g. `instance/i-xxx`)
- `<type>:<id>` (e.g. `function:name`)
- `<id>` (e.g. `bucket-name` or `i-xxx`)
If `uid` is already a short resource ID, it is returned unchanged.
"""
return uid.rsplit("/", 1)[-1].rsplit(":", 1)[-1]
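A few hedged usage examples for `extract_short_uid` (the ARNs are made up for illustration; the expected values follow directly from the one-liner above):

```python
# Illustrative inputs, one per supported shape from the docstring.
assert extract_short_uid("arn:aws:ec2:eu-west-1:123456789012:instance/i-0abc123") == "i-0abc123"
assert extract_short_uid("arn:aws:lambda:eu-west-1:123456789012:function:my-fn") == "my-fn"
assert extract_short_uid("my-bucket") == "my-bucket"
```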
@@ -37,6 +37,8 @@ class ProviderConfig:
# Label for resources connected to the account node, enabling indexed finding lookups.
resource_label: str # e.g., "_AWSResource"
ingestion_function: Callable
# Maps a Postgres resource UID (e.g. full ARN) to the short-id form Cartography stores on some node types (e.g. `i-xxx` for EC2Instance).
short_uid_extractor: Callable[[str], str]
# Provider Configurations
@@ -48,6 +50,7 @@ AWS_CONFIG = ProviderConfig(
uid_field="arn",
resource_label="_AWSResource",
ingestion_function=aws.start_aws_ingestion,
short_uid_extractor=aws.extract_short_uid,
)
PROVIDER_CONFIGS: dict[str, ProviderConfig] = {
@@ -116,6 +119,21 @@ def get_provider_resource_label(provider_type: str) -> str:
return config.resource_label if config else "_UnknownProviderResource"
def _identity_short_uid(uid: str) -> str:
"""Fallback short-uid extractor for providers without a custom mapping."""
return uid
def get_short_uid_extractor(provider_type: str) -> Callable[[str], str]:
"""Get the short-uid extractor for a provider type.
Returns an identity function when the provider is unknown, so callers can
rely on a callable always being returned.
"""
config = PROVIDER_CONFIGS.get(provider_type)
return config.short_uid_extractor if config else _identity_short_uid
# Dynamic Isolation Label Helpers
# --------------------------------
@@ -8,7 +8,7 @@ This module handles:
"""
from collections import defaultdict
from typing import Any, Generator
from typing import Any, Callable, Generator
from uuid import UUID
import neo4j
@@ -21,6 +21,7 @@ from tasks.jobs.attack_paths.config import (
get_node_uid_field,
get_provider_resource_label,
get_root_node_label,
get_short_uid_extractor,
)
from tasks.jobs.attack_paths.queries import (
ADD_RESOURCE_LABEL_TEMPLATE,
@@ -57,7 +58,9 @@ _DB_QUERY_FIELDS = [
]
def _to_neo4j_dict(record: dict[str, Any], resource_uid: str) -> dict[str, Any]:
def _to_neo4j_dict(
record: dict[str, Any], resource_uid: str, resource_short_uid: str
) -> dict[str, Any]:
"""Transform a Django `.values()` record into a `dict` ready for Neo4j ingestion."""
return {
"id": str(record["id"]),
@@ -75,6 +78,7 @@ def _to_neo4j_dict(record: dict[str, Any], resource_uid: str) -> dict[str, Any]:
"muted": record["muted"],
"muted_reason": record["muted_reason"],
"resource_uid": resource_uid,
"resource_short_uid": resource_short_uid,
}
@@ -170,6 +174,8 @@ def load_findings(
batch_num = 0
total_records = 0
edges_merged = 0
edges_dropped = 0
for batch in findings_batches:
batch_num += 1
batch_size = len(batch)
@@ -178,9 +184,15 @@ def load_findings(
parameters["findings_data"] = batch
logger.info(f"Loading findings batch {batch_num} ({batch_size} records)")
neo4j_session.run(query, parameters)
summary = neo4j_session.run(query, parameters).single()
if summary is not None:
edges_merged += summary.get("merged_count", 0)
edges_dropped += summary.get("dropped_count", 0)
logger.info(f"Finished loading {total_records} records in {batch_num} batches")
logger.info(
f"Finished loading {total_records} records in {batch_num} batches "
f"(edges_merged={edges_merged}, edges_dropped={edges_dropped})"
)
return total_records
@@ -205,8 +217,9 @@ def stream_findings_with_resources(
)
tenant_id = prowler_api_provider.tenant_id
short_uid_extractor = get_short_uid_extractor(prowler_api_provider.provider)
for batch in _paginate_findings(tenant_id, scan_id):
enriched = _enrich_batch_with_resources(batch, tenant_id)
enriched = _enrich_batch_with_resources(batch, tenant_id, short_uid_extractor)
if enriched:
yield enriched
@@ -269,6 +282,7 @@ def _fetch_findings_batch(
def _enrich_batch_with_resources(
findings_batch: list[dict[str, Any]],
tenant_id: str,
short_uid_extractor: Callable[[str], str],
) -> list[dict[str, Any]]:
"""
Enrich findings with their resource UIDs.
@@ -280,7 +294,7 @@ def _enrich_batch_with_resources(
resource_map = _build_finding_resource_map(finding_ids, tenant_id)
return [
_to_neo4j_dict(finding, resource_uid)
_to_neo4j_dict(finding, resource_uid, short_uid_extractor(resource_uid))
for finding in findings_batch
for resource_uid in resource_map.get(finding["id"], [])
]
@@ -35,46 +35,56 @@ INSERT_FINDING_TEMPLATE = f"""
UNWIND $findings_data AS finding_data
OPTIONAL MATCH (resource_by_uid:__RESOURCE_LABEL__ {{__NODE_UID_FIELD__: finding_data.resource_uid}})
WITH finding_data, resource_by_uid
OPTIONAL MATCH (resource_by_id:__RESOURCE_LABEL__ {{id: finding_data.resource_uid}})
WHERE resource_by_uid IS NULL
WITH finding_data, COALESCE(resource_by_uid, resource_by_id) AS resource
WHERE resource IS NOT NULL
OPTIONAL MATCH (resource_by_short:__RESOURCE_LABEL__ {{id: finding_data.resource_short_uid}})
WHERE resource_by_uid IS NULL AND resource_by_id IS NULL
WITH finding_data,
resource_by_uid,
resource_by_id,
head(collect(resource_by_short)) AS resource_by_short
WITH finding_data,
COALESCE(resource_by_uid, resource_by_id, resource_by_short) AS resource
MERGE (finding:{PROWLER_FINDING_LABEL} {{id: finding_data.id}})
ON CREATE SET
finding.id = finding_data.id,
finding.uid = finding_data.uid,
finding.inserted_at = finding_data.inserted_at,
finding.updated_at = finding_data.updated_at,
finding.first_seen_at = finding_data.first_seen_at,
finding.scan_id = finding_data.scan_id,
finding.delta = finding_data.delta,
finding.status = finding_data.status,
finding.status_extended = finding_data.status_extended,
finding.severity = finding_data.severity,
finding.check_id = finding_data.check_id,
finding.check_title = finding_data.check_title,
finding.muted = finding_data.muted,
finding.muted_reason = finding_data.muted_reason,
finding.firstseen = timestamp(),
finding.lastupdated = $last_updated,
finding._module_name = 'cartography:prowler',
finding._module_version = $prowler_version
ON MATCH SET
finding.status = finding_data.status,
finding.status_extended = finding_data.status_extended,
finding.lastupdated = $last_updated
FOREACH (_ IN CASE WHEN resource IS NOT NULL THEN [1] ELSE [] END |
MERGE (finding:{PROWLER_FINDING_LABEL} {{id: finding_data.id}})
ON CREATE SET
finding.id = finding_data.id,
finding.uid = finding_data.uid,
finding.inserted_at = finding_data.inserted_at,
finding.updated_at = finding_data.updated_at,
finding.first_seen_at = finding_data.first_seen_at,
finding.scan_id = finding_data.scan_id,
finding.delta = finding_data.delta,
finding.status = finding_data.status,
finding.status_extended = finding_data.status_extended,
finding.severity = finding_data.severity,
finding.check_id = finding_data.check_id,
finding.check_title = finding_data.check_title,
finding.muted = finding_data.muted,
finding.muted_reason = finding_data.muted_reason,
finding.firstseen = timestamp(),
finding.lastupdated = $last_updated,
finding._module_name = 'cartography:prowler',
finding._module_version = $prowler_version
ON MATCH SET
finding.status = finding_data.status,
finding.status_extended = finding_data.status_extended,
finding.lastupdated = $last_updated
MERGE (resource)-[rel:HAS_FINDING]->(finding)
ON CREATE SET
rel.firstseen = timestamp(),
rel.lastupdated = $last_updated,
rel._module_name = 'cartography:prowler',
rel._module_version = $prowler_version
ON MATCH SET
rel.lastupdated = $last_updated
)
MERGE (resource)-[rel:HAS_FINDING]->(finding)
ON CREATE SET
rel.firstseen = timestamp(),
rel.lastupdated = $last_updated,
rel._module_name = 'cartography:prowler',
rel._module_version = $prowler_version
ON MATCH SET
rel.lastupdated = $last_updated
WITH sum(CASE WHEN resource IS NOT NULL THEN 1 ELSE 0 END) AS merged_count,
sum(CASE WHEN resource IS NULL THEN 1 ELSE 0 END) AS dropped_count
RETURN merged_count, dropped_count
"""
# Internet queries (used by internet.py)
+46 -26
@@ -297,12 +297,15 @@ def backfill_daily_severity_summaries(tenant_id: str, days: int = None):
}
def backfill_scan_category_summaries(tenant_id: str, scan_id: str):
def aggregate_scan_category_summaries(tenant_id: str, scan_id: str):
"""
Backfill ScanCategorySummary for a completed scan.
Aggregates category counts from all findings in the scan and creates
one ScanCategorySummary row per (category, severity) combination.
Idempotent: re-runs replace the scan's existing rows so counts stay in
sync with `Finding.muted` updates triggered outside scan completion
(e.g. mute rules).
Args:
tenant_id: Target tenant UUID
@@ -312,11 +315,6 @@ def backfill_scan_category_summaries(tenant_id: str, scan_id: str):
dict: Status indicating whether backfill was performed
"""
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
if ScanCategorySummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).exists():
return {"status": "already backfilled"}
if not Scan.objects.filter(
tenant_id=tenant_id,
id=scan_id,
@@ -337,9 +335,6 @@ def backfill_scan_category_summaries(tenant_id: str, scan_id: str):
cache=category_counts,
)
if not category_counts:
return {"status": "no categories to backfill"}
category_summaries = [
ScanCategorySummary(
tenant_id=tenant_id,
@@ -353,20 +348,38 @@ def backfill_scan_category_summaries(tenant_id: str, scan_id: str):
for (category, severity), counts in category_counts.items()
]
with rls_transaction(tenant_id):
ScanCategorySummary.objects.bulk_create(
category_summaries, batch_size=500, ignore_conflicts=True
)
if category_summaries:
with rls_transaction(tenant_id):
# Upsert so re-runs (post-mute reaggregation) don't trip
# `unique_category_severity_per_scan`; race-safe under concurrent writers.
ScanCategorySummary.objects.bulk_create(
category_summaries,
batch_size=500,
update_conflicts=True,
unique_fields=["tenant_id", "scan_id", "category", "severity"],
update_fields=[
"total_findings",
"failed_findings",
"new_failed_findings",
],
)
if not category_counts:
return {"status": "no categories to backfill"}
return {"status": "backfilled", "categories_count": len(category_counts)}
def backfill_scan_resource_group_summaries(tenant_id: str, scan_id: str):
def aggregate_scan_resource_group_summaries(tenant_id: str, scan_id: str):
"""
Backfill ScanGroupSummary for a completed scan.
Aggregates resource group counts from all findings in the scan and creates
one ScanGroupSummary row per (resource_group, severity) combination.
Idempotent: re-runs replace the scan's existing rows so counts stay in
sync with `Finding.muted` updates triggered outside scan completion
(e.g. mute rules) and with resource-inventory views reading from this
table.
Args:
tenant_id: Target tenant UUID
@@ -376,11 +389,6 @@ def backfill_scan_resource_group_summaries(tenant_id: str, scan_id: str):
dict: Status indicating whether backfill was performed
"""
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
if ScanGroupSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).exists():
return {"status": "already backfilled"}
if not Scan.objects.filter(
tenant_id=tenant_id,
id=scan_id,
@@ -418,9 +426,6 @@ def backfill_scan_resource_group_summaries(tenant_id: str, scan_id: str):
group_resources_cache=group_resources_cache,
)
if not resource_group_counts:
return {"status": "no resource groups to backfill"}
# Compute group-level resource counts (same value for all severity rows in a group)
group_resource_counts = {
grp: len(uids) for grp, uids in group_resources_cache.items()
@@ -439,10 +444,25 @@ def backfill_scan_resource_group_summaries(tenant_id: str, scan_id: str):
for (grp, severity), counts in resource_group_counts.items()
]
with rls_transaction(tenant_id):
ScanGroupSummary.objects.bulk_create(
resource_group_summaries, batch_size=500, ignore_conflicts=True
)
if resource_group_summaries:
with rls_transaction(tenant_id):
# Upsert so re-runs (post-mute reaggregation) don't trip
# `unique_resource_group_severity_per_scan`; race-safe under concurrent writers.
ScanGroupSummary.objects.bulk_create(
resource_group_summaries,
batch_size=500,
update_conflicts=True,
unique_fields=["tenant_id", "scan_id", "resource_group", "severity"],
update_fields=[
"total_findings",
"failed_findings",
"new_failed_findings",
"resources_count",
],
)
if not resource_group_counts:
return {"status": "no resource groups to backfill"}
return {"status": "backfilled", "resource_groups_count": len(resource_group_counts)}
+636 -42
@@ -1,11 +1,19 @@
import gc
import os
import re
import time
from collections.abc import Iterable
from pathlib import Path
from shutil import rmtree
from uuid import UUID
import fcntl
from celery.utils.log import get_task_logger
from config.django.base import DJANGO_TMP_OUTPUT_DIRECTORY
from tasks.jobs.export import _generate_compliance_output_directory, _upload_to_s3
from tasks.jobs.reports import (
FRAMEWORK_REGISTRY,
CISReportGenerator,
CSAReportGenerator,
ENSReportGenerator,
NIS2ReportGenerator,
@@ -14,12 +22,398 @@ from tasks.jobs.reports import (
from tasks.jobs.threatscore import compute_threatscore_metrics
from tasks.jobs.threatscore_utils import _aggregate_requirement_statistics_from_database
from api.db_router import READ_REPLICA_ALIAS
from api.db_router import READ_REPLICA_ALIAS, MainRouter
from api.db_utils import rls_transaction
from api.models import Provider, ScanSummary, ThreatScoreSnapshot
from api.models import Provider, Scan, ScanSummary, StateChoices, ThreatScoreSnapshot
from prowler.lib.check.compliance_models import Compliance
from prowler.lib.outputs.finding import Finding as FindingOutput
logger = get_task_logger(__name__)
STALE_TMP_OUTPUT_MAX_AGE_HOURS = 48
STALE_TMP_OUTPUT_MAX_DELETIONS_PER_RUN = 50
STALE_TMP_OUTPUT_THROTTLE_SECONDS = 60 * 60
STALE_TMP_OUTPUT_LOCK_FILE_NAME = ".stale_tmp_cleanup.lock"
# Refuse to ever run rmtree against shared system roots; the configured
# DJANGO_TMP_OUTPUT_DIRECTORY must be a dedicated subdirectory.
_FORBIDDEN_CLEANUP_ROOTS = frozenset(
Path(p).resolve()
for p in ("/", "/tmp", "/var", "/var/tmp", "/home", "/root", "/etc", "/usr")
)
def _resolve_stale_tmp_safe_root() -> Path | None:
"""Resolve the configured tmp output directory, rejecting unsafe roots."""
try:
configured_root = Path(DJANGO_TMP_OUTPUT_DIRECTORY).resolve()
except OSError:
return None
if configured_root in _FORBIDDEN_CLEANUP_ROOTS:
return None
return configured_root
STALE_TMP_OUTPUT_SAFE_ROOT = _resolve_stale_tmp_safe_root()
# Matches CIS compliance_ids like "cis_1.4_aws", "cis_5.0_azure",
# "cis_1.10_kubernetes", "cis_3.0.1_aws". Requires at least one dotted
# component so malformed inputs like "cis_._aws" or "cis_5._aws" are rejected
# at the regex stage, rather than by a later ValueError fallback.
_CIS_VARIANT_RE = re.compile(r"^cis_(?P<version>\d+(?:\.\d+)+)_(?P<provider>.+)$")
def _pick_latest_cis_variant(compliance_ids: Iterable[str]) -> str | None:
"""Return the CIS compliance_id with the highest semantic version.
CIS ships many variants per provider (e.g. cis_1.4_aws, ..., cis_6.0_aws).
A lexicographic sort is incorrect for version strings like ``1.10`` vs
``1.2``; this helper parses the version into a tuple of ints so ``1.10``
is correctly ordered after ``1.2``. Malformed names are skipped so a
broken JSON cannot crash the whole CIS pipeline.
Args:
compliance_ids: Iterable of CIS compliance identifiers. Expected to
belong to a single provider (callers should pass the already
filtered keys from ``Compliance.get_bulk(provider_type)``).
Returns:
The compliance_id with the highest parsed version, or ``None`` if no
well-formed CIS identifier was found.
"""
best_key: tuple[int, ...] | None = None
best_name: str | None = None
for name in compliance_ids:
match = _CIS_VARIANT_RE.match(name)
if not match:
continue
try:
key = tuple(int(part) for part in match.group("version").split("."))
except ValueError:
# Defensive: the regex already guarantees numeric chunks, but we
# keep the guard so a future regex change cannot crash callers.
continue
if best_key is None or key > best_key:
best_key = key
best_name = name
return best_name
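A hedged usage check for `_pick_latest_cis_variant`, with made-up variant lists; it also shows why a plain lexicographic `max()` gets `1.10` vs `1.2` wrong:

```python
# Illustrative only: example variant names, not the full CIS catalogue.
assert _pick_latest_cis_variant(["cis_1.2_aws", "cis_1.10_aws", "cis_5.0_aws"]) == "cis_5.0_aws"
# Version-aware comparison orders 1.10 after 1.2...
assert _pick_latest_cis_variant(["cis_1.2_aws", "cis_1.10_aws"]) == "cis_1.10_aws"
# ...whereas a lexicographic max() would pick cis_1.2_aws here.
assert max(["cis_1.2_aws", "cis_1.10_aws"]) == "cis_1.2_aws"
```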
def _should_run_stale_cleanup(
root_path: Path,
throttle_seconds: int = STALE_TMP_OUTPUT_THROTTLE_SECONDS,
) -> bool:
"""Throttle stale cleanup to at most once per hour per host."""
lock_file_path = root_path / STALE_TMP_OUTPUT_LOCK_FILE_NAME
now_timestamp = int(time.time())
try:
with lock_file_path.open("a+", encoding="ascii") as lock_file:
try:
fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
return False
lock_file.seek(0)
previous_value = lock_file.read().strip()
try:
last_run_timestamp = int(previous_value) if previous_value else 0
except ValueError:
last_run_timestamp = 0
if now_timestamp - last_run_timestamp < throttle_seconds:
return False
lock_file.seek(0)
lock_file.truncate()
lock_file.write(str(now_timestamp))
lock_file.flush()
os.fsync(lock_file.fileno())
except OSError as error:
logger.warning("Skipping stale tmp cleanup: lock file error (%s)", error)
return False
return True
def _is_scan_metadata_protected(
scan_path: Path,
scan_state: str | None,
output_location: str | None,
) -> bool:
"""
Return True when metadata indicates the directory must not be deleted.
Protected cases:
- Scan is still EXECUTING.
- Scan has a local output artifact path (non-S3) under this scan directory.
"""
if scan_state == StateChoices.EXECUTING.value:
return True
output_location = output_location or ""
if output_location and not output_location.startswith("s3://"):
try:
resolved_output_location = Path(output_location).resolve()
except OSError:
# Conservative fallback: if we cannot resolve a local output path,
# keep the directory to avoid deleting potentially needed artifacts.
return True
if (
resolved_output_location == scan_path
or scan_path in resolved_output_location.parents
):
return True
return False
def _is_scan_directory_protected(
tenant_id: str,
scan_id: str,
scan_path: Path,
) -> bool:
"""
DB-backed wrapper used when batch metadata is not already available.
"""
try:
scan_uuid = UUID(scan_id)
except ValueError:
return False
try:
scan = (
Scan.all_objects.using(MainRouter.admin_db)
.filter(tenant_id=tenant_id, id=scan_uuid)
.only("state", "output_location")
.first()
)
except Exception as error:
logger.warning(
"Skipping stale tmp cleanup for %s/%s due to scan lookup error: %s",
tenant_id,
scan_id,
error,
)
return True
if not scan:
return False
return _is_scan_metadata_protected(
scan_path=scan_path,
scan_state=scan.state,
output_location=scan.output_location,
)
def _cleanup_stale_tmp_output_directories(
tmp_output_root: str,
max_age_hours: int = STALE_TMP_OUTPUT_MAX_AGE_HOURS,
exclude_scan: tuple[str, str] | None = None,
max_deletions_per_run: int = STALE_TMP_OUTPUT_MAX_DELETIONS_PER_RUN,
) -> int:
"""
Opportunistically delete stale scan directories under the tmp output root.
Expected directory layout:
<tmp_output_root>/<tenant_id>/<scan_id>/...
Each run that wins the per-host throttle sweeps every tenant directory so
leftover artifacts cannot pile up for tenants whose own tasks happen to
lose the throttle race.
Args:
tmp_output_root: Base tmp output path.
max_age_hours: Directory max age before deletion.
exclude_scan: Optional (tenant_id, scan_id) that must never be deleted.
max_deletions_per_run: Max number of scan directories deleted per run.
Returns:
Number of deleted scan directories.
"""
try:
if max_age_hours <= 0:
return 0
try:
root_path = Path(tmp_output_root).resolve()
except OSError as error:
logger.warning(
"Skipping stale tmp cleanup: unable to resolve %s (%s)",
tmp_output_root,
error,
)
return 0
if (
STALE_TMP_OUTPUT_SAFE_ROOT is None
or root_path != STALE_TMP_OUTPUT_SAFE_ROOT
):
logger.warning(
"Skipping stale tmp cleanup: unsupported root %s (allowed: %s)",
root_path,
STALE_TMP_OUTPUT_SAFE_ROOT,
)
return 0
if not root_path.exists() or not root_path.is_dir():
return 0
if max_deletions_per_run <= 0:
return 0
if not _should_run_stale_cleanup(root_path):
return 0
cutoff_timestamp = time.time() - (max_age_hours * 60 * 60)
deleted_scan_dirs = 0
try:
tenant_dirs = list(root_path.iterdir())
except OSError as error:
logger.warning(
"Skipping stale tmp cleanup: unable to list %s (%s)",
root_path,
error,
)
return 0
for tenant_dir in tenant_dirs:
if deleted_scan_dirs >= max_deletions_per_run:
break
if not tenant_dir.is_dir() or tenant_dir.is_symlink():
continue
try:
scan_dirs = list(tenant_dir.iterdir())
except OSError:
continue
stale_candidates: list[tuple[str, Path, UUID | None]] = []
for scan_dir in scan_dirs:
if not scan_dir.is_dir() or scan_dir.is_symlink():
continue
if exclude_scan and (
tenant_dir.name == exclude_scan[0]
and scan_dir.name == exclude_scan[1]
):
continue
try:
if scan_dir.stat().st_mtime >= cutoff_timestamp:
continue
except OSError:
continue
try:
resolved_scan_dir = scan_dir.resolve()
except OSError:
continue
if root_path not in resolved_scan_dir.parents:
logger.warning(
"Skipping stale tmp cleanup for path outside root: %s",
resolved_scan_dir,
)
continue
try:
scan_uuid: UUID | None = UUID(scan_dir.name)
except ValueError:
scan_uuid = None
stale_candidates.append((scan_dir.name, resolved_scan_dir, scan_uuid))
if not stale_candidates:
continue
scan_metadata_by_id: dict[UUID, tuple[str | None, str | None]] = {}
metadata_preload_succeeded = False
candidate_scan_ids = [
candidate[2] for candidate in stale_candidates if candidate[2]
]
if candidate_scan_ids:
try:
scan_rows = (
Scan.all_objects.using(MainRouter.admin_db)
.filter(
tenant_id=tenant_dir.name,
id__in=candidate_scan_ids,
)
.values_list("id", "state", "output_location")
)
scan_metadata_by_id = {
scan_id: (scan_state, output_location)
for scan_id, scan_state, output_location in scan_rows
}
metadata_preload_succeeded = True
except Exception as error:
logger.warning(
"Skipping stale tmp cleanup metadata preload for tenant %s: %s",
tenant_dir.name,
error,
)
else:
metadata_preload_succeeded = True
for scan_name, resolved_scan_dir, scan_uuid in stale_candidates:
if deleted_scan_dirs >= max_deletions_per_run:
break
should_check_scan_fallback = True
if scan_uuid and metadata_preload_succeeded:
should_check_scan_fallback = False
scan_metadata = scan_metadata_by_id.get(scan_uuid)
if scan_metadata:
scan_state, output_location = scan_metadata
if _is_scan_metadata_protected(
scan_path=resolved_scan_dir,
scan_state=scan_state,
output_location=output_location,
):
continue
if should_check_scan_fallback and _is_scan_directory_protected(
tenant_id=tenant_dir.name,
scan_id=scan_name,
scan_path=resolved_scan_dir,
):
continue
try:
rmtree(resolved_scan_dir, ignore_errors=True)
deleted_scan_dirs += 1
except Exception as error:
logger.warning(
"Error cleaning stale tmp directory %s: %s",
resolved_scan_dir,
error,
)
if deleted_scan_dirs:
logger.info(
"Deleted %s stale tmp output directories older than %sh from %s",
deleted_scan_dirs,
max_age_hours,
root_path,
)
if deleted_scan_dirs >= max_deletions_per_run:
logger.info(
"Stale tmp cleanup hit deletion limit (%s) for root %s",
max_deletions_per_run,
root_path,
)
return deleted_scan_dirs
except Exception as error:
logger.warning(
"Skipping stale tmp cleanup due to unexpected error: %s",
error,
exc_info=True,
)
return 0
def generate_threatscore_report(
@@ -191,6 +585,53 @@ def generate_csa_report(
)
def generate_cis_report(
tenant_id: str,
scan_id: str,
compliance_id: str,
output_path: str,
provider_id: str,
only_failed: bool = True,
include_manual: bool = False,
provider_obj: Provider | None = None,
requirement_statistics: dict[str, dict[str, int]] | None = None,
findings_cache: dict[str, list[FindingOutput]] | None = None,
) -> None:
"""
Generate a PDF compliance report for a specific CIS Benchmark variant.
Unlike single-version frameworks (ENS, NIS2, CSA), CIS has multiple
variants per provider (e.g., cis_1.4_aws, cis_5.0_aws, cis_6.0_aws). This
wrapper is called once per variant, receiving the specific compliance_id.
Args:
tenant_id: The tenant ID for Row-Level Security context.
scan_id: ID of the scan executed by Prowler.
compliance_id: ID of the specific CIS variant (e.g., "cis_5.0_aws").
output_path: Output PDF file path.
provider_id: Provider ID for the scan.
only_failed: If True, only include failed requirements in detailed section.
include_manual: If True, include manual requirements in detailed section.
provider_obj: Pre-fetched Provider object to avoid duplicate queries.
requirement_statistics: Pre-aggregated requirement statistics.
findings_cache: Cache of already loaded findings to avoid duplicate queries.
"""
generator = CISReportGenerator(FRAMEWORK_REGISTRY["cis"])
generator.generate(
tenant_id=tenant_id,
scan_id=scan_id,
compliance_id=compliance_id,
output_path=output_path,
provider_id=provider_id,
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
only_failed=only_failed,
include_manual=include_manual,
)
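# Hedged sketch (illustrative only, not part of this diff): generate_compliance_reports
# below selects the highest available CIS version per provider via
# _pick_latest_cis_variant, which is not shown in this hunk. A minimal version of
# that selection, assuming ids shaped like "cis_<version>_<provider>", could look
# like this and may differ from the real helper:
def _pick_latest_cis_variant_sketch(cis_ids) -> str | None:
    def _version_key(compliance_id: str) -> tuple[int, ...]:
        # "cis_5.0_aws" -> (5, 0); ids without a parsable version sort lowest.
        try:
            return tuple(int(part) for part in compliance_id.split("_")[1].split("."))
        except (IndexError, ValueError):
            return ()

    cis_ids = list(cis_ids)
    return max(cis_ids, key=_version_key) if cis_ids else None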
def generate_compliance_reports(
tenant_id: str,
scan_id: str,
@@ -199,6 +640,7 @@ def generate_compliance_reports(
generate_ens: bool = True,
generate_nis2: bool = True,
generate_csa: bool = True,
generate_cis: bool = True,
only_failed_threatscore: bool = True,
min_risk_level_threatscore: int = 4,
include_manual_ens: bool = True,
@@ -206,6 +648,8 @@ def generate_compliance_reports(
only_failed_nis2: bool = True,
only_failed_csa: bool = True,
include_manual_csa: bool = False,
only_failed_cis: bool = True,
include_manual_cis: bool = False,
) -> dict[str, dict[str, bool | str]]:
"""
Generate multiple compliance reports with shared database queries.
@@ -215,6 +659,13 @@ def generate_compliance_reports(
- Aggregating requirement statistics once (shared across all reports)
- Reusing compliance framework data when possible
For CIS a single PDF is produced per run: the one matching the highest
available CIS version for the scan's provider (picked dynamically from
``Compliance.get_bulk`` via :func:`_pick_latest_cis_variant`). The
returned ``results["cis"]`` entry has the same flat shape as the other
single-version frameworks; the picked variant is an internal detail,
not surfaced in the result.
Args:
tenant_id: The tenant ID for Row-Level Security context.
scan_id: The ID of the scan to generate reports for.
@@ -223,6 +674,8 @@ def generate_compliance_reports(
generate_ens: Whether to generate ENS report.
generate_nis2: Whether to generate NIS2 report.
generate_csa: Whether to generate CSA CCM report.
generate_cis: Whether to generate a CIS Benchmark report for the
latest CIS version available for the provider.
only_failed_threatscore: For ThreatScore, only include failed requirements.
min_risk_level_threatscore: Minimum risk level for ThreatScore critical requirements.
include_manual_ens: For ENS, include manual requirements.
@@ -230,22 +683,39 @@ def generate_compliance_reports(
only_failed_nis2: For NIS2, only include failed requirements.
only_failed_csa: For CSA CCM, only include failed requirements.
include_manual_csa: For CSA CCM, include manual requirements.
only_failed_cis: For CIS, only include failed requirements in detailed section.
include_manual_cis: For CIS, include manual requirements in detailed section.
Returns:
Dictionary with results for each report type.
Dictionary with results for each report type. Every value has the
same flat shape: ``{"upload": bool, "path": str, "error"?: str}``.
"""
logger.info(
"Generating compliance reports for scan %s with provider %s"
" (ThreatScore: %s, ENS: %s, NIS2: %s, CSA: %s)",
" (ThreatScore: %s, ENS: %s, NIS2: %s, CSA: %s, CIS: %s)",
scan_id,
provider_id,
generate_threatscore,
generate_ens,
generate_nis2,
generate_csa,
generate_cis,
)
results = {}
try:
_cleanup_stale_tmp_output_directories(
DJANGO_TMP_OUTPUT_DIRECTORY,
max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS,
exclude_scan=(tenant_id, scan_id),
)
except Exception as error:
logger.warning(
"Skipping stale tmp cleanup before compliance reports for scan %s: %s",
scan_id,
error,
)
results: dict = {}
# Validate that the scan has findings and get provider info
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
@@ -259,6 +729,8 @@ def generate_compliance_reports(
results["nis2"] = {"upload": False, "path": ""}
if generate_csa:
results["csa"] = {"upload": False, "path": ""}
if generate_cis:
results["cis"] = {"upload": False, "path": ""}
return results
provider_obj = Provider.objects.get(id=provider_id)
@@ -299,11 +771,39 @@ def generate_compliance_reports(
results["csa"] = {"upload": False, "path": ""}
generate_csa = False
# For CIS we do NOT pre-check the provider against a hard-coded whitelist
# (that list drifts the moment a new CIS JSON ships). Instead, we inspect
# the dynamically loaded framework map and pick the latest available CIS
# version, if any.
latest_cis: str | None = None
if generate_cis:
try:
frameworks_bulk = Compliance.get_bulk(provider_type)
latest_cis = _pick_latest_cis_variant(
name for name in frameworks_bulk.keys() if name.startswith("cis_")
)
except Exception as e:
logger.error("Error discovering CIS variants for %s: %s", provider_type, e)
results["cis"] = {"upload": False, "path": "", "error": str(e)}
generate_cis = False
else:
if latest_cis is None:
logger.info("No CIS variants available for provider %s", provider_type)
results["cis"] = {"upload": False, "path": ""}
generate_cis = False
else:
logger.info(
"Selected latest CIS variant for provider %s: %s",
provider_type,
latest_cis,
)
if (
not generate_threatscore
and not generate_ens
and not generate_nis2
and not generate_csa
and not generate_cis
):
return results
@@ -319,38 +819,56 @@ def generate_compliance_reports(
findings_cache = {}
logger.info("Created shared findings cache for all reports")
# Generate output directories
generated_report_keys: list[str] = []
output_paths: dict[str, str] = {}
out_dir: str | None = None
# Generate output directories only for enabled and supported report types.
try:
logger.info("Generating output directories")
threatscore_path = _generate_compliance_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY,
provider_uid,
tenant_id,
scan_id,
compliance_framework="threatscore",
)
ens_path = _generate_compliance_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY,
provider_uid,
tenant_id,
scan_id,
compliance_framework="ens",
)
nis2_path = _generate_compliance_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY,
provider_uid,
tenant_id,
scan_id,
compliance_framework="nis2",
)
csa_path = _generate_compliance_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY,
provider_uid,
tenant_id,
scan_id,
compliance_framework="csa",
)
out_dir = str(Path(threatscore_path).parent.parent)
if generate_threatscore:
output_paths["threatscore"] = _generate_compliance_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY,
provider_uid,
tenant_id,
scan_id,
compliance_framework="threatscore",
)
if generate_ens:
output_paths["ens"] = _generate_compliance_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY,
provider_uid,
tenant_id,
scan_id,
compliance_framework="ens",
)
if generate_nis2:
output_paths["nis2"] = _generate_compliance_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY,
provider_uid,
tenant_id,
scan_id,
compliance_framework="nis2",
)
if generate_csa:
output_paths["csa"] = _generate_compliance_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY,
provider_uid,
tenant_id,
scan_id,
compliance_framework="csa",
)
if generate_cis and latest_cis:
output_paths["cis"] = _generate_compliance_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY,
provider_uid,
tenant_id,
scan_id,
compliance_framework="cis",
)
if output_paths:
first_output_path = next(iter(output_paths.values()))
out_dir = str(Path(first_output_path).parent.parent)
except Exception as e:
logger.error("Error generating output directory: %s", e)
error_dict = {"error": str(e), "upload": False, "path": ""}
@@ -362,10 +880,14 @@ def generate_compliance_reports(
results["nis2"] = error_dict.copy()
if generate_csa:
results["csa"] = error_dict.copy()
if generate_cis:
results["cis"] = error_dict.copy()
return results
# Generate ThreatScore report
if generate_threatscore:
generated_report_keys.append("threatscore")
threatscore_path = output_paths["threatscore"]
compliance_id_threatscore = f"prowler_threatscore_{provider_type}"
pdf_path_threatscore = f"{threatscore_path}_threatscore_report.pdf"
logger.info(
@@ -467,6 +989,8 @@ def generate_compliance_reports(
# Generate ENS report
if generate_ens:
generated_report_keys.append("ens")
ens_path = output_paths["ens"]
compliance_id_ens = f"ens_rd2022_{provider_type}"
pdf_path_ens = f"{ens_path}_ens_report.pdf"
logger.info("Generating ENS report with compliance %s", compliance_id_ens)
@@ -501,6 +1025,8 @@ def generate_compliance_reports(
# Generate NIS2 report
if generate_nis2:
generated_report_keys.append("nis2")
nis2_path = output_paths["nis2"]
compliance_id_nis2 = f"nis2_{provider_type}"
pdf_path_nis2 = f"{nis2_path}_nis2_report.pdf"
logger.info("Generating NIS2 report with compliance %s", compliance_id_nis2)
@@ -536,6 +1062,8 @@ def generate_compliance_reports(
# Generate CSA CCM report
if generate_csa:
generated_report_keys.append("csa")
csa_path = output_paths["csa"]
compliance_id_csa = f"csa_ccm_4.0_{provider_type}"
pdf_path_csa = f"{csa_path}_csa_report.pdf"
logger.info("Generating CSA CCM report with compliance %s", compliance_id_csa)
@@ -569,14 +1097,75 @@ def generate_compliance_reports(
logger.error("Error generating CSA CCM report: %s", e)
results["csa"] = {"upload": False, "path": "", "error": str(e)}
# Clean up temporary files if all reports were uploaded successfully
all_uploaded = all(
result.get("upload", False)
for result in results.values()
if result.get("upload") is not None
# Generate CIS Benchmark report for the latest available version only.
# CIS ships multiple versions per provider (e.g. cis_1.4_aws, cis_5.0_aws,
# cis_6.0_aws); we dynamically pick the highest semantic version at run
# time rather than hard-coding a per-provider mapping.
if generate_cis and latest_cis:
generated_report_keys.append("cis")
cis_path = output_paths["cis"]
if out_dir is None:
out_dir = str(Path(cis_path).parent.parent)
pdf_path_cis = f"{cis_path}_cis_report.pdf"
try:
generate_cis_report(
tenant_id=tenant_id,
scan_id=scan_id,
compliance_id=latest_cis,
output_path=pdf_path_cis,
provider_id=provider_id,
only_failed=only_failed_cis,
include_manual=include_manual_cis,
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
)
upload_uri_cis = _upload_to_s3(
tenant_id,
scan_id,
pdf_path_cis,
f"cis/{Path(pdf_path_cis).name}",
)
if upload_uri_cis:
results["cis"] = {
"upload": True,
"path": upload_uri_cis,
}
logger.info(
"CIS report %s uploaded to %s",
latest_cis,
upload_uri_cis,
)
else:
results["cis"] = {"upload": False, "path": out_dir}
logger.warning(
"CIS report %s saved locally at %s",
latest_cis,
out_dir,
)
except Exception as e:
logger.error("Error generating CIS report %s: %s", latest_cis, e)
results["cis"] = {
"upload": False,
"path": "",
"error": str(e),
}
finally:
# Free ReportLab/matplotlib memory before moving on.
gc.collect()
# Clean up temporary files only if all generated reports were
# uploaded successfully. Reports skipped for provider incompatibility
# or missing CIS variants must not block cleanup.
all_uploaded = bool(generated_report_keys) and all(
results.get(report_key, {}).get("upload", False)
for report_key in generated_report_keys
)
if all_uploaded:
if all_uploaded and out_dir:
try:
rmtree(Path(out_dir), ignore_errors=True)
logger.info("Cleaned up temporary files at %s", out_dir)
@@ -595,6 +1184,7 @@ def generate_compliance_reports_job(
generate_ens: bool = True,
generate_nis2: bool = True,
generate_csa: bool = True,
generate_cis: bool = True,
) -> dict[str, dict[str, bool | str]]:
"""
Celery task wrapper for generate_compliance_reports.
@@ -607,9 +1197,12 @@ def generate_compliance_reports_job(
generate_ens: Whether to generate ENS report.
generate_nis2: Whether to generate NIS2 report.
generate_csa: Whether to generate CSA CCM report.
generate_cis: Whether to generate the CIS Benchmark report for the
latest CIS version available for the provider.
Returns:
Dictionary with results for each report type.
Dictionary with results for each report type. Every entry shares the
same flat ``{"upload", "path", "error"?}`` shape.
"""
return generate_compliance_reports(
tenant_id=tenant_id,
@@ -619,4 +1212,5 @@ def generate_compliance_reports_job(
generate_ens=generate_ens,
generate_nis2=generate_nis2,
generate_csa=generate_csa,
generate_cis=generate_cis,
)
@@ -17,6 +17,9 @@ from .charts import (
get_chart_color_for_percentage,
)
# Framework-specific generators
from .cis import CISReportGenerator
# Reusable components
# Reusable components: Color helpers, Badge components, Risk component,
# Table components, Section components
@@ -31,10 +34,12 @@ from .components import (
create_section_header,
create_status_badge,
create_summary_table,
escape_html,
get_color_for_compliance,
get_color_for_risk_level,
get_color_for_weight,
get_status_color,
truncate_text,
)
# Framework configuration: Main configuration, Color constants, ENS colors,
@@ -90,8 +95,6 @@ from .config import (
FrameworkConfig,
get_framework_config,
)
# Framework-specific generators
from .csa import CSAReportGenerator
from .ens import ENSReportGenerator
from .nis2 import NIS2ReportGenerator
@@ -109,6 +112,7 @@ __all__ = [
"ENSReportGenerator",
"NIS2ReportGenerator",
"CSAReportGenerator",
"CISReportGenerator",
# Configuration
"FrameworkConfig",
"FRAMEWORK_REGISTRY",
@@ -182,6 +186,9 @@ __all__ = [
# Section components
"create_section_header",
"create_summary_table",
# Text helpers
"truncate_text",
"escape_html",
# Chart functions
"get_chart_color_for_percentage",
"create_vertical_bar_chart",
@@ -0,0 +1,755 @@
import os
import re
from collections import defaultdict
from typing import Any
from reportlab.lib.units import inch
from reportlab.platypus import Image, PageBreak, Paragraph, Spacer, Table, TableStyle
from api.models import StatusChoices
from .base import (
BaseComplianceReportGenerator,
ComplianceData,
RequirementData,
get_requirement_metadata,
)
from .charts import (
create_horizontal_bar_chart,
create_pie_chart,
create_stacked_bar_chart,
get_chart_color_for_percentage,
)
from .components import ColumnConfig, create_data_table, escape_html, truncate_text
from .config import (
CHART_COLOR_GREEN_1,
CHART_COLOR_RED,
CHART_COLOR_YELLOW,
COLOR_BG_BLUE,
COLOR_BLUE,
COLOR_BORDER_GRAY,
COLOR_DARK_GRAY,
COLOR_GRAY,
COLOR_GRID_GRAY,
COLOR_HIGH_RISK,
COLOR_LIGHT_BLUE,
COLOR_SAFE,
COLOR_WHITE,
)
# Ordered buckets used both in the executive summary tables and the charts
# section. Exposed as module constants so the two call sites never drift.
_PROFILE_BUCKET_ORDER: tuple[str, ...] = ("L1", "L2", "Other")
_ASSESSMENT_BUCKET_ORDER: tuple[str, ...] = ("Automated", "Manual")
# Anchored matchers for profile normalization — substring checks on "L1"/"L2"
# would happily match unrelated tokens like "CL2 Worker" or "HL2" coming from
# future CIS profile enum values.
_LEVEL_2_RE = re.compile(r"(?:\bLevel\s*2\b|\bL2\b|Level_2)")
_LEVEL_1_RE = re.compile(r"(?:\bLevel\s*1\b|\bL1\b|Level_1)")
def _normalize_profile(profile: Any) -> str:
"""Bucket a CIS Profile enum/string into one of: ``L1``, ``L2``, ``Other``.
The ``CIS_Requirement_Attribute_Profile`` enum has values like
``"Level 1"``, ``"Level 2"``, ``"E3 Level 1"``, ``"E5 Level 2"``. We
collapse them into three buckets to keep charts and badges readable
across CIS variants, using anchored regex matches so that future enum
values cannot accidentally promote e.g. ``"CL2 Worker"`` into ``L2``.
Args:
profile: The profile value (enum member, string, or ``None``).
Returns:
One of ``"L1"``, ``"L2"``, ``"Other"``.
"""
if profile is None:
return "Other"
value = getattr(profile, "value", None) or str(profile)
if _LEVEL_2_RE.search(value):
return "L2"
if _LEVEL_1_RE.search(value):
return "L1"
return "Other"
def _profile_badge_text(bucket: str) -> str:
"""Map a normalized profile bucket (L1/L2/Other) to a short badge label."""
return {"L1": "Level 1", "L2": "Level 2"}.get(bucket, "Other")
# =============================================================================
# CIS Report Generator
# =============================================================================
class CISReportGenerator(BaseComplianceReportGenerator):
"""
PDF report generator for CIS (Center for Internet Security) Benchmarks.
CIS differs from single-version frameworks (ENS, NIS2, CSA) in that:
- Each provider has multiple CIS versions (e.g. AWS: 1.4, 1.5, ..., 6.0).
- Section names differ across versions and providers and MUST be derived
at runtime from the loaded compliance data.
- Requirements carry Profile (Level 1/Level 2) and AssessmentStatus
(Automated/Manual) attributes that drive the executive summary and
charts.
This generator produces:
- Cover page with Prowler logo and dynamic CIS version/provider metadata
- Executive summary with overall compliance score, counts, and breakdowns
by Profile and AssessmentStatus
- Charts: overall status pie, pass rate by section (horizontal bar),
Level 1 vs Level 2 pass/fail distribution (stacked bar)
- Requirements index grouped by dynamic section
- Detailed findings for FAIL requirements with CIS-specific audit /
remediation / rationale details
"""
# Per-run memoization cache for ``_compute_statistics``. ``generate()``
# is the public entry point and is called once per PDF, so scoping the
# cache to the last seen ComplianceData instance is enough to avoid the
# double computation between executive summary and charts section.
_stats_cache_key: int | None = None
_stats_cache_value: dict | None = None
# Body section ordering — ensure every top-level section starts on its
# own clean page. The base class only puts a PageBreak AFTER Charts and
# Requirements Index, so Executive Summary and Charts end up sharing a
# page. This override prepends a PageBreak so Compliance Analysis always
# begins on a fresh page.
def _build_body_sections(self, data: ComplianceData) -> list:
return [PageBreak(), *super()._build_body_sections(data)]
# -------------------------------------------------------------------------
# Cover page override — shows dynamic CIS version + provider in the title
# -------------------------------------------------------------------------
def create_cover_page(self, data: ComplianceData) -> list:
"""Create the CIS report cover page with Prowler + CIS logos side by side."""
elements = []
# Create logos side by side (same pattern as NIS2 / ENS)
prowler_logo_path = os.path.join(
os.path.dirname(__file__), "../../assets/img/prowler_logo.png"
)
cis_logo_path = os.path.join(
os.path.dirname(__file__), "../../assets/img/cis_logo.png"
)
if os.path.exists(cis_logo_path):
prowler_logo = Image(prowler_logo_path, width=3.5 * inch, height=0.7 * inch)
cis_logo = Image(cis_logo_path, width=2.3 * inch, height=1.1 * inch)
logos_table = Table(
[[prowler_logo, cis_logo]], colWidths=[4 * inch, 2.5 * inch]
)
logos_table.setStyle(
TableStyle(
[
("ALIGN", (0, 0), (0, 0), "LEFT"),
("ALIGN", (1, 0), (1, 0), "RIGHT"),
("VALIGN", (0, 0), (0, 0), "MIDDLE"),
("VALIGN", (1, 0), (1, 0), "MIDDLE"),
]
)
)
elements.append(logos_table)
elif os.path.exists(prowler_logo_path):
# Fallback: only the Prowler logo if the CIS asset is missing
elements.append(Image(prowler_logo_path, width=5 * inch, height=1 * inch))
elements.append(Spacer(1, 0.5 * inch))
# Dynamic title: "CIS Benchmark v5.0 — AWS Compliance Report"
provider_label = ""
if data.provider_obj:
provider_label = f"{data.provider_obj.provider.upper()}"
title_text = (
f"CIS Benchmark v{data.version}{provider_label}<br/>Compliance Report"
)
elements.append(Paragraph(title_text, self.styles["title"]))
elements.append(Spacer(1, 0.5 * inch))
# Metadata table via base class helper
info_rows = self._build_info_rows(data, language=self.config.language)
metadata_data = []
for label, value in info_rows:
if label in ("Name:", "Description:") and value:
metadata_data.append(
[label, Paragraph(str(value), self.styles["normal_center"])]
)
else:
metadata_data.append([label, value])
metadata_table = Table(metadata_data, colWidths=[2 * inch, 4 * inch])
metadata_table.setStyle(
TableStyle(
[
("BACKGROUND", (0, 0), (0, -1), COLOR_BLUE),
("TEXTCOLOR", (0, 0), (0, -1), COLOR_WHITE),
("FONTNAME", (0, 0), (0, -1), "FiraCode"),
("BACKGROUND", (1, 0), (1, -1), COLOR_BG_BLUE),
("TEXTCOLOR", (1, 0), (1, -1), COLOR_GRAY),
("FONTNAME", (1, 0), (1, -1), "PlusJakartaSans"),
("ALIGN", (0, 0), (-1, -1), "LEFT"),
("VALIGN", (0, 0), (-1, -1), "TOP"),
("FONTSIZE", (0, 0), (-1, -1), 11),
("GRID", (0, 0), (-1, -1), 1, COLOR_BORDER_GRAY),
("LEFTPADDING", (0, 0), (-1, -1), 10),
("RIGHTPADDING", (0, 0), (-1, -1), 10),
("TOPPADDING", (0, 0), (-1, -1), 8),
("BOTTOMPADDING", (0, 0), (-1, -1), 8),
]
)
)
elements.append(metadata_table)
return elements
# -------------------------------------------------------------------------
# Executive Summary
# -------------------------------------------------------------------------
def create_executive_summary(self, data: ComplianceData) -> list:
"""Create the CIS executive summary section."""
elements = []
elements.append(Paragraph("Executive Summary", self.styles["h1"]))
elements.append(Spacer(1, 0.1 * inch))
stats = self._compute_statistics(data)
# --- Summary metrics table ---
summary_data = [
["Metric", "Value"],
["Total Requirements", str(stats["total"])],
["Passed", str(stats["passed"])],
["Failed", str(stats["failed"])],
["Manual", str(stats["manual"])],
["Overall Compliance", f"{stats['overall_compliance']:.1f}%"],
]
summary_table = Table(summary_data, colWidths=[3 * inch, 2 * inch])
summary_table.setStyle(
TableStyle(
[
("BACKGROUND", (0, 0), (-1, 0), COLOR_BLUE),
("TEXTCOLOR", (0, 0), (-1, 0), COLOR_WHITE),
("BACKGROUND", (0, 2), (0, 2), COLOR_SAFE),
("TEXTCOLOR", (0, 2), (0, 2), COLOR_WHITE),
("BACKGROUND", (0, 3), (0, 3), COLOR_HIGH_RISK),
("TEXTCOLOR", (0, 3), (0, 3), COLOR_WHITE),
("BACKGROUND", (0, 4), (0, 4), COLOR_DARK_GRAY),
("TEXTCOLOR", (0, 4), (0, 4), COLOR_WHITE),
("FONTNAME", (0, 0), (-1, 0), "PlusJakartaSans"),
("FONTSIZE", (0, 0), (-1, 0), 12),
("FONTSIZE", (0, 1), (-1, -1), 10),
("ALIGN", (0, 0), (-1, -1), "CENTER"),
("VALIGN", (0, 0), (-1, -1), "MIDDLE"),
("GRID", (0, 0), (-1, -1), 0.5, COLOR_BORDER_GRAY),
("BOTTOMPADDING", (0, 0), (-1, 0), 10),
(
"ROWBACKGROUNDS",
(1, 1),
(1, -1),
[COLOR_WHITE, COLOR_BG_BLUE],
),
]
)
)
elements.append(summary_table)
elements.append(Spacer(1, 0.25 * inch))
# --- Profile breakdown table ---
elements.append(Paragraph("Breakdown by Profile", self.styles["h2"]))
elements.append(Spacer(1, 0.1 * inch))
profile_counts = stats["profile_counts"]
profile_table_data = [["Profile", "Passed", "Failed", "Manual", "Total"]]
for bucket in _PROFILE_BUCKET_ORDER:
counts = profile_counts.get(bucket, {"passed": 0, "failed": 0, "manual": 0})
total = counts["passed"] + counts["failed"] + counts["manual"]
if total == 0:
continue
profile_table_data.append(
[
_profile_badge_text(bucket),
str(counts["passed"]),
str(counts["failed"]),
str(counts["manual"]),
str(total),
]
)
profile_table = Table(
profile_table_data,
colWidths=[1.5 * inch, 1 * inch, 1 * inch, 1 * inch, 1 * inch],
)
profile_table.setStyle(
TableStyle(
[
("BACKGROUND", (0, 0), (-1, 0), COLOR_BLUE),
("TEXTCOLOR", (0, 0), (-1, 0), COLOR_WHITE),
("FONTNAME", (0, 0), (-1, 0), "FiraCode"),
("FONTSIZE", (0, 0), (-1, 0), 10),
("ALIGN", (0, 0), (-1, -1), "CENTER"),
("VALIGN", (0, 0), (-1, -1), "MIDDLE"),
("FONTSIZE", (0, 1), (-1, -1), 9),
("GRID", (0, 0), (-1, -1), 0.5, COLOR_GRID_GRAY),
(
"ROWBACKGROUNDS",
(0, 1),
(-1, -1),
[COLOR_WHITE, COLOR_BG_BLUE],
),
]
)
)
elements.append(profile_table)
elements.append(Spacer(1, 0.25 * inch))
# --- Assessment status breakdown ---
elements.append(Paragraph("Breakdown by Assessment Status", self.styles["h2"]))
elements.append(Spacer(1, 0.1 * inch))
assessment_counts = stats["assessment_counts"]
assessment_table_data = [["Assessment", "Passed", "Failed", "Manual", "Total"]]
for bucket in _ASSESSMENT_BUCKET_ORDER:
counts = assessment_counts.get(
bucket, {"passed": 0, "failed": 0, "manual": 0}
)
total = counts["passed"] + counts["failed"] + counts["manual"]
if total == 0:
continue
assessment_table_data.append(
[
bucket,
str(counts["passed"]),
str(counts["failed"]),
str(counts["manual"]),
str(total),
]
)
assessment_table = Table(
assessment_table_data,
colWidths=[1.5 * inch, 1 * inch, 1 * inch, 1 * inch, 1 * inch],
)
assessment_table.setStyle(
TableStyle(
[
("BACKGROUND", (0, 0), (-1, 0), COLOR_LIGHT_BLUE),
("TEXTCOLOR", (0, 0), (-1, 0), COLOR_WHITE),
("FONTNAME", (0, 0), (-1, 0), "FiraCode"),
("FONTSIZE", (0, 0), (-1, 0), 10),
("ALIGN", (0, 0), (-1, -1), "CENTER"),
("VALIGN", (0, 0), (-1, -1), "MIDDLE"),
("FONTSIZE", (0, 1), (-1, -1), 9),
("GRID", (0, 0), (-1, -1), 0.5, COLOR_GRID_GRAY),
(
"ROWBACKGROUNDS",
(0, 1),
(-1, -1),
[COLOR_WHITE, COLOR_BG_BLUE],
),
]
)
)
elements.append(assessment_table)
elements.append(Spacer(1, 0.25 * inch))
# --- Top 5 failing sections ---
top_failing = stats["top_failing_sections"]
if top_failing:
elements.append(
Paragraph("Top Sections with Lowest Compliance", self.styles["h2"])
)
elements.append(Spacer(1, 0.1 * inch))
top_table_data = [["Section", "Passed", "Failed", "Compliance"]]
for section_label, section_stats in top_failing:
passed = section_stats["passed"]
failed = section_stats["failed"]
total = passed + failed
pct = (passed / total * 100) if total > 0 else 100
top_table_data.append(
[
truncate_text(section_label, 55),
str(passed),
str(failed),
f"{pct:.1f}%",
]
)
top_table = Table(
top_table_data,
colWidths=[3.5 * inch, 0.9 * inch, 0.9 * inch, 1.2 * inch],
)
top_table.setStyle(
TableStyle(
[
("BACKGROUND", (0, 0), (-1, 0), COLOR_HIGH_RISK),
("TEXTCOLOR", (0, 0), (-1, 0), COLOR_WHITE),
("FONTNAME", (0, 0), (-1, 0), "FiraCode"),
("FONTSIZE", (0, 0), (-1, 0), 10),
("ALIGN", (0, 0), (-1, -1), "CENTER"),
("VALIGN", (0, 0), (-1, -1), "MIDDLE"),
("FONTSIZE", (0, 1), (-1, -1), 9),
("GRID", (0, 0), (-1, -1), 0.5, COLOR_GRID_GRAY),
(
"ROWBACKGROUNDS",
(0, 1),
(-1, -1),
[COLOR_WHITE, COLOR_BG_BLUE],
),
]
)
)
elements.append(top_table)
return elements
# -------------------------------------------------------------------------
# Charts section
# -------------------------------------------------------------------------
def create_charts_section(self, data: ComplianceData) -> list:
"""Create the CIS charts section."""
elements = []
elements.append(Paragraph("Compliance Analysis", self.styles["h1"]))
elements.append(Spacer(1, 0.1 * inch))
# --- Pie chart: overall Pass / Fail / Manual ---
stats = self._compute_statistics(data)
pie_labels = []
pie_values = []
pie_colors = []
if stats["passed"] > 0:
pie_labels.append(f"Pass ({stats['passed']})")
pie_values.append(stats["passed"])
pie_colors.append(CHART_COLOR_GREEN_1)
if stats["failed"] > 0:
pie_labels.append(f"Fail ({stats['failed']})")
pie_values.append(stats["failed"])
pie_colors.append(CHART_COLOR_RED)
if stats["manual"] > 0:
pie_labels.append(f"Manual ({stats['manual']})")
pie_values.append(stats["manual"])
pie_colors.append(CHART_COLOR_YELLOW)
if pie_values:
elements.append(Paragraph("Overall Status Distribution", self.styles["h2"]))
elements.append(Spacer(1, 0.1 * inch))
pie_buffer = create_pie_chart(
labels=pie_labels,
values=pie_values,
colors=pie_colors,
)
pie_buffer.seek(0)
elements.append(Image(pie_buffer, width=4.5 * inch, height=4.5 * inch))
elements.append(Spacer(1, 0.2 * inch))
# --- Horizontal bar: pass rate by section ---
section_stats = stats["section_stats"]
if section_stats:
elements.append(PageBreak())
elements.append(Paragraph("Compliance by Section", self.styles["h1"]))
elements.append(Spacer(1, 0.1 * inch))
elements.append(
Paragraph(
"The following chart shows compliance percentage for each CIS "
"section based on automated checks:",
self.styles["normal_center"],
)
)
elements.append(Spacer(1, 0.1 * inch))
# Sort sections by pass rate descending for readability
sorted_sections = sorted(
section_stats.items(),
key=lambda item: (
(item[1]["passed"] / (item[1]["passed"] + item[1]["failed"]) * 100)
if (item[1]["passed"] + item[1]["failed"]) > 0
else 100
),
reverse=True,
)
bar_labels = []
bar_values = []
for section_label, section_data in sorted_sections:
total = section_data["passed"] + section_data["failed"]
if total == 0:
continue
pct = (section_data["passed"] / total) * 100
bar_labels.append(truncate_text(section_label, 60))
bar_values.append(pct)
if bar_values:
bar_buffer = create_horizontal_bar_chart(
labels=bar_labels,
values=bar_values,
xlabel="Compliance (%)",
color_func=get_chart_color_for_percentage,
label_fontsize=9,
)
bar_buffer.seek(0)
elements.append(Image(bar_buffer, width=6.5 * inch, height=5 * inch))
# --- Stacked bar: Level 1 vs Level 2 pass/fail ---
profile_counts = stats["profile_counts"]
has_profile_data = any(
(counts["passed"] + counts["failed"]) > 0
for counts in profile_counts.values()
)
if has_profile_data:
elements.append(PageBreak())
elements.append(Paragraph("Profile Breakdown", self.styles["h1"]))
elements.append(Spacer(1, 0.1 * inch))
elements.append(
Paragraph(
"Distribution of Pass / Fail / Manual across CIS profile levels.",
self.styles["normal_center"],
)
)
elements.append(Spacer(1, 0.1 * inch))
profile_labels = []
pass_series = []
fail_series = []
manual_series = []
for bucket in _PROFILE_BUCKET_ORDER:
counts = profile_counts.get(bucket)
if not counts:
continue
total = counts["passed"] + counts["failed"] + counts["manual"]
if total == 0:
continue
profile_labels.append(_profile_badge_text(bucket))
pass_series.append(counts["passed"])
fail_series.append(counts["failed"])
manual_series.append(counts["manual"])
if profile_labels:
stacked_buffer = create_stacked_bar_chart(
labels=profile_labels,
data_series={
"Pass": pass_series,
"Fail": fail_series,
"Manual": manual_series,
},
xlabel="Profile",
ylabel="Requirements",
)
stacked_buffer.seek(0)
elements.append(Image(stacked_buffer, width=6 * inch, height=4 * inch))
return elements
# -------------------------------------------------------------------------
# Requirements Index
# -------------------------------------------------------------------------
def create_requirements_index(self, data: ComplianceData) -> list:
"""Create the CIS requirements index grouped by dynamic section."""
elements = []
elements.append(Paragraph("Requirements Index", self.styles["h1"]))
elements.append(Spacer(1, 0.1 * inch))
sections = self._derive_sections(data)
by_section: dict[str, list[dict]] = defaultdict(list)
for req in data.requirements:
meta = get_requirement_metadata(req.id, data.attributes_by_requirement_id)
section = "Other"
profile_bucket = "Other"
assessment = ""
if meta:
section = getattr(meta, "Section", "Other") or "Other"
profile_bucket = _normalize_profile(getattr(meta, "Profile", None))
assessment_enum = getattr(meta, "AssessmentStatus", None)
assessment = getattr(assessment_enum, "value", None) or str(
assessment_enum or ""
)
by_section[section].append(
{
"id": req.id,
"description": truncate_text(req.description, 80),
"profile": _profile_badge_text(profile_bucket),
"assessment": assessment or "-",
"status": (req.status or "").upper(),
}
)
columns = [
ColumnConfig("ID", 0.9 * inch, "id", align="LEFT"),
ColumnConfig("Description", 3.0 * inch, "description", align="LEFT"),
ColumnConfig("Profile", 0.9 * inch, "profile"),
ColumnConfig("Assessment", 1 * inch, "assessment"),
ColumnConfig("Status", 0.9 * inch, "status"),
]
for section in sections:
rows = by_section.get(section, [])
if not rows:
continue
elements.append(Paragraph(truncate_text(section, 90), self.styles["h2"]))
elements.append(Spacer(1, 0.05 * inch))
table = create_data_table(
data=rows,
columns=columns,
header_color=self.config.primary_color,
normal_style=self.styles["normal_center"],
)
elements.append(table)
elements.append(Spacer(1, 0.15 * inch))
return elements
# -------------------------------------------------------------------------
# Detailed findings hook — inject CIS-specific rationale / audit content
# -------------------------------------------------------------------------
def _render_requirement_detail_extras(
self, req: RequirementData, data: ComplianceData
) -> list:
"""Render CIS rationale, impact, audit, remediation and references."""
extras = []
meta = get_requirement_metadata(req.id, data.attributes_by_requirement_id)
if meta is None:
return extras
field_map = [
("Rationale", "RationaleStatement"),
("Impact", "ImpactStatement"),
("Audit Procedure", "AuditProcedure"),
("Remediation", "RemediationProcedure"),
("References", "References"),
]
for label, attr_name in field_map:
value = getattr(meta, attr_name, None)
if not value:
continue
text = str(value).strip()
if not text:
continue
extras.append(Paragraph(f"<b>{label}:</b>", self.styles["h3"]))
extras.append(Paragraph(escape_html(text), self.styles["normal"]))
extras.append(Spacer(1, 0.08 * inch))
return extras
# -------------------------------------------------------------------------
# Private helpers
# -------------------------------------------------------------------------
def _derive_sections(self, data: ComplianceData) -> list[str]:
"""Extract ordered unique Section names from loaded compliance data."""
seen: dict[str, bool] = {}
for req in data.requirements:
meta = get_requirement_metadata(req.id, data.attributes_by_requirement_id)
if meta is None:
continue
section = getattr(meta, "Section", None) or "Other"
if section not in seen:
seen[section] = True
return list(seen.keys())
def _compute_statistics(self, data: ComplianceData) -> dict:
"""Aggregate all statistics needed for summary and charts.
Memoized per-``ComplianceData`` instance via ``_stats_cache_*``: the
executive summary and the charts section both need the same numbers,
so they would otherwise re-iterate the requirements twice. We key on
``id(data)`` because ``ComplianceData`` is a dataclass and its
instances are not hashable.
Returns a dict with:
- total, passed, failed, manual: int
- overall_compliance: float (percentage)
- profile_counts: {"L1": {"passed", "failed", "manual"}, ...}
- assessment_counts: {"Automated": {...}, "Manual": {...}}
- section_stats: {section_name: {"passed", "failed", "manual"}, ...}
- top_failing_sections: list[(section_name, stats)] (up to 5)
"""
cache_key = id(data)
if self._stats_cache_key == cache_key and self._stats_cache_value is not None:
return self._stats_cache_value
stats = self._compute_statistics_uncached(data)
self._stats_cache_key = cache_key
self._stats_cache_value = stats
return stats
def _compute_statistics_uncached(self, data: ComplianceData) -> dict:
"""Actual aggregation kernel; call ``_compute_statistics`` instead."""
total = len(data.requirements)
passed = sum(1 for r in data.requirements if r.status == StatusChoices.PASS)
failed = sum(1 for r in data.requirements if r.status == StatusChoices.FAIL)
manual = sum(1 for r in data.requirements if r.status == StatusChoices.MANUAL)
evaluated = passed + failed
overall_compliance = (passed / evaluated * 100) if evaluated > 0 else 100.0
profile_counts: dict[str, dict[str, int]] = {
"L1": {"passed": 0, "failed": 0, "manual": 0},
"L2": {"passed": 0, "failed": 0, "manual": 0},
"Other": {"passed": 0, "failed": 0, "manual": 0},
}
assessment_counts: dict[str, dict[str, int]] = {
"Automated": {"passed": 0, "failed": 0, "manual": 0},
"Manual": {"passed": 0, "failed": 0, "manual": 0},
}
section_stats: dict[str, dict[str, int]] = defaultdict(
lambda: {"passed": 0, "failed": 0, "manual": 0}
)
for req in data.requirements:
meta = get_requirement_metadata(req.id, data.attributes_by_requirement_id)
if meta is None:
continue
profile_bucket = _normalize_profile(getattr(meta, "Profile", None))
assessment_enum = getattr(meta, "AssessmentStatus", None)
assessment_value = getattr(assessment_enum, "value", None) or str(
assessment_enum or ""
)
assessment_bucket = (
"Automated" if assessment_value == "Automated" else "Manual"
)
section = getattr(meta, "Section", None) or "Other"
status_key = {
StatusChoices.PASS: "passed",
StatusChoices.FAIL: "failed",
StatusChoices.MANUAL: "manual",
}.get(req.status)
if status_key is None:
continue
profile_counts[profile_bucket][status_key] += 1
assessment_counts[assessment_bucket][status_key] += 1
section_stats[section][status_key] += 1
# Top 5 sections with lowest pass rate (only sections with evaluated reqs)
def _section_rate(item):
_, stats_ = item
evaluated_ = stats_["passed"] + stats_["failed"]
if evaluated_ == 0:
return 101 # sort evaluated=0 to the bottom
return stats_["passed"] / evaluated_ * 100
top_failing_sections = sorted(
(
item
for item in section_stats.items()
if (item[1]["passed"] + item[1]["failed"]) > 0
),
key=_section_rate,
)[:5]
return {
"total": total,
"passed": passed,
"failed": failed,
"manual": manual,
"overall_compliance": overall_compliance,
"profile_counts": profile_counts,
"assessment_counts": assessment_counts,
"section_stats": dict(section_stats),
"top_failing_sections": top_failing_sections,
}
@@ -26,6 +26,52 @@ from .config import (
)
def truncate_text(text: str, max_len: int) -> str:
"""Truncate ``text`` to ``max_len`` characters, appending an ellipsis if cut.
Used by report generators that need to squeeze long descriptions, section
titles or finding titles into a fixed-width table cell.
Args:
text: Source string. ``None`` and non-string values are treated as empty.
max_len: Maximum output length including the ellipsis. Values < 4 are
clamped so the result never grows beyond ``max_len``.
Returns:
The original string if short enough, otherwise ``text[: max_len - 3] + "..."``.
When ``max_len < 4`` a plain substring of length ``max_len`` is returned
so callers never get a string longer than they asked for.
"""
if not text:
return ""
text = str(text)
if len(text) <= max_len:
return text
if max_len < 4:
return text[:max_len]
return text[: max_len - 3] + "..."
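# Hedged examples (illustrative only) of the truncation rules documented above:
#
#   truncate_text("short", 10)                 -> "short"
#   truncate_text("a long section title", 10)  -> "a long ..."
#   truncate_text("abcdef", 3)                 -> "abc"  (max_len < 4: plain slice)
#   truncate_text(None, 10)                    -> ""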
def escape_html(text: str) -> str:
"""Escape the minimal HTML entities required for safe ReportLab Paragraph rendering.
ReportLab's ``Paragraph`` parses a small HTML subset, so raw ``<``, ``>``
and ``&`` in user-provided content (rationale, remediation, etc.) would
break layout or be interpreted as tags. This helper mirrors
``html.escape`` but avoids pulling in the stdlib dependency and keeps the
output deterministic.
Args:
text: Untrusted source string.
Returns:
A string safe to embed inside a ReportLab Paragraph.
"""
return (
str(text or "").replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
)
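# Hedged examples (illustrative only) of the minimal escaping performed above:
#
#   escape_html("a < b & c > d")   -> "a &lt; b &amp; c &gt; d"
#   escape_html("<b>bold</b>")     -> "&lt;b&gt;bold&lt;/b&gt;"
#   escape_html(None)              -> ""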
def get_color_for_risk_level(risk_level: int) -> colors.Color:
"""
Get color based on risk level.
@@ -313,6 +313,32 @@ FRAMEWORK_REGISTRY: dict[str, FrameworkConfig] = {
has_niveles=False,
has_weight=False,
),
"cis": FrameworkConfig(
name="cis",
display_name="CIS Benchmark",
logo_filename=None,
primary_color=COLOR_BLUE,
secondary_color=COLOR_LIGHT_BLUE,
bg_color=COLOR_BG_BLUE,
attribute_fields=[
"Section",
"SubSection",
"Profile",
"AssessmentStatus",
"Description",
"RationaleStatement",
"ImpactStatement",
"RemediationProcedure",
"AuditProcedure",
"References",
],
sections=None, # Derived dynamically per CIS variant (section names differ across versions/providers)
language="en",
has_risk_levels=False,
has_dimensions=False,
has_niveles=False,
has_weight=False,
),
}
@@ -336,5 +362,7 @@ def get_framework_config(compliance_id: str) -> FrameworkConfig | None:
return FRAMEWORK_REGISTRY["nis2"]
if "csa" in compliance_lower or "ccm" in compliance_lower:
return FRAMEWORK_REGISTRY["csa_ccm"]
if compliance_lower.startswith("cis_") or "cis" in compliance_lower:
return FRAMEWORK_REGISTRY["cis"]
return None
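# Hedged examples (illustrative only), assuming no earlier branch of
# get_framework_config matches the id first:
#
#   get_framework_config("cis_5.0_aws")       -> FRAMEWORK_REGISTRY["cis"]
#   get_framework_config("cis_4.0_azure")     -> FRAMEWORK_REGISTRY["cis"]
#   get_framework_config("csa_ccm_4.0_aws")   -> FRAMEWORK_REGISTRY["csa_ccm"]
#   get_framework_config("unknown_framework") -> None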
@@ -1198,7 +1198,36 @@ def aggregate_findings(tenant_id: str, scan_id: str):
)
for agg in aggregation
}
ScanSummary.objects.bulk_create(scan_aggregations, batch_size=3000)
# Upsert so re-runs (post-mute reaggregation) don't trip
# `unique_scan_summary`; race-safe under concurrent writers.
ScanSummary.objects.bulk_create(
scan_aggregations,
batch_size=3000,
update_conflicts=True,
unique_fields=[
"tenant",
"scan",
"check_id",
"service",
"severity",
"region",
],
update_fields=[
"_pass",
"fail",
"muted",
"total",
"new",
"changed",
"unchanged",
"fail_new",
"fail_changed",
"pass_new",
"pass_changed",
"muted_new",
"muted_changed",
],
)
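# Hedged sketch (illustrative, not part of the diff): the conflict-handling
# bulk_create used above is Django's native upsert (available since Django 4.1,
# with unique_fields supported on PostgreSQL). "DailyCounter" below is a
# hypothetical model used only to show the pattern in isolation.
from django.db import models

class DailyCounter(models.Model):
    day = models.DateField()
    name = models.CharField(max_length=64)
    value = models.IntegerField(default=0)

    class Meta:
        app_label = "example"
        constraints = [
            models.UniqueConstraint(fields=["day", "name"], name="unique_day_name")
        ]

def upsert_counters(rows) -> None:
    # rows: iterable of (day, name, value) tuples; conflicting (day, name)
    # pairs are updated in place instead of raising IntegrityError.
    DailyCounter.objects.bulk_create(
        [DailyCounter(day=d, name=n, value=v) for d, n, v in rows],
        update_conflicts=True,
        unique_fields=["day", "name"],
        update_fields=["value"],
    )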
def _aggregate_findings_by_region(
@@ -1543,13 +1572,24 @@ def aggregate_attack_surface(tenant_id: str, scan_id: str):
)
)
# Bulk create overview records
if overview_objects:
with rls_transaction(tenant_id):
AttackSurfaceOverview.objects.bulk_create(overview_objects, batch_size=500)
logger.info(
f"Created {len(overview_objects)} attack surface overview records for scan {scan_id}"
# Upsert so re-runs (post-mute reaggregation) don't trip
# `unique_attack_surface_per_scan`; race-safe under concurrent writers.
AttackSurfaceOverview.objects.bulk_create(
overview_objects,
batch_size=500,
update_conflicts=True,
unique_fields=["tenant_id", "scan_id", "attack_surface_type"],
update_fields=[
"total_findings",
"failed_findings",
"muted_failed_findings",
],
)
logger.info(
f"Upserted {len(overview_objects)} attack surface overview records for scan {scan_id}"
)
else:
logger.info(f"No attack surface overview records created for scan {scan_id}")
@@ -20,8 +20,8 @@ from tasks.jobs.backfill import (
backfill_finding_group_summaries,
backfill_provider_compliance_scores,
backfill_resource_scan_summaries,
backfill_scan_category_summaries,
backfill_scan_resource_group_summaries,
aggregate_scan_category_summaries,
aggregate_scan_resource_group_summaries,
)
from tasks.jobs.connection import (
check_integration_connection,
@@ -46,7 +46,11 @@ from tasks.jobs.lighthouse_providers import (
refresh_lighthouse_provider_models,
)
from tasks.jobs.muting import mute_historical_findings
from tasks.jobs.report import generate_compliance_reports_job
from tasks.jobs.report import (
STALE_TMP_OUTPUT_MAX_AGE_HOURS,
_cleanup_stale_tmp_output_directories,
generate_compliance_reports_job,
)
from tasks.jobs.scan import (
aggregate_attack_surface,
aggregate_daily_severity,
@@ -440,6 +444,19 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
scan_id (str): The scan identifier.
provider_id (str): The provider_id id to be used in generating outputs.
"""
try:
_cleanup_stale_tmp_output_directories(
DJANGO_TMP_OUTPUT_DIRECTORY,
max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS,
exclude_scan=(tenant_id, scan_id),
)
except Exception as error:
logger.warning(
"Skipping stale tmp cleanup before output generation for scan %s: %s",
scan_id,
error,
)
# Check if the scan has findings
if not ScanSummary.objects.filter(scan_id=scan_id).exists():
logger.info(f"No findings found for scan {scan_id}")
@@ -659,9 +676,9 @@ def backfill_finding_group_summaries_task(tenant_id: str, days: int = None):
return backfill_finding_group_summaries(tenant_id=tenant_id, days=days)
@shared_task(name="backfill-scan-category-summaries", queue="backfill")
@shared_task(name="scan-category-summaries", queue="overview")
@handle_provider_deletion
def backfill_scan_category_summaries_task(tenant_id: str, scan_id: str):
def aggregate_scan_category_summaries_task(tenant_id: str, scan_id: str):
"""
Backfill ScanCategorySummary for a completed scan.
@@ -671,12 +688,12 @@ def backfill_scan_category_summaries_task(tenant_id: str, scan_id: str):
tenant_id (str): The tenant identifier.
scan_id (str): The scan identifier.
"""
return backfill_scan_category_summaries(tenant_id=tenant_id, scan_id=scan_id)
return aggregate_scan_category_summaries(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(name="backfill-scan-resource-group-summaries", queue="backfill")
@shared_task(name="scan-resource-group-summaries", queue="overview")
@handle_provider_deletion
def backfill_scan_resource_group_summaries_task(tenant_id: str, scan_id: str):
def aggregate_scan_resource_group_summaries_task(tenant_id: str, scan_id: str):
"""
Backfill ScanGroupSummary for a completed scan.
@@ -686,7 +703,7 @@ def backfill_scan_resource_group_summaries_task(tenant_id: str, scan_id: str):
tenant_id (str): The tenant identifier.
scan_id (str): The scan identifier.
"""
return backfill_scan_resource_group_summaries(tenant_id=tenant_id, scan_id=scan_id)
return aggregate_scan_resource_group_summaries(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(name="backfill-provider-compliance-scores", queue="backfill")
@@ -771,15 +788,26 @@ def aggregate_finding_group_summaries_task(tenant_id: str, scan_id: str):
)
@set_tenant(keep_tenant=True)
def reaggregate_all_finding_group_summaries_task(tenant_id: str):
"""Reaggregate finding group summaries for every (provider, day) combination.
"""Reaggregate every pre-aggregated summary table for this tenant.
Mirrors the unbounded scope of `mute_historical_findings_task`: that task
rewrites every Finding row whose UID matches a mute rule, with no time
limit. To keep the daily summaries consistent with that update, this task
re-runs the aggregator on the latest completed scan of every (provider,
day) pair that exists in the database. Tasks are dispatched in parallel
via a Celery group so the wallclock scales with the worker pool, not with
the number of pairs.
limit. To keep the pre-aggregated tables consistent with that update,
this task re-runs the same per-scan aggregation pipeline that scan
completion runs on the latest completed scan of every (provider, day)
pair, rebuilding the tables that power the read endpoints:
- `ScanSummary` and `DailySeveritySummary` -> `/overviews/findings`,
`/overviews/findings-severity`, `/overviews/services`.
- `FindingGroupDailySummary` -> `/finding-groups` and
`/finding-groups/latest`.
- `ScanGroupSummary` -> `/overviews/resource-groups` (resource
inventory).
- `ScanCategorySummary` -> `/overviews/categories`.
- `AttackSurfaceOverview` -> `/overviews/attack-surfaces`.
Per-scan pipelines are dispatched in parallel via a Celery group so
wallclock scales with the worker pool.
"""
completed_scans = list(
Scan.objects.filter(
@@ -804,12 +832,32 @@ def reaggregate_all_finding_group_summaries_task(tenant_id: str):
scan_ids = list(latest_scans.values())
if scan_ids:
logger.info(
"Reaggregating finding group summaries for %d scans (provider x day)",
"Reaggregating overview/finding summaries for %d scans (provider x day)",
len(scan_ids),
)
# DailySeveritySummary reads from ScanSummary, so ScanSummary must be
# recomputed first; the other aggregators read Finding directly and
# can run in parallel with the severity step.
group(
aggregate_finding_group_summaries_task.si(
tenant_id=tenant_id, scan_id=scan_id
chain(
perform_scan_summary_task.si(tenant_id=tenant_id, scan_id=scan_id),
group(
aggregate_daily_severity_task.si(
tenant_id=tenant_id, scan_id=scan_id
),
aggregate_finding_group_summaries_task.si(
tenant_id=tenant_id, scan_id=scan_id
),
aggregate_scan_resource_group_summaries_task.si(
tenant_id=tenant_id, scan_id=scan_id
),
aggregate_scan_category_summaries_task.si(
tenant_id=tenant_id, scan_id=scan_id
),
aggregate_attack_surface_task.si(
tenant_id=tenant_id, scan_id=scan_id
),
),
)
for scan_id in scan_ids
).apply_async()
@@ -982,13 +1030,17 @@ def jira_integration_task(
@handle_provider_deletion
def generate_compliance_reports_task(tenant_id: str, scan_id: str, provider_id: str):
"""
Optimized task to generate ThreatScore, ENS, NIS2, and CSA CCM reports with shared queries.
Optimized task to generate ThreatScore, ENS, NIS2, CSA CCM and CIS reports with shared queries.
This task is more efficient than running separate report tasks because it reuses database queries:
- Provider object fetched once (instead of multiple times)
- Requirement statistics aggregated once (instead of multiple times)
- Can reduce database load by up to 50-70%
CIS emits a single PDF per run: the one matching the highest CIS version
available for the scan's provider, picked dynamically from
``Compliance.get_bulk`` (no hard-coded provider version mapping).
Args:
tenant_id (str): The tenant identifier.
scan_id (str): The scan identifier.
@@ -1005,6 +1057,7 @@ def generate_compliance_reports_task(tenant_id: str, scan_id: str, provider_id:
generate_ens=True,
generate_nis2=True,
generate_csa=True,
generate_cis=True,
)
@@ -1285,6 +1285,12 @@ class TestAttackPathsFindingsHelpers:
config = SimpleNamespace(update_tag=12345)
mock_session = MagicMock()
first_result = MagicMock()
first_result.single.return_value = {"merged_count": 1, "dropped_count": 0}
second_result = MagicMock()
second_result.single.return_value = {"merged_count": 0, "dropped_count": 1}
mock_session.run.side_effect = [first_result, second_result]
with (
patch(
"tasks.jobs.attack_paths.findings.get_node_uid_field",
@@ -1294,6 +1300,7 @@ class TestAttackPathsFindingsHelpers:
"tasks.jobs.attack_paths.findings.get_provider_resource_label",
return_value="_AWSResource",
),
patch("tasks.jobs.attack_paths.findings.logger") as mock_logger,
):
findings_module.load_findings(
mock_session, findings_generator(), provider, config
@@ -1305,6 +1312,14 @@ class TestAttackPathsFindingsHelpers:
assert params["last_updated"] == config.update_tag
assert "findings_data" in params
summary_log = next(
call_args.args[0]
for call_args in mock_logger.info.call_args_list
if call_args.args and "Finished loading" in call_args.args[0]
)
assert "edges_merged=1" in summary_log
assert "edges_dropped=1" in summary_log
def test_stream_findings_with_resources_returns_latest_scan_data(
self,
tenants_fixture,
@@ -1484,11 +1499,12 @@ class TestAttackPathsFindingsHelpers:
"default",
):
result = findings_module._enrich_batch_with_resources(
[finding_dict], str(tenant.id)
[finding_dict], str(tenant.id), lambda uid: f"short:{uid}"
)
assert len(result) == 1
assert result[0]["resource_uid"] == resource.uid
assert result[0]["resource_short_uid"] == f"short:{resource.uid}"
assert result[0]["id"] == str(finding.id)
assert result[0]["status"] == "FAIL"
@@ -1572,7 +1588,7 @@ class TestAttackPathsFindingsHelpers:
"default",
):
result = findings_module._enrich_batch_with_resources(
[finding_dict], str(tenant.id)
[finding_dict], str(tenant.id), lambda uid: uid
)
assert len(result) == 3
@@ -1646,7 +1662,7 @@ class TestAttackPathsFindingsHelpers:
patch("tasks.jobs.attack_paths.findings.logger") as mock_logger,
):
result = findings_module._enrich_batch_with_resources(
[finding_dict], str(tenant.id)
[finding_dict], str(tenant.id), lambda uid: uid
)
assert len(result) == 0
@@ -1693,6 +1709,63 @@ class TestAttackPathsFindingsHelpers:
mock_session.run.assert_not_called()
@pytest.mark.parametrize(
"uid, expected",
[
(
"arn:aws:ec2:us-east-1:552455647653:instance/i-05075b63eb51baacb",
"i-05075b63eb51baacb",
),
(
"arn:aws:ec2:us-east-1:123456789012:volume/vol-0abcd1234ef567890",
"vol-0abcd1234ef567890",
),
(
"arn:aws:ec2:us-east-1:123456789012:security-group/sg-0123abcd",
"sg-0123abcd",
),
("arn:aws:s3:::my-bucket-name", "my-bucket-name"),
("arn:aws:iam::123456789012:role/MyRole", "MyRole"),
(
"arn:aws:lambda:us-east-1:123456789012:function:my-function",
"my-function",
),
("i-05075b63eb51baacb", "i-05075b63eb51baacb"),
],
)
def test_extract_short_uid_aws_variants(self, uid, expected):
from tasks.jobs.attack_paths.aws import extract_short_uid
assert extract_short_uid(uid) == expected
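# Hedged sketch (illustrative only, not the repository implementation): one way
# to derive the short uid expected by the parametrized cases above is to keep
# the last "/"- or ":"-separated segment of the ARN's resource portion and pass
# non-ARN uids through unchanged:
#
#   def _extract_short_uid_sketch(uid: str) -> str:
#       if not uid.startswith("arn:"):
#           return uid
#       resource = uid.split(":", 5)[-1]
#       return resource.split("/")[-1].split(":")[-1]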
def test_insert_finding_template_has_short_id_fallback(self):
from tasks.jobs.attack_paths.queries import (
INSERT_FINDING_TEMPLATE,
render_cypher_template,
)
rendered = render_cypher_template(
INSERT_FINDING_TEMPLATE,
{
"__NODE_UID_FIELD__": "arn",
"__RESOURCE_LABEL__": "_AWSResource",
},
)
assert (
"resource_by_uid:_AWSResource {arn: finding_data.resource_uid}" in rendered
)
assert "resource_by_id:_AWSResource {id: finding_data.resource_uid}" in rendered
assert (
"resource_by_short:_AWSResource {id: finding_data.resource_short_uid}"
in rendered
)
assert "head(collect(resource_by_short)) AS resource_by_short" in rendered
assert (
"COALESCE(resource_by_uid, resource_by_id, resource_by_short)" in rendered
)
assert "RETURN merged_count, dropped_count" in rendered
class TestAddResourceLabel:
def test_add_resource_label_applies_private_label(self):
@@ -7,8 +7,8 @@ from tasks.jobs.backfill import (
backfill_compliance_summaries,
backfill_provider_compliance_scores,
backfill_resource_scan_summaries,
backfill_scan_category_summaries,
backfill_scan_resource_group_summaries,
aggregate_scan_category_summaries,
aggregate_scan_resource_group_summaries,
)
from api.models import (
@@ -183,6 +183,10 @@ class TestBackfillComplianceSummaries:
def test_backfill_creates_compliance_summaries(
self, tenants_fixture, scans_fixture, compliance_requirements_overviews_fixture
):
# Fixture seeds compliance rows the backfill aggregates over; pytest
# injects it by parameter name, so we reference it explicitly here
# to keep static analysers from flagging it as unused.
del compliance_requirements_overviews_fixture
tenant = tenants_fixture[0]
scan = scans_fixture[0]
@@ -227,22 +231,86 @@ class TestBackfillComplianceSummaries:
@pytest.mark.django_db
class TestBackfillScanCategorySummaries:
def test_already_backfilled(self, scan_category_summary_fixture):
def test_rerun_with_no_findings_is_noop(self, scan_category_summary_fixture):
"""When the scan has no findings, the backfill is a no-op: it
reports `no categories to backfill` and leaves the table
untouched. The upsert path cannot drop rows it does not produce,
so any pre-existing row survives (matching the scan-completion
writer that used `ignore_conflicts=True`)."""
tenant_id = scan_category_summary_fixture.tenant_id
scan_id = scan_category_summary_fixture.scan_id
result = backfill_scan_category_summaries(str(tenant_id), str(scan_id))
result = aggregate_scan_category_summaries(str(tenant_id), str(scan_id))
assert result == {"status": "already backfilled"}
assert result == {"status": "no categories to backfill"}
assert ScanCategorySummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id, category="existing-category"
).exists()
def test_rerun_upserts_without_duplicating(self, findings_with_categories_fixture):
"""Calling the backfill twice upserts rather than raising on
`unique_category_severity_per_scan`; rows are updated in place
(same primary keys)."""
finding = findings_with_categories_fixture
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
aggregate_scan_category_summaries(tenant_id, scan_id)
first_ids = set(
ScanCategorySummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).values_list("id", flat=True)
)
aggregate_scan_category_summaries(tenant_id, scan_id)
second_ids = set(
ScanCategorySummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).values_list("id", flat=True)
)
assert first_ids == second_ids
assert len(first_ids) == 2 # 2 categories x 1 severity
def test_rerun_reflects_mute_between_runs(self, findings_with_categories_fixture):
"""Muting a finding between two backfill runs must move counters:
`failed_findings` and `new_failed_findings` drop to zero (muted
findings are excluded from those totals). Guards against a
regression where the upsert keeps stale counts from the first run."""
finding = findings_with_categories_fixture
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
aggregate_scan_category_summaries(tenant_id, scan_id)
before = list(
ScanCategorySummary.objects.filter(tenant_id=tenant_id, scan_id=scan_id)
)
assert all(s.failed_findings == 1 for s in before)
assert all(s.new_failed_findings == 1 for s in before)
assert all(s.total_findings == 1 for s in before)
Finding.all_objects.filter(pk=finding.pk).update(muted=True)
aggregate_scan_category_summaries(tenant_id, scan_id)
after = list(
ScanCategorySummary.objects.filter(tenant_id=tenant_id, scan_id=scan_id)
)
assert {s.id for s in after} == {s.id for s in before}
assert all(s.failed_findings == 0 for s in after)
assert all(s.new_failed_findings == 0 for s in after)
assert all(s.total_findings == 0 for s in after)
def test_not_completed_scan(self, get_not_completed_scans):
for scan in get_not_completed_scans:
result = backfill_scan_category_summaries(str(scan.tenant_id), str(scan.id))
result = aggregate_scan_category_summaries(
str(scan.tenant_id), str(scan.id)
)
assert result == {"status": "scan is not completed"}
def test_no_categories_to_backfill(self, scans_fixture):
scan = scans_fixture[1] # Failed scan with no findings
result = backfill_scan_category_summaries(str(scan.tenant_id), str(scan.id))
result = aggregate_scan_category_summaries(str(scan.tenant_id), str(scan.id))
assert result == {"status": "no categories to backfill"}
def test_successful_backfill(self, findings_with_categories_fixture):
@@ -250,7 +318,7 @@ class TestBackfillScanCategorySummaries:
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
result = backfill_scan_category_summaries(tenant_id, scan_id)
result = aggregate_scan_category_summaries(tenant_id, scan_id)
# 2 categories × 1 severity = 2 rows
assert result == {"status": "backfilled", "categories_count": 2}
@@ -311,24 +379,87 @@ def scan_resource_group_summary_fixture(scans_fixture):
@pytest.mark.django_db
class TestBackfillScanGroupSummaries:
def test_already_backfilled(self, scan_resource_group_summary_fixture):
def test_rerun_with_no_findings_is_noop(self, scan_resource_group_summary_fixture):
"""When the scan has no findings, the backfill is a no-op: it
reports `no resource groups to backfill` and leaves the table
untouched. The upsert path cannot drop rows it does not produce,
so any pre-existing row survives (matching the scan-completion
writer that used `ignore_conflicts=True`)."""
tenant_id = scan_resource_group_summary_fixture.tenant_id
scan_id = scan_resource_group_summary_fixture.scan_id
result = backfill_scan_resource_group_summaries(str(tenant_id), str(scan_id))
result = aggregate_scan_resource_group_summaries(str(tenant_id), str(scan_id))
assert result == {"status": "already backfilled"}
assert result == {"status": "no resource groups to backfill"}
assert ScanGroupSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id, resource_group="existing-group"
).exists()
def test_rerun_upserts_without_duplicating(self, findings_with_group_fixture):
"""Calling the backfill twice upserts rather than raising on
`unique_resource_group_severity_per_scan`; rows are updated in
place (same primary keys)."""
finding = findings_with_group_fixture
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
aggregate_scan_resource_group_summaries(tenant_id, scan_id)
first_ids = set(
ScanGroupSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).values_list("id", flat=True)
)
aggregate_scan_resource_group_summaries(tenant_id, scan_id)
second_ids = set(
ScanGroupSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).values_list("id", flat=True)
)
assert first_ids == second_ids
assert len(first_ids) == 1 # 1 resource group x 1 severity
def test_rerun_reflects_mute_between_runs(self, findings_with_group_fixture):
"""Muting a finding between two backfill runs must move counters:
`failed_findings` and `new_failed_findings` drop to zero (muted
findings are excluded from those totals). Guards against a
regression where the upsert keeps stale counts from the first run."""
finding = findings_with_group_fixture
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
aggregate_scan_resource_group_summaries(tenant_id, scan_id)
before = list(
ScanGroupSummary.objects.filter(tenant_id=tenant_id, scan_id=scan_id)
)
assert len(before) == 1
assert before[0].failed_findings == 1
assert before[0].new_failed_findings == 1
assert before[0].total_findings == 1
Finding.all_objects.filter(pk=finding.pk).update(muted=True)
aggregate_scan_resource_group_summaries(tenant_id, scan_id)
after = list(
ScanGroupSummary.objects.filter(tenant_id=tenant_id, scan_id=scan_id)
)
assert {s.id for s in after} == {s.id for s in before}
assert after[0].failed_findings == 0
assert after[0].new_failed_findings == 0
assert after[0].total_findings == 0
def test_not_completed_scan(self, get_not_completed_scans):
for scan in get_not_completed_scans:
result = backfill_scan_resource_group_summaries(
result = aggregate_scan_resource_group_summaries(
str(scan.tenant_id), str(scan.id)
)
assert result == {"status": "scan is not completed"}
def test_no_resource_groups_to_backfill(self, scans_fixture):
scan = scans_fixture[1] # Failed scan with no findings
result = backfill_scan_resource_group_summaries(
result = aggregate_scan_resource_group_summaries(
str(scan.tenant_id), str(scan.id)
)
assert result == {"status": "no resource groups to backfill"}
@@ -338,7 +469,7 @@ class TestBackfillScanGroupSummaries:
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
result = backfill_scan_resource_group_summaries(tenant_id, scan_id)
result = aggregate_scan_resource_group_summaries(tenant_id, scan_id)
# 1 resource group × 1 severity = 1 row
assert result == {"status": "backfilled", "resource_groups_count": 1}
@@ -1,10 +1,21 @@
import os
import time
import uuid
from unittest.mock import Mock, patch
import matplotlib
import pytest
from reportlab.lib import colors
from tasks.jobs.report import generate_compliance_reports, generate_threatscore_report
from tasks.jobs.report import (
STALE_TMP_OUTPUT_MAX_AGE_HOURS,
STALE_TMP_OUTPUT_LOCK_FILE_NAME,
_cleanup_stale_tmp_output_directories,
_is_scan_directory_protected,
_pick_latest_cis_variant,
_should_run_stale_cleanup,
generate_compliance_reports,
generate_threatscore_report,
)
from tasks.jobs.reports import (
CHART_COLOR_GREEN_1,
CHART_COLOR_GREEN_2,
@@ -29,7 +40,13 @@ from tasks.jobs.threatscore_utils import (
_load_findings_for_requirement_checks,
)
from api.models import Finding, Resource, ResourceFindingMapping, StatusChoices
from api.models import (
Finding,
Resource,
ResourceFindingMapping,
StateChoices,
StatusChoices,
)
from prowler.lib.check.models import Severity
matplotlib.use("Agg") # Use non-interactive backend for tests
@@ -351,6 +368,366 @@ class TestLoadFindingsForChecks:
assert result == {}
class TestCleanupStaleTmpOutputDirectories:
"""Unit tests for opportunistic stale cleanup under tmp output root."""
def test_removes_only_scan_dirs_older_than_ttl(self, tmp_path, monkeypatch):
"""Should remove stale scan directories and keep recent ones."""
root_dir = tmp_path / "prowler_api_output"
old_scan_dir = root_dir / "tenant-a" / "scan-old"
old_scan_dir.mkdir(parents=True)
(old_scan_dir / "artifact.txt").write_text("old")
recent_scan_dir = root_dir / "tenant-a" / "scan-recent"
recent_scan_dir.mkdir(parents=True)
(recent_scan_dir / "artifact.txt").write_text("recent")
now = time.time()
stale_ts = now - ((STALE_TMP_OUTPUT_MAX_AGE_HOURS + 1) * 60 * 60)
os.utime(old_scan_dir, (stale_ts, stale_ts))
monkeypatch.setattr(
"tasks.jobs.report.STALE_TMP_OUTPUT_SAFE_ROOT", root_dir.resolve()
)
monkeypatch.setattr(
"tasks.jobs.report._should_run_stale_cleanup", lambda *_: True
)
monkeypatch.setattr(
"tasks.jobs.report._is_scan_directory_protected", lambda **_: False
)
removed = _cleanup_stale_tmp_output_directories(
str(root_dir), max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS
)
assert removed == 1
assert not old_scan_dir.exists()
assert recent_scan_dir.exists()
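# A minimal sketch of the sweep contract exercised by these cleanup tests
# (an assumption; the real helper is ``_cleanup_stale_tmp_output_directories``
# in tasks.jobs.report): walk <root>/<tenant>/<scan>, never follow symlinks,
# skip the scan currently being processed, delete only directories older than
# the TTL, and stop once the per-run deletion budget is spent.
def _sweep_stale_scan_dirs_sketch(
    root, max_age_hours, exclude_scan=None, max_deletions_per_run=None
):
    import shutil
    import time
    from pathlib import Path

    removed = 0
    cutoff = time.time() - max_age_hours * 60 * 60
    for tenant_dir in sorted(Path(root).iterdir()):
        if tenant_dir.is_symlink() or not tenant_dir.is_dir():
            continue
        for scan_dir in sorted(tenant_dir.iterdir()):
            if scan_dir.is_symlink() or not scan_dir.is_dir():
                continue  # symlinked scan directories are never deleted
            if exclude_scan == (tenant_dir.name, scan_dir.name):
                continue  # never touch the scan being generated right now
            if scan_dir.stat().st_mtime >= cutoff:
                continue  # still inside the TTL window
            shutil.rmtree(scan_dir, ignore_errors=True)
            removed += 1
            if max_deletions_per_run and removed >= max_deletions_per_run:
                return removed
    return removed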
def test_skips_current_scan_even_when_stale(self, tmp_path, monkeypatch):
"""Should not delete stale directory for the currently processed scan."""
root_dir = tmp_path / "prowler_api_output"
current_scan_dir = root_dir / "tenant-current" / "scan-current"
current_scan_dir.mkdir(parents=True)
(current_scan_dir / "artifact.txt").write_text("current")
other_stale_scan_dir = root_dir / "tenant-other" / "scan-old"
other_stale_scan_dir.mkdir(parents=True)
(other_stale_scan_dir / "artifact.txt").write_text("other")
now = time.time()
stale_ts = now - ((STALE_TMP_OUTPUT_MAX_AGE_HOURS + 1) * 60 * 60)
os.utime(current_scan_dir, (stale_ts, stale_ts))
os.utime(other_stale_scan_dir, (stale_ts, stale_ts))
monkeypatch.setattr(
"tasks.jobs.report.STALE_TMP_OUTPUT_SAFE_ROOT", root_dir.resolve()
)
monkeypatch.setattr(
"tasks.jobs.report._should_run_stale_cleanup", lambda *_: True
)
monkeypatch.setattr(
"tasks.jobs.report._is_scan_directory_protected", lambda **_: False
)
removed = _cleanup_stale_tmp_output_directories(
str(root_dir),
max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS,
exclude_scan=("tenant-current", "scan-current"),
)
assert removed == 1
assert current_scan_dir.exists()
assert not other_stale_scan_dir.exists()
def test_respects_max_deletions_per_run(self, tmp_path, monkeypatch):
"""Cleanup should stop deleting when max_deletions_per_run is reached."""
root_dir = tmp_path / "prowler_api_output"
stale_dir_1 = root_dir / "tenant-a" / "scan-old-1"
stale_dir_2 = root_dir / "tenant-a" / "scan-old-2"
stale_dir_1.mkdir(parents=True)
stale_dir_2.mkdir(parents=True)
(stale_dir_1 / "artifact.txt").write_text("old-1")
(stale_dir_2 / "artifact.txt").write_text("old-2")
now = time.time()
stale_ts = now - ((STALE_TMP_OUTPUT_MAX_AGE_HOURS + 1) * 60 * 60)
os.utime(stale_dir_1, (stale_ts, stale_ts))
os.utime(stale_dir_2, (stale_ts, stale_ts))
monkeypatch.setattr(
"tasks.jobs.report.STALE_TMP_OUTPUT_SAFE_ROOT", root_dir.resolve()
)
monkeypatch.setattr(
"tasks.jobs.report._should_run_stale_cleanup", lambda *_: True
)
monkeypatch.setattr(
"tasks.jobs.report._is_scan_directory_protected", lambda **_: False
)
removed = _cleanup_stale_tmp_output_directories(
str(root_dir),
max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS,
max_deletions_per_run=1,
)
assert removed == 1
remaining = sum(
1 for scan_dir in (stale_dir_1, stale_dir_2) if scan_dir.exists()
)
assert remaining == 1
def test_rejects_non_safe_root(self, tmp_path, monkeypatch):
"""Cleanup must no-op when called with a root outside the allowed safe root."""
root_dir = tmp_path / "prowler_api_output"
root_dir.mkdir(parents=True)
monkeypatch.setattr(
"tasks.jobs.report.STALE_TMP_OUTPUT_SAFE_ROOT",
(tmp_path / "another-root").resolve(),
)
def _fail_should_run(*_args, **_kwargs):
raise AssertionError("_should_run_stale_cleanup should not be called")
monkeypatch.setattr(
"tasks.jobs.report._should_run_stale_cleanup", _fail_should_run
)
removed = _cleanup_stale_tmp_output_directories(str(root_dir), max_age_hours=48)
assert removed == 0
def test_ignores_symlink_scan_directories(self, tmp_path, monkeypatch):
"""Symlinked scan directories must never be deleted by cleanup."""
root_dir = tmp_path / "prowler_api_output"
stale_real_scan_dir = root_dir / "tenant-a" / "scan-old-real"
stale_real_scan_dir.mkdir(parents=True)
(stale_real_scan_dir / "artifact.txt").write_text("old")
symlink_target = tmp_path / "symlink-target"
symlink_target.mkdir(parents=True)
(symlink_target / "artifact.txt").write_text("target")
symlink_scan_dir = root_dir / "tenant-a" / "scan-link"
symlink_scan_dir.symlink_to(symlink_target, target_is_directory=True)
now = time.time()
stale_ts = now - ((STALE_TMP_OUTPUT_MAX_AGE_HOURS + 1) * 60 * 60)
os.utime(stale_real_scan_dir, (stale_ts, stale_ts))
monkeypatch.setattr(
"tasks.jobs.report.STALE_TMP_OUTPUT_SAFE_ROOT", root_dir.resolve()
)
monkeypatch.setattr(
"tasks.jobs.report._should_run_stale_cleanup", lambda *_: True
)
monkeypatch.setattr(
"tasks.jobs.report._is_scan_directory_protected", lambda **_: False
)
removed = _cleanup_stale_tmp_output_directories(
str(root_dir), max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS
)
assert removed == 1
assert not stale_real_scan_dir.exists()
assert symlink_scan_dir.exists()
assert symlink_target.exists()
def test_handles_internal_exception_without_propagating(
self, tmp_path, monkeypatch
):
"""Cleanup errors must be swallowed so callers are not interrupted."""
root_dir = tmp_path / "prowler_api_output"
stale_scan_dir = root_dir / "tenant-a" / "scan-old"
stale_scan_dir.mkdir(parents=True)
now = time.time()
stale_ts = now - ((STALE_TMP_OUTPUT_MAX_AGE_HOURS + 1) * 60 * 60)
os.utime(stale_scan_dir, (stale_ts, stale_ts))
monkeypatch.setattr(
"tasks.jobs.report.STALE_TMP_OUTPUT_SAFE_ROOT", root_dir.resolve()
)
monkeypatch.setattr(
"tasks.jobs.report._should_run_stale_cleanup", lambda *_: True
)
def _raise(*_args, **_kwargs):
raise RuntimeError("db timeout")
monkeypatch.setattr("tasks.jobs.report._is_scan_directory_protected", _raise)
removed = _cleanup_stale_tmp_output_directories(
str(root_dir), max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS
)
assert removed == 0
assert stale_scan_dir.exists()
def test_safe_root_follows_custom_tmp_output_directory(self, tmp_path, monkeypatch):
"""Custom DJANGO_TMP_OUTPUT_DIRECTORY must be honored as the safe root."""
from tasks.jobs import report as report_module
custom_root = tmp_path / "custom_tmp_output"
custom_root.mkdir(parents=True)
monkeypatch.setattr(
report_module, "DJANGO_TMP_OUTPUT_DIRECTORY", str(custom_root)
)
resolved_root = report_module._resolve_stale_tmp_safe_root()
assert resolved_root == custom_root.resolve()
stale_scan_dir = custom_root / "tenant-a" / "scan-old"
stale_scan_dir.mkdir(parents=True)
(stale_scan_dir / "artifact.txt").write_text("old")
stale_ts = time.time() - ((STALE_TMP_OUTPUT_MAX_AGE_HOURS + 1) * 60 * 60)
os.utime(stale_scan_dir, (stale_ts, stale_ts))
monkeypatch.setattr(report_module, "STALE_TMP_OUTPUT_SAFE_ROOT", resolved_root)
monkeypatch.setattr(
"tasks.jobs.report._should_run_stale_cleanup", lambda *_: True
)
monkeypatch.setattr(
"tasks.jobs.report._is_scan_directory_protected", lambda **_: False
)
removed = _cleanup_stale_tmp_output_directories(
str(custom_root), max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS
)
assert removed == 1
assert not stale_scan_dir.exists()
@pytest.mark.parametrize(
"forbidden_root",
["/", "/tmp", "/var", "/var/tmp", "/home", "/root", "/etc", "/usr"],
)
def test_safe_root_rejects_forbidden_system_roots(
self, forbidden_root, monkeypatch
):
"""Cleanup must refuse to operate against shared system roots."""
from tasks.jobs import report as report_module
monkeypatch.setattr(
report_module, "DJANGO_TMP_OUTPUT_DIRECTORY", forbidden_root
)
assert report_module._resolve_stale_tmp_safe_root() is None
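# A minimal sketch of the guard this parametrized test describes (assumed; the
# real helper is ``_resolve_stale_tmp_safe_root``): resolve the configured tmp
# output directory and refuse to ever treat a shared system root as sweepable.
def _resolve_stale_tmp_safe_root_sketch(configured_dir):
    from pathlib import Path

    forbidden = {"/", "/tmp", "/var", "/var/tmp", "/home", "/root", "/etc", "/usr"}
    resolved = Path(configured_dir).resolve()
    return None if str(resolved) in forbidden else resolved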
def test_skips_cleanup_when_safe_root_is_none(self, tmp_path, monkeypatch):
"""A None safe root (forbidden config) must short-circuit the cleanup."""
root_dir = tmp_path / "prowler_api_output"
root_dir.mkdir(parents=True)
monkeypatch.setattr("tasks.jobs.report.STALE_TMP_OUTPUT_SAFE_ROOT", None)
def _fail_should_run(*_args, **_kwargs):
raise AssertionError("_should_run_stale_cleanup should not be called")
monkeypatch.setattr(
"tasks.jobs.report._should_run_stale_cleanup", _fail_should_run
)
removed = _cleanup_stale_tmp_output_directories(
str(root_dir), max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS
)
assert removed == 0
class TestStaleCleanupProtectionHelpers:
"""Unit tests for stale cleanup helper guard logic."""
def test_should_run_cleanup_is_throttled(self, tmp_path):
root_dir = tmp_path / "prowler_api_output"
root_dir.mkdir(parents=True)
assert _should_run_stale_cleanup(root_dir, throttle_seconds=3600) is True
assert _should_run_stale_cleanup(root_dir, throttle_seconds=3600) is False
lock_file = root_dir / STALE_TMP_OUTPUT_LOCK_FILE_NAME
lock_file.write_text(str(int(time.time()) - 7200), encoding="ascii")
assert _should_run_stale_cleanup(root_dir, throttle_seconds=3600) is True
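# A minimal sketch of the throttle contract these assertions pin down (an
# assumption; ``_should_run_stale_cleanup`` in tasks.jobs.report is the real
# thing): the last-run timestamp lives in a lock file under the output root, a
# non-blocking flock keeps concurrent workers from sweeping at once, and the
# sweep only reruns after ``throttle_seconds`` have elapsed.
def _should_run_stale_cleanup_sketch(root_dir, throttle_seconds):
    import fcntl
    import time
    from pathlib import Path

    lock_path = Path(root_dir) / ".stale_cleanup.lock"  # file name is illustrative
    with open(lock_path, "a+", encoding="ascii") as lock_file:
        try:
            fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            return False  # another worker already holds the sweep lock
        lock_file.seek(0)
        raw = lock_file.read().strip()
        last_run = int(raw) if raw.isdigit() else 0
        if time.time() - last_run < throttle_seconds:
            return False  # swept recently enough, skip this run
        lock_file.seek(0)
        lock_file.truncate()
        lock_file.write(str(int(time.time())))
        return True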
@patch("tasks.jobs.report.fcntl.flock", side_effect=BlockingIOError)
def test_should_run_cleanup_returns_false_when_lock_is_busy(
self, _mock_flock, tmp_path
):
root_dir = tmp_path / "prowler_api_output"
root_dir.mkdir(parents=True)
assert _should_run_stale_cleanup(root_dir, throttle_seconds=3600) is False
@patch("tasks.jobs.report.Scan.all_objects.using")
def test_is_scan_directory_protected_for_executing_scan(
self, mock_scan_using, tmp_path
):
scan_id = str(uuid.uuid4())
scan_path = tmp_path / scan_id
scan_path.mkdir(parents=True)
mock_scan_using.return_value.filter.return_value.only.return_value.first.return_value = Mock(
state=StateChoices.EXECUTING, output_location=None
)
assert (
_is_scan_directory_protected(
tenant_id="tenant-a",
scan_id=scan_id,
scan_path=scan_path,
)
is True
)
@patch("tasks.jobs.report.Scan.all_objects.using")
def test_is_scan_directory_protected_for_local_output(
self, mock_scan_using, tmp_path
):
scan_id = str(uuid.uuid4())
scan_path = tmp_path / scan_id
scan_path.mkdir(parents=True)
local_output_path = scan_path / "outputs.zip"
mock_scan_using.return_value.filter.return_value.only.return_value.first.return_value = Mock(
state=StateChoices.COMPLETED, output_location=str(local_output_path)
)
assert (
_is_scan_directory_protected(
tenant_id="tenant-a",
scan_id=scan_id,
scan_path=scan_path.resolve(),
)
is True
)
@patch("tasks.jobs.report.Scan.all_objects.using")
def test_is_scan_directory_not_protected_for_s3_output(
self, mock_scan_using, tmp_path
):
scan_id = str(uuid.uuid4())
scan_path = tmp_path / scan_id
scan_path.mkdir(parents=True)
mock_scan_using.return_value.filter.return_value.only.return_value.first.return_value = Mock(
state=StateChoices.COMPLETED,
output_location="s3://bucket/path/report.zip",
)
assert (
_is_scan_directory_protected(
tenant_id="tenant-a",
scan_id=scan_id,
scan_path=scan_path,
)
is False
)
@pytest.mark.django_db
class TestGenerateThreatscoreReportFunction:
"""Test suite for generate_threatscore_report function."""
@@ -422,6 +799,425 @@ class TestGenerateComplianceReportsOptimized:
mock_ens.assert_not_called()
mock_nis2.assert_not_called()
@patch(
"tasks.jobs.report._cleanup_stale_tmp_output_directories",
side_effect=RuntimeError("cleanup boom"),
)
def test_cleanup_exception_does_not_break_no_findings_flow(self, _mock_cleanup):
"""Unexpected cleanup failures must not abort report generation."""
random_tenant = str(uuid.uuid4())
random_scan = str(uuid.uuid4())
random_provider = str(uuid.uuid4())
with patch("tasks.jobs.report.ScanSummary.objects.filter") as mock_filter:
mock_filter.return_value.exists.return_value = False
result = generate_compliance_reports(
tenant_id=random_tenant,
scan_id=random_scan,
provider_id=random_provider,
generate_threatscore=True,
generate_ens=False,
generate_nis2=False,
generate_csa=False,
generate_cis=False,
)
assert result["threatscore"] == {"upload": False, "path": ""}
@patch("tasks.jobs.report._upload_to_s3")
@patch("tasks.jobs.report.generate_cis_report")
def test_no_findings_returns_flat_cis_entry(
self,
mock_cis,
mock_upload,
tenants_fixture,
scans_fixture,
providers_fixture,
):
"""Scan with no findings and ``generate_cis=True`` must yield a flat
``{"upload": False, "path": ""}`` entry, consistent with the other
frameworks (no nested dict, no sentinel keys)."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
result = generate_compliance_reports(
tenant_id=str(tenant.id),
scan_id=str(scan.id),
provider_id=str(provider.id),
generate_threatscore=False,
generate_ens=False,
generate_nis2=False,
generate_csa=False,
generate_cis=True,
)
assert result["cis"] == {"upload": False, "path": ""}
mock_cis.assert_not_called()
@patch("tasks.jobs.report.rmtree")
@patch("tasks.jobs.report._upload_to_s3")
@patch("tasks.jobs.report.generate_threatscore_report")
@patch("tasks.jobs.report._generate_compliance_output_directory")
@patch("tasks.jobs.report._aggregate_requirement_statistics_from_database")
@patch("tasks.jobs.report.Compliance.get_bulk")
@patch("tasks.jobs.report.Provider.objects.get")
@patch("tasks.jobs.report.ScanSummary.objects.filter")
def test_cleanup_runs_when_supported_reports_upload_successfully(
self,
mock_scan_summary_filter,
mock_provider_get,
mock_get_bulk,
mock_aggregate_stats,
mock_generate_output_dir,
mock_threatscore,
mock_upload_to_s3,
mock_rmtree,
):
"""Cleanup must run when all generated (supported) reports are uploaded."""
mock_scan_summary_filter.return_value.exists.return_value = True
mock_provider_get.return_value = Mock(uid="provider-uid", provider="m365")
mock_get_bulk.return_value = {}
mock_aggregate_stats.return_value = {}
mock_generate_output_dir.return_value = (
"/tmp/tenant/scan/threatscore/prowler-output-provider-20240101000000"
)
mock_upload_to_s3.return_value = (
"s3://bucket/tenant/scan/threatscore/report.pdf"
)
result = generate_compliance_reports(
tenant_id=str(uuid.uuid4()),
scan_id=str(uuid.uuid4()),
provider_id=str(uuid.uuid4()),
generate_threatscore=True,
generate_ens=True,
generate_nis2=True,
generate_csa=True,
generate_cis=True,
)
assert result["threatscore"]["upload"] is True
assert result["ens"]["upload"] is False
assert result["nis2"]["upload"] is False
assert result["csa"]["upload"] is False
assert result["cis"] == {"upload": False, "path": ""}
mock_generate_output_dir.assert_called_once()
mock_threatscore.assert_called_once()
mock_rmtree.assert_called_once()
@patch("tasks.jobs.report.rmtree")
@patch("tasks.jobs.report._upload_to_s3")
@patch("tasks.jobs.report.generate_threatscore_report")
@patch("tasks.jobs.report._generate_compliance_output_directory")
@patch("tasks.jobs.report._aggregate_requirement_statistics_from_database")
@patch("tasks.jobs.report.Compliance.get_bulk")
@patch("tasks.jobs.report.Provider.objects.get")
@patch("tasks.jobs.report.ScanSummary.objects.filter")
def test_cleanup_skipped_when_supported_upload_fails(
self,
mock_scan_summary_filter,
mock_provider_get,
mock_get_bulk,
mock_aggregate_stats,
mock_generate_output_dir,
mock_threatscore,
mock_upload_to_s3,
mock_rmtree,
):
"""Cleanup must not run when a generated report upload fails."""
mock_scan_summary_filter.return_value.exists.return_value = True
mock_provider_get.return_value = Mock(uid="provider-uid", provider="m365")
mock_get_bulk.return_value = {}
mock_aggregate_stats.return_value = {}
mock_generate_output_dir.return_value = (
"/tmp/tenant/scan/threatscore/prowler-output-provider-20240101000000"
)
mock_upload_to_s3.return_value = None
result = generate_compliance_reports(
tenant_id=str(uuid.uuid4()),
scan_id=str(uuid.uuid4()),
provider_id=str(uuid.uuid4()),
generate_threatscore=True,
generate_ens=True,
generate_nis2=True,
generate_csa=True,
generate_cis=True,
)
assert result["threatscore"]["upload"] is False
assert result["cis"] == {"upload": False, "path": ""}
mock_generate_output_dir.assert_called_once()
mock_threatscore.assert_called_once()
mock_rmtree.assert_not_called()
@pytest.mark.django_db
class TestGenerateComplianceReportsCIS:
"""Test suite covering the CIS branch of generate_compliance_reports."""
def _force_scan_has_findings(self, monkeypatch):
"""Bypass the ScanSummary.exists() early-return guard."""
class _FakeManager:
def filter(self, **kwargs):
class _Q:
def exists(self):
return True
return _Q()
monkeypatch.setattr("tasks.jobs.report.ScanSummary.objects", _FakeManager())
@patch("tasks.jobs.report._aggregate_requirement_statistics_from_database")
@patch("tasks.jobs.report._upload_to_s3")
@patch("tasks.jobs.report.generate_cis_report")
@patch("tasks.jobs.report.Compliance.get_bulk")
def test_cis_picks_latest_version(
self,
mock_get_bulk,
mock_cis,
mock_upload,
mock_stats,
monkeypatch,
tenants_fixture,
scans_fixture,
providers_fixture,
):
"""CIS branch should generate a single PDF for the highest version.
The returned ``results["cis"]`` must have the same flat shape as the
other single-version frameworks (``{"upload", "path"}``); the picked
variant is an internal detail and is not exposed in the result.
"""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
self._force_scan_has_findings(monkeypatch)
mock_stats.return_value = {}
# Multiple CIS variants + a non-CIS framework that must be ignored.
# Includes 1.10 to verify the selection is not lexicographic.
mock_get_bulk.return_value = {
"cis_1.4_aws": Mock(),
"cis_1.10_aws": Mock(),
"cis_2.0_aws": Mock(),
"cis_5.0_aws": Mock(),
"ens_rd2022_aws": Mock(),
}
mock_upload.return_value = "s3://bucket/path"
result = generate_compliance_reports(
tenant_id=str(tenant.id),
scan_id=str(scan.id),
provider_id=str(provider.id),
generate_threatscore=False,
generate_ens=False,
generate_nis2=False,
generate_csa=False,
generate_cis=True,
)
# Exactly one call for the latest version, never for older variants
# or non-CIS frameworks.
assert mock_cis.call_count == 1
assert mock_cis.call_args.kwargs["compliance_id"] == "cis_5.0_aws"
assert result["cis"]["upload"] is True
assert result["cis"]["path"] == "s3://bucket/path"
assert "compliance_id" not in result["cis"]
@patch("tasks.jobs.report._aggregate_requirement_statistics_from_database")
@patch("tasks.jobs.report._upload_to_s3")
@patch("tasks.jobs.report.generate_cis_report")
@patch("tasks.jobs.report.Compliance.get_bulk")
def test_cis_latest_variant_failure_captured_in_results(
self,
mock_get_bulk,
mock_cis,
mock_upload,
mock_stats,
monkeypatch,
tenants_fixture,
scans_fixture,
providers_fixture,
):
"""A failure in the latest CIS variant must be surfaced in the flat results entry."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
self._force_scan_has_findings(monkeypatch)
mock_stats.return_value = {}
mock_get_bulk.return_value = {
"cis_1.4_aws": Mock(),
"cis_5.0_aws": Mock(),
}
mock_cis.side_effect = RuntimeError("boom")
result = generate_compliance_reports(
tenant_id=str(tenant.id),
scan_id=str(scan.id),
provider_id=str(provider.id),
generate_threatscore=False,
generate_ens=False,
generate_nis2=False,
generate_csa=False,
generate_cis=True,
)
# Only the latest variant is attempted; its failure lands in a flat
# entry keyed under "cis" with the same shape as sibling frameworks.
assert mock_cis.call_count == 1
assert result["cis"]["upload"] is False
assert result["cis"]["error"] == "boom"
assert "compliance_id" not in result["cis"]
@patch("tasks.jobs.report._aggregate_requirement_statistics_from_database")
@patch("tasks.jobs.report._upload_to_s3")
@patch("tasks.jobs.report.generate_cis_report")
@patch("tasks.jobs.report.Compliance.get_bulk")
def test_cis_provider_without_cis_skipped_cleanly(
self,
mock_get_bulk,
mock_cis,
mock_upload,
mock_stats,
monkeypatch,
tenants_fixture,
scans_fixture,
providers_fixture,
):
"""When ``Compliance.get_bulk`` returns no CIS entry the CIS branch
must skip cleanly and record a flat ``{"upload": False, "path": ""}``
entry no hard-coded provider whitelist is consulted."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
self._force_scan_has_findings(monkeypatch)
mock_stats.return_value = {}
# No ``cis_*`` keys in the bulk → no variant picked.
mock_get_bulk.return_value = {"ens_rd2022_aws": Mock()}
result = generate_compliance_reports(
tenant_id=str(tenant.id),
scan_id=str(scan.id),
provider_id=str(provider.id),
generate_threatscore=False,
generate_ens=False,
generate_nis2=False,
generate_csa=False,
generate_cis=True,
)
assert result["cis"] == {"upload": False, "path": ""}
mock_cis.assert_not_called()
@patch("tasks.jobs.report._aggregate_requirement_statistics_from_database")
@patch("tasks.jobs.report._generate_compliance_output_directory")
@patch("tasks.jobs.report.Compliance.get_bulk")
def test_cis_output_directory_failure_is_captured(
self,
mock_get_bulk,
mock_generate_output_dir,
mock_stats,
monkeypatch,
tenants_fixture,
scans_fixture,
providers_fixture,
):
"""CIS output dir errors must be captured in results (not raised)."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
self._force_scan_has_findings(monkeypatch)
mock_stats.return_value = {}
mock_get_bulk.return_value = {"cis_5.0_aws": Mock()}
mock_generate_output_dir.side_effect = RuntimeError("dir boom")
result = generate_compliance_reports(
tenant_id=str(tenant.id),
scan_id=str(scan.id),
provider_id=str(provider.id),
generate_threatscore=False,
generate_ens=False,
generate_nis2=False,
generate_csa=False,
generate_cis=True,
)
assert result["cis"]["upload"] is False
assert result["cis"]["error"] == "dir boom"
class TestPickLatestCisVariant:
"""Unit tests for `_pick_latest_cis_variant` helper."""
def test_empty_returns_none(self):
assert _pick_latest_cis_variant([]) is None
def test_single_variant(self):
assert _pick_latest_cis_variant(["cis_5.0_aws"]) == "cis_5.0_aws"
def test_numeric_not_lexicographic(self):
"""1.10 must beat 1.2 (lex sort would pick 1.2)."""
variants = ["cis_1.2_kubernetes", "cis_1.10_kubernetes"]
assert _pick_latest_cis_variant(variants) == "cis_1.10_kubernetes"
def test_major_version_wins(self):
variants = ["cis_1.4_aws", "cis_2.0_aws", "cis_5.0_aws", "cis_6.0_aws"]
assert _pick_latest_cis_variant(variants) == "cis_6.0_aws"
def test_minor_version_breaks_tie(self):
variants = ["cis_3.0_aws", "cis_3.1_aws", "cis_2.9_aws"]
assert _pick_latest_cis_variant(variants) == "cis_3.1_aws"
def test_three_part_version(self):
"""Versions like 3.0.1 must win over 3.0."""
variants = ["cis_3.0_aws", "cis_3.0.1_aws"]
assert _pick_latest_cis_variant(variants) == "cis_3.0.1_aws"
def test_malformed_names_ignored(self):
variants = ["notcis_1.0_aws", "cis_abc_aws", "cis_5.0_aws"]
assert _pick_latest_cis_variant(variants) == "cis_5.0_aws"
def test_only_malformed_returns_none(self):
variants = ["notcis_1.0_aws", "cis_abc_aws"]
assert _pick_latest_cis_variant(variants) is None
def test_multidigit_provider_name(self):
"""Provider name with underscores (e.g. googleworkspace) must parse."""
variants = ["cis_1.3_googleworkspace"]
assert _pick_latest_cis_variant(variants) == "cis_1.3_googleworkspace"
def test_accepts_iterator(self):
"""The helper must accept any iterable, not just lists."""
def _gen():
yield "cis_1.4_aws"
yield "cis_5.0_aws"
assert _pick_latest_cis_variant(_gen()) == "cis_5.0_aws"
def test_rejects_single_integer_version(self):
"""The regex requires at least one dotted component. ``cis_5_aws``
without a minor version is malformed per the backend contract."""
assert _pick_latest_cis_variant(["cis_5_aws"]) is None
def test_rejects_trailing_dot(self):
"""Inputs like ``cis_5._aws`` must be rejected at the regex stage
instead of silently normalising to ``(5, 0)``."""
assert _pick_latest_cis_variant(["cis_5._aws", "cis_1.0_aws"]) == "cis_1.0_aws"
def test_rejects_lone_dot_version(self):
"""``cis_._aws`` has no numeric component and must be skipped."""
assert _pick_latest_cis_variant(["cis_._aws", "cis_1.0_aws"]) == "cis_1.0_aws"
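# A minimal sketch consistent with the contract the tests above pin down (the
# real helper may be implemented differently): parse the dotted version out of
# ``cis_<version>_<provider>`` and compare numerically, not lexicographically,
# so ``1.10`` beats ``1.2`` and ``3.0.1`` beats ``3.0``.
def _pick_latest_cis_variant_sketch(compliance_ids):
    import re

    pattern = re.compile(r"^cis_(\d+(?:\.\d+)+)_")
    best_id, best_version = None, ()
    for compliance_id in compliance_ids:
        match = pattern.match(compliance_id)
        if not match:
            continue  # malformed or non-CIS names are ignored
        version = tuple(int(part) for part in match.group(1).split("."))
        if version > best_version:
            best_id, best_version = compliance_id, version
    return best_id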
class TestOptimizationImprovements:
"""Test suite for optimization-related functionality."""
@@ -0,0 +1,532 @@
from unittest.mock import Mock, patch
import pytest
from reportlab.platypus import Image, LongTable, Paragraph, Table
from tasks.jobs.reports import FRAMEWORK_REGISTRY, ComplianceData, RequirementData
from tasks.jobs.reports.cis import (
CISReportGenerator,
_normalize_profile,
_profile_badge_text,
)
from api.models import StatusChoices
# =============================================================================
# Fixtures
# =============================================================================
@pytest.fixture
def cis_generator():
"""Create a CISReportGenerator instance for testing."""
config = FRAMEWORK_REGISTRY["cis"]
return CISReportGenerator(config)
def _make_attr(
section: str,
profile_value: str = "Level 1",
assessment_value: str = "Automated",
sub_section: str = "",
**extras,
) -> Mock:
"""Build a mock CIS_Requirement_Attribute with duck-typed fields."""
attr = Mock()
attr.Section = section
attr.SubSection = sub_section
# CIS enums have `.value`. Use a simple Mock that exposes `.value`.
attr.Profile = Mock(value=profile_value)
attr.AssessmentStatus = Mock(value=assessment_value)
attr.Description = extras.get("description", "desc")
attr.RationaleStatement = extras.get("rationale", "the rationale")
attr.ImpactStatement = extras.get("impact", "the impact")
attr.RemediationProcedure = extras.get("remediation", "the remediation")
attr.AuditProcedure = extras.get("audit", "the audit")
attr.AdditionalInformation = ""
attr.DefaultValue = ""
attr.References = extras.get("references", "https://example.com")
return attr
@pytest.fixture
def basic_cis_compliance_data():
"""Create basic ComplianceData for CIS testing (no requirements)."""
return ComplianceData(
tenant_id="tenant-123",
scan_id="scan-456",
provider_id="provider-789",
compliance_id="cis_5.0_aws",
framework="CIS",
name="CIS Amazon Web Services Foundations Benchmark v5.0.0",
version="5.0",
description="Center for Internet Security AWS Foundations Benchmark",
)
@pytest.fixture
def populated_cis_compliance_data(basic_cis_compliance_data):
"""CIS data with mixed requirements across 2 sections, Profile L1/L2, Pass/Fail/Manual."""
data = basic_cis_compliance_data
data.requirements = [
RequirementData(
id="1.1",
description="Maintain current contact details",
status=StatusChoices.PASS,
passed_findings=5,
failed_findings=0,
total_findings=5,
checks=["aws_check_1"],
),
RequirementData(
id="1.2",
description="Ensure root account has no access keys",
status=StatusChoices.FAIL,
passed_findings=0,
failed_findings=3,
total_findings=3,
checks=["aws_check_2"],
),
RequirementData(
id="1.3",
description="Ensure MFA is enabled for all IAM users",
status=StatusChoices.MANUAL,
checks=[],
),
RequirementData(
id="2.1",
description="Ensure S3 Buckets are logging",
status=StatusChoices.PASS,
passed_findings=2,
failed_findings=0,
total_findings=2,
checks=["aws_check_3"],
),
RequirementData(
id="2.2",
description="Ensure encryption at rest is enabled",
status=StatusChoices.FAIL,
passed_findings=0,
failed_findings=4,
total_findings=4,
checks=["aws_check_4"],
),
]
data.attributes_by_requirement_id = {
"1.1": {
"attributes": {
"req_attributes": [
_make_attr(
"1 Identity and Access Management",
profile_value="Level 1",
assessment_value="Automated",
)
],
"checks": ["aws_check_1"],
}
},
"1.2": {
"attributes": {
"req_attributes": [
_make_attr(
"1 Identity and Access Management",
profile_value="Level 1",
assessment_value="Automated",
)
],
"checks": ["aws_check_2"],
}
},
"1.3": {
"attributes": {
"req_attributes": [
_make_attr(
"1 Identity and Access Management",
profile_value="Level 2",
assessment_value="Manual",
)
],
"checks": [],
}
},
"2.1": {
"attributes": {
"req_attributes": [
_make_attr(
"2 Storage",
profile_value="Level 2",
assessment_value="Automated",
)
],
"checks": ["aws_check_3"],
}
},
"2.2": {
"attributes": {
"req_attributes": [
_make_attr(
"2 Storage",
profile_value="Level 1",
assessment_value="Automated",
)
],
"checks": ["aws_check_4"],
}
},
}
return data
# =============================================================================
# Helper function tests
# =============================================================================
class TestNormalizeProfile:
"""Test suite for _normalize_profile helper."""
def test_level_1_string(self):
assert _normalize_profile(Mock(value="Level 1")) == "L1"
def test_level_2_string(self):
assert _normalize_profile(Mock(value="Level 2")) == "L2"
def test_e3_level_1(self):
assert _normalize_profile(Mock(value="E3 Level 1")) == "L1"
def test_e5_level_2(self):
assert _normalize_profile(Mock(value="E5 Level 2")) == "L2"
def test_none_returns_other(self):
assert _normalize_profile(None) == "Other"
def test_substring_trap_rejected(self):
"""Unrelated tokens containing the literal ``L2`` must NOT map to L2."""
# A future enum value like "CL2 Kubernetes Worker" would be silently
# misclassified by a naive substring check.
assert _normalize_profile(Mock(value="CL2 Worker")) == "Other"
assert _normalize_profile(Mock(value="HL2 Legacy")) == "Other"
def test_raw_string_level_1(self):
# Objects without ``.value`` fall back to str(profile); a Mock would auto-create ``.value``, so use a plain class here
class NoValue:
def __str__(self):
return "Level 1"
assert _normalize_profile(NoValue()) == "L1"
def test_unknown_profile_returns_other(self):
assert _normalize_profile(Mock(value="Custom Profile")) == "Other"
class TestProfileBadgeText:
def test_l1_label(self):
assert _profile_badge_text("L1") == "Level 1"
def test_l2_label(self):
assert _profile_badge_text("L2") == "Level 2"
def test_other_label(self):
assert _profile_badge_text("Other") == "Other"
# =============================================================================
# Generator initialization
# =============================================================================
class TestCISGeneratorInitialization:
def test_generator_created(self, cis_generator):
assert cis_generator is not None
assert cis_generator.config.name == "cis"
def test_generator_language(self, cis_generator):
assert cis_generator.config.language == "en"
def test_generator_sections_dynamic(self, cis_generator):
# CIS sections differ per variant so config.sections MUST be None
assert cis_generator.config.sections is None
def test_attribute_fields_contain_cis_specific(self, cis_generator):
for field in ("Profile", "AssessmentStatus", "RationaleStatement"):
assert field in cis_generator.config.attribute_fields
# =============================================================================
# _derive_sections
# =============================================================================
class TestDeriveSections:
def test_preserves_first_seen_order(
self, cis_generator, populated_cis_compliance_data
):
sections = cis_generator._derive_sections(populated_cis_compliance_data)
assert sections == [
"1 Identity and Access Management",
"2 Storage",
]
def test_deduplicates_sections(self, cis_generator, basic_cis_compliance_data):
basic_cis_compliance_data.requirements = [
RequirementData(id="1.1", description="a", status=StatusChoices.PASS),
RequirementData(id="1.2", description="b", status=StatusChoices.PASS),
]
attr = _make_attr("1 IAM")
basic_cis_compliance_data.attributes_by_requirement_id = {
"1.1": {"attributes": {"req_attributes": [attr], "checks": []}},
"1.2": {"attributes": {"req_attributes": [attr], "checks": []}},
}
assert cis_generator._derive_sections(basic_cis_compliance_data) == ["1 IAM"]
def test_empty_data_returns_empty(self, cis_generator, basic_cis_compliance_data):
basic_cis_compliance_data.requirements = []
basic_cis_compliance_data.attributes_by_requirement_id = {}
assert cis_generator._derive_sections(basic_cis_compliance_data) == []
# =============================================================================
# _compute_statistics
# =============================================================================
class TestComputeStatistics:
def test_totals(self, cis_generator, populated_cis_compliance_data):
stats = cis_generator._compute_statistics(populated_cis_compliance_data)
assert stats["total"] == 5
assert stats["passed"] == 2
assert stats["failed"] == 2
assert stats["manual"] == 1
def test_overall_compliance_excludes_manual(
self, cis_generator, populated_cis_compliance_data
):
stats = cis_generator._compute_statistics(populated_cis_compliance_data)
# 2 passed / 4 evaluated (pass + fail) = 50%
assert stats["overall_compliance"] == pytest.approx(50.0)
def test_overall_compliance_all_manual(
self, cis_generator, basic_cis_compliance_data
):
basic_cis_compliance_data.requirements = [
RequirementData(id="x", description="d", status=StatusChoices.MANUAL),
]
attr = _make_attr("1 IAM", profile_value="Level 1", assessment_value="Manual")
basic_cis_compliance_data.attributes_by_requirement_id = {
"x": {"attributes": {"req_attributes": [attr], "checks": []}},
}
stats = cis_generator._compute_statistics(basic_cis_compliance_data)
# No evaluated → defaults to 100%
assert stats["overall_compliance"] == 100.0
def test_profile_counts(self, cis_generator, populated_cis_compliance_data):
stats = cis_generator._compute_statistics(populated_cis_compliance_data)
profile = stats["profile_counts"]
# From fixture:
# L1: 1.1 (PASS, Auto), 1.2 (FAIL, Auto), 2.2 (FAIL, Auto) → pass=1, fail=2, manual=0
# L2: 1.3 (MANUAL, Manual), 2.1 (PASS, Auto) → pass=1, fail=0, manual=1
assert profile["L1"] == {"passed": 1, "failed": 2, "manual": 0}
assert profile["L2"] == {"passed": 1, "failed": 0, "manual": 1}
def test_assessment_counts(self, cis_generator, populated_cis_compliance_data):
stats = cis_generator._compute_statistics(populated_cis_compliance_data)
assessment = stats["assessment_counts"]
# Automated: 1.1 PASS, 1.2 FAIL, 2.1 PASS, 2.2 FAIL → pass=2, fail=2, manual=0
# Manual: 1.3 MANUAL → pass=0, fail=0, manual=1
assert assessment["Automated"] == {"passed": 2, "failed": 2, "manual": 0}
assert assessment["Manual"] == {"passed": 0, "failed": 0, "manual": 1}
def test_top_failing_sections_includes_all_evaluated(
self, cis_generator, populated_cis_compliance_data
):
stats = cis_generator._compute_statistics(populated_cis_compliance_data)
top = stats["top_failing_sections"]
# Both sections have 1 PASS + 1 FAIL evaluated → tied at 50%. The
# sort is stable, so both must appear, and the list is capped at
# 5 entries.
assert len(top) == 2
section_names = {name for name, _ in top}
assert section_names == {
"1 Identity and Access Management",
"2 Storage",
}
def test_compute_statistics_is_memoized(
self, cis_generator, populated_cis_compliance_data
):
"""Calling ``_compute_statistics`` twice with the same data must
reuse the cached value and not re-run the uncached kernel."""
with patch.object(
CISReportGenerator,
"_compute_statistics_uncached",
wraps=cis_generator._compute_statistics_uncached,
) as spy:
cis_generator._compute_statistics(populated_cis_compliance_data)
cis_generator._compute_statistics(populated_cis_compliance_data)
assert spy.call_count == 1
# =============================================================================
# Executive summary
# =============================================================================
class TestCISExecutiveSummary:
def test_title_present(self, cis_generator, populated_cis_compliance_data):
elements = cis_generator.create_executive_summary(populated_cis_compliance_data)
paragraphs = [e for e in elements if isinstance(e, Paragraph)]
text = " ".join(str(p.text) for p in paragraphs)
assert "Executive Summary" in text
def test_tables_rendered(self, cis_generator, populated_cis_compliance_data):
elements = cis_generator.create_executive_summary(populated_cis_compliance_data)
tables = [e for e in elements if isinstance(e, Table)]
# Exact count: Summary, Profile, Assessment, Top Failing Sections = 4.
assert len(tables) == 4
def test_no_requirements(self, cis_generator, basic_cis_compliance_data):
basic_cis_compliance_data.requirements = []
basic_cis_compliance_data.attributes_by_requirement_id = {}
elements = cis_generator.create_executive_summary(basic_cis_compliance_data)
# With no requirements: Summary table always renders, and both Profile
# and Assessment breakdown tables render with a 0-filled default row,
# but Top Failing Sections is suppressed → exactly 3 tables.
tables = [e for e in elements if isinstance(e, Table)]
assert len(tables) == 3
# =============================================================================
# Charts section
# =============================================================================
class TestCISChartsSection:
def test_charts_rendered(self, cis_generator, populated_cis_compliance_data):
elements = cis_generator.create_charts_section(populated_cis_compliance_data)
# At least one chart image must render (pie, per-section bar, and stacked bar may each contribute)
images = [e for e in elements if isinstance(e, Image)]
assert len(images) >= 1
def test_charts_no_data_no_crash(self, cis_generator, basic_cis_compliance_data):
basic_cis_compliance_data.requirements = []
basic_cis_compliance_data.attributes_by_requirement_id = {}
elements = cis_generator.create_charts_section(basic_cis_compliance_data)
# Must not raise; may or may not have any Image
assert isinstance(elements, list)
# =============================================================================
# Requirements index
# =============================================================================
class TestCISRequirementsIndex:
def test_title_present(self, cis_generator, populated_cis_compliance_data):
elements = cis_generator.create_requirements_index(
populated_cis_compliance_data
)
paragraphs = [e for e in elements if isinstance(e, Paragraph)]
text = " ".join(str(p.text) for p in paragraphs)
assert "Requirements Index" in text
def test_groups_by_section(self, cis_generator, populated_cis_compliance_data):
elements = cis_generator.create_requirements_index(
populated_cis_compliance_data
)
paragraphs = [e for e in elements if isinstance(e, Paragraph)]
text = " ".join(str(p.text) for p in paragraphs)
assert "1 Identity and Access Management" in text
assert "2 Storage" in text
def test_renders_tables_per_section(
self, cis_generator, populated_cis_compliance_data
):
elements = cis_generator.create_requirements_index(
populated_cis_compliance_data
)
# One table per section with requirements. ``create_data_table``
# returns a LongTable when the row count exceeds its threshold and a
# plain Table otherwise — both are valid.
tables = [e for e in elements if isinstance(e, (Table, LongTable))]
assert len(tables) == 2
# =============================================================================
# Detailed findings extras hook
# =============================================================================
class TestRenderRequirementDetailExtras:
def test_inserts_all_fields(self, cis_generator, populated_cis_compliance_data):
req = populated_cis_compliance_data.requirements[1] # 1.2 FAIL
extras = cis_generator._render_requirement_detail_extras(
req, populated_cis_compliance_data
)
text = " ".join(str(p.text) for p in extras if isinstance(p, Paragraph))
assert "Rationale" in text
assert "Impact" in text
assert "Audit Procedure" in text
assert "Remediation" in text
assert "References" in text
def test_missing_metadata_returns_empty(
self, cis_generator, basic_cis_compliance_data
):
basic_cis_compliance_data.attributes_by_requirement_id = {}
req = RequirementData(id="99", description="unknown", status=StatusChoices.FAIL)
extras = cis_generator._render_requirement_detail_extras(
req, basic_cis_compliance_data
)
assert extras == []
def test_escapes_html_chars(self, cis_generator, basic_cis_compliance_data):
attr = _make_attr(
"1 IAM",
rationale="<script>alert('x')</script>",
)
basic_cis_compliance_data.attributes_by_requirement_id = {
"1.1": {"attributes": {"req_attributes": [attr], "checks": []}}
}
req = RequirementData(id="1.1", description="d", status=StatusChoices.FAIL)
extras = cis_generator._render_requirement_detail_extras(
req, basic_cis_compliance_data
)
text = " ".join(str(p.text) for p in extras if isinstance(p, Paragraph))
assert "<script>" not in text
assert "&lt;script&gt;" in text
# =============================================================================
# Cover page
# =============================================================================
class TestCISCoverPage:
@patch("tasks.jobs.reports.cis.Image")
def test_cover_page_has_logo(
self, mock_image, cis_generator, basic_cis_compliance_data
):
elements = cis_generator.create_cover_page(basic_cis_compliance_data)
assert len(elements) > 0
assert mock_image.call_count >= 1
def test_cover_page_title_includes_version(
self, cis_generator, basic_cis_compliance_data
):
elements = cis_generator.create_cover_page(basic_cis_compliance_data)
paragraphs = [e for e in elements if isinstance(e, Paragraph)]
content = " ".join(str(p.text) for p in paragraphs)
assert "CIS Benchmark" in content
assert "5.0" in content
def test_cover_page_title_includes_provider_when_set(
self, cis_generator, basic_cis_compliance_data
):
provider = Mock()
provider.provider = "aws"
provider.uid = "123456789012"
provider.alias = "test-account"
basic_cis_compliance_data.provider_obj = provider
elements = cis_generator.create_cover_page(basic_cis_compliance_data)
paragraphs = [e for e in elements if isinstance(e, Paragraph)]
content = " ".join(str(p.text) for p in paragraphs)
assert "AWS" in content
@@ -36,6 +36,7 @@ from api.models import (
Provider,
Resource,
Scan,
ScanSummary,
StateChoices,
StatusChoices,
)
@@ -3358,6 +3359,96 @@ class TestAggregateFindings:
regions = {s.region for s in summaries}
assert regions == {"us-east-1", "us-west-2"}
def test_aggregate_findings_is_idempotent_on_rerun(
self,
tenants_fixture,
scans_fixture,
findings_fixture,
):
"""Re-running `aggregate_findings` for the same scan must not violate
the `unique_scan_summary` constraint. The post-mute reaggregation
pipeline re-dispatches `perform_scan_summary_task` against scans
whose summaries already exist; upsert must update existing rows in
place (same primary keys) rather than inserting duplicates."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
value_columns = (
"check_id",
"service",
"severity",
"region",
"fail",
"_pass",
"muted",
"total",
)
aggregate_findings(str(tenant.id), str(scan.id))
first_run_ids = set(
ScanSummary.all_objects.filter(
tenant_id=tenant.id, scan_id=scan.id
).values_list("id", flat=True)
)
first_run_rows = list(
ScanSummary.all_objects.filter(tenant_id=tenant.id, scan_id=scan.id).values(
*value_columns
)
)
# Second invocation must not raise and must not duplicate rows.
aggregate_findings(str(tenant.id), str(scan.id))
second_run_ids = set(
ScanSummary.all_objects.filter(
tenant_id=tenant.id, scan_id=scan.id
).values_list("id", flat=True)
)
second_run_rows = list(
ScanSummary.all_objects.filter(tenant_id=tenant.id, scan_id=scan.id).values(
*value_columns
)
)
# Upsert preserves the original row identities; values stay stable
# because the underlying Finding set is unchanged between runs.
assert second_run_rows == first_run_rows
assert first_run_ids == second_run_ids
def test_aggregate_findings_reflects_mute_between_runs(
self,
tenants_fixture,
scans_fixture,
findings_fixture,
):
"""Re-running `aggregate_findings` after a finding is muted between
runs must move counters: the matching ScanSummary row's `fail`
decrements and `muted` increments. Guards against a regression where
upsert silently keeps stale values from the first run."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
finding1, _ = findings_fixture # finding1 is FAIL and not muted.
aggregate_findings(str(tenant.id), str(scan.id))
before = ScanSummary.all_objects.get(
tenant_id=tenant.id,
scan_id=scan.id,
check_id=finding1.check_id,
service="ec2",
severity=finding1.severity,
region="us-east-1",
)
assert before.fail == 1
assert before.muted == 0
Finding.all_objects.filter(pk=finding1.pk).update(muted=True)
aggregate_findings(str(tenant.id), str(scan.id))
after = ScanSummary.all_objects.get(pk=before.pk)
assert after.fail == 0
assert after.muted == 1
assert after.total == before.total
@pytest.mark.django_db
class TestAggregateFindingsByRegion:
@@ -13,6 +13,8 @@ from tasks.jobs.lighthouse_providers import (
_extract_bedrock_credentials,
)
from tasks.tasks import (
DJANGO_TMP_OUTPUT_DIRECTORY,
STALE_TMP_OUTPUT_MAX_AGE_HOURS,
_cleanup_orphan_scheduled_scans,
_perform_scan_complete_tasks,
check_integrations_task,
@@ -236,7 +238,8 @@ class TestGenerateOutputs:
self.provider_id = str(uuid.uuid4())
self.tenant_id = str(uuid.uuid4())
def test_no_findings_returns_early(self):
@patch("tasks.tasks._cleanup_stale_tmp_output_directories")
def test_no_findings_returns_early(self, mock_cleanup_stale_tmp_output_directories):
with patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter:
mock_filter.return_value.exists.return_value = False
@@ -248,6 +251,34 @@ class TestGenerateOutputs:
assert result == {"upload": False}
mock_filter.assert_called_once_with(scan_id=self.scan_id)
mock_cleanup_stale_tmp_output_directories.assert_called_once_with(
DJANGO_TMP_OUTPUT_DIRECTORY,
max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS,
exclude_scan=(self.tenant_id, self.scan_id),
)
@patch(
"tasks.tasks._cleanup_stale_tmp_output_directories",
side_effect=RuntimeError("cleanup boom"),
)
def test_cleanup_exception_does_not_break_no_findings_flow(
self, mock_cleanup_stale_tmp_output_directories
):
with patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter:
mock_filter.return_value.exists.return_value = False
result = generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
assert result == {"upload": False}
mock_cleanup_stale_tmp_output_directories.assert_called_once_with(
DJANGO_TMP_OUTPUT_DIRECTORY,
max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS,
exclude_scan=(self.tenant_id, self.scan_id),
)
@patch("tasks.tasks._upload_to_s3")
@patch("tasks.tasks._compress_output_files")
@@ -309,7 +340,7 @@ class TestGenerateOutputs:
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, MagicMock(name="CSVCompliance"))]},
{"aws": [(lambda _x: True, MagicMock(name="CSVCompliance"))]},
),
patch(
"tasks.tasks._generate_output_directory",
@@ -361,7 +392,7 @@ class TestGenerateOutputs:
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, MagicMock())]},
{"aws": [(lambda _x: True, MagicMock())]},
),
patch("tasks.tasks._compress_output_files", return_value="/tmp/compressed"),
patch("tasks.tasks._upload_to_s3", return_value=None),
@@ -441,7 +472,7 @@ class TestGenerateOutputs:
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, mock_compliance_class)]},
{"aws": [(lambda _x: True, mock_compliance_class)]},
),
):
mock_filter.return_value.exists.return_value = True
@@ -470,6 +501,10 @@ class TestGenerateOutputs:
class TrackingWriter:
def __init__(self, findings, file_path, file_extension, from_cli):
self.findings = findings
self.file_path = file_path
self.file_extension = file_extension
self.from_cli = from_cli
self.transform_called = 0
self.batch_write_data_to_file = MagicMock()
self._data = []
@@ -578,13 +613,13 @@ class TestGenerateOutputs:
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch(
"tasks.tasks.FindingOutput.transform_api_finding",
side_effect=lambda f, prov: f,
side_effect=lambda f, _prov: f,
),
patch("tasks.tasks._compress_output_files", return_value="outdir.zip"),
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/outdir.zip"),
patch(
"tasks.tasks.Scan.all_objects.filter",
return_value=MagicMock(update=lambda **kw: None),
return_value=MagicMock(update=lambda **_kw: None),
),
patch("tasks.tasks.batched", return_value=two_batches),
patch("tasks.tasks.OUTPUT_FORMATS_MAPPING", {}),
@@ -666,7 +701,7 @@ class TestGenerateOutputs:
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, mock_compliance_class)]},
{"aws": [(lambda _x: True, mock_compliance_class)]},
),
):
mock_filter.return_value.exists.return_value = True
@@ -748,7 +783,7 @@ class TestScanCompleteTasks:
@patch("tasks.tasks.can_provider_run_attack_paths_scan", return_value=False)
def test_scan_complete_tasks(
self,
mock_can_run_attack_paths,
_mock_can_run_attack_paths,
mock_attack_paths_task,
mock_check_integrations_task,
mock_compliance_reports_task,
@@ -994,7 +1029,7 @@ class TestCheckIntegrationsTask:
@patch("tasks.tasks.rmtree")
def test_generate_outputs_with_asff_for_aws_with_security_hub(
self,
mock_rmtree,
_mock_rmtree,
mock_scan_update,
mock_upload,
mock_compress,
@@ -1122,7 +1157,7 @@ class TestCheckIntegrationsTask:
@patch("tasks.tasks.rmtree")
def test_generate_outputs_no_asff_for_aws_without_security_hub(
self,
mock_rmtree,
_mock_rmtree,
mock_scan_update,
mock_upload,
mock_compress,
@@ -1245,7 +1280,7 @@ class TestCheckIntegrationsTask:
@patch("tasks.tasks.rmtree")
def test_generate_outputs_no_asff_for_non_aws_provider(
self,
mock_rmtree,
_mock_rmtree,
mock_scan_update,
mock_upload,
mock_compress,
@@ -2359,11 +2394,26 @@ class TestReaggregateAllFindingGroupSummaries:
def setup_method(self):
self.tenant_id = str(uuid.uuid4())
@patch("tasks.tasks.chain")
@patch("tasks.tasks.group")
@patch("tasks.tasks.aggregate_attack_surface_task")
@patch("tasks.tasks.aggregate_scan_category_summaries_task")
@patch("tasks.tasks.aggregate_scan_resource_group_summaries_task")
@patch("tasks.tasks.aggregate_finding_group_summaries_task")
@patch("tasks.tasks.aggregate_daily_severity_task")
@patch("tasks.tasks.perform_scan_summary_task")
@patch("tasks.tasks.Scan.objects.filter")
def test_dispatches_subtasks_for_each_provider_per_day(
self, mock_scan_filter, mock_agg_task, mock_group
self,
mock_scan_filter,
mock_scan_summary_task,
mock_daily_severity_task,
mock_finding_group_task,
mock_resource_group_task,
mock_category_task,
mock_attack_surface_task,
mock_group,
mock_chain,
):
provider_id_1 = uuid.uuid4()
provider_id_2 = uuid.uuid4()
@@ -2373,8 +2423,13 @@ class TestReaggregateAllFindingGroupSummaries:
today = datetime.now(tz=timezone.utc)
yesterday = today - timedelta(days=1)
mock_group_result = MagicMock()
mock_group.side_effect = lambda gen: (list(gen), mock_group_result)[1]
mock_outer_group_result = MagicMock()
# The first `group()` call wraps the inner parallel step; subsequent
# calls wrap the outer per-scan generator.
mock_group.side_effect = lambda *args, **kwargs: (
list(args[0]) if args and hasattr(args[0], "__iter__") else None,
mock_outer_group_result,
)[1]
mock_scan_filter.return_value.order_by.return_value.values.return_value = [
{
@@ -2397,23 +2452,49 @@ class TestReaggregateAllFindingGroupSummaries:
result = reaggregate_all_finding_group_summaries_task(tenant_id=self.tenant_id)
assert result == {"scans_reaggregated": 3}
assert mock_agg_task.si.call_count == 3
mock_agg_task.si.assert_any_call(
tenant_id=self.tenant_id, scan_id=str(scan_id_today_p1)
)
mock_agg_task.si.assert_any_call(
tenant_id=self.tenant_id, scan_id=str(scan_id_today_p2)
)
mock_agg_task.si.assert_any_call(
tenant_id=self.tenant_id, scan_id=str(scan_id_yesterday_p1)
)
mock_group_result.apply_async.assert_called_once()
expected_scan_ids = {
str(scan_id_today_p1),
str(scan_id_today_p2),
str(scan_id_yesterday_p1),
}
for task_mock in (
mock_scan_summary_task,
mock_daily_severity_task,
mock_finding_group_task,
mock_resource_group_task,
mock_category_task,
mock_attack_surface_task,
):
assert task_mock.si.call_count == 3
dispatched = {
call.kwargs["scan_id"] for call in task_mock.si.call_args_list
}
assert dispatched == expected_scan_ids
for call in task_mock.si.call_args_list:
assert call.kwargs["tenant_id"] == self.tenant_id
assert mock_chain.call_count == 3
mock_outer_group_result.apply_async.assert_called_once()
@patch("tasks.tasks.chain")
@patch("tasks.tasks.group")
@patch("tasks.tasks.aggregate_attack_surface_task")
@patch("tasks.tasks.aggregate_scan_category_summaries_task")
@patch("tasks.tasks.aggregate_scan_resource_group_summaries_task")
@patch("tasks.tasks.aggregate_finding_group_summaries_task")
@patch("tasks.tasks.aggregate_daily_severity_task")
@patch("tasks.tasks.perform_scan_summary_task")
@patch("tasks.tasks.Scan.objects.filter")
def test_dedupes_scans_to_latest_per_provider_per_day(
self, mock_scan_filter, mock_agg_task, mock_group
self,
mock_scan_filter,
mock_scan_summary_task,
mock_daily_severity_task,
mock_finding_group_task,
mock_resource_group_task,
mock_category_task,
mock_attack_surface_task,
mock_group,
mock_chain,
):
"""When several scans run on the same day for the same provider, only
the latest one is dispatched (matching the daily summary unique key)."""
@@ -2423,8 +2504,11 @@ class TestReaggregateAllFindingGroupSummaries:
today_late = datetime.now(tz=timezone.utc)
today_early = today_late - timedelta(hours=4)
mock_group_result = MagicMock()
mock_group.side_effect = lambda gen: (list(gen), mock_group_result)[1]
mock_outer_group_result = MagicMock()
mock_group.side_effect = lambda *args, **kwargs: (
list(args[0]) if args and hasattr(args[0], "__iter__") else None,
mock_outer_group_result,
)[1]
# Returned ordered by `-completed_at`, so the most recent comes first.
mock_scan_filter.return_value.order_by.return_value.values.return_value = [
@@ -2443,17 +2527,30 @@ class TestReaggregateAllFindingGroupSummaries:
result = reaggregate_all_finding_group_summaries_task(tenant_id=self.tenant_id)
assert result == {"scans_reaggregated": 1}
mock_agg_task.si.assert_called_once_with(
tenant_id=self.tenant_id, scan_id=str(latest_scan_today)
)
mock_group_result.apply_async.assert_called_once()
for task_mock in (
mock_scan_summary_task,
mock_daily_severity_task,
mock_finding_group_task,
mock_resource_group_task,
mock_category_task,
mock_attack_surface_task,
):
task_mock.si.assert_called_once_with(
tenant_id=self.tenant_id, scan_id=str(latest_scan_today)
)
mock_chain.assert_called_once()
mock_outer_group_result.apply_async.assert_called_once()
@patch("tasks.tasks.chain")
@patch("tasks.tasks.group")
@patch("tasks.tasks.Scan.objects.filter")
def test_no_completed_scans_skips_dispatch(self, mock_scan_filter, mock_group):
def test_no_completed_scans_skips_dispatch(
self, mock_scan_filter, mock_group, mock_chain
):
mock_scan_filter.return_value.order_by.return_value.values.return_value = []
result = reaggregate_all_finding_group_summaries_task(tenant_id=self.tenant_id)
assert result == {"scans_reaggregated": 0}
mock_group.assert_not_called()
mock_chain.assert_not_called()
@@ -203,10 +203,10 @@ For detailed authentication configuration, see the [Authentication documentation
## Regions
Alibaba Cloud has multiple regions across the globe. By default, Prowler audits all available regions. You can specify specific regions using the `--regions` CLI argument:
Alibaba Cloud has multiple regions across the globe. By default, Prowler audits all available regions. You can specify specific regions using the `--region` CLI argument:
```bash
prowler alibabacloud --regions cn-hangzhou cn-shanghai
prowler alibabacloud --region cn-hangzhou cn-shanghai
```
The list of supported regions is maintained in [`prowler/providers/alibabacloud/config.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/alibabacloud/config.py).
@@ -8,7 +8,7 @@ Prowler supports multiple output formats, allowing users to tailor findings pres
- Output Organization in Prowler
Prowler outputs are managed within the `/lib/outputs` directory. Each format—such as JSON, CSV, HTML—is implemented as a Python class.
Prowler outputs are managed within the `/lib/outputs` directory. Each format—such as JSON, CSV, HTML, SARIF—is implemented as a Python class.
- Outputs are generated based on scan findings, which are stored as structured dictionaries containing details such as:
@@ -164,6 +164,13 @@
}
]
},
{
"group": "CI/CD",
"pages": [
"user-guide/tutorials/prowler-app-github-action",
"user-guide/cookbooks/cicd-pipeline"
]
},
{
"group": "CLI",
"pages": [
@@ -25,7 +25,12 @@ If you prefer the former verbose output, use: `--verbose`. This allows seeing mo
## Report Generation
By default, Prowler generates CSV, JSON-OCSF, and HTML reports. To generate a JSON-ASFF report (used by AWS Security Hub), specify `-M` or `--output-modes`:
By default, Prowler generates CSV, JSON-OCSF, and HTML reports. Additional provider-specific formats are available:
* **JSON-ASFF** (AWS only): Used by AWS Security Hub
* **SARIF** (IaC only): Used by GitHub Code Scanning
To specify output formats, use the `-M` or `--output-modes` flag:
```console
prowler <provider> -M csv json-asff json-ocsf html
Binary file not shown (new image, 134 KiB)
Binary file not shown (new image, 62 KiB)
Binary file not shown (new image, 58 KiB)
@@ -61,6 +61,7 @@ Prowler natively supports the following reporting output formats:
- JSON-OCSF
- JSON-ASFF (AWS only)
- HTML
- SARIF (IaC only)
Hereunder is the structure for each of the supported report formats by Prowler:
@@ -368,6 +369,29 @@ Each finding is a `json` object within a list.
The following image is an example of the HTML output:
<img src="/images/cli/reporting/html-output.png" />
### SARIF (IaC Only)
import { VersionBadge } from "/snippets/version-badge.mdx"
<VersionBadge version="5.25.0" />
The SARIF (Static Analysis Results Interchange Format) output generates a [SARIF 2.1.0](https://docs.oasis-open.org/sarif/sarif/v2.1.0/sarif-v2.1.0.html) document compatible with GitHub Code Scanning and other SARIF-compatible tools. This format is exclusively available for the IaC provider, as it is designed for static analysis results that reference specific files and line numbers.
```console
prowler iac --scan-repository-url https://github.com/user/repo -M sarif
```
<Note>
The SARIF output format is only available when using the `iac` provider. Attempting to use it with other providers results in an error.
</Note>
The SARIF output includes:
* **Rules:** Each unique check ID produces a rule entry with severity, description, remediation, and a markdown help panel.
* **Results:** Only failed (non-muted) findings are included, with file paths and line numbers for precise annotation.
* **Severity mapping:** Prowler severities map to SARIF levels (`critical`/`high` → `error`, `medium` → `warning`, `low`/`informational` → `note`).
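For orientation, this documented mapping can be expressed as a small Python sketch. It is illustrative only; the actual logic lives in `prowler.lib.outputs.sarif`, and the fallback value below is an assumption rather than confirmed behavior:
```python
# Illustrative sketch of the documented severity-to-SARIF-level mapping.
# The fallback of "warning" for unknown severities is an assumption, not
# necessarily what Prowler's SARIF writer does.
SEVERITY_TO_SARIF_LEVEL = {
    "critical": "error",
    "high": "error",
    "medium": "warning",
    "low": "note",
    "informational": "note",
}


def sarif_level(prowler_severity: str) -> str:
    """Map a Prowler severity string to a SARIF result level."""
    return SEVERITY_TO_SARIF_LEVEL.get(prowler_severity.lower(), "warning")
```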
## V4 Deprecations
Some deprecations have been made to unify formats and improve outputs.
@@ -2,6 +2,10 @@
title: 'Run Prowler in CI/CD and Send Findings to Prowler Cloud'
---
<Warning>
For new projects, use the official [Prowler GitHub Action](/user-guide/tutorials/prowler-app-github-action) — a Docker-based reusable action that runs scans, optionally pushes findings to Prowler Cloud, and uploads SARIF results to GitHub Code Scanning. The GitHub Actions examples below document the legacy pip-based flow.
</Warning>
This cookbook demonstrates how to integrate Prowler into CI/CD pipelines so that security scans run automatically and findings are sent to Prowler Cloud via [Import Findings](/user-guide/tutorials/prowler-app-import-findings). Examples cover GitHub Actions and GitLab CI.
## Prerequisites
@@ -2,7 +2,7 @@
title: 'Alibaba Cloud Authentication in Prowler'
---
Prowler requires Alibaba Cloud credentials to perform security checks. Authentication is supported via multiple methods, prioritized as follows:
Prowler supports multiple Alibaba Cloud authentication flows. If more than one is configured at the same time, the provider resolves them in this order:
1. **Credentials URI**
2. **OIDC Role Authentication**
@@ -12,119 +12,325 @@ Prowler requires Alibaba Cloud credentials to perform security checks. Authentic
6. **Permanent Access Keys**
7. **Default Credential Chain**
## Authentication Methods
<Warning>
Do not use the AccessKey pair of the main Alibaba Cloud account for Prowler. Use a RAM user, a RAM role, or another temporary credential flow instead.
</Warning>
### Credentials URI (Recommended for Centralized Services)
## Choose The Right Method
Prowler can retrieve credentials from an external URI endpoint. Provide the URI via the `--credentials-uri` flag or the `ALIBABA_CLOUD_CREDENTIALS_URI` environment variable. The URI must return credentials in the standard JSON format.
| Where Prowler runs | What you need to create | Recommended method |
| --- | --- | --- |
| Local workstation | RAM user + AccessKey pair | [RAM User And AccessKey](#ram-user-and-accesskey) |
| CI runner outside Alibaba Cloud | RAM user + AccessKey pair, optionally a target RAM role | [RAM Role Assumption](#ram-role-assumption-recommended) |
| ECS instance | ECS RAM role attached to the instance | [ECS RAM Role](#ecs-ram-role) |
| ACK / Kubernetes | OIDC IdP + RAM role + OIDC token file | [OIDC Role Authentication](#oidc-role-authentication) |
| Internal credential broker | An HTTP endpoint that returns STS credentials | [Credentials URI](#credentials-uri) |
## RAM User And AccessKey
This is the simplest setup for a workstation or a basic CI runner.
### Create The RAM User
1. Open the [RAM console](https://ram.console.alibabacloud.com/).
2. Go to `Identities` > `Users`.
3. Click `Create User`.
4. Enter a logon name and display name.
5. In `Access Configuration`, select `Permanent AccessKey`.
![Create a RAM user and enable Permanent AccessKey](./img/create_user.png)
6. Save the generated `AccessKey ID` and `AccessKey Secret` immediately. Alibaba Cloud only shows the secret once.
7. Grant the user the read permissions required for the Alibaba Cloud services you want Prowler to scan.
![Grant permissions to the RAM user](./img/grant_permissions.png)
Alibaba Cloud walkthroughs with current console screenshots:
- [Create a RAM user](https://www.alibabacloud.com/help/en/ram/user-guide/create-a-ram-user)
- [Create an AccessKey pair](https://www.alibabacloud.com/help/en/ram/user-guide/create-an-accesskey-pair)
- [Grant permissions to a RAM user](https://www.alibabacloud.com/help/en/ram/user-guide/grant-permissions-to-the-ram-user)
### Use The AccessKey With Prowler
```bash
# Using CLI flag
prowler alibabacloud --credentials-uri http://localhost:8080/credentials
# Or using environment variable
export ALIBABA_CLOUD_CREDENTIALS_URI="http://localhost:8080/credentials"
prowler alibabacloud
```
### OIDC Role Authentication (Recommended for ACK/Kubernetes)
OIDC authentication assumes the specified role using an OIDC token. This is the most secure method for containerized applications running in ACK (Alibaba Container Service for Kubernetes) with RRSA enabled.
The role ARN can be provided via the `--oidc-role-arn` flag or the `ALIBABA_CLOUD_ROLE_ARN` environment variable. The OIDC provider ARN and token file must be set via environment variables:
- `ALIBABA_CLOUD_OIDC_PROVIDER_ARN`
- `ALIBABA_CLOUD_OIDC_TOKEN_FILE`
```bash
# Using CLI flag for role ARN
export ALIBABA_CLOUD_OIDC_PROVIDER_ARN="acs:ram::123456789012:oidc-provider/ack-rrsa-provider"
export ALIBABA_CLOUD_OIDC_TOKEN_FILE="/var/run/secrets/tokens/oidc-token"
prowler alibabacloud --oidc-role-arn acs:ram::123456789012:role/YourRole
# Or using all environment variables
export ALIBABA_CLOUD_ROLE_ARN="acs:ram::123456789012:role/YourRole"
export ALIBABA_CLOUD_OIDC_PROVIDER_ARN="acs:ram::123456789012:oidc-provider/ack-rrsa-provider"
export ALIBABA_CLOUD_OIDC_TOKEN_FILE="/var/run/secrets/tokens/oidc-token"
prowler alibabacloud
```
### ECS RAM Role (Recommended for ECS Instances)
When running on an ECS instance with an attached RAM role, Prowler can obtain credentials from the ECS instance metadata service.
```bash
# Using CLI argument
prowler alibabacloud --ecs-ram-role RoleName
# Or using environment variable
export ALIBABA_CLOUD_ECS_METADATA="RoleName"
prowler alibabacloud
```
### RAM Role Assumption (Recommended for Cross-Account)
For cross-account access, use RAM role assumption. Provide the initial credentials (access keys) via environment variables and the target role ARN via the `--role-arn` flag or the `ALIBABA_CLOUD_ROLE_ARN` environment variable.
The `--role-session-name` flag customizes the session identifier (defaults to `ProwlerAssessmentSession`).
```bash
# Using CLI flags
export ALIBABA_CLOUD_ACCESS_KEY_ID="your-access-key-id"
export ALIBABA_CLOUD_ACCESS_KEY_SECRET="your-access-key-secret"
prowler alibabacloud --role-arn acs:ram::123456789012:role/ProwlerAuditRole --role-session-name MyAuditSession
# Or using all environment variables
export ALIBABA_CLOUD_ACCESS_KEY_ID="your-access-key-id"
export ALIBABA_CLOUD_ACCESS_KEY_SECRET="your-access-key-secret"
export ALIBABA_CLOUD_ROLE_ARN="acs:ram::123456789012:role/ProwlerAuditRole"
prowler alibabacloud
```
### STS Temporary Credentials
Prowler also accepts `ALIYUN_ACCESS_KEY_ID` and `ALIYUN_ACCESS_KEY_SECRET` for compatibility, but `ALIBABA_CLOUD_*` is the preferred naming.
If you already have temporary STS credentials, you can provide them via environment variables.
### Use The Default Credential Chain
If you prefer not to export credentials in every shell, you can store them with the Alibaba Cloud CLI and let Prowler reuse the default credential chain from `~/.aliyun/config.json`.
```bash
aliyun configure --mode AK
prowler alibabacloud
```
For profile management details, see Alibaba Cloud's [CLI credential management guide](https://www.alibabacloud.com/help/en/cli/other-configure-command-operations).
## RAM Role Assumption (Recommended)
Use this when:
- you want short-lived credentials instead of long-lived AccessKeys in Prowler,
- you are scanning another Alibaba Cloud account, or
- you are configuring Alibaba Cloud in Prowler Cloud and want to provide a `Role ARN`.
This flow has two parts:
1. A source identity that can call `sts:AssumeRole`.
2. A target RAM role that has the scan permissions.
### Create The Source Identity
Create a RAM user with an AccessKey pair by following the steps in [RAM User And AccessKey](#ram-user-and-accesskey), or reuse an existing automation identity.
### Create The Target Role
1. Open the [RAM console](https://ram.console.alibabacloud.com/).
2. Go to `Identities` > `Roles`.
3. Click `Create Role`.
4. Set `Principal Type` to `Cloud Account`.
5. Choose:
- `Current Account` if the RAM user and the role are in the same account.
- `Other Account` if the RAM user belongs to a different Alibaba Cloud account.
6. Give the role a name such as `ProwlerAuditRole`.
7. Attach the scan permissions to the role.
8. Copy the role ARN in the format `acs:ram::<account-id>:role/<role-name>`.
If you want to restrict the role so that only one RAM user or one RAM role can assume it, edit the trust policy accordingly.
Helpful references:
- [Create a RAM role for a trusted Alibaba Cloud account](https://www.alibabacloud.com/help/en/ram/user-guide/create-a-ram-role-for-a-trusted-alibaba-cloud-account)
- [Assume a RAM role](https://www.alibabacloud.com/help/doc-detail/116820.html)
### Allow The Source Identity To Assume The Role
The source RAM user must be able to call `sts:AssumeRole`.
The easiest starting point is to attach Alibaba Cloud's `AliyunSTSAssumeRoleAccess` policy to that RAM user. If you want tighter scope, attach a custom policy limited to the target role ARN.
### Run Prowler
```bash
export ALIBABA_CLOUD_ACCESS_KEY_ID="source-user-access-key-id"
export ALIBABA_CLOUD_ACCESS_KEY_SECRET="source-user-access-key-secret"
prowler alibabacloud \
--role-arn acs:ram::123456789012:role/ProwlerAuditRole \
--role-session-name ProwlerAssessmentSession
```
You can also set the role ARN with `ALIBABA_CLOUD_ROLE_ARN`, but the source AccessKey pair is still required for this flow.
## STS Temporary Credentials
Use this if another tool already gives you a temporary `AccessKey ID`, `AccessKey Secret`, and `SecurityToken`.
This is common when:
- a CI platform brokers Alibaba credentials for the job,
- your internal tooling already calls `AssumeRole`, or
- you want to test with a short-lived session before switching to a RAM role flow.
```bash
export ALIBABA_CLOUD_ACCESS_KEY_ID="your-sts-access-key-id"
export ALIBABA_CLOUD_ACCESS_KEY_SECRET="your-sts-access-key-secret"
export ALIBABA_CLOUD_SECURITY_TOKEN="your-sts-security-token"
prowler alibabacloud
```
### Permanent Access Keys
You can use standard permanent access keys via environment variables.
You can also store the session in the Alibaba CLI configuration:
```bash
export ALIBABA_CLOUD_ACCESS_KEY_ID="your-access-key-id"
export ALIBABA_CLOUD_ACCESS_KEY_SECRET="your-access-key-secret"
aliyun configure --mode StsToken
prowler alibabacloud
```
## Required Permissions
<Note>
Prowler does not mint standalone STS sessions for you. If you use this method, you must provide all three STS values from your external workflow.
</Note>
The credentials used by Prowler should have the minimum required permissions to audit the resources. At a minimum, the following permissions are recommended:
## ECS RAM Role
- `ram:GetUser`
- `ram:ListUsers`
- `ram:GetPasswordPolicy`
- `ram:GetAccountSummary`
- `ram:ListVirtualMFADevices`
- `ram:ListGroups`
- `ram:ListPolicies`
- `ram:ListAccessKeys`
- `ram:GetLoginProfile`
- `ram:ListPoliciesForUser`
- `ram:ListGroupsForUser`
- `actiontrail:DescribeTrails`
- `oss:GetBucketLogging`
- `oss:GetBucketAcl`
- `rds:DescribeDBInstances`
- `rds:DescribeDBInstanceAttribute`
- `ecs:DescribeInstances`
- `vpc:DescribeVpcs`
- `sls:ListProject`
- `sls:ListAlerts`
- `sls:ListLogStores`
- `sls:GetLogStore`
Use this when Prowler runs on an ECS instance and you do not want to store any AccessKeys on disk.
### Create And Attach The Role
1. Open the [RAM console](https://ram.console.alibabacloud.com/).
2. Go to `Identities` > `Roles`.
3. Click `Create Role`.
4. Set the trusted entity to `Alibaba Cloud Service`.
5. Select `ECS` as the trusted service.
6. Attach the read permissions required for the scan.
7. Attach that RAM role to the ECS instance that runs Prowler.
Alibaba Cloud guide:
- [Instance RAM roles](https://www.alibabacloud.com/help/en/doc-detail/54579.html)
### Run Prowler
```bash
prowler alibabacloud --ecs-ram-role ProwlerEcsRole
```
Or:
```bash
export ALIBABA_CLOUD_ECS_METADATA="ProwlerEcsRole"
prowler alibabacloud
```
## OIDC Role Authentication
Use this when Prowler runs in ACK or another Kubernetes environment that provides an OIDC token file.
### Create The OIDC Identity Provider
1. Open the [RAM console](https://ram.console.alibabacloud.com/).
2. Go to `Integrations` > `SSO`.
3. Select `Role-based SSO`, then the `OIDC` tab.
4. Click `Create IdP`.
5. Fill in:
- `IdP Name`
- `Issuer URL`
- `Fingerprint`
- `Client ID`
6. Create the IdP and note its ARN.
Alibaba Cloud guides:
- [Manage an OIDC IdP](https://www.alibabacloud.com/help/en/ram/manage-an-oidc-idp)
- [Overview of role-based OIDC SSO](https://www.alibabacloud.com/help/en/ram/overview-of-oidc-based-sso)
### Create The RAM Role Trusted By That IdP
Create a RAM role whose trusted entity is the OIDC IdP, then attach the scan permissions to that role.
If you are running in ACK with RRSA, this is typically the role bound to the service account that runs Prowler.
### Provide The OIDC Variables To Prowler
Prowler currently expects:
- `--oidc-role-arn` for the RAM role ARN,
- `ALIBABA_CLOUD_OIDC_PROVIDER_ARN` for the OIDC provider ARN,
- `ALIBABA_CLOUD_OIDC_TOKEN_FILE` for the token file path.
Example:
```bash
export ALIBABA_CLOUD_OIDC_PROVIDER_ARN="acs:ram::123456789012:oidc-provider/ack-rrsa-provider"
export ALIBABA_CLOUD_OIDC_TOKEN_FILE="/var/run/secrets/ack.alibabacloud.com/rrsa-tokens/token"
prowler alibabacloud --oidc-role-arn acs:ram::123456789012:role/ProwlerAckRole
```
If you use ACK RRSA, Alibaba's `ack-pod-identity-webhook` can inject the three required environment variables and mount the token file into the pod automatically:
- [ack-pod-identity-webhook](https://www.alibabacloud.com/help/en/cs/user-guide/ack-pod-identity-webhook)
- [Use RRSA to authorize different pods to access different cloud services](https://www.alibabacloud.com/help/doc-detail/356611.html)
<Note>
Even if your pod already exposes `ALIBABA_CLOUD_ROLE_ARN`, use `--oidc-role-arn` with Prowler. The provider currently reads the role ARN for OIDC from the CLI argument.
</Note>
## Credentials URI
Use this only if you already operate an internal credential broker that returns temporary Alibaba Cloud credentials over HTTP.
The endpoint must return a JSON body with this structure:
```json
{
"Code": "Success",
"AccessKeyId": "STS.xxxxx",
"AccessKeySecret": "xxxxx",
"SecurityToken": "xxxxx",
"Expiration": "2026-04-23T10:00:00Z"
}
```
Run Prowler with:
```bash
prowler alibabacloud --credentials-uri http://localhost:8080/credentials
```
Or:
```bash
export ALIBABA_CLOUD_CREDENTIALS_URI="http://localhost:8080/credentials"
prowler alibabacloud
```
For the expected response format, see Alibaba Cloud's SDK guide for [URI credentials](https://www.alibabacloud.com/help/en/sdk/developer-reference/v2-manage-access-credentials).
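For local experimentation only, a placeholder broker can be sketched with Python's standard library. The port matches the examples above and the credential values are dummies; a real broker must return live STS credentials from your own issuance workflow:
```python
# Minimal illustrative credential broker for local testing only.
# The credential values are placeholders; do not use this in production.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class CredentialsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond with the JSON structure Prowler expects from --credentials-uri.
        body = json.dumps(
            {
                "Code": "Success",
                "AccessKeyId": "STS.placeholder",
                "AccessKeySecret": "placeholder-secret",
                "SecurityToken": "placeholder-token",
                "Expiration": "2026-04-23T10:00:00Z",
            }
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), CredentialsHandler).serve_forever()
```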
## Permissions Guidance
The exact minimum policy depends on the checks and services you enable.
If you are using the RAM console's `Grant Permission` screen, search for the **system policy names** below. Alibaba Cloud often uses product policy names that differ from the service name shown in Prowler.
### System Policies In The RAM Console
| Prowler use case | Policy name in RAM console | Notes |
| --- | --- | --- |
| Source user for `--role-arn` | `AliyunSTSAssumeRoleAccess` | Grants `sts:AssumeRole` so the source identity can assume the scan role. |
| RAM checks | `AliyunRAMReadOnlyAccess` | Covers RAM read APIs such as users, groups, policies, MFA devices, and account alias. |
| ECS checks | `AliyunECSReadOnlyAccess` | Read-only ECS access. |
| VPC checks | `AliyunVPCReadOnlyAccess` | Read-only VPC access. |
| OSS checks | `AliyunOSSReadOnlyAccess` | Read-only OSS access. |
| ActionTrail checks | `AliyunActionTrailReadOnlyAccess` | Read-only ActionTrail access. |
| SLS checks | `AliyunLogReadOnlyAccess` | In the RAM console, Simple Log Service appears as `Log`. |
| RDS checks | `AliyunRDSReadOnlyAccess` | Read-only RDS access. |
| ACK / Container Service checks | `AliyunCSReadOnlyAccess` | In the RAM console, ACK permissions appear under `CS`. |
| Security Center checks | `AliyunYundunSASReadOnlyAccess` | In the RAM console, Security Center appears under `Yundun SAS`. |
### Recommended Starting Point
For a broad Alibaba Cloud scan, the identity used by Prowler usually needs read access to the services Prowler currently audits, including:
- `RAM`
- `ECS`
- `VPC`
- `OSS`
- `ActionTrail`
- `Simple Log Service (SLS)`
- `RDS`
- `Container Service / ACK`
- `Security Center`
Use the following setup as a practical starting point:
- If you use **static AccessKeys**, attach the read-only policies above directly to the RAM user used by Prowler.
- If you use **RAM role assumption**, attach `AliyunSTSAssumeRoleAccess` to the source RAM user and attach the read-only policies above to the target scan role.
- If you use **ECS RAM role** or **OIDC/RRSA**, attach the read-only policies above to the role assumed by Prowler.
If you prefer a tighter custom policy instead of system policies, the current provider relies on read APIs such as:
- `ram:Get*`, `ram:List*`
- `ecs:Describe*`
- `vpc:Describe*`
- `oss:Get*`, `oss:List*`
- `actiontrail:Describe*`
- `log:Get*`, `log:List*`, `log:Query*`
- `rds:Describe*`
- `cs:Get*`, `cs:List*`, `cs:Describe*`
- `yundun-sas:Get*`, `yundun-sas:Describe*`, `yundun-sas:List*`
<Note>
If a service is denied, Prowler can still start, but checks for that service may fail or return incomplete results.
</Note>
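If you script your RAM setup, the wildcard actions above can be assembled into a custom policy document. The following sketch assumes Alibaba Cloud's RAM policy syntax (`Version` set to `"1"`); review and trim the action list before attaching the result to any identity:
```python
# Illustrative helper that builds a read-only RAM policy document from the
# wildcard actions listed above. This is a sketch, not an officially
# published Prowler policy.
import json

READ_ONLY_ACTIONS = [
    "ram:Get*", "ram:List*",
    "ecs:Describe*",
    "vpc:Describe*",
    "oss:Get*", "oss:List*",
    "actiontrail:Describe*",
    "log:Get*", "log:List*", "log:Query*",
    "rds:Describe*",
    "cs:Get*", "cs:List*", "cs:Describe*",
    "yundun-sas:Get*", "yundun-sas:Describe*", "yundun-sas:List*",
]

policy = {
    "Version": "1",  # Alibaba Cloud RAM policies use Version "1"
    "Statement": [
        {"Effect": "Allow", "Action": READ_ONLY_ACTIONS, "Resource": ["*"]}
    ],
}

print(json.dumps(policy, indent=2))
```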
@@ -12,9 +12,9 @@ Before you begin, make sure you have:
1. An **Alibaba Cloud Account ID** (visible in the Alibaba Cloud Console under your profile).
2. **Credentials** with appropriate permissions:
- **RAM User with Access Keys**: For static credential authentication.
- **RAM Role**: For cross-account access using role assumption (recommended).
3. The required permissions for Prowler to audit your resources. See the [Alibaba Cloud Authentication](/user-guide/providers/alibabacloud/authentication) guide for the full list of required permissions.
- **RAM User with Access Keys**: For local CLI usage or simple CI setups. See [RAM User and AccessKey](/user-guide/providers/alibabacloud/authentication#ram-user-and-accesskey).
- **RAM Role**: For role assumption and Prowler Cloud onboarding. See [RAM Role Assumption](/user-guide/providers/alibabacloud/authentication#ram-role-assumption-recommended).
3. The required permissions for Prowler to audit your resources. See the [Alibaba Cloud Authentication](/user-guide/providers/alibabacloud/authentication) guide for setup steps and permission guidance.
<CardGroup cols={2}>
<Card title="Prowler Cloud" icon="cloud" href="#prowler-cloud">
@@ -64,7 +64,7 @@ After the Account ID is in place, select the authentication method that matches
#### RAM Role Assumption (Recommended)
Use this method for secure cross-account access. For detailed instructions on how to create the RAM role, see the [Authentication guide](/user-guide/providers/alibabacloud/authentication#ram-role-assumption-recommended-for-cross-account).
Use this method for secure cross-account access. For detailed instructions on how to create the RAM role, see the [Authentication guide](/user-guide/providers/alibabacloud/authentication#ram-role-assumption-recommended).
1. Enter the **Role ARN** (format: `acs:ram::<account-id>:role/<role-name>`)
2. Enter the **Access Key ID** and **Access Key Secret** of the RAM user that will assume the role
@@ -77,7 +77,7 @@ The RAM user whose credentials you provide must have permission to assume the ta
#### Credentials (Static Access Keys)
Use static credentials for quick scans (not recommended for production). For detailed setup, see the [Authentication guide](/user-guide/providers/alibabacloud/authentication#permanent-access-keys).
Use static credentials for quick scans (not recommended for production). For detailed setup, see the [Authentication guide](/user-guide/providers/alibabacloud/authentication#ram-user-and-accesskey).
1. Enter the **Access Key ID** and **Access Key Secret**
@@ -104,7 +104,7 @@ You can also run Alibaba Cloud assessments directly from the CLI. Both command-l
### Step 1: Select an Authentication Method
Choose one of the following authentication methods. For the complete list and detailed configuration, see the [Authentication guide](/user-guide/providers/alibabacloud/authentication).
Choose one of the following authentication methods. For step-by-step credential creation and the full list of supported authentication modes, see the [Authentication guide](/user-guide/providers/alibabacloud/authentication).
#### Environment Variables
@@ -114,6 +114,13 @@ export ALIBABA_CLOUD_ACCESS_KEY_SECRET="your-access-key-secret"
prowler alibabacloud
```
#### Default Credential Chain
```bash
aliyun configure --mode AK
prowler alibabacloud
```
#### RAM Role Assumption
```bash
@@ -146,7 +153,7 @@ prowler alibabacloud
#### Scan specific regions
```bash
prowler alibabacloud --regions cn-hangzhou cn-shanghai
prowler alibabacloud --region cn-hangzhou cn-shanghai
```
#### Run specific checks
Binary file not shown (new image, 134 KiB)
Binary file not shown (new image, 282 KiB)
@@ -29,7 +29,7 @@ Prowler IaC provider scans the following Infrastructure as Code configurations f
- For remote repository scans, authentication can be provided via [git URL](https://git-scm.com/docs/git-clone#_git_urls), CLI flags or environment variables.
- Check the [IaC Authentication](/user-guide/providers/iac/authentication) page for more details.
- Mutelist logic ([filtering](https://trivy.dev/latest/docs/configuration/filtering/)) is handled by Trivy, not Prowler.
- Results are output in the same formats as other Prowler providers (CSV, JSON, HTML, etc.).
- Results are output in the same formats as other Prowler providers (CSV, JSON-OCSF, HTML), plus [SARIF](/user-guide/cli/tutorials/reporting#sarif-iac-only) for GitHub Code Scanning integration.
## Prowler Cloud
@@ -140,8 +140,20 @@ prowler iac --scan-path ./my-iac-directory --exclude-path ./my-iac-directory/tes
### Output
Use the standard Prowler output options, for example:
Use the standard Prowler output options. The IaC provider also supports [SARIF](/user-guide/cli/tutorials/reporting#sarif-iac-only) output for GitHub Code Scanning integration:
```sh
prowler iac --scan-path ./iac --output-formats csv json html
prowler iac --scan-path ./iac --output-formats csv json-ocsf html
```
#### SARIF Output
<VersionBadge version="5.25.0" />
To generate SARIF output for integration with SARIF-compatible tools:
```sh
prowler iac --scan-repository-url https://github.com/user/repo -M sarif
```
See the [SARIF reporting documentation](/user-guide/cli/tutorials/reporting#sarif-iac-only) for details on the format and severity mapping.
@@ -0,0 +1,265 @@
---
title: 'GitHub Action'
description: 'Run Prowler scans in GitHub Actions using the official Docker-based action'
---
import { VersionBadge } from "/snippets/version-badge.mdx"
<VersionBadge version="5.25.0" />
The official **Prowler GitHub Action** runs Prowler scans inside your GitHub workflows using the official [`prowlercloud/prowler`](https://hub.docker.com/r/prowlercloud/prowler) Docker image. It supports every [Prowler provider](/user-guide/providers/) (AWS, Azure, GCP, Kubernetes, GitHub, Cloudflare, IaC, and more), optionally pushes findings to Prowler Cloud, and uploads SARIF results to GitHub Code Scanning so findings appear in the **Security** tab and as inline PR annotations.
Source: [`prowler-cloud/prowler`](https://github.com/prowler-cloud/prowler) · Marketplace listing: [Prowler Security Scan](https://github.com/marketplace/actions/prowler-security-scan).
## Inputs
| Input | Required | Default | Description |
|-------|----------|---------|-------------|
| `provider` | yes | — | Cloud provider to scan (`aws`, `azure`, `gcp`, `github`, `kubernetes`, `iac`, `cloudflare`, etc.) |
| `image-tag` | no | `stable` | Docker image tag — `stable` (latest release), `latest` (master, not stable), or `<x.y.z>` (pinned). See [available tags](https://hub.docker.com/r/prowlercloud/prowler/tags). |
| `output-formats` | no | `json-ocsf` | Output format(s) for scan results. Space-separated (e.g. `sarif json-ocsf`) |
| `push-to-cloud` | no | `false` | Push findings to [Prowler Cloud](/user-guide/tutorials/prowler-app-import-findings). When `true`, `PROWLER_CLOUD_API_KEY` is auto-forwarded |
| `flags` | no | `""` | Additional CLI flags (e.g. `--severity critical high`). Values with spaces can be quoted: `--resource-tag 'Environment=My Server'` |
| `extra-env` | no | `""` | Space-, newline-, or comma-separated list of env var **names** to forward to the container (see [Authentication](#authentication)) |
| `upload-sarif` | no | `false` | Upload SARIF results to GitHub Code Scanning |
| `sarif-file` | no | `""` | Path to SARIF file (auto-detected from `output/` if not set) |
| `sarif-category` | no | `prowler` | Category for the SARIF upload (distinguishes multiple analyses) |
| `fail-on-findings` | no | `false` | Fail the workflow step when findings are detected (exit code 3) |
## Usage
### AWS scan
```yaml
- uses: prowler-cloud/prowler@5.25
with:
provider: aws
extra-env: AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_SESSION_TOKEN: ${{ secrets.AWS_SESSION_TOKEN }}
```
### Push findings to Prowler Cloud
Send scan results directly to [Prowler Cloud](/user-guide/tutorials/prowler-app-import-findings) for centralized visibility, compliance tracking, and team collaboration.
```yaml
- uses: prowler-cloud/prowler@5.25
with:
provider: aws
push-to-cloud: true
extra-env: AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_SESSION_TOKEN: ${{ secrets.AWS_SESSION_TOKEN }}
PROWLER_CLOUD_API_KEY: ${{ secrets.PROWLER_CLOUD_API_KEY }}
```
<Info>
When `push-to-cloud: true`, `PROWLER_CLOUD_API_KEY` is forwarded automatically — set it in `env:` but don't list it in `extra-env`. Requires a Prowler Cloud subscription and an API key with the **Manage Ingestions** permission. See [API Keys](/user-guide/tutorials/prowler-app-api-keys).
</Info>
### Upload SARIF to GitHub Code Scanning
Findings appear in the **Security** tab and as **inline PR annotations** when SARIF upload is enabled.
```yaml
name: Prowler IaC Scan
on:
pull_request:
permissions:
contents: read
security-events: write
actions: read
jobs:
prowler:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: prowler-cloud/prowler@5.25
with:
provider: iac
output-formats: sarif json-ocsf
upload-sarif: true
flags: --severity critical high
```
<Warning>
**Requirements:**
- Include `sarif` in `output-formats` (the action warns if this is missing).
- The workflow needs `security-events: write` and `actions: read` permissions.
- GitHub Code Scanning is free for public repositories. Private repositories require a [GitHub Code Security](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security) license.
</Warning>
### Combine push-to-cloud with SARIF upload
```yaml
- uses: prowler-cloud/prowler@5.25
with:
provider: aws
output-formats: sarif json-ocsf
push-to-cloud: true
upload-sarif: true
extra-env: AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_SESSION_TOKEN: ${{ secrets.AWS_SESSION_TOKEN }}
PROWLER_CLOUD_API_KEY: ${{ secrets.PROWLER_CLOUD_API_KEY }}
```
### Scan the current repository with the GitHub provider
```yaml
name: Prowler GitHub Scan
on:
schedule:
- cron: '0 0 * * 0'
workflow_dispatch:
jobs:
prowler:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: prowler-cloud/prowler@5.25
with:
provider: github
flags: --repository ${{ github.repository }}
extra-env: GITHUB_PERSONAL_ACCESS_TOKEN
env:
GITHUB_PERSONAL_ACCESS_TOKEN: ${{ secrets.PROWLER_GITHUB_PAT }}
```
<Info>
`--repository` scans a single repo. Use `--organization <name>` instead to include org-level checks (MFA, security policies, etc.). See the [GitHub provider authentication](/user-guide/providers/github/authentication) for required token permissions.
</Info>
### Fail the PR on findings
By default the action tolerates findings (exit code 3) and succeeds. Set `fail-on-findings: true` to fail the workflow step when Prowler detects findings. Combine with `--severity` to control which severity levels trigger the failure:
```yaml
- uses: prowler-cloud/prowler@5.25
with:
provider: iac
output-formats: sarif
upload-sarif: true
fail-on-findings: true
flags: --severity critical high
```
The scan step fails if critical/high findings are detected, blocking the PR via required checks. SARIF is still uploaded (the upload step runs with `if: always()`) so findings appear in the Security tab regardless.
## Authentication
Each provider requires its own credentials passed as environment variables. Credentials are **not forwarded automatically** — list every env var name you need in the `extra-env` input, and set its value via `env:` at the step, job, or workflow level (typically from `secrets.*`).
Refer to the [Prowler provider docs](/user-guide/providers/) for the full list of variables each provider supports. Common ones:
| Provider | Typical `extra-env` |
|----------|---------------------|
| AWS | `AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_DEFAULT_REGION` (OIDC exports these automatically) |
| Azure | `AZURE_CLIENT_ID AZURE_CLIENT_SECRET AZURE_TENANT_ID` |
| GCP | `GOOGLE_APPLICATION_CREDENTIALS CLOUDSDK_AUTH_ACCESS_TOKEN GOOGLE_CLOUD_PROJECT` |
| GitHub | `GITHUB_PERSONAL_ACCESS_TOKEN` *(or `GITHUB_OAUTH_APP_TOKEN`, or `GITHUB_APP_ID GITHUB_APP_KEY`)* |
| Kubernetes | `KUBECONFIG` |
| Cloudflare | `CLOUDFLARE_API_TOKEN` *(or `CLOUDFLARE_API_KEY CLOUDFLARE_API_EMAIL`)* |
<Info>
`PROWLER_CLOUD_API_KEY` is auto-forwarded when `push-to-cloud: true` — no need to add it to `extra-env`.
</Info>
### AWS
Use [aws-actions/configure-aws-credentials](https://github.com/aws-actions/configure-aws-credentials) with OIDC (recommended) or pass static credentials. OIDC sets the `AWS_*` env vars on the runner, so you only need to forward them:
```yaml
permissions:
id-token: write
contents: read
steps:
- uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: arn:aws:iam::123456789012:role/ProwlerRole
aws-region: eu-west-1
- uses: prowler-cloud/prowler@5.25
with:
provider: aws
extra-env: AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_DEFAULT_REGION
```
### Azure
Use [azure/login](https://github.com/Azure/login) with a service principal or pass credentials directly:
```yaml
steps:
- uses: azure/login@v2
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- uses: prowler-cloud/prowler@5.25
with:
provider: azure
extra-env: AZURE_CLIENT_ID AZURE_CLIENT_SECRET AZURE_TENANT_ID
env:
AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_CLIENT_SECRET }}
AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
```
### GCP
Use [google-github-actions/auth](https://github.com/google-github-actions/auth) with Workload Identity Federation (recommended):
```yaml
permissions:
id-token: write
contents: read
steps:
- uses: google-github-actions/auth@v2
with:
workload_identity_provider: projects/123456/locations/global/workloadIdentityPools/my-pool/providers/my-provider
service_account: prowler@my-project.iam.gserviceaccount.com
- uses: prowler-cloud/prowler@5.25
with:
provider: gcp
extra-env: GOOGLE_APPLICATION_CREDENTIALS CLOUDSDK_AUTH_ACCESS_TOKEN GOOGLE_CLOUD_PROJECT
```
### Cloudflare
Create a Cloudflare API Token with `Zone:Read`, `Zone Settings:Read`, and `DNS:Read` permissions ([provider auth docs](/user-guide/providers/cloudflare/authentication)). Then:
```yaml
- uses: prowler-cloud/prowler@5.25
with:
provider: cloudflare
extra-env: CLOUDFLARE_API_TOKEN
env:
CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
```
## Outputs
Scan results are written to `output/` in the workspace and uploaded as artifacts named `prowler-<provider>` with 30-day retention.
When `upload-sarif` is enabled, SARIF results are also uploaded to GitHub Code Scanning and appear on the repository's **Security → Code scanning** tab, filtered by the branch that ran the scan.
### Step summary
The action writes a summary to the run page with a per-severity breakdown of failing checks, artifact and Code Scanning links, and (when `push-to-cloud: false`) a pointer to [Prowler Cloud](https://cloud.prowler.com) for continuous monitoring.
<img src="/images/github-action/scan-summary.png" alt="GitHub Actions run page showing the Prowler IaC Scan Summary with failing and passing counts, severity breakdown, scan log link, artifact link, and GitHub Code Security link" width="1400" />
@@ -365,6 +365,10 @@ Prowler must be installed in the CI/CD environment before running scans. Refer t
### GitHub Actions
<Info>
For new projects, use the official [Prowler GitHub Action](/user-guide/tutorials/prowler-app-github-action) — a Docker-based reusable action that runs scans, optionally pushes findings to Prowler Cloud, and uploads SARIF results to GitHub Code Scanning. The example below documents the legacy pip-based flow.
</Info>
```yaml
- name: Install Prowler
run: pip install prowler
@@ -140,6 +140,34 @@ Invitations expire after 7 days. If an invitation has expired, contact the organ
</Note>
## Expelling a User From an Organization
Organization owners can expel a member from the organization. Expelling removes the membership immediately, revoking access to all providers, scans, and findings scoped to that organization. An owner cannot expel themselves if they are the last remaining owner of the organization.
To expel a user:
1. Navigate to the **Users** page.
2. Locate the user to remove and open the row actions menu.
3. Select **Expel user**.
<img src="/images/prowler-app/multi-tenant/expel-user-organization.png" alt="Users table row action menu showing the 'Expel user' destructive option" width="700" />
4. Confirm the action in the dialog. The membership is removed immediately and the expelled user loses access to the organization.
<img src="/images/prowler-app/multi-tenant/expel-user-organization-modal.png" alt="Confirmation dialog asking to expel the selected user from the current organization" width="700" />
<Warning>
Expelling a user revokes any refresh tokens the account holds, but access tokens already issued remain valid until they expire. The default access token lifetime is 30 minutes, so an expelled user may retain access to the organization for up to that window before being fully locked out.
</Warning>
<Warning>
If the expelled organization was the user's **only** organization, the account is permanently deleted along with the membership. All personal profile data associated with that account is removed and cannot be recovered. To preserve the account, confirm that the user belongs to another organization before expelling.
</Warning>
## Permissions Reference
| Action | Required Conditions |
@@ -149,3 +177,4 @@ Invitations expire after 7 days. If an invitation has expired, contact the organ
| Switch organizations | Any authenticated user |
| Edit organization name | Organization owner with **Manage Account** permission |
| Delete an organization | Organization owner with **Manage Account** permission; must belong to more than one organization |
| Expel a user from an organization | Organization owner (no additional permission required); last remaining owner cannot expel themselves |
@@ -879,11 +879,11 @@ wheels = [
[[package]]
name = "pygments"
version = "2.19.2"
version = "2.20.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
sdist = { url = "https://files.pythonhosted.org/packages/c3/b2/bc9c9196916376152d655522fdcebac55e66de6603a76a02bca1b6414f6c/pygments-2.20.0.tar.gz", hash = "sha256:6757cd03768053ff99f3039c1a36d6c0aa0b263438fcab17520b30a303a82b5f", size = 4955991, upload-time = "2026-03-29T13:29:33.898Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
{ url = "https://files.pythonhosted.org/packages/f4/7e/a72dd26f3b0f4f2bf1dd8923c85f7ceb43172af56d63c7383eb62b332364/pygments-2.20.0-py3-none-any.whl", hash = "sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176", size = 1231151, upload-time = "2026-03-29T13:29:30.038Z" },
]
[[package]]
@@ -4702,14 +4702,14 @@ dev = ["black (==22.6.0)", "flake8", "mypy", "pytest"]
[[package]]
name = "pyasn1"
version = "0.6.2"
version = "0.6.3"
description = "Pure-Python implementation of ASN.1 types and DER/BER/CER codecs (X.208)"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "pyasn1-0.6.2-py3-none-any.whl", hash = "sha256:1eb26d860996a18e9b6ed05e7aae0e9fc21619fcee6af91cca9bad4fbea224bf"},
{file = "pyasn1-0.6.2.tar.gz", hash = "sha256:9b59a2b25ba7e4f8197db7686c09fb33e658b98339fadb826e9512629017833b"},
{file = "pyasn1-0.6.3-py3-none-any.whl", hash = "sha256:a80184d120f0864a52a073acc6fc642847d0be408e7c7252f31390c0f4eadcde"},
{file = "pyasn1-0.6.3.tar.gz", hash = "sha256:697a8ecd6d98891189184ca1fa05d1bb00e2f84b5977c481452050549c8a72cf"},
]
[[package]]
@@ -4941,14 +4941,14 @@ urllib3 = ">=1.26.0"
[[package]]
name = "pygments"
version = "2.19.2"
version = "2.20.0"
description = "Pygments is a syntax highlighting package written in Python."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"},
{file = "pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887"},
{file = "pygments-2.20.0-py3-none-any.whl", hash = "sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176"},
{file = "pygments-2.20.0.tar.gz", hash = "sha256:6757cd03768053ff99f3039c1a36d6c0aa0b263438fcab17520b30a303a82b5f"},
]
[package.extras]
@@ -2,11 +2,27 @@
All notable changes to the **Prowler SDK** are documented in this file.
## [5.25.0] (Prowler UNRELEASED)
### 🚀 Added
- GitHub Actions service for scanning workflow security issues using zizmor [(#10607)](https://github.com/prowler-cloud/prowler/pull/10607)
- SARIF output format for the IaC provider, enabling GitHub Code Scanning integration via `--output-formats sarif` [(#10626)](https://github.com/prowler-cloud/prowler/pull/10626)
- `repository_default_branch_dismisses_stale_reviews` check for GitHub provider to ensure stale pull request approvals are dismissed when new commits are pushed [(#10569)](https://github.com/prowler-cloud/prowler/pull/10569)
- Official Prowler GitHub Action (`prowler-cloud/prowler@5.25`) for running scans in GitHub workflows with optional `--push-to-cloud` and SARIF upload to GitHub Code Scanning [(#10872)](https://github.com/prowler-cloud/prowler/pull/10872)
### 🐞 Fixed
- Alibaba Cloud CS service SDK compatibility, harden other services and improve documentation [(#10871)](https://github.com/prowler-cloud/prowler/pull/10871)
---
## [5.24.3] (Prowler v5.24.3)
### 🐞 Fixed
- CloudTrail resource timeline uses resource name as fallback in `LookupEvents` [(#10828)](https://github.com/prowler-cloud/prowler/pull/10828)
- Exclude `me-south-1` and `me-central-1` from default AWS scans to prevent hangs when the host can't reach those regional endpoints [(#10837)](https://github.com/prowler-cloud/prowler/pull/10837)
---
@@ -18,6 +18,7 @@ from prowler.config.config import (
json_asff_file_suffix,
json_ocsf_file_suffix,
orange_color,
sarif_file_suffix,
)
from prowler.lib.banner import print_banner
from prowler.lib.check.check import (
@@ -122,6 +123,7 @@ from prowler.lib.outputs.html.html import HTML
from prowler.lib.outputs.ocsf.ingestion import send_ocsf_to_api
from prowler.lib.outputs.ocsf.ocsf import OCSF
from prowler.lib.outputs.outputs import extract_findings_statistics, report
from prowler.lib.outputs.sarif.sarif import SARIF
from prowler.lib.outputs.slack.slack import Slack
from prowler.lib.outputs.summary_table import display_summary_table
from prowler.providers.alibabacloud.models import AlibabaCloudOutputOptions
@@ -197,7 +199,8 @@ def prowler():
if compliance_framework:
args.output_formats.extend(compliance_framework)
# If no input compliance framework, set all, unless a specific service or check is input
elif default_execution:
# Skip for IAC and LLM providers that don't use compliance frameworks
elif default_execution and provider not in ["iac", "llm"]:
args.output_formats.extend(get_available_compliance_frameworks(provider))
# Set Logger configuration
@@ -428,14 +431,15 @@ def prowler():
findings = global_provider.run_scan(streaming_callback=streaming_callback)
else:
# Original behavior for IAC or non-verbose LLM
# Original behavior for IAC and Image
try:
findings = global_provider.run()
except ImageBaseException as error:
logger.critical(f"{error}")
sys.exit(1)
# Note: IaC doesn't support granular progress tracking since Trivy runs as a black box
# and returns all findings at once. Progress tracking would just be 0% → 100%.
# Note: External tool providers don't support granular progress tracking since
# they run external tools as a black box and return all findings at once.
# Progress tracking would just be 0% → 100%.
# Filter findings by status if specified
if hasattr(args, "status") and args.status:
@@ -552,6 +556,13 @@ def prowler():
html_output.batch_write_data_to_file(
provider=global_provider, stats=stats
)
if mode == "sarif":
sarif_output = SARIF(
findings=finding_outputs,
file_path=f"{filename}{sarif_file_suffix}",
)
generated_outputs["regular"].append(sarif_output)
sarif_output.batch_write_data_to_file()
if getattr(args, "push_to_cloud", False):
if not ocsf_output or not getattr(ocsf_output, "file_path", None):
File diff suppressed because it is too large.
@@ -73,7 +73,9 @@
{
"Id": "1.1.4",
"Description": "Ensure that when a proposed code change is updated, previous approvals are declined, and new approvals are required.",
"Checks": [],
"Checks": [
"repository_default_branch_dismisses_stale_reviews"
],
"Attributes": [
{
"Section": "1 Source Code",
@@ -9,6 +9,16 @@ import requests
import yaml
from packaging import version
from prowler.lib.check.compliance_models import load_compliance_framework_universal
# Re-exported from a leaf module so prowler.lib.check.utils can import the
# constant without participating in the config <-> compliance_models <-> utils
# import cycle. Existing consumers continue to import from this module.
# The `as EXTERNAL_TOOL_PROVIDERS` rename is the PEP 484 explicit re-export
# form so static analyzers (CodeQL, mypy, ruff) treat the name as public.
from prowler.lib.check.external_tool_providers import ( # noqa: F401
EXTERNAL_TOOL_PROVIDERS as EXTERNAL_TOOL_PROVIDERS,
)
from prowler.lib.logger import logger
@@ -68,10 +78,6 @@ class Provider(str, Enum):
VERCEL = "vercel"
# Providers that delegate scanning to an external tool (e.g. Trivy, promptfoo)
# and bypass standard check/service loading.
EXTERNAL_TOOL_PROVIDERS = frozenset({"iac", "llm", "image"})
# Compliance
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
@@ -91,6 +97,21 @@ def get_available_compliance_frameworks(provider=None):
available_compliance_frameworks.append(
file.name.removesuffix(".json")
)
# Also scan top-level compliance/ for multi-provider JSONs
compliance_root = f"{actual_directory}/../compliance"
if os.path.isdir(compliance_root):
with os.scandir(compliance_root) as files:
for file in files:
if file.is_file() and file.name.endswith(".json"):
name = file.name.removesuffix(".json")
if provider:
framework = load_compliance_framework_universal(file.path)
if framework is None or not framework.supports_provider(
provider
):
continue
if name not in available_compliance_frameworks:
available_compliance_frameworks.append(name)
return available_compliance_frameworks
@@ -110,6 +131,7 @@ json_file_suffix = ".json"
json_asff_file_suffix = ".asff.json"
json_ocsf_file_suffix = ".ocsf.json"
html_file_suffix = ".html"
sarif_file_suffix = ".sarif"
default_config_file_path = (
f"{pathlib.Path(os.path.dirname(os.path.realpath(__file__)))}/config.yaml"
)
@@ -120,7 +142,7 @@ default_redteam_config_file_path = (
f"{pathlib.Path(os.path.dirname(os.path.realpath(__file__)))}/llm_config.yaml"
)
encoding_format_utf_8 = "utf-8"
available_output_formats = ["csv", "json-asff", "json-ocsf", "html"]
available_output_formats = ["csv", "json-asff", "json-ocsf", "html", "sarif"]
# Prowler Cloud API settings
cloud_api_base_url = os.getenv("PROWLER_CLOUD_API_BASE_URL", "https://api.prowler.com")
@@ -135,7 +157,7 @@ def set_output_timestamp(
Override the global output timestamps so generated artifacts reflect a specific scan.
Returns the previous values so callers can restore them afterwards.
"""
global timestamp, timestamp_utc, output_file_timestamp, timestamp_iso
global output_file_timestamp, timestamp_iso
previous_values = (
timestamp.value,
@@ -6,7 +6,9 @@ aws:
# aws.disallowed_regions --> List of AWS regions to exclude from the scan.
# Also settable via the PROWLER_AWS_DISALLOWED_REGIONS environment variable or
# the --excluded-region CLI flag. Precedence: CLI > env var > config file.
# disallowed_regions: []
disallowed_regions:
- me-south-1
- me-central-1
# If you want to mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w mutelist.yaml`:
# Mutelist:
# Accounts:
@@ -2,6 +2,7 @@ import sys
from colorama import Fore, Style
from prowler.config.config import EXTERNAL_TOOL_PROVIDERS
from prowler.lib.check.check import parse_checks_from_file
from prowler.lib.check.compliance_models import Compliance
from prowler.lib.check.models import CheckMetadata, Severity
@@ -24,8 +25,8 @@ def load_checks_to_execute(
) -> set:
"""Generate the list of checks to execute based on the cloud provider and the input arguments given"""
try:
# Bypass check loading for providers that use Trivy directly
if provider in ("iac", "image"):
# Bypass check loading for providers that use external tools directly
if provider in EXTERNAL_TOOL_PROVIDERS:
return set()
# Local subsets
@@ -0,0 +1,7 @@
# Providers that delegate scanning to an external tool (e.g. Trivy, promptfoo)
# and bypass standard check/service loading.
#
# Kept in a leaf module with no imports so it can be referenced from both
# prowler.config.config and prowler.lib.check.utils without forming an
# import cycle.
EXTERNAL_TOOL_PROVIDERS = frozenset({"iac", "llm", "image"})
@@ -1094,15 +1094,10 @@ class CheckReportIAC(Check_Report):
self.resource = finding
self.resource_name = file_path
self.resource_line_range = (
(
str(finding.get("CauseMetadata", {}).get("StartLine", ""))
+ ":"
+ str(finding.get("CauseMetadata", {}).get("EndLine", ""))
)
if finding.get("CauseMetadata", {}).get("StartLine", "")
else ""
)
cause = finding.get("CauseMetadata", {})
start = cause.get("StartLine") or finding.get("StartLine")
end = cause.get("EndLine") or finding.get("EndLine")
self.resource_line_range = f"{start}:{end}" if start else ""
@dataclass
@@ -2,6 +2,7 @@ import importlib
import sys
from pkgutil import walk_packages
from prowler.lib.check.external_tool_providers import EXTERNAL_TOOL_PROVIDERS
from prowler.lib.logger import logger
@@ -14,8 +15,8 @@ def recover_checks_from_provider(
Returns a list of tuples with the following format (check_name, check_path)
"""
try:
# Bypass check loading for IAC, LLM, and Image providers since they use external tools directly
if provider in ("iac", "llm", "image"):
# Bypass check loading for providers that use external tools directly
if provider in EXTERNAL_TOOL_PROVIDERS:
return []
checks = []
@@ -63,8 +64,8 @@ def recover_checks_from_service(service_list: list, provider: str) -> set:
Returns a set of checks from the given services
"""
try:
# Bypass check loading for IAC provider since it uses Trivy directly
if provider == "iac":
# Bypass check loading for providers that use external tools directly
if provider in EXTERNAL_TOOL_PROVIDERS:
return set()
checks = set()
@@ -18,6 +18,7 @@ from prowler.providers.common.arguments import (
init_providers_parser,
validate_asff_usage,
validate_provider_arguments,
validate_sarif_usage,
)
@@ -28,7 +29,7 @@ class ProwlerArgumentParser:
self.parser = argparse.ArgumentParser(
prog="prowler",
formatter_class=RawTextHelpFormatter,
usage="prowler [-h] [--version] {aws,azure,gcp,kubernetes,m365,github,googleworkspace,nhn,mongodbatlas,oraclecloud,alibabacloud,cloudflare,openstack,vercel,dashboard,iac,image} ...",
usage="prowler [-h] [--version] {aws,azure,gcp,kubernetes,m365,github,googleworkspace,nhn,mongodbatlas,oraclecloud,alibabacloud,cloudflare,openstack,vercel,dashboard,iac,image,llm} ...",
epilog="""
Available Cloud Providers:
{aws,azure,gcp,kubernetes,m365,github,googleworkspace,iac,llm,image,nhn,mongodbatlas,oraclecloud,alibabacloud,cloudflare,openstack,vercel}
@@ -43,11 +44,11 @@ Available Cloud Providers:
oraclecloud Oracle Cloud Infrastructure Provider
openstack OpenStack Provider
alibabacloud Alibaba Cloud Provider
iac IaC Provider (Beta)
iac IaC Provider
llm LLM Provider (Beta)
image Container Image Provider
nhn NHN Provider (Unofficial)
mongodbatlas MongoDB Atlas Provider (Beta)
mongodbatlas MongoDB Atlas Provider
vercel Vercel Provider
Available components:
@@ -153,6 +154,12 @@ Detailed documentation at https://docs.prowler.com
if not asff_is_valid:
self.parser.error(asff_error)
sarif_is_valid, sarif_error = validate_sarif_usage(
args.provider, getattr(args, "output_formats", None)
)
if not sarif_is_valid:
self.parser.error(sarif_error)
return args
def __set_default_provider__(self, args: list) -> list:
@@ -41,6 +41,9 @@ def display_compliance_table(
Returns:
None
"""
# Filter out findings with dynamic CheckIDs not present in bulk_checks_metadata
findings = [f for f in findings if f.check_metadata.CheckID in bulk_checks_metadata]
try:
if "ens_" in compliance_framework:
get_ens_table(
@@ -0,0 +1,395 @@
import os
from datetime import datetime
from typing import List
from py_ocsf_models.events.base_event import SeverityID
from py_ocsf_models.events.base_event import StatusID as EventStatusID
from py_ocsf_models.events.findings.compliance_finding import ComplianceFinding
from py_ocsf_models.events.findings.compliance_finding_type_id import (
ComplianceFindingTypeID,
)
from py_ocsf_models.events.findings.finding import ActivityID, FindingInformation
from py_ocsf_models.objects.check import Check
from py_ocsf_models.objects.compliance import Compliance
from py_ocsf_models.objects.compliance_status import StatusID as ComplianceStatusID
from py_ocsf_models.objects.group import Group
from py_ocsf_models.objects.metadata import Metadata
from py_ocsf_models.objects.product import Product
from py_ocsf_models.objects.resource_details import ResourceDetails
from prowler.config.config import prowler_version
from prowler.lib.check.compliance_models import ComplianceFramework
from prowler.lib.logger import logger
from prowler.lib.outputs.finding import Finding
from prowler.lib.outputs.ocsf.ocsf import OCSF
from prowler.lib.outputs.utils import unroll_dict_to_list
from prowler.lib.utils.utils import open_file
PROWLER_TO_COMPLIANCE_STATUS = {
"PASS": ComplianceStatusID.Pass,
"FAIL": ComplianceStatusID.Fail,
"MANUAL": ComplianceStatusID.Unknown,
}
def _to_snake_case(name: str) -> str:
"""Convert a PascalCase or camelCase string to snake_case."""
import re
# Insert underscore before uppercase letters preceded by lowercase
s = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", name)
# Insert underscore between consecutive uppercase and following lowercase
s = re.sub(r"([A-Z]+)([A-Z][a-z])", r"\1_\2", s)
return s.lower()
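# Illustrative conversions for the helper above (examples, not part of the diff):
#   _to_snake_case("CheckTitle")      -> "check_title"
#   _to_snake_case("HTTPServerError") -> "http_server_error"
#   _to_snake_case("already_snake")   -> "already_snake"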
def _build_requirement_attrs(requirement, framework) -> dict:
"""Build a dict with requirement attributes for the unmapped section.
Keys are normalized to snake_case for OCSF consistency.
Only includes attributes whose AttributeMetadata has output_formats.ocsf=True.
When no metadata is declared, all attributes are included.
"""
attrs = requirement.attributes
if not attrs:
return {}
# Build set of keys allowed for OCSF output
metadata = framework.attributes_metadata
if metadata:
ocsf_keys = {m.key for m in metadata if m.output_formats.ocsf}
else:
ocsf_keys = None # No metadata → include all
result = {}
for key, value in attrs.items():
if ocsf_keys is not None and key not in ocsf_keys:
continue
result[_to_snake_case(key)] = value
return result
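# Hypothetical illustration of the filtering above (stand-in values, not real
# framework data): given attributes {"Section": "1.1", "Profile": "Level 1"}
# and metadata declaring only "Section" with output_formats.ocsf=True, the
# helper returns {"section": "1.1"}; with no attributes_metadata declared it
# would return {"section": "1.1", "profile": "Level 1"}.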
class OCSFComplianceOutput:
"""Produces OCSF ComplianceFinding (class_uid 2003) events from
universal compliance framework data.
Each finding × requirement combination produces one ComplianceFinding event
with structured Compliance and Check objects.
"""
def __init__(
self,
findings: list,
framework: ComplianceFramework,
file_path: str = None,
from_cli: bool = True,
provider: str = None,
) -> None:
self._data = []
self._file_descriptor = None
self.file_path = file_path
self._from_cli = from_cli
self._provider = provider
self.close_file = False
if findings:
compliance_name = (
framework.framework + "-" + framework.version
if framework.version
else framework.framework
)
self._transform(findings, framework, compliance_name)
if not self._file_descriptor and file_path:
self._create_file_descriptor(file_path)
@property
def data(self):
return self._data
def _transform(
self,
findings: List[Finding],
framework: ComplianceFramework,
compliance_name: str,
) -> None:
# Build check -> requirements map
check_req_map = {}
for req in framework.requirements:
checks = req.checks
if self._provider:
all_checks = checks.get(self._provider.lower(), [])
else:
all_checks = []
for check_list in checks.values():
all_checks.extend(check_list)
for check_id in all_checks:
check_req_map.setdefault(check_id, []).append(req)
for finding in findings:
if finding.check_id in check_req_map:
for req in check_req_map[finding.check_id]:
cf = self._build_compliance_finding(
finding, framework, req, compliance_name
)
if cf:
self._data.append(cf)
# Manual requirements (no checks or empty for current provider)
for req in framework.requirements:
checks = req.checks
if self._provider:
has_checks = bool(checks.get(self._provider.lower(), []))
else:
has_checks = any(checks.values())
if not has_checks:
cf = self._build_manual_compliance_finding(
framework, req, compliance_name
)
if cf:
self._data.append(cf)
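# Illustrative fan-out (hypothetical check ID): if one check maps to
# requirements "1.20" and "2.1.5" and two findings exist for that check, the
# loop above appends four ComplianceFinding events, one per finding x
# requirement pair; requirements with no checks for the current provider are
# emitted as single manual events instead.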
def _build_unmapped(self, finding, requirement, framework) -> dict:
"""Build the unmapped dict with cloud info and requirement attributes."""
unmapped = {}
# Cloud info (from finding, when available)
if finding and getattr(finding, "provider", None) != "kubernetes":
unmapped["cloud"] = {
"provider": finding.provider,
"region": finding.region,
"account": {
"uid": finding.account_uid,
"name": finding.account_name,
},
"org": {
"uid": finding.account_organization_uid,
"name": finding.account_organization_name,
},
}
# Requirement attributes
req_attrs = _build_requirement_attrs(requirement, framework)
if req_attrs:
unmapped["requirement_attributes"] = req_attrs
return unmapped or None
def _build_compliance_finding(
self,
finding: Finding,
framework: ComplianceFramework,
requirement,
compliance_name: str,
) -> ComplianceFinding:
try:
compliance_status = PROWLER_TO_COMPLIANCE_STATUS.get(
finding.status, ComplianceStatusID.Unknown
)
check_status = PROWLER_TO_COMPLIANCE_STATUS.get(
finding.status, ComplianceStatusID.Unknown
)
finding_severity = getattr(
SeverityID,
finding.metadata.Severity.capitalize(),
SeverityID.Unknown,
)
event_status = OCSF.get_finding_status_id(finding.muted)
time_value = (
int(finding.timestamp.timestamp())
if isinstance(finding.timestamp, datetime)
else finding.timestamp
)
cf = ComplianceFinding(
activity_id=ActivityID.Create.value,
activity_name=ActivityID.Create.name,
compliance=Compliance(
standards=[compliance_name],
requirements=[requirement.id],
control=requirement.description,
status_id=compliance_status,
checks=[
Check(
uid=finding.check_id,
name=finding.metadata.CheckTitle,
desc=finding.metadata.Description,
status=finding.status,
status_id=check_status,
)
],
),
finding_info=FindingInformation(
uid=f"{finding.uid}-{requirement.id}",
title=requirement.id,
desc=requirement.description,
created_time=time_value,
created_time_dt=(
finding.timestamp
if isinstance(finding.timestamp, datetime)
else None
),
),
message=finding.status_extended,
metadata=Metadata(
event_code=finding.check_id,
product=Product(
uid="prowler",
name="Prowler",
vendor_name="Prowler",
version=finding.prowler_version,
),
profiles=(
["cloud", "datetime"]
if finding.provider != "kubernetes"
else ["container", "datetime"]
),
tenant_uid=finding.account_organization_uid,
),
resources=[
ResourceDetails(
labels=unroll_dict_to_list(finding.resource_tags),
name=finding.resource_name,
uid=finding.resource_uid,
group=Group(name=finding.metadata.ServiceName),
type=finding.metadata.ResourceType,
cloud_partition=(
finding.partition
if finding.provider != "kubernetes"
else None
),
region=(
finding.region if finding.provider != "kubernetes" else None
),
namespace=(
finding.region.replace("namespace: ", "")
if finding.provider == "kubernetes"
else None
),
data={
"details": finding.resource_details,
"metadata": finding.resource_metadata,
},
)
],
severity_id=finding_severity.value,
severity=finding_severity.name,
status_id=event_status.value,
status=event_status.name,
status_code=finding.status,
status_detail=finding.status_extended,
time=time_value,
time_dt=(
finding.timestamp
if isinstance(finding.timestamp, datetime)
else None
),
type_uid=ComplianceFindingTypeID.Create,
type_name=f"Compliance Finding: {ComplianceFindingTypeID.Create.name}",
unmapped=self._build_unmapped(finding, requirement, framework),
)
return cf
except Exception as e:
logger.debug(f"Skipping OCSF compliance finding for {requirement.id}: {e}")
return None
def _build_manual_compliance_finding(
self,
framework: ComplianceFramework,
requirement,
compliance_name: str,
) -> ComplianceFinding:
try:
from prowler.config.config import timestamp as config_timestamp
time_value = int(config_timestamp.timestamp())
return ComplianceFinding(
activity_id=ActivityID.Create.value,
activity_name=ActivityID.Create.name,
compliance=Compliance(
standards=[compliance_name],
requirements=[requirement.id],
control=requirement.description,
status_id=ComplianceStatusID.Unknown,
),
finding_info=FindingInformation(
uid=f"manual-{requirement.id}",
title=requirement.id,
desc=requirement.description,
created_time=time_value,
),
message="Manual check",
metadata=Metadata(
event_code="manual",
product=Product(
uid="prowler",
name="Prowler",
vendor_name="Prowler",
version=prowler_version,
),
),
severity_id=SeverityID.Informational.value,
severity=SeverityID.Informational.name,
status_id=EventStatusID.New.value,
status=EventStatusID.New.name,
status_code="MANUAL",
status_detail="Manual check",
time=time_value,
type_uid=ComplianceFindingTypeID.Create,
type_name=f"Compliance Finding: {ComplianceFindingTypeID.Create.name}",
unmapped=self._build_unmapped(None, requirement, framework),
)
except Exception as e:
logger.debug(
f"Skipping manual OCSF compliance finding for {requirement.id}: {e}"
)
return None
def _create_file_descriptor(self, file_path: str) -> None:
try:
self._file_descriptor = open_file(file_path, "a")
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def batch_write_data_to_file(self) -> None:
"""Write ComplianceFinding events to a JSON array file."""
try:
if (
getattr(self, "_file_descriptor", None)
and not self._file_descriptor.closed
and self._data
):
if self._file_descriptor.tell() == 0:
self._file_descriptor.write("[")
for finding in self._data:
try:
if hasattr(finding, "model_dump_json"):
json_output = finding.model_dump_json(
exclude_none=True, indent=4
)
else:
json_output = finding.json(exclude_none=True, indent=4)
self._file_descriptor.write(json_output)
self._file_descriptor.write(",")
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
if self.close_file or self._from_cli:
if self._file_descriptor.tell() != 1:
self._file_descriptor.seek(
self._file_descriptor.tell() - 1, os.SEEK_SET
)
self._file_descriptor.truncate()
self._file_descriptor.write("]")
self._file_descriptor.close()
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
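A hedged usage sketch of the class above; the findings list, framework object, and output path are placeholders, not values taken from this diff:

ocsf_compliance = OCSFComplianceOutput(
    findings=findings,        # list[Finding] from a finished scan
    framework=framework,      # ComplianceFramework being reported
    file_path="output/compliance/prowler_cis_2.0_aws.ocsf.json",
    provider="aws",
)
ocsf_compliance.batch_write_data_to_file()  # writes a JSON array of class_uid 2003 events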
@@ -0,0 +1,490 @@
from colorama import Fore, Style
from tabulate import tabulate
from prowler.config.config import orange_color
from prowler.lib.check.compliance_models import ComplianceFramework
def get_universal_table(
findings: list,
bulk_checks_metadata: dict,
compliance_framework_name: str,
output_filename: str,
output_directory: str,
compliance_overview: bool,
framework: ComplianceFramework = None,
provider: str = None,
output_formats: list = None,
) -> None:
"""Render a compliance console table driven by TableConfig.
Supports 3 modes:
- Grouped: group_by only (generic, C5, CSA, ISO, KISA)
- Split: group_by + split_by (CIS Level 1/2, ENS alto/medio/bajo/opcional)
- Scored: group_by + scoring (ThreatScore weighted risk %)
When ``provider`` is given and ``checks`` is a multi-provider dict,
only the checks for that provider are matched against findings.
"""
if framework is None or not framework.outputs or not framework.outputs.table_config:
return
tc = framework.outputs.table_config
labels = tc.labels or _default_labels()
group_by = tc.group_by
split_by = tc.split_by
scoring = tc.scoring
if scoring:
_render_scored(
findings,
bulk_checks_metadata,
compliance_framework_name,
output_filename,
output_directory,
compliance_overview,
framework,
group_by,
scoring,
labels,
provider,
output_formats=output_formats,
)
elif split_by:
_render_split(
findings,
bulk_checks_metadata,
compliance_framework_name,
output_filename,
output_directory,
compliance_overview,
framework,
group_by,
split_by,
labels,
provider,
output_formats=output_formats,
)
else:
_render_grouped(
findings,
bulk_checks_metadata,
compliance_framework_name,
output_filename,
output_directory,
compliance_overview,
framework,
group_by,
labels,
provider,
output_formats=output_formats,
)
def _default_labels():
"""Return a simple namespace with default labels."""
from prowler.lib.check.compliance_models import TableLabels
return TableLabels()
def _build_requirement_check_map(framework, provider=None):
"""Build a map of check_id -> list of requirements for fast lookup.
When *provider* is given, only the checks for that provider are included.
"""
check_map = {}
for req in framework.requirements:
checks = req.checks
if provider:
all_checks = checks.get(provider.lower(), [])
else:
all_checks = []
for check_list in checks.values():
all_checks.extend(check_list)
for check_id in all_checks:
if check_id not in check_map:
check_map[check_id] = []
check_map[check_id].append(req)
return check_map
def _get_group_key(req, group_by):
"""Extract the group key from a requirement."""
if group_by == "_Tactics":
return req.tactics or []
return [req.attributes.get(group_by, "Unknown")]
def _print_overview(pass_count, fail_count, muted_count, framework_name, labels):
"""Print the overview pass/fail/muted summary."""
total = len(fail_count) + len(pass_count) + len(muted_count)
if total < 2:
return False
title = (
labels.title
or f"Compliance Status of {Fore.YELLOW}{framework_name.upper()}{Style.RESET_ALL} Framework:"
)
print(f"\n{title}")
fail_pct = round(len(fail_count) / total * 100, 2)
pass_pct = round(len(pass_count) / total * 100, 2)
muted_pct = round(len(muted_count) / total * 100, 2)
fail_label = labels.fail_label
pass_label = labels.pass_label
overview_table = [
[
f"{Fore.RED}{fail_pct}% ({len(fail_count)}) {fail_label}{Style.RESET_ALL}",
f"{Fore.GREEN}{pass_pct}% ({len(pass_count)}) {pass_label}{Style.RESET_ALL}",
f"{orange_color}{muted_pct}% ({len(muted_count)}) MUTED{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
return True
def _render_grouped(
findings,
bulk_checks_metadata,
compliance_framework_name,
output_filename,
output_directory,
compliance_overview,
framework,
group_by,
labels,
provider=None,
output_formats=None,
):
"""Grouped mode: one row per group with pass/fail counts."""
check_map = _build_requirement_check_map(framework, provider)
groups = {}
pass_count = []
fail_count = []
muted_count = []
for index, finding in enumerate(findings):
check_id = finding.check_metadata.CheckID
if check_id not in check_map:
continue
for req in check_map[check_id]:
for group_key in _get_group_key(req, group_by):
if group_key not in groups:
groups[group_key] = {"FAIL": 0, "PASS": 0, "Muted": 0}
if finding.muted:
if index not in muted_count:
muted_count.append(index)
groups[group_key]["Muted"] += 1
else:
if finding.status == "FAIL" and index not in fail_count:
fail_count.append(index)
groups[group_key]["FAIL"] += 1
elif finding.status == "PASS" and index not in pass_count:
pass_count.append(index)
groups[group_key]["PASS"] += 1
if not _print_overview(
pass_count, fail_count, muted_count, compliance_framework_name, labels
):
return
if not compliance_overview:
provider_header = labels.provider_header
group_header = labels.group_header or group_by
table = {
provider_header: [],
group_header: [],
labels.status_header: [],
"Muted": [],
}
for group_key in sorted(groups):
table[provider_header].append(
framework.provider or (provider.upper() if provider else "")
)
table[group_header].append(group_key)
if groups[group_key]["FAIL"] > 0:
table[labels.status_header].append(
f"{Fore.RED}{labels.fail_label}({groups[group_key]['FAIL']}){Style.RESET_ALL}"
)
else:
table[labels.status_header].append(
f"{Fore.GREEN}{labels.pass_label}({groups[group_key]['PASS']}){Style.RESET_ALL}"
)
table["Muted"].append(
f"{orange_color}{groups[group_key]['Muted']}{Style.RESET_ALL}"
)
results_title = (
labels.results_title
or f"Framework {Fore.YELLOW}{compliance_framework_name.upper()}{Style.RESET_ALL} Results:"
)
print(f"\n{results_title}")
print(tabulate(table, headers="keys", tablefmt="rounded_grid"))
footer = labels.footer_note or "* Only sections containing results appear."
print(f"{Style.BRIGHT}{footer}{Style.RESET_ALL}")
print(f"\nDetailed results of {compliance_framework_name.upper()} are in:")
print(
f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework_name}.csv"
)
if "json-ocsf" in (output_formats or []):
print(
f" - OCSF: {output_directory}/compliance/{output_filename}_{compliance_framework_name}.ocsf.json"
)
print()
def _render_split(
findings,
bulk_checks_metadata,
compliance_framework_name,
output_filename,
output_directory,
compliance_overview,
framework,
group_by,
split_by,
labels,
provider=None,
output_formats=None,
):
"""Split mode: one row per group with columns for each split value (e.g. Level 1/Level 2)."""
check_map = _build_requirement_check_map(framework, provider)
split_field = split_by.field
split_values = split_by.values
groups = {}
pass_count = []
fail_count = []
muted_count = []
for index, finding in enumerate(findings):
check_id = finding.check_metadata.CheckID
if check_id not in check_map:
continue
for req in check_map[check_id]:
for group_key in _get_group_key(req, group_by):
if group_key not in groups:
groups[group_key] = {
sv: {"FAIL": 0, "PASS": 0} for sv in split_values
}
groups[group_key]["Muted"] = 0
split_val = req.attributes.get(split_field, "")
if finding.muted:
if index not in muted_count:
muted_count.append(index)
groups[group_key]["Muted"] += 1
else:
if finding.status == "FAIL" and index not in fail_count:
fail_count.append(index)
elif finding.status == "PASS" and index not in pass_count:
pass_count.append(index)
for sv in split_values:
if sv in str(split_val):
if not finding.muted:
if finding.status == "FAIL":
groups[group_key][sv]["FAIL"] += 1
else:
groups[group_key][sv]["PASS"] += 1
if not _print_overview(
pass_count, fail_count, muted_count, compliance_framework_name, labels
):
return
if not compliance_overview:
provider_header = labels.provider_header
group_header = labels.group_header or group_by
table = {provider_header: [], group_header: []}
for sv in split_values:
table[sv] = []
table["Muted"] = []
for group_key in sorted(groups):
table[provider_header].append(
framework.provider or (provider.upper() if provider else "")
)
table[group_header].append(group_key)
for sv in split_values:
if groups[group_key][sv]["FAIL"] > 0:
table[sv].append(
f"{Fore.RED}{labels.fail_label}({groups[group_key][sv]['FAIL']}){Style.RESET_ALL}"
)
else:
table[sv].append(
f"{Fore.GREEN}{labels.pass_label}({groups[group_key][sv]['PASS']}){Style.RESET_ALL}"
)
table["Muted"].append(
f"{orange_color}{groups[group_key]['Muted']}{Style.RESET_ALL}"
)
results_title = (
labels.results_title
or f"Framework {Fore.YELLOW}{compliance_framework_name.upper()}{Style.RESET_ALL} Results:"
)
print(f"\n{results_title}")
print(tabulate(table, headers="keys", tablefmt="rounded_grid"))
footer = labels.footer_note or "* Only sections containing results appear."
print(f"{Style.BRIGHT}{footer}{Style.RESET_ALL}")
print(f"\nDetailed results of {compliance_framework_name.upper()} are in:")
print(
f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework_name}.csv"
)
if "json-ocsf" in (output_formats or []):
print(
f" - OCSF: {output_directory}/compliance/{output_filename}_{compliance_framework_name}.ocsf.json"
)
print()
def _render_scored(
findings,
bulk_checks_metadata,
compliance_framework_name,
output_filename,
output_directory,
compliance_overview,
framework,
group_by,
scoring,
labels,
provider=None,
output_formats=None,
):
"""Scored mode: weighted risk scoring per group (e.g. ThreatScore)."""
check_map = _build_requirement_check_map(framework, provider)
risk_field = scoring.risk_field
weight_field = scoring.weight_field
groups = {}
pass_count = []
fail_count = []
muted_count = []
score_per_group = {}
max_score_per_group = {}
counted_per_group = {}
generic_score = 0
max_generic_score = 0
counted_generic = []
for index, finding in enumerate(findings):
check_id = finding.check_metadata.CheckID
if check_id not in check_map:
continue
for req in check_map[check_id]:
for group_key in _get_group_key(req, group_by):
attrs = req.attributes
risk = attrs.get(risk_field, 0)
weight = attrs.get(weight_field, 0)
if group_key not in groups:
groups[group_key] = {"FAIL": 0, "PASS": 0, "Muted": 0}
score_per_group[group_key] = 0
max_score_per_group[group_key] = 0
counted_per_group[group_key] = []
if index not in counted_per_group[group_key] and not finding.muted:
if finding.status == "PASS":
score_per_group[group_key] += risk * weight
max_score_per_group[group_key] += risk * weight
counted_per_group[group_key].append(index)
if finding.muted:
if index not in muted_count:
muted_count.append(index)
groups[group_key]["Muted"] += 1
else:
if finding.status == "FAIL" and index not in fail_count:
fail_count.append(index)
groups[group_key]["FAIL"] += 1
elif finding.status == "PASS" and index not in pass_count:
pass_count.append(index)
groups[group_key]["PASS"] += 1
if index not in counted_generic and not finding.muted:
if finding.status == "PASS":
generic_score += risk * weight
max_generic_score += risk * weight
counted_generic.append(index)
if not _print_overview(
pass_count, fail_count, muted_count, compliance_framework_name, labels
):
return
if not compliance_overview:
provider_header = labels.provider_header
group_header = labels.group_header or group_by
table = {
provider_header: [],
group_header: [],
labels.status_header: [],
"Score": [],
"Muted": [],
}
for group_key in sorted(groups):
table[provider_header].append(
framework.provider or (provider.upper() if provider else "")
)
table[group_header].append(group_key)
if max_score_per_group[group_key] == 0:
group_score = 100.0
score_color = Fore.GREEN
else:
group_score = (
score_per_group[group_key] / max_score_per_group[group_key]
) * 100
score_color = Fore.RED
table["Score"].append(
f"{Style.BRIGHT}{score_color}{group_score:.2f}%{Style.RESET_ALL}"
)
if groups[group_key]["FAIL"] > 0:
table[labels.status_header].append(
f"{Fore.RED}{labels.fail_label}({groups[group_key]['FAIL']}){Style.RESET_ALL}"
)
else:
table[labels.status_header].append(
f"{Fore.GREEN}{labels.pass_label}({groups[group_key]['PASS']}){Style.RESET_ALL}"
)
table["Muted"].append(
f"{orange_color}{groups[group_key]['Muted']}{Style.RESET_ALL}"
)
if max_generic_score == 0:
generic_threat_score = 100.0
else:
generic_threat_score = generic_score / max_generic_score * 100
results_title = (
labels.results_title
or f"Framework {Fore.YELLOW}{compliance_framework_name.upper()}{Style.RESET_ALL} Results:"
)
print(f"\n{results_title}")
print(f"\nGeneric Threat Score: {generic_threat_score:.2f}%")
print(tabulate(table, headers="keys", tablefmt="rounded_grid"))
footer = labels.footer_note or (
f"{Style.BRIGHT}\n=== Threat Score Guide ===\n"
f"The lower the score, the higher the risk.{Style.RESET_ALL}\n"
f"{Style.BRIGHT}(Only sections containing results appear, the score is calculated as the sum of the "
f"level of risk * weight of the passed findings divided by the sum of the risk * weight of all the findings){Style.RESET_ALL}"
)
print(footer)
print(f"\nDetailed results of {compliance_framework_name.upper()} are in:")
print(
f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework_name}.csv"
)
if "json-ocsf" in (output_formats or []):
print(
f" - OCSF: {output_directory}/compliance/{output_filename}_{compliance_framework_name}.ocsf.json"
)
print()
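A worked instance of the weighted scoring formula described in the footer above, with made-up risk and weight values:

# (risk, weight) pairs; muted findings are excluded, as in _render_scored
passed_reqs = [(3, 100)]                    # findings that PASSed
all_reqs = [(3, 100), (4, 100), (1, 50)]    # PASS and FAIL findings together
score = sum(r * w for r, w in passed_reqs) / sum(r * w for r, w in all_reqs) * 100
print(f"{score:.2f}%")  # 40.00% -- the lower the score, the higher the risk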
+3
@@ -354,6 +354,9 @@ class Finding(BaseModel):
check_output, "resource_line_range", ""
)
output_data["framework"] = check_output.check_metadata.ServiceName
output_data["raw"] = {
"resource_line_range": output_data.get("resource_line_range", ""),
}
elif provider.type == "llm":
output_data["auth_method"] = provider.auth_method
+191
@@ -0,0 +1,191 @@
from json import dump
from typing import Optional
from prowler.config.config import prowler_version
from prowler.lib.logger import logger
from prowler.lib.outputs.finding import Finding
from prowler.lib.outputs.output import Output
SARIF_SCHEMA_URL = "https://json.schemastore.org/sarif-2.1.0.json"
SARIF_VERSION = "2.1.0"
SEVERITY_TO_SARIF_LEVEL = {
"critical": "error",
"high": "error",
"medium": "warning",
"low": "note",
"informational": "note",
}
SEVERITY_TO_SECURITY_SEVERITY = {
"critical": "9.0",
"high": "7.0",
"medium": "4.0",
"low": "2.0",
"informational": "0.0",
}
class SARIF(Output):
"""Generates SARIF 2.1.0 output compatible with GitHub Code Scanning."""
def transform(self, findings: list[Finding]) -> None:
"""Transform findings into a SARIF 2.1.0 document.
Only FAIL findings that are not muted are included. Each unique
check ID produces one rule entry; multiple findings for the same
check share the rule via ruleIndex.
Args:
findings: List of Finding objects to transform.
"""
rules = {}
rule_indices = {}
results = []
for finding in findings:
if finding.status != "FAIL" or finding.muted:
continue
check_id = finding.metadata.CheckID
severity = finding.metadata.Severity.lower()
if check_id not in rules:
rule_indices[check_id] = len(rules)
rule = {
"id": check_id,
"name": finding.metadata.CheckTitle,
"shortDescription": {"text": finding.metadata.CheckTitle},
"fullDescription": {
"text": finding.metadata.Description or check_id
},
"help": {
"text": finding.metadata.Remediation.Recommendation.Text
or finding.metadata.Description
or check_id,
"markdown": self._build_help_markdown(finding, severity),
},
"defaultConfiguration": {
"level": SEVERITY_TO_SARIF_LEVEL.get(severity, "note"),
},
"properties": {
"tags": [
"security",
f"prowler/{finding.metadata.Provider}",
f"severity/{severity}",
],
"security-severity": SEVERITY_TO_SECURITY_SEVERITY.get(
severity, "0.0"
),
},
}
if finding.metadata.RelatedUrl:
rule["helpUri"] = finding.metadata.RelatedUrl
rules[check_id] = rule
rule_index = rule_indices[check_id]
result = {
"ruleId": check_id,
"ruleIndex": rule_index,
"level": SEVERITY_TO_SARIF_LEVEL.get(severity, "note"),
"message": {
"text": finding.status_extended or finding.metadata.CheckTitle
},
}
location = self._build_location(finding)
if location is not None:
result["locations"] = [location]
results.append(result)
sarif_document = {
"$schema": SARIF_SCHEMA_URL,
"version": SARIF_VERSION,
"runs": [
{
"tool": {
"driver": {
"name": "Prowler",
"version": prowler_version,
"informationUri": "https://prowler.com",
"rules": list(rules.values()),
},
},
"results": results,
},
],
}
self._data = [sarif_document]
def batch_write_data_to_file(self) -> None:
"""Write the SARIF document to the output file as JSON."""
try:
if (
getattr(self, "_file_descriptor", None)
and not self._file_descriptor.closed
and self._data
):
dump(self._data[0], self._file_descriptor, indent=2)
if self.close_file or self._from_cli:
self._file_descriptor.close()
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
@staticmethod
def _build_help_markdown(finding: Finding, severity: str) -> str:
"""Build a markdown help string for a SARIF rule."""
remediation = (
finding.metadata.Remediation.Recommendation.Text
or finding.metadata.Description
or finding.metadata.CheckID
)
lines = [
f"**{finding.metadata.CheckTitle}**\n",
"| Severity | Remediation |",
"| --- | --- |",
f"| {severity.upper()} | {remediation} |",
]
if finding.metadata.RelatedUrl:
lines.append(f"\n[More info]({finding.metadata.RelatedUrl})")
return "\n".join(lines)
@staticmethod
def _build_location(finding: Finding) -> Optional[dict]:
"""Build a SARIF physicalLocation from a Finding.
Uses resource_name as the artifact URI and resource_line_range
(stored in finding.raw for IaC findings) for line range info.
Returns:
A SARIF location dict, or None if resource_name is empty.
"""
if not finding.resource_name:
return None
location = {
"physicalLocation": {
"artifactLocation": {
"uri": finding.resource_name,
},
},
}
line_range = finding.raw.get("resource_line_range", "")
if line_range and ":" in line_range:
parts = line_range.split(":")
try:
start_line = int(parts[0])
end_line = int(parts[1])
if start_line >= 1 and end_line >= 1:
location["physicalLocation"]["region"] = {
"startLine": start_line,
"endLine": end_line,
}
except (ValueError, IndexError):
pass # Malformed line range — skip region, keep location
return location
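For reference, an illustration of what _build_location returns for an IaC finding whose resource_name is "main.tf" and whose raw["resource_line_range"] is "10:25" (file name and line range are examples only):

{
    "physicalLocation": {
        "artifactLocation": {"uri": "main.tf"},
        "region": {"startLine": 10, "endLine": 25},
    }
}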
+5
@@ -9,6 +9,7 @@ from prowler.config.config import (
json_asff_file_suffix,
json_ocsf_file_suffix,
orange_color,
sarif_file_suffix,
)
from prowler.lib.logger import logger
from prowler.providers.github.models import GithubAppIdentityInfo, GithubIdentityInfo
@@ -207,6 +208,10 @@ def display_summary_table(
print(
f" - HTML: {output_directory}/{output_filename}{html_file_suffix}"
)
if "sarif" in output_options.output_modes:
print(
f" - SARIF: {output_directory}/{output_filename}{sarif_file_suffix}"
)
else:
print(
@@ -68,6 +68,45 @@ class AlibabaCloudService:
return self.regional_clients[region]
return self.client
@staticmethod
def _is_retriable_error(error: Exception) -> bool:
"""Return True when an Alibaba API error is worth retrying once."""
error_code = getattr(error, "code", "")
status_code = getattr(error, "statusCode", None) or getattr(
error, "status_code", None
)
message = str(error)
retriable_codes = {"ServiceUnavailable", "Throttling", "Throttling.User"}
retriable_substrings = (
"Connection reset by peer",
"Connection aborted",
"ConnectTimeoutError",
"ReadTimeout",
"timed out",
"temporarily unavailable",
)
return (
error_code in retriable_codes
or status_code in {429, 500, 502, 503, 504}
or any(fragment in message for fragment in retriable_substrings)
)
def _call_with_retries(self, func, *args, retries: int = 1, **kwargs):
"""Call a function and retry once for transient Alibaba API failures."""
last_error = None
for attempt in range(retries + 1):
try:
return func(*args, **kwargs)
except Exception as error: # pragma: no cover - exercised via services
last_error = error
if attempt >= retries or not self._is_retriable_error(error):
raise
raise last_error
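# Illustrative call pattern for the helper above (it mirrors the service
# modules updated later in this diff):
#   response = self._call_with_retries(regional_client.describe_vpcs, request)
# A Throttling/ServiceUnavailable error code, an HTTP 429/5xx status, or a
# transient network error triggers one retry; anything else is re-raised.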
def __threading_call__(self, call, iterator=None):
"""
Execute a function across multiple regions or items using threads.
+31 -9
@@ -150,6 +150,34 @@ class AlibabaCloudSession:
)
return self._credentials
@staticmethod
def _get_securitycenter_endpoint(region: str) -> str:
"""Return the public Security Center OpenAPI endpoint for a region."""
securitycenter_region = region or ALIBABACLOUD_DEFAULT_REGION
if securitycenter_region.startswith("cn-"):
return "tds.cn-shanghai.aliyuncs.com"
return "tds.ap-southeast-1.aliyuncs.com"
@staticmethod
def _get_rds_endpoint(region: str) -> str:
"""Return the public RDS OpenAPI endpoint for a region."""
rds_region = region or ALIBABACLOUD_DEFAULT_REGION
shared_rds_regions = {
"cn-qingdao",
"cn-beijing",
"cn-hangzhou",
"cn-shanghai",
"cn-shenzhen",
"cn-heyuan",
"cn-hongkong",
"cn-beijing-finance-1",
"cn-hangzhou-finance",
"cn-shanghai-finance-1",
}
if rds_region in shared_rds_regions:
return "rds.aliyuncs.com"
return f"rds.{rds_region}.aliyuncs.com"
def client(self, service: str, region: str = None):
"""
Create a service client for the given service and region.
@@ -196,11 +224,8 @@ class AlibabaCloudSession:
config.endpoint = f"ecs.{ALIBABACLOUD_DEFAULT_REGION}.aliyuncs.com"
return EcsClient(config)
elif service == "sas" or service == "securitycenter":
# SAS (Security Center) endpoint is regional: sas.{region}.aliyuncs.com
if region:
config.endpoint = f"sas.{region}.aliyuncs.com"
else:
config.endpoint = f"sas.{ALIBABACLOUD_DEFAULT_REGION}.aliyuncs.com"
# Security Center uses regional groups of shared TDS endpoints.
config.endpoint = self._get_securitycenter_endpoint(region)
return SasClient(config)
elif service == "oss":
if region:
@@ -226,10 +251,7 @@ class AlibabaCloudSession:
config.endpoint = f"cs.{ALIBABACLOUD_DEFAULT_REGION}.aliyuncs.com"
return CSClient(config)
elif service == "rds":
if region:
config.endpoint = f"rds.{region}.aliyuncs.com"
else:
config.endpoint = f"rds.{ALIBABACLOUD_DEFAULT_REGION}.aliyuncs.com"
config.endpoint = self._get_rds_endpoint(region)
return RdsClient(config)
elif service == "sls":
if region:
@@ -33,7 +33,7 @@ class ActionTrail(AlibabaCloudService):
try:
# Use Tea SDK client (ActionTrail is a regional service)
request = actiontrail_models.DescribeTrailsRequest()
response = regional_client.describe_trails(request)
response = self._call_with_retries(regional_client.describe_trails, request)
if response and response.body and response.body.trail_list:
# trail_list is already a list, not an object with a trail attribute
@@ -1,4 +1,6 @@
import json
from datetime import datetime
from threading import Lock
from typing import Optional
from alibabacloud_cs20151215 import models as cs_models
@@ -23,6 +25,8 @@ class CS(AlibabaCloudService):
# Fetch CS resources
self.clusters = []
self._cluster_ids_lock = Lock()
self._seen_cluster_ids = set()
self.__threading_call__(self._describe_clusters)
def _describe_clusters(self, regional_client):
@@ -33,18 +37,30 @@ class CS(AlibabaCloudService):
try:
# DescribeClustersV1 returns cluster list
request = cs_models.DescribeClustersV1Request()
response = regional_client.describe_clusters_v1(request)
response = self._call_with_retries(
regional_client.describe_clusters_v1, request
)
if response and response.body and response.body.clusters:
for cluster_data in response.body.clusters:
cluster_id = getattr(cluster_data, "cluster_id", "")
cluster_region = getattr(cluster_data, "region_id", "") or region
if (
cluster_region != region
and cluster_region in self.regional_clients
):
continue
if not self.audit_resources or is_resource_filtered(
cluster_id, self.audit_resources
):
cluster_client = self.regional_clients.get(
cluster_region, regional_client
)
# Get detailed information for each cluster
cluster_detail = self._get_cluster_detail(
regional_client, cluster_id
cluster_client, cluster_id
)
if cluster_detail:
@@ -60,12 +76,12 @@ class CS(AlibabaCloudService):
# Get node pools to check CloudMonitor
cloudmonitor_enabled = self._check_cloudmonitor_enabled(
regional_client, cluster_id
cluster_client, cluster_id
)
# Check if cluster checks have been run in the last week
last_check_time = self._get_last_cluster_check(
regional_client, cluster_id
cluster_client, cluster_id
)
# Check addons for dashboard, network policy, etc.
@@ -78,33 +94,33 @@ class CS(AlibabaCloudService):
cluster_detail, region
)
self.clusters.append(
Cluster(
id=cluster_id,
name=getattr(cluster_data, "name", cluster_id),
region=region,
cluster_type=getattr(
cluster_data, "cluster_type", ""
),
state=getattr(cluster_data, "state", ""),
audit_project_name=audit_project_name,
log_service_enabled=bool(audit_project_name),
cloudmonitor_enabled=cloudmonitor_enabled,
rbac_enabled=rbac_enabled,
last_check_time=last_check_time,
dashboard_enabled=addons_status[
"dashboard_enabled"
],
network_policy_enabled=addons_status[
"network_policy_enabled"
],
eni_multiple_ip_enabled=addons_status[
"eni_multiple_ip_enabled"
],
private_cluster_enabled=not public_access_enabled,
)
cluster = Cluster(
id=cluster_id,
name=getattr(cluster_data, "name", cluster_id),
region=cluster_region,
cluster_type=getattr(cluster_data, "cluster_type", ""),
state=getattr(cluster_data, "state", ""),
audit_project_name=audit_project_name,
log_service_enabled=bool(audit_project_name),
cloudmonitor_enabled=cloudmonitor_enabled,
rbac_enabled=rbac_enabled,
last_check_time=last_check_time,
dashboard_enabled=addons_status["dashboard_enabled"],
network_policy_enabled=addons_status[
"network_policy_enabled"
],
eni_multiple_ip_enabled=addons_status[
"eni_multiple_ip_enabled"
],
private_cluster_enabled=not public_access_enabled,
)
with self._cluster_ids_lock:
if cluster_id in self._seen_cluster_ids:
continue
self._seen_cluster_ids.add(cluster_id)
self.clusters.append(cluster)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
@@ -114,19 +130,43 @@ class CS(AlibabaCloudService):
"""Get detailed information for a specific cluster."""
try:
# DescribeClusterDetail returns detailed cluster information
request = cs_models.DescribeClusterDetailRequest()
response = regional_client.describe_cluster_detail(cluster_id, request)
if hasattr(cs_models, "DescribeClusterDetailRequest"):
request = cs_models.DescribeClusterDetailRequest()
response = self._call_with_retries(
regional_client.describe_cluster_detail,
cluster_id,
request,
)
else:
response = self._call_with_retries(
regional_client.describe_cluster_detail, cluster_id
)
if response and response.body:
# Convert response body to dict
body = response.body
result = {"meta_data": {}}
result = {"meta_data": {}, "parameters": {}, "master_url": ""}
# Check if meta_data exists in the response
# The ACK SDK exposes meta_data as a JSON string in recent versions.
if hasattr(body, "meta_data"):
meta_data = body.meta_data
if meta_data:
result["meta_data"] = dict(meta_data)
if isinstance(meta_data, dict):
result["meta_data"] = meta_data
elif isinstance(meta_data, str):
try:
parsed_meta_data = json.loads(meta_data)
except (TypeError, ValueError):
parsed_meta_data = {}
if isinstance(parsed_meta_data, dict):
result["meta_data"] = parsed_meta_data
if hasattr(body, "parameters") and body.parameters:
result["parameters"] = body.parameters
if hasattr(body, "master_url") and body.master_url:
result["master_url"] = body.master_url
return result
@@ -143,7 +183,9 @@ class CS(AlibabaCloudService):
try:
# DescribeClusterNodePools returns node pool information
request = cs_models.DescribeClusterNodePoolsRequest()
response = regional_client.describe_cluster_node_pools(cluster_id, request)
response = self._call_with_retries(
regional_client.describe_cluster_node_pools, cluster_id, request
)
if response and response.body and response.body.nodepools:
nodepools = response.body.nodepools
@@ -214,9 +256,19 @@ class CS(AlibabaCloudService):
or None if no successful checks found.
"""
try:
# DescribeClusterChecks returns cluster check history
request = cs_models.DescribeClusterChecksRequest()
response = regional_client.describe_cluster_checks(cluster_id, request)
# Newer ACK SDKs expose ListClusterChecks; older ones used DescribeClusterChecks.
if hasattr(cs_models, "ListClusterChecksRequest") and hasattr(
regional_client, "list_cluster_checks"
):
request = cs_models.ListClusterChecksRequest()
response = self._call_with_retries(
regional_client.list_cluster_checks, cluster_id, request
)
else:
request = cs_models.DescribeClusterChecksRequest()
response = self._call_with_retries(
regional_client.describe_cluster_checks, cluster_id, request
)
if response and response.body and response.body.checks:
checks = response.body.checks
@@ -267,18 +319,20 @@ class CS(AlibabaCloudService):
# Note: Addons structure from API is typically a string representation of JSON or a list
# Based on sample: "Addons": [{"name": "gateway-api", ...}, ...]
addons = meta_data.get("Addons", [])
if addons is None:
addons = []
# If addons is a string, try to parse it as JSON.
# The SDK typically handles this conversion, but be defensive just in case.
if isinstance(addons, str):
import json
try:
addons = json.loads(addons)
except Exception:
addons = []
for addon in addons:
if not isinstance(addon, dict):
continue
name = addon.get("name", "")
disabled = addon.get("disabled", False)
@@ -317,7 +371,13 @@ class CS(AlibabaCloudService):
parameters = cluster_detail.get("parameters", {})
endpoint_public = parameters.get("endpoint_public", "")
if endpoint_public:
if isinstance(endpoint_public, str):
normalized_public = endpoint_public.strip().lower()
if normalized_public in {"true", "1", "yes"}:
return True
if normalized_public in {"false", "0", "no", ""}:
return False
elif endpoint_public:
return True
# If we can't find explicit indicator, check if master_url is present
@@ -29,6 +29,8 @@ class OSS(AlibabaCloudService):
# Treat as regional for client generation consistency with other services
super().__init__(__class__.__name__, provider, global_service=False)
self._buckets_lock = Lock()
self._bucket_inventory_lock = Lock()
self._bucket_inventory_loaded = False
# Fetch OSS resources
self.buckets = {}
@@ -40,6 +42,11 @@ class OSS(AlibabaCloudService):
def _list_buckets(self, regional_client=None):
region = "unknown"
try:
with self._bucket_inventory_lock:
if self._bucket_inventory_loaded:
return
self._bucket_inventory_loaded = True
regional_client = regional_client or self.client
region = getattr(regional_client, "region", self.region)
endpoint = f"oss-{region}.aliyuncs.com"
@@ -75,11 +82,20 @@ class OSS(AlibabaCloudService):
headers["Authorization"] = f"OSS {credentials.access_key_id}:{signature}"
url = f"https://{endpoint}/"
response = requests.get(url, headers=headers, timeout=10)
response = self._call_with_retries(
requests.get, url, headers=headers, timeout=10
)
if response.status_code != 200:
logger.error(
f"OSS - HTTP listing {endpoint_label} returned {response.status_code}: {response.text}"
)
if response.status_code == 403 and "UserDisable" in (
response.text or ""
):
logger.info(
f"OSS - HTTP listing {endpoint_label} skipped because OSS is disabled for this account."
)
else:
logger.error(
f"OSS - HTTP listing {endpoint_label} returned {response.status_code}: {response.text}"
)
return
try:
@@ -22,6 +22,18 @@ class RDS(AlibabaCloudService):
self.instances = []
self.__threading_call__(self._describe_instances)
@staticmethod
def _set_region_id(request, regional_client) -> None:
"""Populate RegionId on RDS requests when the SDK model exposes it."""
region = getattr(regional_client, "region", "")
if not region:
return
if hasattr(request, "region_id"):
request.region_id = region
elif hasattr(request, "RegionId"):
request.RegionId = region
def _describe_instances(self, regional_client):
"""List all RDS instances and fetch their details in a specific region."""
region = getattr(regional_client, "region", "unknown")
@@ -30,7 +42,10 @@ class RDS(AlibabaCloudService):
try:
# DescribeDBInstances returns instance list
request = rds_models.DescribeDBInstancesRequest()
response = regional_client.describe_dbinstances(request)
self._set_region_id(request, regional_client)
response = self._call_with_retries(
regional_client.describe_dbinstances, request
)
if response and response.body and response.body.items:
for instance_data in response.body.items.dbinstance:
@@ -123,7 +138,10 @@ class RDS(AlibabaCloudService):
try:
request = rds_models.DescribeDBInstanceAttributeRequest()
request.dbinstance_id = instance_id
response = regional_client.describe_dbinstance_attribute(request)
self._set_region_id(request, regional_client)
response = self._call_with_retries(
regional_client.describe_dbinstance_attribute, request
)
if (
response
@@ -146,7 +164,10 @@ class RDS(AlibabaCloudService):
try:
request = rds_models.DescribeDBInstanceSSLRequest()
request.dbinstance_id = instance_id
response = regional_client.describe_dbinstance_ssl(request)
self._set_region_id(request, regional_client)
response = self._call_with_retries(
regional_client.describe_dbinstance_ssl, request
)
if response and response.body:
# response.body is a DescribeDBInstanceSSLResponseBody model object, use getattr
@@ -169,7 +190,10 @@ class RDS(AlibabaCloudService):
try:
request = rds_models.DescribeDBInstanceTDERequest()
request.dbinstance_id = instance_id
response = regional_client.describe_dbinstance_tde(request)
self._set_region_id(request, regional_client)
response = self._call_with_retries(
regional_client.describe_dbinstance_tde, request
)
if response and response.body:
return {
@@ -187,7 +211,10 @@ class RDS(AlibabaCloudService):
try:
request = rds_models.DescribeDBInstanceIPArrayListRequest()
request.dbinstance_id = instance_id
response = regional_client.describe_dbinstance_iparray_list(request)
self._set_region_id(request, regional_client)
response = self._call_with_retries(
regional_client.describe_dbinstance_iparray_list, request
)
ips = []
if response and response.body and response.body.items:
@@ -205,12 +232,12 @@ class RDS(AlibabaCloudService):
def _describe_sql_collector_policy(self, regional_client, instance_id: str) -> dict:
"""Check SQL audit status."""
try:
request = rds_models.DescribeSQLLogRecordsRequest()
request.dbinstance_id = instance_id
policy_request = rds_models.DescribeSQLCollectorPolicyRequest()
policy_request.dbinstance_id = instance_id
response = regional_client.describe_sqlcollector_policy(policy_request)
self._set_region_id(policy_request, regional_client)
response = self._call_with_retries(
regional_client.describe_sqlcollector_policy, policy_request
)
if response and response.body:
status = getattr(response.body, "sqlcollector_status", "")
@@ -232,7 +259,10 @@ class RDS(AlibabaCloudService):
try:
request = rds_models.DescribeParametersRequest()
request.dbinstance_id = instance_id
response = regional_client.describe_parameters(request)
self._set_region_id(request, regional_client)
response = self._call_with_retries(
regional_client.describe_parameters, request
)
params = {}
if response and response.body and response.body.running_parameters:
@@ -50,7 +50,9 @@ class SecurityCenter(AlibabaCloudService):
request.page_size = 100
while True:
response = self.client.describe_vul_list(request)
response = self._call_with_retries(
self.client.describe_vul_list, request
)
if response and response.body and response.body.vul_records:
vul_records = response.body.vul_records
@@ -112,7 +114,9 @@ class SecurityCenter(AlibabaCloudService):
request.page_size = 100
while True:
response = self.client.describe_cloud_center_instances(request)
response = self._call_with_retries(
self.client.describe_cloud_center_instances, request
)
if response and response.body and response.body.instances:
instances = response.body.instances
@@ -174,7 +178,9 @@ class SecurityCenter(AlibabaCloudService):
request.page_size = 100
while True:
response = self.client.list_uninstall_aegis_machines(request)
response = self._call_with_retries(
self.client.list_uninstall_aegis_machines, request
)
if response and response.body and response.body.machine_list:
machines = response.body.machine_list
@@ -221,7 +227,9 @@ class SecurityCenter(AlibabaCloudService):
try:
# Get notification configurations
request = sas_models.DescribeNoticeConfigRequest()
response = self.client.describe_notice_config(request)
response = self._call_with_retries(
self.client.describe_notice_config, request
)
if response and response.body and response.body.notice_config_list:
notice_configs = response.body.notice_config_list
@@ -253,7 +261,7 @@ class SecurityCenter(AlibabaCloudService):
try:
# Get vulnerability scan configuration
request = sas_models.DescribeVulConfigRequest()
response = self.client.describe_vul_config(request)
response = self._call_with_retries(self.client.describe_vul_config, request)
if response and response.body and response.body.target_configs:
target_configs = response.body.target_configs
@@ -281,7 +289,9 @@ class SecurityCenter(AlibabaCloudService):
try:
# Get vulnerability scan level priorities
request = sas_models.DescribeConcernNecessityRequest()
response = self.client.describe_concern_necessity(request)
response = self._call_with_retries(
self.client.describe_concern_necessity, request
)
if response and response.body:
concern_necessity = getattr(response.body, "concern_necessity", [])
@@ -314,7 +324,9 @@ class SecurityCenter(AlibabaCloudService):
try:
# Get Security Center edition
request = sas_models.DescribeVersionConfigRequest()
response = self.client.describe_version_config(request)
response = self._call_with_retries(
self.client.describe_version_config, request
)
if response and response.body:
# Get Version field from response
@@ -39,7 +39,9 @@ class Sls(AlibabaCloudService):
try:
# List Projects
list_project_request = sls_models.ListProjectRequest(offset=0, size=500)
projects_resp = client.list_project(list_project_request)
projects_resp = self._call_with_retries(
client.list_project, list_project_request
)
if projects_resp.body and projects_resp.body.projects:
for project in projects_resp.body.projects:
@@ -50,8 +52,10 @@ class Sls(AlibabaCloudService):
offset=0, size=500
)
try:
alerts_resp = client.list_alerts(
project_name, list_alert_request
alerts_resp = self._call_with_retries(
client.list_alerts,
project_name,
list_alert_request,
)
if alerts_resp.body and alerts_resp.body.results:
for alert in alerts_resp.body.results:
@@ -90,7 +94,9 @@ class Sls(AlibabaCloudService):
try:
# List Projects
list_project_request = sls_models.ListProjectRequest(offset=0, size=500)
projects_resp = client.list_project(list_project_request)
projects_resp = self._call_with_retries(
client.list_project, list_project_request
)
if projects_resp.body and projects_resp.body.projects:
for project in projects_resp.body.projects:
@@ -101,14 +107,18 @@ class Sls(AlibabaCloudService):
offset=0, size=500
)
try:
logstores_resp = client.list_log_stores(
project_name, list_logstores_request
logstores_resp = self._call_with_retries(
client.list_log_stores,
project_name,
list_logstores_request,
)
if logstores_resp.body and logstores_resp.body.logstores:
for logstore_name in logstores_resp.body.logstores:
try:
logstore_resp = client.get_log_store(
project_name, logstore_name
logstore_resp = self._call_with_retries(
client.get_log_store,
project_name,
logstore_name,
)
if logstore_resp.body:
self.log_stores.append(
@@ -33,7 +33,7 @@ class VPC(AlibabaCloudService):
try:
request = vpc_models.DescribeVpcsRequest()
response = regional_client.describe_vpcs(request)
response = self._call_with_retries(regional_client.describe_vpcs, request)
if response and response.body and response.body.vpcs:
for vpc_data in response.body.vpcs.vpc:
@@ -70,7 +70,9 @@ class VPC(AlibabaCloudService):
request = vpc_models.DescribeFlowLogsRequest()
request.resource_id = vpc_id
request.resource_type = "VPC"
response = regional_client.describe_flow_logs(request)
response = self._call_with_retries(
regional_client.describe_flow_logs, request
)
if response and response.body and response.body.flow_logs:
flow_logs = response.body.flow_logs.flow_log
@@ -2193,6 +2193,7 @@
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-5",
"ap-southeast-6",
"ap-southeast-7",
"ca-central-1",
"ca-west-1",
@@ -2255,6 +2256,8 @@
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ap-southeast-5",
"ap-southeast-7",
"ca-central-1",
"ca-west-1",
"eu-central-1",
@@ -3829,7 +3832,9 @@
"us-west-2"
],
"aws-cn": [],
"aws-eusc": [],
"aws-eusc": [
"eusc-de-east-1"
],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
@@ -8065,6 +8070,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -8075,8 +8081,10 @@
"ap-southeast-3",
"ap-southeast-4",
"ap-southeast-5",
"ap-southeast-6",
"ap-southeast-7",
"ca-central-1",
"ca-west-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
@@ -8088,6 +8096,7 @@
"il-central-1",
"me-central-1",
"me-south-1",
"mx-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -8279,22 +8288,31 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ap-southeast-5",
"ap-southeast-6",
"ap-southeast-7",
"ca-central-1",
"ca-west-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"mx-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -8315,6 +8333,7 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
@@ -8706,11 +8725,13 @@
"aws": [
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-2",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-west-1",
"eu-west-2",
"us-east-1",
@@ -9034,6 +9055,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-2"
@@ -9141,6 +9163,8 @@
"ap-southeast-2",
"eu-central-1",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
@@ -9171,9 +9195,7 @@
"us-east-2",
"us-west-2"
],
"aws-cn": [
"cn-north-1"
],
"aws-cn": [],
"aws-eusc": [],
"aws-us-gov": []
}
@@ -10012,7 +10034,10 @@
],
"aws-cn": [],
"aws-eusc": [],
"aws-us-gov": []
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
},
"resource-groups": {
@@ -11647,26 +11672,6 @@
]
}
},
"simspaceweaver": {
"regions": {
"aws": [
"ap-southeast-1",
"ap-southeast-2",
"eu-central-1",
"eu-north-1",
"eu-west-1",
"us-east-1",
"us-east-2",
"us-west-2"
],
"aws-cn": [],
"aws-eusc": [],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
},
"sms": {
"regions": {
"aws": [
@@ -13414,6 +13419,7 @@
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-5",
"ca-central-1",
"eu-central-1",
"eu-west-1",
@@ -13422,6 +13428,7 @@
"il-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-2"
],
"aws-cn": [
+16
@@ -70,3 +70,19 @@ def validate_asff_usage(
False,
f"json-asff output format is only available for the aws provider, but {provider} was selected",
)
def validate_sarif_usage(
provider: Optional[str], output_formats: Optional[Sequence[str]]
) -> tuple[bool, str]:
"""Ensure sarif output is only requested for the IaC provider."""
if not output_formats or "sarif" not in output_formats:
return (True, "")
if provider == "iac":
return (True, "")
return (
False,
f"sarif output format is only available for the iac provider, but {provider} was selected",
)
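A hedged illustration of the validator's behaviour, based only on the code above:

ok, msg = validate_sarif_usage("iac", ["sarif", "csv"])  # (True, "")
ok, msg = validate_sarif_usage("aws", ["sarif"])         # (False, "sarif output format is only available for the iac provider, ...")
ok, msg = validate_sarif_usage("aws", ["csv"])           # (True, "") -- sarif not requested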
+5
@@ -282,6 +282,11 @@ class Provider(ABC):
repositories=repos,
repo_list_file=getattr(arguments, "repo_list_file", None),
organizations=orgs,
github_actions_enabled=not getattr(
arguments, "no_github_actions", False
),
exclude_workflows=getattr(arguments, "exclude_workflows", []),
fixer_config=fixer_config,
)
elif "googleworkspace" in provider_class_name.lower():
provider_class(
@@ -119,6 +119,9 @@ class GithubProvider(Provider):
repositories: list = None,
repo_list_file: str = None,
organizations: list = None,
# GitHub Actions scanning
github_actions_enabled: bool = True,
exclude_workflows: list = None,
):
"""
GitHub Provider constructor
@@ -210,8 +213,20 @@ class GithubProvider(Provider):
self._mutelist = GithubMutelist(
mutelist_path=mutelist_path,
)
# GitHub Actions scanning configuration
self._github_actions_enabled = github_actions_enabled
self._exclude_workflows = exclude_workflows or []
Provider.set_global_provider(self)
@property
def github_actions_enabled(self) -> bool:
return self._github_actions_enabled
@property
def exclude_workflows(self) -> list:
return self._exclude_workflows
@property
def auth_method(self):
"""Returns the authentication method for the GitHub provider."""
@@ -64,3 +64,20 @@ def init_parser(self):
default=None,
metavar="ORGANIZATION",
)
github_actions_subparser = github_parser.add_argument_group(
"GitHub Actions Scanning"
)
github_actions_subparser.add_argument(
"--no-github-actions",
action="store_true",
default=False,
help="Disable GitHub Actions workflow security scanning",
)
github_actions_subparser.add_argument(
"--exclude-workflows",
nargs="+",
default=[],
help="Workflow files or glob patterns to exclude from GitHub Actions scanning",
metavar="PATTERN",
)
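Together with the provider changes above, these options let an invocation such as `prowler github --exclude-workflows "deploy-*.yml"` (the glob is a hypothetical example) skip selected workflow files, while `prowler github --no-github-actions` disables GitHub Actions workflow security scanning entirely.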

Some files were not shown because too many files have changed in this diff.