Compare commits

..

42 Commits

Author SHA1 Message Date
Prowler Bot fd2ec5e07d chore(deps): bump pyasn1 from 0.6.2 to 0.6.3 (#10838)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: pedrooot <pedromarting3@gmail.com>
Co-authored-by: Hugo P. Brito <hugopbrit@gmail.com>
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2026-04-22 14:18:11 +01:00
Prowler Bot 0433c4ad64 fix(api): merge Attack Paths findings on short UIDs for AWS resources (#10841)
Co-authored-by: Josema Camacho <josema@prowler.com>
2026-04-22 12:35:48 +02:00
Prowler Bot 6d88a402c9 fix(aws): disallow me-south-1 & me-central-1 to avoid stuck scans (#10840)
Co-authored-by: Pedro Martín <pedromarting3@gmail.com>
2026-04-22 11:40:04 +02:00
Prowler Bot dfadf58e50 chore(deps): bump pygments from 2.19.2 to 2.20.0 in /api (#10836)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-04-22 11:20:10 +02:00
Prowler Bot 141bc6c30f fix(api): reaggregate overview summaries after muting findings (#10835)
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-04-22 10:59:26 +02:00
Prowler Bot 053e7b7d73 fix(aws): fallback lookup events to resource name (#10830)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-21 18:35:17 +02:00
Prowler Bot 760ccdbffe fix(api): treat muted findings as resolved in finding-groups status (#10826)
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-04-21 17:46:57 +02:00
Prowler Bot e61d5f2cdb chore(release): Bump version to v5.24.3 (#10820)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-21 16:24:43 +02:00
Prowler Bot fa9a3e1039 docs: Update version to v5.24.2 (#10822)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-21 16:23:24 +02:00
Prowler Bot 05441a1676 chore(ui): Bump version to v5.24.3 (#10821)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-21 16:23:17 +02:00
Prowler Bot 22ec11c9a1 chore(api): Bump version to v1.25.3 (#10823)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-21 16:23:04 +02:00
Prowler Bot 322a500352 fix(ui): centralize default muted findings filter on finding groups (#10819)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-21 14:33:42 +02:00
Prowler Bot ea09ff8902 perf(api): speed up finding-groups /resources endpoint (#10817)
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-04-21 13:37:52 +02:00
Prowler Bot 24ce8d268b fix(changelog): relocate entries for the SDK (#10813)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-21 08:20:47 +02:00
Prowler Bot 0eb7b34207 chore(deps): bump pyasn1 from 0.6.2 to 0.6.3 in /api (#10805)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-04-20 17:58:18 +02:00
Prowler Bot f6b9d8611c fix(api): align latest_resources scan selection with completed_at (#10804)
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-04-20 17:35:40 +02:00
Prowler Bot 28175170ce chore(api): Bump version to v1.25.2 (#10796)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-20 15:41:52 +02:00
Prowler Bot f5cb033f91 chore(release): Bump version to v5.24.2 (#10793)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-20 15:41:20 +02:00
Prowler Bot 558e292a2a docs: Update version to v5.24.1 (#10795)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-20 15:40:52 +02:00
Prowler Bot a4938897ac chore(ui): Bump version to v5.24.2 (#10794)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-20 15:40:15 +02:00
Prowler Bot 2cb8179477 chore: review changelog for v5.24.1 (#10792)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-20 14:10:04 +02:00
Prowler Bot c9bbe7033b fix(ui): sorting and filtering for findings (#10790)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2026-04-20 13:46:36 +02:00
Prowler Bot 76ecb30968 fix(api): detect silent failures in ResourceFindingMapping (#10781)
Co-authored-by: Pedro Martín <pedromarting3@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-20 09:15:49 +02:00
Prowler Bot 84a60fe06b fix(ui): correct IaC findings counters (#10773)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2026-04-17 13:55:17 +02:00
Prowler Bot f71743b95b fix(cloudflare): guard validate_credentials against paginator infinite loops (#10772)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2026-04-17 11:38:12 +02:00
Prowler Bot 68dcc5a75c fix(ui): exclude muted findings and polish filter selectors (#10770)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
2026-04-17 11:16:41 +02:00
Prowler Bot 407ae24f04 perf(attack-paths): cleanup task prioritization, restore default batch sizes to 1000, upgrade Cartography to 0.135.0 (#10768)
Co-authored-by: Josema Camacho <josema@prowler.com>
2026-04-17 11:01:19 +02:00
Prowler Bot 17c4a286af chore(deps): bump msgraph-sdk to 1.55.0 and azure-mgmt-resource to 24.0.0, remove marshmallow (#10766)
Co-authored-by: Josema Camacho <josema@prowler.com>
2026-04-17 10:22:17 +02:00
Prowler Bot 69ee2cdcef fix(googleworkspace): treat secure Google defaults as PASS for Drive checks (#10765)
Co-authored-by: lydiavilchez <114735608+lydiavilchez@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-17 09:12:57 +02:00
Prowler Bot 3544ff5e75 fix: CHANGELOG minor issue (#10759)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
2026-04-16 17:10:44 +02:00
Prowler Bot 69287dc3a1 fix(api): exclude muted findings from pass_count, fail_count and manual_count (#10755)
2026-04-16 16:16:25 +02:00
Prowler Bot cf5848d11d fix(ui): upgrade React 19.2.5 and Next.js 16.2.3 to mitigate CVE-2026-23869 (#10754)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
2026-04-16 15:39:30 +02:00
Prowler Bot 8ead3fa6bb fix(api): add fallback handling for missing resources in findings (#10751)
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-04-16 14:54:27 +02:00
Prowler Bot 21483cc12f fix(googleworkspace): treat secure Google defaults as PASS for Calendar checks (#10735)
Co-authored-by: lydiavilchez <114735608+lydiavilchez@users.noreply.github.com>
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2026-04-16 13:36:14 +02:00
Prowler Bot 628de4bd06 fix(image): --registry-list crashes with AttributeError on global_provider (#10730)
Co-authored-by: Erich Blume <725328+eblume@users.noreply.github.com>
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2026-04-16 13:31:08 +02:00
Prowler Bot 043f1ef138 fix(sdk): allow account-scoped tokens in Cloudflare connection test (#10731)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2026-04-16 13:25:09 +02:00
Prowler Bot a120da9409 fix(db): add missing tenant_id filter in queries (#10725)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-16 12:11:28 +02:00
Prowler Bot d5b71c6436 chore(ui): Bump version to v5.24.1 (#10713)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-15 20:14:37 +02:00
Prowler Bot 9114d09ba5 docs: Update version to v5.24.0 (#10716)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-15 20:14:27 +02:00
Prowler Bot d2b1224a30 chore(release): Bump version to v5.24.1 (#10712)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-15 20:13:54 +02:00
Prowler Bot 54b54e25e2 chore(api): Bump version to v1.25.1 (#10717)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-15 20:13:43 +02:00
Prowler Bot 1b45724ca8 chore(api): Update prowler dependency to v5.24 for release 5.24.0 (#10709)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-15 18:57:37 +02:00
1236 changed files with 11379 additions and 81634 deletions
-23
View File
@@ -1,23 +0,0 @@
# Prowler worktree automation for worktrunk (wt CLI).
# Runs automatically on `wt switch --create`.
# Block 1: setup + copy gitignored env files (.envrc, ui/.env.local)
# from the primary worktree — patterns selected via .worktreeinclude.
[[pre-start]]
skills = "./skills/setup.sh --claude"
python = "poetry env use python3.12"
envs = "wt step copy-ignored"
# Block 2: install Python deps (requires `poetry env use` from block 1).
[[pre-start]]
deps = "poetry install --with dev"
# Block 3: reminder — last visible output before `wt switch` returns.
# Hooks can't mutate the parent shell, so venv activation is manual.
[[pre-start]]
reminder = "echo '>> Reminder: activate the venv in this shell with: eval $(poetry env activate)'"
# Background: pnpm install runs while you start working.
# Tail logs via `wt config state logs`.
[post-start]
ui = "cd ui && pnpm install"
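For orientation, a minimal sketch of the developer flow this removed config used to automate, using only the wt commands referenced in its comments (the branch name is illustrative):

wt switch --create my-feature        # runs the [[pre-start]] blocks, then [post-start] in the background
eval $(poetry env activate)          # manual step: hooks can't mutate the parent shell
wt config state logs                 # tail the background `pnpm install` output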
+1 -1
View File
@@ -145,7 +145,7 @@ SENTRY_RELEASE=local
NEXT_PUBLIC_SENTRY_ENVIRONMENT=${SENTRY_ENVIRONMENT}
#### Prowler release version ####
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.27.0
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.24.3
# Social login credentials
SOCIAL_GOOGLE_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/google"
+11 -12
View File
@@ -1,15 +1,14 @@
# SDK
/* @prowler-cloud/detection-remediation
/prowler/ @prowler-cloud/detection-remediation
/prowler/compliance/ @prowler-cloud/compliance
/tests/ @prowler-cloud/detection-remediation
/dashboard/ @prowler-cloud/detection-remediation
/docs/ @prowler-cloud/detection-remediation
/examples/ @prowler-cloud/detection-remediation
/util/ @prowler-cloud/detection-remediation
/contrib/ @prowler-cloud/detection-remediation
/permissions/ @prowler-cloud/detection-remediation
/codecov.yml @prowler-cloud/detection-remediation @prowler-cloud/api
/* @prowler-cloud/sdk
/prowler/ @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
/tests/ @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
/dashboard/ @prowler-cloud/sdk
/docs/ @prowler-cloud/sdk
/examples/ @prowler-cloud/sdk
/util/ @prowler-cloud/sdk
/contrib/ @prowler-cloud/sdk
/permissions/ @prowler-cloud/sdk
/codecov.yml @prowler-cloud/sdk @prowler-cloud/api
# API
/api/ @prowler-cloud/api
@@ -18,7 +17,7 @@
/ui/ @prowler-cloud/ui
# AI
/mcp_server/ @prowler-cloud/detection-remediation
/mcp_server/ @prowler-cloud/ai
# Platform
/.github/ @prowler-cloud/platform
-15
View File
@@ -1,15 +0,0 @@
# These are supported funding model platforms
github: [prowler-cloud]
# patreon: # Replace with a single Patreon username
# open_collective: # Replace with a single Open Collective username
# ko_fi: # Replace with a single Ko-fi username
# tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
# community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
# liberapay: # Replace with a single Liberapay username
# issuehunt: # Replace with a single IssueHunt username
# lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
# polar: # Replace with a single Polar username
# buy_me_a_coffee: # Replace with a single Buy Me a Coffee username
# thanks_dev: # Replace with a single thanks.dev username
# custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
@@ -1,143 +0,0 @@
name: "🔎 New Check Request"
description: Request a new Prowler security check
title: "[New Check]: "
labels: ["feature-request", "status/needs-triage"]
body:
- type: checkboxes
id: search
attributes:
label: Existing check search
description: Confirm this check does not already exist before opening a new request.
options:
- label: I have searched existing issues, Prowler Hub, and the public roadmap, and this check does not already exist.
required: true
- type: markdown
attributes:
value: |
Use this form to describe the security condition that Prowler should evaluate.
The most useful inputs for [Prowler Studio](https://github.com/prowler-cloud/prowler-studio) are:
- What should be detected
- What PASS and FAIL mean
- Vendor docs, API references, SDK methods, CLI commands, or reference code
- type: dropdown
id: provider
attributes:
label: Provider
description: Cloud or platform this check targets.
options:
- AWS
- Azure
- GCP
- Kubernetes
- GitHub
- Microsoft 365
- OCI
- Alibaba Cloud
- Cloudflare
- MongoDB Atlas
- Google Workspace
- OpenStack
- Vercel
- NHN
- Other / New provider
validations:
required: true
- type: input
id: other_provider_name
attributes:
label: New provider name
description: Only fill this if you selected "Other / New provider" above.
placeholder: "NewProviderName"
validations:
required: false
- type: input
id: service_name
attributes:
label: Service or product area
description: Optional. Main service, product, or feature to audit.
placeholder: "s3, bedrock, entra, repository, apiserver"
validations:
required: false
- type: input
id: suggested_check_name
attributes:
label: Suggested check name
description: Optional. Use `snake_case` following `<service>_<resource>_<best_practice>`, with lowercase letters and underscores only.
placeholder: "bedrock_guardrail_sensitive_information_filter_enabled"
validations:
required: false
- type: textarea
id: context
attributes:
label: Context and goal
description: Describe the security problem, why it matters, and what this new check should help detect.
placeholder: |-
- Security condition to validate:
- Why it matters:
- Resource, feature, or configuration involved:
validations:
required: true
- type: textarea
id: expected_behavior
attributes:
label: Expected behavior
description: Explain what the check should evaluate and what PASS, FAIL, or MANUAL should mean.
placeholder: |-
- Resource or scope to evaluate:
- PASS when:
- FAIL when:
- MANUAL when (if applicable):
- Exclusions, thresholds, or edge cases:
validations:
required: true
- type: textarea
id: references
attributes:
label: References
description: Add vendor docs, API references, SDK methods, CLI commands, endpoint docs, sample payloads, or similar reference material.
placeholder: |-
- Product or service documentation:
- API or SDK reference:
- CLI command or endpoint documentation:
- Sample payload or response:
- Security advisory or benchmark:
validations:
required: true
- type: dropdown
id: severity
attributes:
label: Suggested severity
description: Your best estimate. Reviewers will confirm during triage.
options:
- Critical
- High
- Medium
- Low
- Informational
- Not sure
validations:
required: true
- type: textarea
id: implementation_notes
attributes:
label: Additional implementation notes
description: Optional. Add permissions, unsupported regions, config knobs, product limitations, or anything else that may affect implementation.
placeholder: |-
- Required permissions or scopes:
- Region, tenant, or subscription limitations:
- Configurable behavior or thresholds:
- Other constraints:
validations:
required: false
+15 -8
View File
@@ -22,10 +22,6 @@ inputs:
description: 'Run `poetry lock` during setup. Only enable when a prior step mutates pyproject.toml (e.g. API `@master` VCS rewrite). Default: false.'
required: false
default: 'false'
enable-cache:
description: 'Whether to enable Poetry dependency caching via actions/setup-python'
required: false
default: 'true'
runs:
using: 'composite'
@@ -68,6 +64,19 @@ runs:
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock
- name: Update SDK resolved_reference to latest commit (prowler repo on push)
if: github.event_name == 'push' && github.ref == 'refs/heads/master' && github.repository == 'prowler-cloud/prowler'
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
LATEST_COMMIT=$(curl -s "https://api.github.com/repos/prowler-cloud/prowler/commits/master" | jq -r '.sha')
echo "Latest commit hash: $LATEST_COMMIT"
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$LATEST_COMMIT"'"/
}' poetry.lock
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock
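As a standalone illustration of the sed range in the step above, here is a minimal sketch against an assumed poetry.lock stanza (stanza shape and SHAs are illustrative, not copied from this repository):

cat > /tmp/poetry.lock <<'EOF'
[package.source]
type = "git"
url = "https://github.com/prowler-cloud/prowler.git"
reference = "master"
resolved_reference = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
EOF
LATEST_COMMIT="bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
  s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$LATEST_COMMIT"'"/
}' /tmp/poetry.lock
grep resolved_reference /tmp/poetry.lock   # -> resolved_reference = "bbbb...bbbb"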
- name: Update poetry.lock (prowler repo only)
if: github.repository == 'prowler-cloud/prowler' && inputs.update-lock == 'true'
shell: bash
@@ -78,10 +87,8 @@ runs:
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ inputs.python-version }}
# Disable cache when callers skip dependency install: Poetry 2.3.4 creates
# the venv in a path setup-python can't hash, breaking the post-step save-cache.
cache: ${{ inputs.enable-cache == 'true' && 'poetry' || '' }}
cache-dependency-path: ${{ inputs.enable-cache == 'true' && format('{0}/poetry.lock', inputs.working-directory) || '' }}
cache: 'poetry'
cache-dependency-path: ${{ inputs.working-directory }}/poetry.lock
- name: Install Python dependencies
if: inputs.install-dependencies == 'true'
-12
View File
@@ -66,18 +66,6 @@ updates:
cooldown:
default-days: 7
- package-ecosystem: "pre-commit"
directory: "/"
schedule:
interval: "monthly"
open-pull-requests-limit: 25
target-branch: master
labels:
- "dependencies"
- "pre-commit"
cooldown:
default-days: 7
# Dependabot Updates are temporary disabled - 2025/04/15
# v4.6
# - package-ecosystem: "pip"
@@ -158,7 +158,7 @@ jobs:
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }},scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
# Create and push multi-architecture manifest
create-manifest:
+17 -13
View File
@@ -5,16 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'api/**'
- '.github/workflows/api-container-checks.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'api/**'
- '.github/workflows/api-container-checks.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -63,7 +57,16 @@ jobs:
api-container-build-and-scan:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
arch: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
arch: arm64
timeout-minutes: 30
permissions:
contents: read
@@ -116,22 +119,23 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Build container
- name: Build container for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
with:
context: ${{ env.API_WORKING_DIR }}
push: false
load: true
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }}
platforms: ${{ matrix.platform }}
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
- name: Scan container with Trivy
- name: Scan container with Trivy for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: ./.github/actions/trivy-scan
with:
image-name: ${{ env.IMAGE_NAME }}
image-tag: ${{ github.sha }}
image-tag: ${{ github.sha }}-${{ matrix.arch }}
fail-on-critical: 'false'
severity: 'CRITICAL'
+4 -13
View File
@@ -5,20 +5,10 @@ on:
branches:
- "master"
- "v5.*"
paths:
- 'api/**'
- '.github/workflows/api-tests.yml'
- '.github/workflows/api-security.yml'
- '.github/actions/setup-python-poetry/**'
pull_request:
branches:
- "master"
- "v5.*"
paths:
- 'api/**'
- '.github/workflows/api-tests.yml'
- '.github/workflows/api-security.yml'
- '.github/actions/setup-python-poetry/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -70,7 +60,6 @@ jobs:
files: |
api/**
.github/workflows/api-security.yml
.safety-policy.yml
files_ignore: |
api/docs/**
api/README.md
@@ -91,8 +80,10 @@ jobs:
- name: Safety
if: steps.check-changes.outputs.any_changed == 'true'
# Accepted CVEs, severity threshold, and ignore expirations live in ../.safety-policy.yml
run: poetry run safety check --policy-file ../.safety-policy.yml
run: poetry run safety check --ignore 79023,79027,86217,71600
# TODO: 79023 & 79027 knack ReDoS until `azure-cli-core` (via `cartography`) allows `knack` >=0.13.0
# TODO: 86217 because `alibabacloud-tea-openapi == 0.4.3` don't let us upgrade `cryptography >= 46.0.0`
# TODO: 71600 CVE-2024-1135 false positive - fixed in gunicorn 22.0.0, project uses 23.0.0
- name: Vulture
if: steps.check-changes.outputs.any_changed == 'true'
-1
View File
@@ -35,7 +35,6 @@ jobs:
egress-policy: block
allowed-endpoints: >
api.github.com:443
github.com:443
- name: Check labels
id: label_check
@@ -4,6 +4,8 @@ on:
pull_request:
branches:
- 'master'
- 'v3'
- 'v4.*'
- 'v5.*'
types:
- 'opened'
@@ -43,11 +43,14 @@ jobs:
echo "Processing release tag: $RELEASE_TAG"
# Remove 'v' prefix if present (e.g., v3.2.0 -> 3.2.0)
VERSION_ONLY="${RELEASE_TAG#v}"
# Check if it's a minor version (X.Y.0)
if [[ "$VERSION_ONLY" =~ ^([0-9]+)\.([0-9]+)\.0$ ]]; then
echo "Release $RELEASE_TAG (version $VERSION_ONLY) is a minor version. Proceeding to create backport label."
# Extract X.Y from X.Y.0 (e.g., 5.6 from 5.6.0)
MAJOR="${BASH_REMATCH[1]}"
MINOR="${BASH_REMATCH[2]}"
TWO_DIGIT_VERSION="${MAJOR}.${MINOR}"
@@ -59,6 +62,7 @@ jobs:
echo "Label name: $LABEL_NAME"
echo "Label description: $LABEL_DESC"
# Check if label already exists
if gh label list --repo ${{ github.repository }} --limit 1000 | grep -q "^${LABEL_NAME}[[:space:]]"; then
echo "Label '$LABEL_NAME' already exists."
else
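A quick standalone check of the tag-parsing logic above (the tag value is illustrative):

RELEASE_TAG="v5.6.0"
VERSION_ONLY="${RELEASE_TAG#v}"
if [[ "$VERSION_ONLY" =~ ^([0-9]+)\.([0-9]+)\.0$ ]]; then
  echo "minor release, backport line ${BASH_REMATCH[1]}.${BASH_REMATCH[2]}"   # -> minor release, backport line 5.6
else
  echo "patch release, no backport label needed"
fi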
+225 -38
View File
@@ -12,12 +12,74 @@ concurrency:
env:
PROWLER_VERSION: ${{ github.event.release.tag_name }}
BASE_BRANCH: master
DOCS_FILE: docs/getting-started/installation/prowler-app.mdx
permissions: {}
jobs:
bump-version:
detect-release-type:
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
outputs:
is_minor: ${{ steps.detect.outputs.is_minor }}
is_patch: ${{ steps.detect.outputs.is_patch }}
major_version: ${{ steps.detect.outputs.major_version }}
minor_version: ${{ steps.detect.outputs.minor_version }}
patch_version: ${{ steps.detect.outputs.patch_version }}
current_docs_version: ${{ steps.get_docs_version.outputs.current_docs_version }}
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Get current documentation version
id: get_docs_version
run: |
CURRENT_DOCS_VERSION=$(grep -oP 'PROWLER_UI_VERSION="\K[^"]+' docs/getting-started/installation/prowler-app.mdx)
echo "current_docs_version=${CURRENT_DOCS_VERSION}" >> "${GITHUB_OUTPUT}"
echo "Current documentation version: $CURRENT_DOCS_VERSION"
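As a standalone sketch of the grep -oP '\K' extraction above and the matching sed rewrite used later in this workflow, against an assumed one-line prowler-app.mdx (path and versions are illustrative):

printf 'PROWLER_UI_VERSION="5.24.2"\n' > /tmp/prowler-app.mdx
CURRENT_DOCS_VERSION=$(grep -oP 'PROWLER_UI_VERSION="\K[^"]+' /tmp/prowler-app.mdx)
echo "$CURRENT_DOCS_VERSION"                               # -> 5.24.2
PROWLER_VERSION="5.24.3"
sed -i "s|PROWLER_UI_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_UI_VERSION=\"${PROWLER_VERSION}\"|" /tmp/prowler-app.mdx
cat /tmp/prowler-app.mdx                                   # -> PROWLER_UI_VERSION="5.24.3"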
- name: Detect release type and parse version
id: detect
run: |
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
MAJOR_VERSION=${BASH_REMATCH[1]}
MINOR_VERSION=${BASH_REMATCH[2]}
PATCH_VERSION=${BASH_REMATCH[3]}
echo "major_version=${MAJOR_VERSION}" >> "${GITHUB_OUTPUT}"
echo "minor_version=${MINOR_VERSION}" >> "${GITHUB_OUTPUT}"
echo "patch_version=${PATCH_VERSION}" >> "${GITHUB_OUTPUT}"
if (( MAJOR_VERSION != 5 )); then
echo "::error::Releasing another Prowler major version, aborting..."
exit 1
fi
if (( PATCH_VERSION == 0 )); then
echo "is_minor=true" >> "${GITHUB_OUTPUT}"
echo "is_patch=false" >> "${GITHUB_OUTPUT}"
echo "✓ Minor release detected: $PROWLER_VERSION"
else
echo "is_minor=false" >> "${GITHUB_OUTPUT}"
echo "is_patch=true" >> "${GITHUB_OUTPUT}"
echo "✓ Patch release detected: $PROWLER_VERSION"
fi
else
echo "::error::Invalid version syntax: '$PROWLER_VERSION' (must be X.Y.Z)"
exit 1
fi
bump-minor-version:
needs: detect-release-type
if: needs.detect-release-type.outputs.is_minor == 'true'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
@@ -29,60 +91,185 @@ jobs:
with:
egress-policy: audit
- name: Validate release version
run: |
if [[ ! $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
echo "::error::Invalid version syntax: '$PROWLER_VERSION' (must be X.Y.Z)"
exit 1
fi
if (( ${BASH_REMATCH[1]} != 5 )); then
echo "::error::Releasing another Prowler major version, aborting..."
exit 1
fi
- name: Checkout master branch
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ env.BASE_BRANCH }}
persist-credentials: false
- name: Read current docs version on master
id: docs_version
- name: Calculate next minor version
run: |
CURRENT_DOCS_VERSION=$(grep -oP 'PROWLER_UI_VERSION="\K[^"]+' "${DOCS_FILE}")
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
CURRENT_DOCS_VERSION="${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION}"
NEXT_MINOR_VERSION=${MAJOR_VERSION}.$((MINOR_VERSION + 1)).0
echo "CURRENT_DOCS_VERSION=${CURRENT_DOCS_VERSION}" >> "${GITHUB_ENV}"
echo "Current docs version on master: $CURRENT_DOCS_VERSION"
echo "Target release version: $PROWLER_VERSION"
echo "NEXT_MINOR_VERSION=${NEXT_MINOR_VERSION}" >> "${GITHUB_ENV}"
# Skip if master is already at or ahead of the release version
# (re-run, or patch shipped against an older minor line)
HIGHEST=$(printf '%s\n%s\n' "${CURRENT_DOCS_VERSION}" "${PROWLER_VERSION}" | sort -V | tail -n1)
if [[ "${CURRENT_DOCS_VERSION}" == "${PROWLER_VERSION}" || "${HIGHEST}" != "${PROWLER_VERSION}" ]]; then
echo "skip=true" >> "${GITHUB_OUTPUT}"
echo "Skipping bump: current ($CURRENT_DOCS_VERSION) >= release ($PROWLER_VERSION)"
else
echo "skip=false" >> "${GITHUB_OUTPUT}"
fi
echo "Current documentation version: $CURRENT_DOCS_VERSION"
echo "Current release version: $PROWLER_VERSION"
echo "Next minor version: $NEXT_MINOR_VERSION"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION: ${{ needs.detect-release-type.outputs.current_docs_version }}
- name: Bump versions in documentation
if: steps.docs_version.outputs.skip == 'false'
- name: Bump versions in documentation for master
run: |
set -e
sed -i "s|PROWLER_UI_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_UI_VERSION=\"${PROWLER_VERSION}\"|" "${DOCS_FILE}"
sed -i "s|PROWLER_API_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_API_VERSION=\"${PROWLER_VERSION}\"|" "${DOCS_FILE}"
# Update prowler-app.mdx with current release version
sed -i "s|PROWLER_UI_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_UI_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
sed -i "s|PROWLER_API_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_API_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
echo "Files modified:"
git --no-pager diff
- name: Create PR for documentation update to master
if: steps.docs_version.outputs.skip == 'false'
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8.1.0
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.BASE_BRANCH }}
commit-message: 'chore(docs): Bump version to v${{ env.PROWLER_VERSION }}'
branch: docs-version-bump-to-v${{ env.PROWLER_VERSION }}
title: 'chore(docs): Bump version to v${{ env.PROWLER_VERSION }}'
base: master
commit-message: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
branch: docs-version-update-to-v${{ env.PROWLER_VERSION }}
title: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Update Prowler documentation version references to v${{ env.PROWLER_VERSION }} after releasing Prowler v${{ env.PROWLER_VERSION }}.
### Files Updated
- `docs/getting-started/installation/prowler-app.mdx`: `PROWLER_UI_VERSION` and `PROWLER_API_VERSION`
- All `*.mdx` files with `<VersionBadge>` components
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
- name: Checkout version branch
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: v${{ needs.detect-release-type.outputs.major_version }}.${{ needs.detect-release-type.outputs.minor_version }}
persist-credentials: false
- name: Calculate first patch version
run: |
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
CURRENT_DOCS_VERSION="${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION}"
FIRST_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.1
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
echo "CURRENT_DOCS_VERSION=${CURRENT_DOCS_VERSION}" >> "${GITHUB_ENV}"
echo "FIRST_PATCH_VERSION=${FIRST_PATCH_VERSION}" >> "${GITHUB_ENV}"
echo "VERSION_BRANCH=${VERSION_BRANCH}" >> "${GITHUB_ENV}"
echo "First patch version: $FIRST_PATCH_VERSION"
echo "Version branch: $VERSION_BRANCH"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION: ${{ needs.detect-release-type.outputs.current_docs_version }}
- name: Bump versions in documentation for version branch
run: |
set -e
# Update prowler-app.mdx with current release version
sed -i "s|PROWLER_UI_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_UI_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
sed -i "s|PROWLER_API_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_API_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
echo "Files modified:"
git --no-pager diff
- name: Create PR for documentation update to version branch
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8.1.0
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
branch: docs-version-update-to-v${{ env.PROWLER_VERSION }}-branch
title: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Update Prowler documentation version references to v${{ env.PROWLER_VERSION }} in version branch after releasing Prowler v${{ env.PROWLER_VERSION }}.
### Files Updated
- `docs/getting-started/installation/prowler-app.mdx`: `PROWLER_UI_VERSION` and `PROWLER_API_VERSION`
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
bump-patch-version:
needs: detect-release-type
if: needs.detect-release-type.outputs.is_patch == 'true'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Calculate next patch version
run: |
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
PATCH_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_PATCH_VERSION}
CURRENT_DOCS_VERSION="${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION}"
NEXT_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.$((PATCH_VERSION + 1))
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
echo "CURRENT_DOCS_VERSION=${CURRENT_DOCS_VERSION}" >> "${GITHUB_ENV}"
echo "NEXT_PATCH_VERSION=${NEXT_PATCH_VERSION}" >> "${GITHUB_ENV}"
echo "VERSION_BRANCH=${VERSION_BRANCH}" >> "${GITHUB_ENV}"
echo "Current documentation version: $CURRENT_DOCS_VERSION"
echo "Current release version: $PROWLER_VERSION"
echo "Next patch version: $NEXT_PATCH_VERSION"
echo "Target branch: $VERSION_BRANCH"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_PATCH_VERSION: ${{ needs.detect-release-type.outputs.patch_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION: ${{ needs.detect-release-type.outputs.current_docs_version }}
- name: Bump versions in documentation for patch version
run: |
set -e
# Update prowler-app.mdx with current release version
sed -i "s|PROWLER_UI_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_UI_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
sed -i "s|PROWLER_API_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_API_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
echo "Files modified:"
git --no-pager diff
- name: Create PR for documentation update to version branch
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8.1.0
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
branch: docs-version-update-to-v${{ env.PROWLER_VERSION }}
title: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
+8 -12
View File
@@ -27,23 +27,19 @@ jobs:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
with:
# We can't block as Trufflehog needs to verify secrets against vendors
egress-policy: audit
# allowed-endpoints: >
# github.com:443
# ghcr.io:443
# pkg-containers.githubusercontent.com:443
egress-policy: block
allowed-endpoints: >
github.com:443
ghcr.io:443
pkg-containers.githubusercontent.com:443
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
# PRs only need the diff range; push to master/release walks the new range from event.before.
# 50 is enough headroom for the longest realistic PR/push chain without paying for a full clone.
fetch-depth: 50
fetch-depth: 0
persist-credentials: false
- name: Scan diff for secrets with TruffleHog
# Action auto-injects --since-commit/--branch from event payload; passing them in extra_args produces duplicate flags.
- name: Scan for secrets with TruffleHog
uses: trufflesecurity/trufflehog@ef6e76c3c4023279497fab4721ffa071a722fd05 # v3.92.4
with:
extra_args: --results=verified,unknown
extra_args: '--results=verified,unknown'
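For local reproduction, a rough equivalent of the scan this step runs (the invocation shape is an assumption pieced together from the flags mentioned above, not taken from this workflow):

trufflehog git file://. --results=verified,unknown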
+1 -1
View File
@@ -62,7 +62,7 @@ jobs:
"Alan-TheGentleman"
"alejandrobailo"
"amitsharm"
# "andoniaf"
"andoniaf"
"cesararroba"
"danibarranqueroo"
"HugoPBrito"
@@ -152,7 +152,7 @@ jobs:
org.opencontainers.image.created=${{ github.event_name == 'release' && github.event.release.published_at || github.event.head_commit.timestamp }}
${{ github.event_name == 'release' && format('org.opencontainers.image.version={0}', env.RELEASE_TAG) || '' }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }},scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
# Create and push multi-architecture manifest
create-manifest:
+17 -13
View File
@@ -5,16 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'mcp_server/**'
- '.github/workflows/mcp-container-checks.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'mcp_server/**'
- '.github/workflows/mcp-container-checks.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -62,7 +56,16 @@ jobs:
mcp-container-build-and-scan:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
arch: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
arch: arm64
timeout-minutes: 30
permissions:
contents: read
@@ -109,22 +112,23 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Build MCP container
- name: Build MCP container for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
with:
context: ${{ env.MCP_WORKING_DIR }}
push: false
load: true
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }}
platforms: ${{ matrix.platform }}
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
- name: Scan MCP container with Trivy
- name: Scan MCP container with Trivy for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: ./.github/actions/trivy-scan
with:
image-name: ${{ env.IMAGE_NAME }}
image-tag: ${{ github.sha }}
image-tag: ${{ github.sha }}-${{ matrix.arch }}
fail-on-critical: 'false'
severity: 'CRITICAL'
-21
View File
@@ -86,32 +86,11 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
# The MCP server version (mcp_server/pyproject.toml) is decoupled from the Prowler release
# version: it only changes when MCP code changes. mcp-bump-version.yml normally keeps it in
# sync with mcp_server/CHANGELOG.md, but this publish workflow still runs on every release.
# Pre-flight PyPI check covers the legitimate "no MCP changes for this release" case (and any
# workflow_dispatch re-runs) without failing with HTTP 400 (version exists).
- name: Check if prowler-mcp version already exists on PyPI
id: pypi-check
working-directory: ${{ env.WORKING_DIRECTORY }}
run: |
MCP_VERSION=$(grep '^version' pyproject.toml | head -1 | sed -E 's/^version[[:space:]]*=[[:space:]]*"([^"]+)".*/\1/')
echo "mcp_version=${MCP_VERSION}" >> "$GITHUB_OUTPUT"
if curl -fsS "https://pypi.org/pypi/prowler-mcp/${MCP_VERSION}/json" >/dev/null 2>&1; then
echo "skip=true" >> "$GITHUB_OUTPUT"
echo "::notice title=Skipping prowler-mcp publish::Version ${MCP_VERSION} already exists on PyPI; bump mcp_server/pyproject.toml to publish a new release."
else
echo "skip=false" >> "$GITHUB_OUTPUT"
echo "::notice title=Publishing prowler-mcp::Version ${MCP_VERSION} not on PyPI yet; proceeding."
fi
- name: Build prowler-mcp package
if: steps.pypi-check.outputs.skip != 'true'
working-directory: ${{ env.WORKING_DIRECTORY }}
run: uv build
- name: Publish prowler-mcp package to PyPI
if: steps.pypi-check.outputs.skip != 'true'
uses: pypa/gh-action-pypi-publish@ed0c53931b1dc9bd32cbe73a98c7f6766f8a527e # v1.13.0
with:
packages-dir: ${{ env.WORKING_DIRECTORY }}/dist/
@@ -1,98 +0,0 @@
name: 'Nightly: ARM64 Container Builds'
# Mitigation for amd64-only PR container-checks: build amd64+arm64 nightly against
# master to keep arm-specific Dockerfile regressions caught quickly. Build only —
# no push, no Trivy (weekly checks already cover that).
on:
schedule:
- cron: '0 4 * * *'
workflow_dispatch: {}
concurrency:
group: ${{ github.workflow }}
cancel-in-progress: false
permissions: {}
jobs:
build-arm64:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-24.04-arm
timeout-minutes: 60
permissions:
contents: read
strategy:
fail-fast: false
matrix:
include:
- component: sdk
context: .
dockerfile: ./Dockerfile
image_name: prowler
- component: api
context: ./api
dockerfile: ./api/Dockerfile
image_name: prowler-api
- component: ui
context: ./ui
dockerfile: ./ui/Dockerfile
image_name: prowler-ui
target: prod
build_args: |
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_51LwpXXXX
- component: mcp
context: ./mcp_server
dockerfile: ./mcp_server/Dockerfile
image_name: prowler-mcp
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Build ${{ matrix.component }} container (linux/arm64)
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
with:
context: ${{ matrix.context }}
file: ${{ matrix.dockerfile }}
target: ${{ matrix.target }}
push: false
load: false
platforms: linux/arm64
tags: ${{ matrix.image_name }}:nightly-arm64
build-args: ${{ matrix.build_args }}
cache-from: type=gha,scope=arm64
cache-to: type=gha,mode=min,scope=arm64
notify-failure:
needs: build-arm64
if: failure() && github.event_name == 'schedule'
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
with:
egress-policy: audit
- name: Notify Slack on failure
uses: slackapi/slack-github-action@91efab103c0de0a537f72a35f6b8cda0ee76bf0a # v2.1.1
with:
method: chat.postMessage
token: ${{ secrets.SLACK_BOT_TOKEN }}
payload: |
channel: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
text: ":rotating_light: Nightly arm64 container build failed for prowler — <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|view run>"
errors: true
+1 -6
View File
@@ -41,15 +41,10 @@ jobs:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
fetch-depth: 0
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Fetch PR base ref for tj-actions/changed-files
env:
BASE_REF: ${{ github.event.pull_request.base.ref }}
run: git fetch --depth=1 origin "${BASE_REF}"
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
@@ -20,13 +20,7 @@ permissions: {}
jobs:
check-compliance-mapping:
if: >-
github.event.pull_request.state == 'open' &&
contains(github.event.pull_request.labels.*.name, 'no-compliance-check') == false &&
(
(github.event.action != 'labeled' && github.event.action != 'unlabeled')
|| github.event.label.name == 'no-compliance-check'
)
if: contains(github.event.pull_request.labels.*.name, 'no-compliance-check') == false
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
@@ -45,15 +39,10 @@ jobs:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
fetch-depth: 0
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Fetch PR base ref for tj-actions/changed-files
env:
BASE_REF: ${{ github.event.pull_request.base.ref }}
run: git fetch --depth=1 origin "${BASE_REF}"
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
+2 -8
View File
@@ -36,14 +36,8 @@ jobs:
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 1
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Fetch PR base ref for tj-actions/changed-files
env:
BASE_REF: ${{ github.event.pull_request.base.ref }}
run: git fetch --depth=1 origin "${BASE_REF}"
fetch-depth: 0
persist-credentials: false
- name: Get changed files
id: changed-files
-1
View File
@@ -45,7 +45,6 @@ jobs:
with:
python-version: '3.12'
install-dependencies: 'false'
enable-cache: 'false'
- name: Configure Git
run: |
+9 -9
View File
@@ -113,9 +113,9 @@ jobs:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: master
commit-message: 'chore(sdk): Bump version to v${{ env.NEXT_MINOR_VERSION }}'
branch: sdk-version-bump-to-v${{ env.NEXT_MINOR_VERSION }}
title: 'chore(sdk): Bump version to v${{ env.NEXT_MINOR_VERSION }}'
commit-message: 'chore(release): Bump version to v${{ env.NEXT_MINOR_VERSION }}'
branch: version-bump-to-v${{ env.NEXT_MINOR_VERSION }}
title: 'chore(release): Bump version to v${{ env.NEXT_MINOR_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
@@ -165,9 +165,9 @@ jobs:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'chore(sdk): Bump version to v${{ env.FIRST_PATCH_VERSION }}'
branch: sdk-version-bump-to-v${{ env.FIRST_PATCH_VERSION }}
title: 'chore(sdk): Bump version to v${{ env.FIRST_PATCH_VERSION }}'
commit-message: 'chore(release): Bump version to v${{ env.FIRST_PATCH_VERSION }}'
branch: version-bump-to-v${{ env.FIRST_PATCH_VERSION }}
title: 'chore(release): Bump version to v${{ env.FIRST_PATCH_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
@@ -233,9 +233,9 @@ jobs:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'chore(sdk): Bump version to v${{ env.NEXT_PATCH_VERSION }}'
branch: sdk-version-bump-to-v${{ env.NEXT_PATCH_VERSION }}
title: 'chore(sdk): Bump version to v${{ env.NEXT_PATCH_VERSION }}'
commit-message: 'chore(release): Bump version to v${{ env.NEXT_PATCH_VERSION }}'
branch: version-bump-to-v${{ env.NEXT_PATCH_VERSION }}
title: 'chore(release): Bump version to v${{ env.NEXT_PATCH_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
@@ -5,9 +5,6 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'tests/providers/**/*_test.py'
- '.github/workflows/sdk-check-duplicate-test-names.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
+66 -9
View File
@@ -3,7 +3,9 @@ name: 'SDK: Container Build and Push'
on:
push:
branches:
- 'master'
- 'v3' # For v3-latest
- 'v4.6' # For v4-latest
- 'master' # For latest
paths-ignore:
- '.github/**'
- '!.github/workflows/sdk-container-build-push.yml'
@@ -54,6 +56,7 @@ jobs:
timeout-minutes: 5
outputs:
prowler_version: ${{ steps.get-prowler-version.outputs.prowler_version }}
prowler_version_major: ${{ steps.get-prowler-version.outputs.prowler_version_major }}
latest_tag: ${{ steps.get-prowler-version.outputs.latest_tag }}
stable_tag: ${{ steps.get-prowler-version.outputs.stable_tag }}
permissions:
@@ -78,7 +81,6 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
install-dependencies: 'false'
enable-cache: 'false'
- name: Inject poetry-bumpversion plugin
run: pipx inject poetry poetry-bumpversion
@@ -89,13 +91,32 @@ jobs:
PROWLER_VERSION="$(poetry version -s 2>/dev/null)"
echo "prowler_version=${PROWLER_VERSION}" >> "${GITHUB_OUTPUT}"
# Extract major version
PROWLER_VERSION_MAJOR="${PROWLER_VERSION%%.*}"
if [[ "${PROWLER_VERSION_MAJOR}" != "5" ]]; then
echo "::error::Unsupported Prowler major version: ${PROWLER_VERSION_MAJOR}"
exit 1
fi
echo "latest_tag=latest" >> "${GITHUB_OUTPUT}"
echo "stable_tag=stable" >> "${GITHUB_OUTPUT}"
echo "prowler_version_major=${PROWLER_VERSION_MAJOR}" >> "${GITHUB_OUTPUT}"
# Set version-specific tags
case ${PROWLER_VERSION_MAJOR} in
3)
echo "latest_tag=v3-latest" >> "${GITHUB_OUTPUT}"
echo "stable_tag=v3-stable" >> "${GITHUB_OUTPUT}"
echo "✓ Prowler v3 detected - tags: v3-latest, v3-stable"
;;
4)
echo "latest_tag=v4-latest" >> "${GITHUB_OUTPUT}"
echo "stable_tag=v4-stable" >> "${GITHUB_OUTPUT}"
echo "✓ Prowler v4 detected - tags: v4-latest, v4-stable"
;;
5)
echo "latest_tag=latest" >> "${GITHUB_OUTPUT}"
echo "stable_tag=stable" >> "${GITHUB_OUTPUT}"
echo "✓ Prowler v5 detected - tags: latest, stable"
;;
*)
echo "::error::Unsupported Prowler major version: ${PROWLER_VERSION_MAJOR}"
exit 1
;;
esac
notify-release-started:
if: github.repository == 'prowler-cloud/prowler' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
@@ -206,7 +227,7 @@ jobs:
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }},scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
# Create and push multi-architecture manifest
create-manifest:
@@ -364,3 +385,39 @@ jobs:
payload-file-path: "./.github/scripts/slack-messages/container-release-completed.json"
step-outcome: ${{ steps.outcome.outputs.outcome }}
update-ts: ${{ needs.notify-release-started.outputs.message-ts }}
dispatch-v3-deployment:
needs: [setup, container-build-push]
if: always() && needs.setup.outputs.prowler_version_major == '3' && needs.setup.result == 'success' && needs.container-build-push.result == 'success'
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
with:
egress-policy: audit
- name: Calculate short SHA
id: short-sha
run: echo "short_sha=${GITHUB_SHA::7}" >> $GITHUB_OUTPUT
- name: Dispatch v3 deployment (latest)
if: github.event_name == 'push'
uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}
event-type: dispatch
client-payload: '{"version":"v3-latest","tag":"${{ steps.short-sha.outputs.short_sha }}"}'
- name: Dispatch v3 deployment (release)
if: github.event_name == 'release'
uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}
event-type: dispatch
client-payload: '{"version":"release","tag":"${{ needs.setup.outputs.prowler_version }}"}'
+17 -19
View File
@@ -5,22 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'Dockerfile*'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/sdk-container-checks.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'Dockerfile*'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/sdk-container-checks.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -68,7 +56,16 @@ jobs:
sdk-container-build-and-scan:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
arch: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
arch: arm64
timeout-minutes: 30
permissions:
contents: read
@@ -135,22 +132,23 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Build SDK container
- name: Build SDK container for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
with:
context: .
push: false
load: true
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }}
platforms: ${{ matrix.platform }}
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
- name: Scan SDK container with Trivy
- name: Scan SDK container with Trivy for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: ./.github/actions/trivy-scan
with:
image-name: ${{ env.IMAGE_NAME }}
image-tag: ${{ github.sha }}
image-tag: ${{ github.sha }}-${{ matrix.arch }}
fail-on-critical: 'false'
severity: 'CRITICAL'
-2
View File
@@ -80,7 +80,6 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
install-dependencies: 'false'
enable-cache: 'false'
- name: Build Prowler package
run: poetry build
@@ -117,7 +116,6 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
install-dependencies: 'false'
enable-cache: 'false'
- name: Install toml package
run: pip install toml
+1 -18
View File
@@ -5,26 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'tests/**'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/sdk-tests.yml'
- '.github/workflows/sdk-security.yml'
- '.github/actions/setup-python-poetry/**'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'tests/**'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/sdk-tests.yml'
- '.github/workflows/sdk-security.yml'
- '.github/actions/setup-python-poetry/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -99,8 +83,7 @@ jobs:
- name: Security scan with Safety
if: steps.check-changes.outputs.any_changed == 'true'
# Accepted CVEs, severity threshold, and ignore expirations live in .safety-policy.yml
run: poetry run safety check -r pyproject.toml --policy-file .safety-policy.yml
run: poetry run safety check -r pyproject.toml
- name: Dead code detection with Vulture
if: steps.check-changes.outputs.any_changed == 'true'
+2 -2
View File
@@ -209,11 +209,11 @@ jobs:
echo "AWS service_paths='${STEPS_AWS_SERVICES_OUTPUTS_SERVICE_PATHS}'"
if [ "${STEPS_AWS_SERVICES_OUTPUTS_RUN_ALL}" = "true" ]; then
poetry run pytest -n auto --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml tests/providers/aws
poetry run pytest -p no:randomly -n auto --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml tests/providers/aws
elif [ -z "${STEPS_AWS_SERVICES_OUTPUTS_SERVICE_PATHS}" ]; then
echo "No AWS service paths detected; skipping AWS tests."
else
poetry run pytest -n auto --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml ${STEPS_AWS_SERVICES_OUTPUTS_SERVICE_PATHS}
poetry run pytest -p no:randomly -n auto --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml ${STEPS_AWS_SERVICES_OUTPUTS_SERVICE_PATHS}
fi
env:
STEPS_AWS_SERVICES_OUTPUTS_RUN_ALL: ${{ steps.aws-services.outputs.run_all }}
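The added -p no:randomly flag disables the pytest-randomly plugin so test collection order stays deterministic; a minimal local invocation under that assumption (the service path is illustrative):

poetry run pytest -p no:randomly -n auto --cov=./prowler/providers/aws tests/providers/aws/services/s3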
@@ -151,7 +151,7 @@ jobs:
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }},scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
# Create and push multi-architecture manifest
create-manifest:
+17 -13
View File
@@ -5,16 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'ui/**'
- '.github/workflows/ui-container-checks.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'ui/**'
- '.github/workflows/ui-container-checks.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -63,7 +57,16 @@ jobs:
ui-container-build-and-scan:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
arch: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
arch: arm64
timeout-minutes: 30
permissions:
contents: read
@@ -111,7 +114,7 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Build UI container
- name: Build UI container for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
with:
@@ -119,17 +122,18 @@ jobs:
target: prod
push: false
load: true
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=${{ github.event_name == 'pull_request' && 'min' || 'max' }}
platforms: ${{ matrix.platform }}
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
build-args: |
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_51LwpXXXX
- name: Scan UI container with Trivy
- name: Scan UI container with Trivy for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: ./.github/actions/trivy-scan
with:
image-name: ${{ env.IMAGE_NAME }}
image-tag: ${{ github.sha }}
image-tag: ${{ github.sha }}-${{ matrix.arch }}
fail-on-critical: 'false'
severity: 'CRITICAL'
+1 -5
@@ -15,10 +15,6 @@ on:
- 'ui/**'
- 'api/**' # API changes can affect UI E2E
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
@@ -270,7 +266,7 @@ jobs:
with:
name: playwright-report
path: ui/playwright-report/
retention-days: 7
retention-days: 30
- name: Cleanup services
if: always()
+9 -30
@@ -1,14 +1,14 @@
name: "UI: Tests"
name: 'UI: Tests'
on:
push:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
pull_request:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -16,7 +16,7 @@ concurrency:
env:
UI_WORKING_DIR: ./ui
NODE_VERSION: "24.13.0"
NODE_VERSION: '24.13.0'
permissions: {}
@@ -42,9 +42,6 @@ jobs:
fonts.gstatic.com:443
api.github.com:443
release-assets.githubusercontent.com:443
cdn.playwright.dev:443
objects.githubusercontent.com:443
playwright.download.prss.microsoft.com:443
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
@@ -136,7 +133,7 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true' && steps.critical-changes.outputs.any_changed == 'true'
run: |
echo "Critical paths changed - running ALL unit tests"
pnpm run test:unit
pnpm run test:run
- name: Run unit tests (related to changes only)
if: steps.check-changes.outputs.any_changed == 'true' && steps.critical-changes.outputs.any_changed != 'true' && steps.changed-source.outputs.all_changed_files != ''
@@ -145,7 +142,7 @@ jobs:
echo "${STEPS_CHANGED_SOURCE_OUTPUTS_ALL_CHANGED_FILES}"
# Convert space-separated to vitest related format (remove ui/ prefix for relative paths)
CHANGED_FILES=$(echo "${STEPS_CHANGED_SOURCE_OUTPUTS_ALL_CHANGED_FILES}" | tr ' ' '\n' | sed 's|^ui/||' | tr '\n' ' ')
pnpm exec vitest related $CHANGED_FILES --run --project unit
pnpm exec vitest related $CHANGED_FILES --run
env:
STEPS_CHANGED_SOURCE_OUTPUTS_ALL_CHANGED_FILES: ${{ steps.changed-source.outputs.all_changed_files }}
@@ -153,25 +150,7 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true' && steps.critical-changes.outputs.any_changed != 'true' && steps.changed-source.outputs.all_changed_files == ''
run: |
echo "Only test files changed - running ALL unit tests"
pnpm run test:unit
- name: Cache Playwright browsers
if: steps.check-changes.outputs.any_changed == 'true'
id: playwright-cache
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.cache/ms-playwright
key: ${{ runner.os }}-playwright-chromium-${{ hashFiles('ui/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-playwright-chromium-
- name: Install Playwright Chromium browser
if: steps.check-changes.outputs.any_changed == 'true' && steps.playwright-cache.outputs.cache-hit != 'true'
run: pnpm exec playwright install chromium
- name: Run browser tests
if: steps.check-changes.outputs.any_changed == 'true'
run: pnpm run test:browser
pnpm run test:run
- name: Build application
if: steps.check-changes.outputs.any_changed == 'true'
-1
@@ -8,7 +8,6 @@ rules:
- docs-bump-version.yml
- issue-triage.lock.yml
- mcp-container-build-push.yml
- nightly-arm64-container-builds.yml
- pr-merged.yml
- prepare-release.yml
- sdk-bump-version.yml
-2
@@ -151,8 +151,6 @@ node_modules
# Persistent data
_data/
/openspec/
/.gitmodules
# AI Instructions (generated by skills/setup.sh from AGENTS.md)
CLAUDE.md
+45 -95
@@ -1,199 +1,149 @@
# Priority tiers (lower = runs first, same priority = concurrent):
# P0 — fast file fixers
# P10 — validators and guards
# P20 — auto-formatters
# P30 — linters
# P40 — security scanners
# P50 — dependency validation
default_install_hook_types: [pre-commit]
repos:
## GENERAL (prek built-in — no external repo needed)
- repo: builtin
## GENERAL
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: check-merge-conflict
priority: 10
- id: check-yaml
args: ["--allow-multiple-documents"]
exclude: (prowler/config/llm_config.yaml|contrib/)
priority: 10
args: ["--unsafe"]
exclude: prowler/config/llm_config.yaml
- id: check-json
priority: 10
- id: end-of-file-fixer
priority: 0
- id: trailing-whitespace
priority: 0
- id: no-commit-to-branch
priority: 10
- id: pretty-format-json
args: ["--autofix", --no-sort-keys, --no-ensure-ascii]
priority: 10
## TOML
- repo: https://github.com/macisamuele/language-formatters-pre-commit-hooks
rev: v2.16.0
rev: v2.13.0
hooks:
- id: pretty-format-toml
args: [--autofix]
files: pyproject.toml
priority: 20
## GITHUB ACTIONS
- repo: https://github.com/zizmorcore/zizmor-pre-commit
rev: v1.24.1
rev: v1.6.0
hooks:
- id: zizmor
# zizmor only audits workflows, composite actions and dependabot
# config; broader paths trip exit 3 ("no audit was performed").
files: ^\.github/(workflows|actions)/.+\.ya?ml$|^\.github/dependabot\.ya?ml$
priority: 30
files: ^\.github/
## BASH
- repo: https://github.com/koalaman/shellcheck-precommit
rev: v0.11.0
rev: v0.10.0
hooks:
- id: shellcheck
exclude: contrib
priority: 30
## PYTHON — SDK (prowler/, tests/, dashboard/, util/, scripts/)
## PYTHON
- repo: https://github.com/myint/autoflake
rev: v2.3.3
rev: v2.3.1
hooks:
- id: autoflake
name: "SDK - autoflake"
files: { glob: ["{prowler,tests,dashboard,util,scripts}/**/*.py"] }
args: ["--in-place", "--remove-all-unused-imports", "--remove-unused-variable"]
priority: 20
exclude: ^skills/
args:
[
"--in-place",
"--remove-all-unused-imports",
"--remove-unused-variable",
]
- repo: https://github.com/pycqa/isort
rev: 8.0.1
rev: 5.13.2
hooks:
- id: isort
name: "SDK - isort"
files: { glob: ["{prowler,tests,dashboard,util,scripts}/**/*.py"] }
exclude: ^skills/
args: ["--profile", "black"]
priority: 20
- repo: https://github.com/psf/black
rev: 26.3.1
rev: 24.4.2
hooks:
- id: black
name: "SDK - black"
files: { glob: ["{prowler,tests,dashboard,util,scripts}/**/*.py"] }
priority: 20
exclude: ^skills/
- repo: https://github.com/pycqa/flake8
rev: 7.3.0
rev: 7.0.0
hooks:
- id: flake8
name: "SDK - flake8"
files: { glob: ["{prowler,tests,dashboard,util,scripts}/**/*.py"] }
exclude: (contrib|^skills/)
args: ["--ignore=E266,W503,E203,E501,W605"]
priority: 30
## PYTHON — API + MCP Server (ruff)
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.15.11
hooks:
- id: ruff
name: "API + MCP - ruff check"
files: { glob: ["{api,mcp_server}/**/*.py"] }
args: ["--fix"]
priority: 30
- id: ruff-format
name: "API + MCP - ruff format"
files: { glob: ["{api,mcp_server}/**/*.py"] }
priority: 20
## PYTHON — Poetry
- repo: https://github.com/python-poetry/poetry
rev: 2.3.4
hooks:
- id: poetry-check
name: API - poetry-check
args: ["--directory=./api"]
files: { glob: ["api/{pyproject.toml,poetry.lock}"] }
pass_filenames: false
priority: 50
- id: poetry-lock
name: API - poetry-lock
args: ["--directory=./api"]
files: { glob: ["api/{pyproject.toml,poetry.lock}"] }
pass_filenames: false
priority: 50
- id: poetry-check
name: SDK - poetry-check
args: ["--directory=./"]
files: { glob: ["{pyproject.toml,poetry.lock}"] }
pass_filenames: false
priority: 50
- id: poetry-lock
name: SDK - poetry-lock
args: ["--directory=./"]
files: { glob: ["{pyproject.toml,poetry.lock}"] }
pass_filenames: false
priority: 50
## CONTAINERS
- repo: https://github.com/hadolint/hadolint
rev: v2.14.0
rev: v2.13.0-beta
hooks:
- id: hadolint
args: ["--ignore=DL3013"]
priority: 30
## LOCAL HOOKS
- repo: local
hooks:
- id: pylint
name: "SDK - pylint"
entry: pylint --disable=W,C,R,E -j 0 -rn -sn
name: pylint
entry: bash -c 'pylint --disable=W,C,R,E -j 0 -rn -sn prowler/'
language: system
types: [python]
files: { glob: ["{prowler,tests,dashboard,util,scripts}/**/*.py"] }
priority: 30
files: '.*\.py'
- id: trufflehog
name: TruffleHog
description: Detect secrets in your data.
entry: bash -c 'trufflehog --no-update git file://. --since-commit HEAD --only-verified --fail'
entry: bash -c 'trufflehog --no-update git file://. --only-verified --fail'
# For running trufflehog in docker, use the following entry instead:
# entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
language: system
pass_filenames: false
stages: ["pre-commit", "pre-push"]
priority: 40
- id: bandit
name: bandit
description: "Bandit is a tool for finding common security issues in Python code"
entry: bandit -q -lll
entry: bash -c 'bandit -q -lll -x '*_test.py,./contrib/,./.venv/,./skills/' -r .'
language: system
types: [python]
files: '.*\.py'
exclude: { glob: ["{contrib,skills}/**", "**/.venv/**", "**/*_test.py"] }
priority: 40
- id: safety
name: safety
description: "Safety is a tool that checks your installed dependencies for known security vulnerabilities"
# Accepted CVEs, severity threshold, and ignore expirations live in .safety-policy.yml
entry: safety check --policy-file .safety-policy.yml
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
# TODO: 79023 & 79027 knack ReDoS until `azure-cli-core` (via `cartography`) allows `knack` >=0.13.0
# TODO: 86217 because `alibabacloud-tea-openapi == 0.4.3` don't let us upgrade `cryptography >= 46.0.0`
# TODO: 71600 CVE-2024-1135 false positive - fixed in gunicorn 22.0.0, project uses 23.0.0
entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353,77744,77745,79023,79027,86217,71600'
language: system
pass_filenames: false
files: { glob: ["**/pyproject.toml", "**/poetry.lock", "**/requirements*.txt", ".safety-policy.yml"] }
priority: 40
- id: vulture
name: vulture
description: "Vulture finds unused code in Python programs."
entry: vulture --min-confidence 100
entry: bash -c 'vulture --exclude "contrib,.venv,api/src/backend/api/tests/,api/src/backend/conftest.py,api/src/backend/tasks/tests/,skills/" --min-confidence 100 .'
language: system
types: [python]
files: '.*\.py'
priority: 40
- id: ui-checks
name: UI - Husky Pre-commit
description: "Run UI pre-commit checks (Claude Code validation + healthcheck)"
entry: bash -c 'cd ui && .husky/pre-commit'
language: system
files: '^ui/.*\.(ts|tsx|js|jsx|json|css)$'
pass_filenames: false
verbose: true
-58
@@ -1,58 +0,0 @@
# Safety policy for `safety check` (Safety CLI 3.x, v2 schema).
# Applied in: .pre-commit-config.yaml, .github/workflows/api-security.yml,
# .github/workflows/sdk-security.yml via `--policy-file`.
#
# Validate: poetry run safety validate policy_file --path .safety-policy.yml
security:
# Scan unpinned requirements too. Prowler pins via poetry.lock, so this is
# defensive against accidental unpinned entries.
ignore-unpinned-requirements: False
# CVSS severity filter. 7 = report only HIGH (7.0-8.9) and CRITICAL (9.0-10.0).
# Reference: 9=CRITICAL only, 7=CRITICAL+HIGH, 4=CRITICAL+HIGH+MEDIUM.
ignore-cvss-severity-below: 7
# Unknown severity is unrated, not safe. Keep False so unrated CVEs still fail
# the build and get a human eye. Flip to True only if noise is unmanageable.
ignore-cvss-unknown-severity: False
# Fail the build when a non-ignored vulnerability is found.
continue-on-vulnerability-error: False
# Explicit accepted vulnerabilities. Each entry MUST have a reason and an
# expiry. Expired entries fail the scan, forcing re-audit.
ignore-vulnerabilities:
77744:
reason: "Botocore requires urllib3 1.X. Remove once upgraded to urllib3 2.X."
expires: '2026-10-22'
77745:
reason: "Botocore requires urllib3 1.X. Remove once upgraded to urllib3 2.X."
expires: '2026-10-22'
79023:
reason: "knack ReDoS; blocked until azure-cli-core (via cartography) allows knack >=0.13.0."
expires: '2026-10-22'
79027:
reason: "knack ReDoS; blocked until azure-cli-core (via cartography) allows knack >=0.13.0."
expires: '2026-10-22'
86217:
reason: "alibabacloud-tea-openapi==0.4.3 blocks upgrade to cryptography >=46.0.0."
expires: '2026-10-22'
71600:
reason: "CVE-2024-1135 false positive. Fixed in gunicorn 22.0.0; project uses 23.0.0."
expires: '2026-10-22'
70612:
reason: "TBD - audit required. Reason not documented in prior --ignore list."
expires: '2026-07-22'
66963:
reason: "TBD - audit required. Reason not documented in prior --ignore list."
expires: '2026-07-22'
74429:
reason: "TBD - audit required. Reason not documented in prior --ignore list."
expires: '2026-07-22'
76352:
reason: "TBD - audit required. Reason not documented in prior --ignore list."
expires: '2026-07-22'
76353:
reason: "TBD - audit required. Reason not documented in prior --ignore list."
expires: '2026-07-22'
-2
@@ -1,2 +0,0 @@
.envrc
ui/.env.local
+3 -11
@@ -15,7 +15,7 @@ Use these skills for detailed patterns on-demand:
|-------|-------------|-----|
| `typescript` | Const types, flat interfaces, utility types | [SKILL.md](skills/typescript/SKILL.md) |
| `react-19` | No useMemo/useCallback, React Compiler | [SKILL.md](skills/react-19/SKILL.md) |
| `nextjs-16` | App Router, Server Actions, proxy.ts, streaming | [SKILL.md](skills/nextjs-16/SKILL.md) |
| `nextjs-15` | App Router, Server Actions, streaming | [SKILL.md](skills/nextjs-15/SKILL.md) |
| `tailwind-4` | cn() utility, no var() in className | [SKILL.md](skills/tailwind-4/SKILL.md) |
| `playwright` | Page Object Model, MCP workflow, selectors | [SKILL.md](skills/playwright/SKILL.md) |
| `pytest` | Fixtures, mocking, markers, parametrize | [SKILL.md](skills/pytest/SKILL.md) |
@@ -60,14 +60,11 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
|--------|-------|
| Add changelog entry for a PR or feature | `prowler-changelog` |
| Adding DRF pagination or permissions | `django-drf` |
| Adding a compliance output formatter (per-provider class + table dispatcher) | `prowler-compliance` |
| Adding indexes or constraints to database tables | `django-migration-psql` |
| Adding new providers | `prowler-provider` |
| Adding privilege escalation detection queries | `prowler-attack-paths-query` |
| Adding services to existing providers | `prowler-provider` |
| After creating/modifying a skill | `skill-sync` |
| App Router / Server Actions | `nextjs-16` |
| Auditing check-to-requirement mappings as a cloud auditor | `prowler-compliance` |
| App Router / Server Actions | `nextjs-15` |
| Building AI chat features | `ai-sdk-5` |
| Committing changes | `prowler-commit` |
| Configuring MCP servers in agentic workflows | `gh-aw` |
@@ -81,7 +78,6 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Creating a git commit | `prowler-commit` |
| Creating new checks | `prowler-sdk-check` |
| Creating new skills | `skill-creator` |
| Creating or reviewing Django migrations | `django-migration-psql` |
| Creating/modifying Prowler UI components | `prowler-ui` |
| Creating/modifying models, views, serializers | `prowler-api` |
| Creating/updating compliance frameworks | `prowler-compliance` |
@@ -89,7 +85,6 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Debugging gh-aw compilation errors | `gh-aw` |
| Fill .github/pull_request_template.md (Context/Description/Steps to review/Checklist) | `prowler-pr` |
| Fixing bug | `tdd` |
| Fixing compliance JSON bugs (duplicate IDs, empty Section, stale refs) | `prowler-compliance` |
| General Prowler development questions | `prowler` |
| Implementing JSON:API endpoints | `django-drf` |
| Implementing feature | `tdd` |
@@ -107,8 +102,6 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Review changelog format and conventions | `prowler-changelog` |
| Reviewing JSON:API compliance | `jsonapi` |
| Reviewing compliance framework PRs | `prowler-compliance-review` |
| Running makemigrations or pgmakemigrations | `django-migration-psql` |
| Syncing compliance framework with upstream catalog | `prowler-compliance` |
| Testing RLS tenant isolation | `prowler-test-api` |
| Testing hooks or utilities | `vitest` |
| Troubleshoot why a skill is missing from AGENTS.md auto-invoke | `skill-sync` |
@@ -136,7 +129,6 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Writing React components | `react-19` |
| Writing TypeScript types/interfaces | `typescript` |
| Writing Vitest tests | `vitest` |
| Writing data backfill or data migration | `django-migration-psql` |
| Writing documentation | `prowler-docs` |
| Writing unit tests for UI | `vitest` |
@@ -150,7 +142,7 @@ Prowler is an open-source cloud security assessment tool supporting AWS, Azure,
|-----------|----------|------------|
| SDK | `prowler/` | Python 3.10+, Poetry 2.3+ |
| API | `api/` | Django 5.1, DRF, Celery |
| UI | `ui/` | Next.js 16, React 19, Tailwind 4 |
| UI | `ui/` | Next.js 15, React 19, Tailwind 4 |
| MCP Server | `mcp_server/` | FastMCP, Python 3.12+ |
| Dashboard | `dashboard/` | Dash, Plotly |
+6 -29
@@ -1,34 +1,11 @@
# Do you want to learn how to...
- [Contribute with your code or fixes to Prowler](https://docs.prowler.com/developer-guide/introduction)
- [Create a new provider](https://docs.prowler.com/developer-guide/provider)
- [Create a new service](https://docs.prowler.com/developer-guide/services)
- [Create a new check for a provider](https://docs.prowler.com/developer-guide/checks)
- [Create a new security compliance framework](https://docs.prowler.com/developer-guide/security-compliance-framework)
- [Add a custom output format](https://docs.prowler.com/developer-guide/outputs)
- [Add a new integration](https://docs.prowler.com/developer-guide/integrations)
- [Contribute with documentation](https://docs.prowler.com/developer-guide/documentation)
- [Write unit tests](https://docs.prowler.com/developer-guide/unit-testing)
- [Write integration tests](https://docs.prowler.com/developer-guide/integration-testing)
- [Write end-to-end tests](https://docs.prowler.com/developer-guide/end2end-testing)
- [Debug Prowler](https://docs.prowler.com/developer-guide/debugging)
- [Configure checks](https://docs.prowler.com/developer-guide/configurable-checks)
- [Rename checks](https://docs.prowler.com/developer-guide/renaming-checks)
- [Follow the check metadata guidelines](https://docs.prowler.com/developer-guide/check-metadata-guidelines)
- [Extend the MCP server](https://docs.prowler.com/developer-guide/mcp-server)
- [Extend Lighthouse AI](https://docs.prowler.com/developer-guide/lighthouse-architecture)
- [Add AI skills](https://docs.prowler.com/developer-guide/ai-skills)
Provider-specific developer notes:
- [AWS](https://docs.prowler.com/developer-guide/aws-details)
- [Azure](https://docs.prowler.com/developer-guide/azure-details)
- [Google Cloud](https://docs.prowler.com/developer-guide/gcp-details)
- [Alibaba Cloud](https://docs.prowler.com/developer-guide/alibabacloud-details)
- [Kubernetes](https://docs.prowler.com/developer-guide/kubernetes-details)
- [Microsoft 365](https://docs.prowler.com/developer-guide/m365-details)
- [GitHub](https://docs.prowler.com/developer-guide/github-details)
- [LLM](https://docs.prowler.com/developer-guide/llm-details)
- Contribute with your code or fixes to Prowler
- Create a new check for a provider
- Create a new security compliance framework
- Add a custom output format
- Add a new integration
- Contribute with documentation
Want some swag as appreciation for your contribution?
+1 -20
@@ -6,12 +6,9 @@ LABEL org.opencontainers.image.source="https://github.com/prowler-cloud/prowler"
ARG POWERSHELL_VERSION=7.5.0
ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}
ARG TRIVY_VERSION=0.70.0
ARG TRIVY_VERSION=0.69.2
ENV TRIVY_VERSION=${TRIVY_VERSION}
ARG ZIZMOR_VERSION=1.24.1
ENV ZIZMOR_VERSION=${ZIZMOR_VERSION}
# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y --no-install-recommends \
wget libicu72 libunwind8 libssl3 libcurl4 ca-certificates apt-transport-https gnupg \
@@ -51,22 +48,6 @@ RUN ARCH=$(uname -m) && \
mkdir -p /tmp/.cache/trivy && \
chmod 777 /tmp/.cache/trivy
# Install zizmor for GitHub Actions workflow scanning
RUN ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
ZIZMOR_ARCH="x86_64-unknown-linux-gnu" ; \
elif [ "$ARCH" = "aarch64" ]; then \
ZIZMOR_ARCH="aarch64-unknown-linux-gnu" ; \
else \
echo "Unsupported architecture for zizmor: $ARCH" && exit 1 ; \
fi && \
wget --progress=dot:giga "https://github.com/zizmorcore/zizmor/releases/download/v${ZIZMOR_VERSION}/zizmor-${ZIZMOR_ARCH}.tar.gz" -O /tmp/zizmor.tar.gz && \
mkdir -p /tmp/zizmor-extract && \
tar zxf /tmp/zizmor.tar.gz -C /tmp/zizmor-extract && \
mv /tmp/zizmor-extract/zizmor /usr/local/bin/zizmor && \
chmod +x /usr/local/bin/zizmor && \
rm -rf /tmp/zizmor.tar.gz /tmp/zizmor-extract
# Add prowler user
RUN addgroup --gid 1000 prowler && \
adduser --uid 1000 --gid 1000 --disabled-password --gecos "" prowler
+12 -42
@@ -104,22 +104,22 @@ Every AWS provider scan will enqueue an Attack Paths ingestion job automatically
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) | Support | Interface |
|---|---|---|---|---|---|---|
| AWS | 595 | 84 | 43 | 17 | Official | UI, API, CLI |
| Azure | 167 | 22 | 19 | 16 | Official | UI, API, CLI |
| GCP | 102 | 18 | 17 | 12 | Official | UI, API, CLI |
| Kubernetes | 83 | 7 | 7 | 11 | Official | UI, API, CLI |
| GitHub | 24 | 3 | 1 | 5 | Official | UI, API, CLI |
| M365 | 101 | 10 | 4 | 10 | Official | UI, API, CLI |
| OCI | 51 | 14 | 4 | 10 | Official | UI, API, CLI |
| Alibaba Cloud | 61 | 9 | 4 | 9 | Official | UI, API, CLI |
| Cloudflare | 29 | 3 | 0 | 5 | Official | UI, API, CLI |
| AWS | 572 | 83 | 41 | 17 | Official | UI, API, CLI |
| Azure | 165 | 20 | 18 | 13 | Official | UI, API, CLI |
| GCP | 100 | 13 | 15 | 11 | Official | UI, API, CLI |
| Kubernetes | 83 | 7 | 7 | 9 | Official | UI, API, CLI |
| GitHub | 21 | 2 | 1 | 2 | Official | UI, API, CLI |
| M365 | 89 | 9 | 4 | 5 | Official | UI, API, CLI |
| OCI | 48 | 13 | 3 | 10 | Official | UI, API, CLI |
| Alibaba Cloud | 61 | 9 | 3 | 9 | Official | UI, API, CLI |
| Cloudflare | 29 | 2 | 0 | 5 | Official | UI, API, CLI |
| IaC | [See `trivy` docs.](https://trivy.dev/latest/docs/coverage/iac/) | N/A | N/A | N/A | Official | UI, API, CLI |
| MongoDB Atlas | 10 | 3 | 0 | 8 | Official | UI, API, CLI |
| LLM | [See `promptfoo` docs.](https://www.promptfoo.dev/docs/red-team/plugins/) | N/A | N/A | N/A | Official | CLI |
| Image | N/A | N/A | N/A | N/A | Official | CLI, API |
| Google Workspace | 25 | 4 | 2 | 4 | Official | CLI |
| OpenStack | 34 | 5 | 0 | 9 | Official | UI, API, CLI |
| Vercel | 26 | 6 | 0 | 5 | Official | CLI |
| Google Workspace | 1 | 1 | 0 | 1 | Official | CLI |
| OpenStack | 27 | 4 | 0 | 8 | Official | UI, API, CLI |
| Vercel | 30 | 6 | 0 | 5 | Official | CLI |
| NHN | 6 | 2 | 1 | 0 | Unofficial | CLI |
> [!Note]
@@ -300,36 +300,6 @@ python prowler-cli.py -v
> If your Poetry version is below v2.0.0, continue using `poetry shell` to activate your environment.
> For further guidance, refer to the Poetry Environment Activation Guide https://python-poetry.org/docs/managing-environments/#activating-the-environment.
# 🛡️ GitHub Action
The official **Prowler GitHub Action** runs Prowler scans in your GitHub workflows using the official [`prowlercloud/prowler`](https://hub.docker.com/r/prowlercloud/prowler) Docker image. Scans run on any [supported provider](https://docs.prowler.com/user-guide/providers/), with optional [`--push-to-cloud`](https://docs.prowler.com/user-guide/tutorials/prowler-app-import-findings) to send findings to Prowler Cloud and optional SARIF upload so findings show up in the repo's **Security → Code scanning** tab and as inline PR annotations.
```yaml
name: Prowler IaC Scan
on:
pull_request:
permissions:
contents: read
security-events: write
actions: read
jobs:
prowler:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: prowler-cloud/prowler@5.25
with:
provider: iac
output-formats: sarif json-ocsf
upload-sarif: true
flags: --severity critical high
```
Full configuration, per-provider authentication, and SARIF examples: [Prowler GitHub Action tutorial](docs/user-guide/tutorials/prowler-app-github-action.mdx). Marketplace listing: [Prowler Security Scan](https://github.com/marketplace/actions/prowler-security-scan).
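For per-provider authentication, credentials are not passed as action inputs; instead, `extra-env` lists the names of host environment variables to forward into the container, while the values are supplied through `env:`. A minimal sketch for an AWS scan, assuming repository secrets named `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` (the secret names are illustrative):
```yaml
name: Prowler AWS Scan
on:
  schedule:
    - cron: '0 6 * * 1'   # weekly scan; adjust as needed
permissions:
  contents: read
jobs:
  prowler:
    runs-on: ubuntu-latest
    env:
      # Values are provided here; the action only receives the variable names.
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_SESSION_TOKEN: ${{ secrets.AWS_SESSION_TOKEN }}
    steps:
      - uses: actions/checkout@v4
      - uses: prowler-cloud/prowler@5.25
        with:
          provider: aws
          output-formats: json-ocsf
          # Names only; see the `extra-env` input description in action.yml.
          extra-env: AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
          flags: --severity critical high
```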
# ✏️ High level architecture
## Prowler App
-307
@@ -1,307 +0,0 @@
name: Prowler Security Scan
description: Run Prowler cloud security scanner using the official Docker image
branding:
icon: cloud
color: green
inputs:
provider:
description: Cloud provider to scan (e.g. aws, azure, gcp, github, kubernetes, iac). See https://docs.prowler.com for supported providers.
required: true
image-tag:
description: >
Docker image tag for prowlercloud/prowler.
Default is "stable" (latest release). Available tags:
"stable" (latest release), "latest" (master branch, not stable),
"<x.y.z>" (pinned release version).
See all tags at https://hub.docker.com/r/prowlercloud/prowler/tags
required: false
default: stable
output-formats:
description: Output format(s) for scan results (e.g. "json-ocsf", "sarif json-ocsf")
required: false
default: json-ocsf
push-to-cloud:
description: Push scan findings to Prowler Cloud. Requires the PROWLER_CLOUD_API_KEY environment variable. See https://docs.prowler.com/user-guide/tutorials/prowler-app-import-findings#using-the-cli
required: false
default: "false"
flags:
description: 'Additional CLI flags passed to the Prowler scan (e.g. "--severity critical high --compliance cis_aws"). Values containing spaces can be quoted, e.g. "--resource-tag ''Environment=My Server''".'
required: false
default: ""
extra-env:
description: >
Space-, newline-, or comma-separated list of host environment variable NAMES to forward to the Prowler container
(e.g. "AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN" for AWS,
"GITHUB_PERSONAL_ACCESS_TOKEN" for GitHub, "CLOUDFLARE_API_TOKEN" for Cloudflare).
List names only; set the values via `env:` at the workflow or job level (typically from `secrets.*`).
See the README for per-provider examples.
required: false
default: ""
upload-sarif:
description: 'Upload SARIF results to GitHub Code Scanning (requires "sarif" in output-formats and both `security-events: write` and `actions: read` permissions)'
required: false
default: "false"
sarif-file:
description: Path to the SARIF file to upload (auto-detected from output/ if not set)
required: false
default: ""
sarif-category:
description: Category for the SARIF upload (used to distinguish multiple analyses)
required: false
default: prowler
fail-on-findings:
description: Fail the workflow step when Prowler detects findings (exit code 3). By default the action tolerates findings and succeeds.
required: false
default: "false"
runs:
using: composite
steps:
- name: Validate inputs
shell: bash
env:
INPUT_IMAGE_TAG: ${{ inputs.image-tag }}
INPUT_UPLOAD_SARIF: ${{ inputs.upload-sarif }}
INPUT_OUTPUT_FORMATS: ${{ inputs.output-formats }}
run: |
# Validate image tag format (alphanumeric, dots, hyphens, underscores only)
if [[ ! "$INPUT_IMAGE_TAG" =~ ^[a-zA-Z0-9._-]+$ ]]; then
echo "::error::Invalid image-tag '${INPUT_IMAGE_TAG}'. Must contain only alphanumeric characters, dots, hyphens, and underscores."
exit 1
fi
# Warn if upload-sarif is enabled but sarif not in output-formats
if [ "$INPUT_UPLOAD_SARIF" = "true" ]; then
if [[ ! "$INPUT_OUTPUT_FORMATS" =~ (^|[[:space:]])sarif($|[[:space:]]) ]]; then
echo "::warning::upload-sarif is enabled but 'sarif' is not included in output-formats ('${INPUT_OUTPUT_FORMATS}'). SARIF upload will fail unless you add 'sarif' to output-formats."
fi
fi
- name: Run Prowler scan
shell: bash
env:
INPUT_PROVIDER: ${{ inputs.provider }}
INPUT_IMAGE_TAG: ${{ inputs.image-tag }}
INPUT_OUTPUT_FORMATS: ${{ inputs.output-formats }}
INPUT_PUSH_TO_CLOUD: ${{ inputs.push-to-cloud }}
INPUT_FLAGS: ${{ inputs.flags }}
INPUT_EXTRA_ENV: ${{ inputs.extra-env }}
INPUT_FAIL_ON_FINDINGS: ${{ inputs.fail-on-findings }}
run: |
set -e
# Parse space-separated inputs with shlex so values with spaces can be quoted
# (e.g. `--resource-tag 'Environment=My Server'`).
mapfile -t OUTPUT_FORMATS < <(python3 -c 'import shlex, os; [print(t) for t in shlex.split(os.environ.get("INPUT_OUTPUT_FORMATS", ""))]')
mapfile -t EXTRA_FLAGS < <(python3 -c 'import shlex, os; [print(t) for t in shlex.split(os.environ.get("INPUT_FLAGS", ""))]')
mapfile -t EXTRA_ENV_NAMES < <(python3 -c 'import shlex, os; [print(t) for t in shlex.split(os.environ.get("INPUT_EXTRA_ENV", "").replace(",", " "))]')
env_args=()
for var in "${EXTRA_ENV_NAMES[@]}"; do
[ -z "$var" ] && continue
if [[ ! "$var" =~ ^[A-Za-z_][A-Za-z0-9_]*$ ]]; then
echo "::error::Invalid env var name '${var}' in extra-env. Names must match ^[A-Za-z_][A-Za-z0-9_]*$."
exit 1
fi
env_args+=("-e" "$var")
done
push_args=()
if [ "$INPUT_PUSH_TO_CLOUD" = "true" ]; then
push_args=("--push-to-cloud")
env_args+=("-e" "PROWLER_CLOUD_API_KEY")
fi
mkdir -p "$GITHUB_WORKSPACE/output"
chmod 777 "$GITHUB_WORKSPACE/output"
set +e
docker run --rm \
"${env_args[@]}" \
-v "$GITHUB_WORKSPACE:/home/prowler/workspace" \
-v "$GITHUB_WORKSPACE/output:/home/prowler/workspace/output" \
-w /home/prowler/workspace \
"prowlercloud/prowler:${INPUT_IMAGE_TAG}" \
"$INPUT_PROVIDER" \
--output-formats "${OUTPUT_FORMATS[@]}" \
"${push_args[@]}" \
"${EXTRA_FLAGS[@]}"
exit_code=$?
set -e
# Exit code 3 = findings detected
if [ "$exit_code" -eq 3 ] && [ "$INPUT_FAIL_ON_FINDINGS" != "true" ]; then
echo "::notice::Prowler detected findings (exit code 3). Set fail-on-findings to 'true' to fail the workflow on findings."
exit 0
fi
exit $exit_code
- name: Upload scan results
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: prowler-${{ inputs.provider }}
path: output/
retention-days: 30
if-no-files-found: warn
- name: Find SARIF file
if: always() && inputs.upload-sarif == 'true'
id: find-sarif
shell: bash
env:
INPUT_SARIF_FILE: ${{ inputs.sarif-file }}
run: |
if [ -n "$INPUT_SARIF_FILE" ]; then
echo "sarif_path=$INPUT_SARIF_FILE" >> "$GITHUB_OUTPUT"
else
sarif_file=$(find output/ -name '*.sarif' -type f | head -1)
if [ -z "$sarif_file" ]; then
echo "::warning::No .sarif file found in output/. Ensure 'sarif' is included in output-formats."
echo "sarif_path=" >> "$GITHUB_OUTPUT"
else
echo "sarif_path=$sarif_file" >> "$GITHUB_OUTPUT"
fi
fi
- name: Upload SARIF to GitHub Code Scanning
if: always() && inputs.upload-sarif == 'true' && steps.find-sarif.outputs.sarif_path != ''
uses: github/codeql-action/upload-sarif@d4b3ca9fa7f69d38bfcd667bdc45bc373d16277e # v4
with:
sarif_file: ${{ steps.find-sarif.outputs.sarif_path }}
category: ${{ inputs.sarif-category }}
- name: Write scan summary
if: always()
shell: bash
env:
INPUT_PROVIDER: ${{ inputs.provider }}
INPUT_UPLOAD_SARIF: ${{ inputs.upload-sarif }}
INPUT_PUSH_TO_CLOUD: ${{ inputs.push-to-cloud }}
RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
REPO_URL: ${{ github.server_url }}/${{ github.repository }}
BRANCH: ${{ github.head_ref || github.ref_name }}
GH_TOKEN: ${{ github.token }}
run: |
set +e
# Build a link to the scan step in the workflow logs. Requires `actions: read`
# on the caller's GITHUB_TOKEN; silently skips the link if unavailable.
scan_step_url=""
if [ -n "${GH_TOKEN:-}" ] && command -v gh >/dev/null 2>&1; then
job_info=$(gh api \
"repos/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}/attempts/${GITHUB_RUN_ATTEMPT:-1}/jobs" \
--jq ".jobs[] | select(.runner_name == \"${RUNNER_NAME:-}\")" 2>/dev/null)
if [ -n "$job_info" ]; then
job_id=$(jq -r '.id // empty' <<<"$job_info")
step_number=$(jq -r '[.steps[]? | select((.name // "") | test("Run Prowler scan"; "i")) | .number] | first // empty' <<<"$job_info")
if [ -z "$step_number" ]; then
step_number=$(jq -r '[.steps[]? | select(.status == "in_progress") | .number] | first // empty' <<<"$job_info")
fi
if [ -n "$job_id" ] && [ -n "$step_number" ]; then
scan_step_url="${REPO_URL}/actions/runs/${GITHUB_RUN_ID}/job/${job_id}#step:${step_number}:1"
elif [ -n "$job_id" ]; then
scan_step_url="${REPO_URL}/actions/runs/${GITHUB_RUN_ID}/job/${job_id}"
fi
fi
fi
# Map provider code to a properly-cased display name.
case "$INPUT_PROVIDER" in
alibabacloud) provider_name="Alibaba Cloud" ;;
aws) provider_name="AWS" ;;
azure) provider_name="Azure" ;;
cloudflare) provider_name="Cloudflare" ;;
gcp) provider_name="GCP" ;;
github) provider_name="GitHub" ;;
googleworkspace) provider_name="Google Workspace" ;;
iac) provider_name="IaC" ;;
image) provider_name="Container Image" ;;
kubernetes) provider_name="Kubernetes" ;;
llm) provider_name="LLM" ;;
m365) provider_name="Microsoft 365" ;;
mongodbatlas) provider_name="MongoDB Atlas" ;;
nhn) provider_name="NHN" ;;
openstack) provider_name="OpenStack" ;;
oraclecloud) provider_name="Oracle Cloud" ;;
vercel) provider_name="Vercel" ;;
*) provider_name="${INPUT_PROVIDER^}" ;;
esac
ocsf_file=$(find output/ -name '*.ocsf.json' -type f 2>/dev/null | head -1)
{
echo "## Prowler ${provider_name} Scan Summary"
echo ""
counts=""
if [ -n "$ocsf_file" ] && [ -s "$ocsf_file" ]; then
counts=$(jq -r '[
length,
([.[] | select(.status_code == "FAIL")] | length),
([.[] | select(.status_code == "PASS")] | length),
([.[] | select(.status_code == "MUTED")] | length),
([.[] | select(.status_code == "FAIL" and .severity == "Critical")] | length),
([.[] | select(.status_code == "FAIL" and .severity == "High")] | length),
([.[] | select(.status_code == "FAIL" and .severity == "Medium")] | length),
([.[] | select(.status_code == "FAIL" and .severity == "Low")] | length),
([.[] | select(.status_code == "FAIL" and .severity == "Informational")] | length)
] | @tsv' "$ocsf_file" 2>/dev/null)
fi
if [ -n "$counts" ]; then
read -r total fail pass muted critical high medium low info <<<"$counts"
line="**${fail:-0} failing** · ${pass:-0} passing"
[ "${muted:-0}" -gt 0 ] && line="${line} · ${muted} muted"
echo "${line} — ${total:-0} checks total"
echo ""
echo "| Severity | Failing |"
echo "|----------|---------|"
echo "| ‼️ Critical | ${critical:-0} |"
echo "| 🔴 High | ${high:-0} |"
echo "| 🟠 Medium | ${medium:-0} |"
echo "| 🔵 Low | ${low:-0} |"
echo "| ⚪ Informational | ${info:-0} |"
echo ""
else
echo "_No findings report was produced. Check the scan logs above._"
echo ""
fi
if [ -n "$scan_step_url" ]; then
echo "**Scan logs:** [view in workflow run](${scan_step_url})"
echo ""
fi
echo "**Get the full report:** [\`prowler-${INPUT_PROVIDER}\` artifact](${RUN_URL}#artifacts)"
if [ "$INPUT_UPLOAD_SARIF" = "true" ] && [ -n "$BRANCH" ]; then
encoded_branch=$(jq -nr --arg b "$BRANCH" '$b|@uri')
echo ""
echo "**See results in GitHub Code Security:** [open alerts on \`${BRANCH}\`](${REPO_URL}/security/code-scanning?query=is%3Aopen+branch%3A${encoded_branch})"
fi
if [ "$INPUT_PUSH_TO_CLOUD" != "true" ]; then
echo ""
echo "---"
echo ""
echo "### Scale ${provider_name} security with Prowler Cloud ☁️"
echo ""
echo "Send this scan's findings to **[Prowler Cloud](https://cloud.prowler.com)** and get:"
echo ""
echo "- **Unified findings** across every cloud, SaaS provider (M365, Google Workspace, GitHub, MongoDB Atlas), IaC repo, Kubernetes cluster, and container image"
echo "- **Posture over time** with alerts, and notifications"
echo "- **Prowler Lighthouse AI**: agentic assistant that triages findings, explains root cause and helps with remediation"
echo "- **50+ Compliance frameworks** mapped automatically"
echo "- **Enterprise-ready platform**: SOC 2 Type 2, SSO/SAML, AWS Security Hub, S3 and Jira integrations"
echo ""
echo "**Get started in 3 steps:**"
echo "1. Create an account at [cloud.prowler.com](https://cloud.prowler.com)"
echo "2. Generate a Prowler Cloud API key ([docs](https://docs.prowler.com/user-guide/tutorials/prowler-app-import-findings#using-the-cli))"
echo "3. Add \`PROWLER_CLOUD_API_KEY\` to your GitHub secrets and set \`push-to-cloud: true\` on this action"
echo ""
echo "See [prowler.com/pricing](https://prowler.com/pricing) for plan details."
fi
} >> "$GITHUB_STEP_SUMMARY"
+1 -91
@@ -2,96 +2,6 @@
All notable changes to the **Prowler API** are documented in this file.
## [1.28.0] (Prowler UNRELEASED)
### 🚀 Added
- GIN index on `findings(categories, resource_services, resource_regions, resource_types)` to speed up `/api/v1/finding-groups` array filters [(#11001)](https://github.com/prowler-cloud/prowler/pull/11001)
### 🔄 Changed
- Remove orphaned `gin_resources_search_idx` declaration from `Resource.Meta.indexes` (DB index dropped in `0072_drop_unused_indexes`) [(#11001)](https://github.com/prowler-cloud/prowler/pull/11001)
- PDF compliance reports cap detail tables at 100 failed findings per check (configurable via `DJANGO_PDF_MAX_FINDINGS_PER_CHECK`) to bound worker memory on large scans [(#11160)](https://github.com/prowler-cloud/prowler/pull/11160)
---
## [1.27.2] (Prowler UNRELEASED)
### 🐞 Fixed
- Attack Paths: BEDROCK-001 and BEDROCK-002 now target roles trusting `bedrock-agentcore.amazonaws.com` instead of `bedrock.amazonaws.com`, eliminating false positives against regular Bedrock service roles (Agents, Knowledge Bases, model invocation) [(#11141)](https://github.com/prowler-cloud/prowler/pull/11141)
---
## [1.27.1] (Prowler v5.26.1)
### 🐞 Fixed
- `POST /api/v1/scans` was intermittently failing with `Scan matching query does not exist` in the `scan-perform` worker; the Celery task is now published via `transaction.on_commit` so the worker cannot read the Scan before the dispatch-wide transaction commits [(#11122)](https://github.com/prowler-cloud/prowler/pull/11122)
---
## [1.27.0] (Prowler v5.26.0)
### 🚀 Added
- `scan-reset-ephemeral-resources` post-scan task zeroes `failed_findings_count` for resources missing from the latest full-scope scan, keeping ephemeral resources from polluting the Resources page sort [(#10929)](https://github.com/prowler-cloud/prowler/pull/10929)
### 🔄 Changed
- ASD Essential Eight (AWS) compliance framework support [(#10982)](https://github.com/prowler-cloud/prowler/pull/10982)
### 🔐 Security
- `trivy` binary from 0.69.2 to 0.70.0 and `cryptography` from 46.0.6 to 46.0.7 (transitive via prowler SDK) in the API image for CVE-2026-33186 and CVE-2026-39892 [(#10978)](https://github.com/prowler-cloud/prowler/pull/10978)
---
## [1.26.1] (Prowler v5.25.1)
### 🐞 Fixed
- Attack Paths: AWS scans no longer fail when enabled regions cannot be retrieved, and scans stuck in `scheduled` state are now cleaned up after the stale threshold [(#10917)](https://github.com/prowler-cloud/prowler/pull/10917)
- Scan report and compliance downloads now redirect to a presigned S3 URL instead of streaming through the API worker, preventing gunicorn timeouts on large files [(#10927)](https://github.com/prowler-cloud/prowler/pull/10927)
---
## [1.26.0] (Prowler v5.25.0)
### 🚀 Added
- CIS Benchmark PDF report generation for scans, exposing the latest CIS version per provider via `GET /scans/{id}/cis/{name}/` [(#10650)](https://github.com/prowler-cloud/prowler/pull/10650)
- `/overviews/resource-groups` (resource inventory), `/overviews/categories` and `/overviews/attack-surfaces` now reflect newly-muted findings without waiting for the next scan. The post-mute `reaggregate-all-finding-group-summaries` task now also dispatches `aggregate_scan_resource_group_summaries_task`, `aggregate_scan_category_summaries_task` and `aggregate_attack_surface_task` per latest scan of every `(provider, day)` pair, rebuilding `ScanGroupSummary`, `ScanCategorySummary` and `AttackSurfaceOverview` alongside the tables already covered in #10827 [(#10843)](https://github.com/prowler-cloud/prowler/pull/10843)
- Install zizmor v1.24.1 in API Docker image for GitHub Actions workflow scanning [(#10607)](https://github.com/prowler-cloud/prowler/pull/10607)
### 🔄 Changed
- Allows tenant owners to expel users from their organizations [(#10787)](https://github.com/prowler-cloud/prowler/pull/10787)
- `aggregate_findings`, `aggregate_attack_surface`, `aggregate_scan_resource_group_summaries` and `aggregate_scan_category_summaries` now upsert via `bulk_create(update_conflicts=True, ...)` instead of the prior `ignore_conflicts=True` / plain INSERT / `already backfilled` short-circuit. Re-runs triggered by the post-mute reaggregation pipeline no longer trip the `unique_*_per_scan` constraints nor silently drop updates, and are race-safe under concurrent writers (e.g. scan completion overlapping with a fresh mute rule) [(#10843)](https://github.com/prowler-cloud/prowler/pull/10843)
- Rename the scan-category and scan-resource-group summary aggregators from `backfill_*` to `aggregate_*` [(#10843)](https://github.com/prowler-cloud/prowler/pull/10843)
### 🐞 Fixed
- `generate_outputs_task` crashing with `KeyError` for compliance frameworks listed by `get_compliance_frameworks` but not loadable by `Compliance.get_bulk` [(#10903)](https://github.com/prowler-cloud/prowler/pull/10903)
---
## [1.25.4] (Prowler v5.24.4)
### 🚀 Added
- `DJANGO_SENTRY_TRACES_SAMPLE_RATE` env var (default `0.02`) enables Sentry performance tracing for the API [(#10873)](https://github.com/prowler-cloud/prowler/pull/10873)
### 🔄 Changed
- Attack Paths: Neo4j driver `connection_acquisition_timeout` is now configurable via `NEO4J_CONN_ACQUISITION_TIMEOUT` (default lowered from 120 s to 15 s) [(#10873)](https://github.com/prowler-cloud/prowler/pull/10873)
### 🐞 Fixed
- `/tmp/prowler_api_output` saturation in compliance report workers: the final `rmtree` in `generate_compliance_reports` now only waits on frameworks actually generated for the provider (so unsupported frameworks no longer leave a placeholder `results` entry that blocks cleanup), output directories are created lazily per enabled framework, and both `generate_compliance_reports` and `generate_outputs_task` run an opportunistic stale cleanup at task start with a 48h age threshold, a per-host `fcntl` throttle, a 50-deletions-per-run cap, and guards that protect EXECUTING scans and scans whose `output_location` still points to a local path (metadata lookups routed through the admin DB so RLS does not hide those rows) [(#10874)](https://github.com/prowler-cloud/prowler/pull/10874)
---
## [1.25.3] (Prowler v5.24.3)
### 🚀 Added
@@ -116,6 +26,7 @@ All notable changes to the **Prowler API** are documented in this file.
- `/finding-groups/latest/<check_id>/resources` now selects the latest completed scan per provider by `-completed_at` (then `-inserted_at`) instead of `-inserted_at`, matching the `/finding-groups/latest` summary path and the daily-summary upsert so overlapping scans no longer produce diverging `delta`/`new_count` between the two endpoints [(#10802)](https://github.com/prowler-cloud/prowler/pull/10802)
---
## [1.25.1] (Prowler v5.24.1)
@@ -129,7 +40,6 @@ All notable changes to the **Prowler API** are documented in this file.
- Attack Paths: Missing `tenant_id` filter while getting related findings after scan completes [(#10722)](https://github.com/prowler-cloud/prowler/pull/10722)
- Finding group counters `pass_count`, `fail_count` and `manual_count` now exclude muted findings [(#10753)](https://github.com/prowler-cloud/prowler/pull/10753)
- Silent data loss in `ResourceFindingMapping` bulk insert that left findings orphaned when `INSERT ... ON CONFLICT DO NOTHING` dropped rows without raising; added explicit `unique_fields` [(#10724)](https://github.com/prowler-cloud/prowler/pull/10724)
- `DELETE /tenants/{tenant_pk}/memberships/{id}` now deletes the expelled user's account when the removed membership was their last one, and blacklists every outstanding refresh token for that user so their existing sessions can no longer mint new access tokens [(#10787)](https://github.com/prowler-cloud/prowler/pull/10787)
---
+1 -21
@@ -5,12 +5,9 @@ LABEL maintainer="https://github.com/prowler-cloud/api"
ARG POWERSHELL_VERSION=7.5.0
ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}
ARG TRIVY_VERSION=0.70.0
ARG TRIVY_VERSION=0.69.2
ENV TRIVY_VERSION=${TRIVY_VERSION}
ARG ZIZMOR_VERSION=1.24.1
ENV ZIZMOR_VERSION=${ZIZMOR_VERSION}
# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y --no-install-recommends \
wget \
@@ -25,7 +22,6 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
libtool \
libxslt1-dev \
python3-dev \
git \
&& rm -rf /var/lib/apt/lists/*
# Install PowerShell
@@ -61,22 +57,6 @@ RUN ARCH=$(uname -m) && \
mkdir -p /tmp/.cache/trivy && \
chmod 777 /tmp/.cache/trivy
# Install zizmor for GitHub Actions workflow scanning
RUN ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
ZIZMOR_ARCH="x86_64-unknown-linux-gnu" ; \
elif [ "$ARCH" = "aarch64" ]; then \
ZIZMOR_ARCH="aarch64-unknown-linux-gnu" ; \
else \
echo "Unsupported architecture for zizmor: $ARCH" && exit 1 ; \
fi && \
wget --progress=dot:giga "https://github.com/zizmorcore/zizmor/releases/download/v${ZIZMOR_VERSION}/zizmor-${ZIZMOR_ARCH}.tar.gz" -O /tmp/zizmor.tar.gz && \
mkdir -p /tmp/zizmor-extract && \
tar zxf /tmp/zizmor.tar.gz -C /tmp/zizmor-extract && \
mv /tmp/zizmor-extract/zizmor /usr/local/bin/zizmor && \
chmod +x /usr/local/bin/zizmor && \
rm -rf /tmp/zizmor.tar.gz /tmp/zizmor-extract
# Add prowler user
RUN addgroup --gid 1000 prowler && \
adduser --uid 1000 --gid 1000 --disabled-password --gecos "" prowler
+71 -71
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.3.4 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand.
[[package]]
name = "about-time"
@@ -2504,61 +2504,61 @@ dev = ["bandit", "coverage", "flake8", "pydocstyle", "pylint", "pytest", "pytest
[[package]]
name = "cryptography"
version = "46.0.7"
version = "46.0.6"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = "!=3.9.0,!=3.9.1,>=3.8"
groups = ["main", "dev"]
files = [
{file = "cryptography-46.0.7-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:ea42cbe97209df307fdc3b155f1b6fa2577c0defa8f1f7d3be7d31d189108ad4"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b36a4695e29fe69215d75960b22577197aca3f7a25b9cf9d165dcfe9d80bc325"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:5ad9ef796328c5e3c4ceed237a183f5d41d21150f972455a9d926593a1dcb308"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:73510b83623e080a2c35c62c15298096e2a5dc8d51c3b4e1740211839d0dea77"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:cbd5fb06b62bd0721e1170273d3f4d5a277044c47ca27ee257025146c34cbdd1"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:420b1e4109cc95f0e5700eed79908cef9268265c773d3a66f7af1eef53d409ef"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:24402210aa54baae71d99441d15bb5a1919c195398a87b563df84468160a65de"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:8a469028a86f12eb7d2fe97162d0634026d92a21f3ae0ac87ed1c4a447886c83"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:9694078c5d44c157ef3162e3bf3946510b857df5a3955458381d1c7cfc143ddb"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:42a1e5f98abb6391717978baf9f90dc28a743b7d9be7f0751a6f56a75d14065b"},
{file = "cryptography-46.0.7-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:91bbcb08347344f810cbe49065914fe048949648f6bd5c2519f34619142bbe85"},
{file = "cryptography-46.0.7-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:5d1c02a14ceb9148cc7816249f64f623fbfee39e8c03b3650d842ad3f34d637e"},
{file = "cryptography-46.0.7-cp311-abi3-win32.whl", hash = "sha256:d23c8ca48e44ee015cd0a54aeccdf9f09004eba9fc96f38c911011d9ff1bd457"},
{file = "cryptography-46.0.7-cp311-abi3-win_amd64.whl", hash = "sha256:397655da831414d165029da9bc483bed2fe0e75dde6a1523ec2fe63f3c46046b"},
{file = "cryptography-46.0.7-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:d151173275e1728cf7839aaa80c34fe550c04ddb27b34f48c232193df8db5842"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:db0f493b9181c7820c8134437eb8b0b4792085d37dbb24da050476ccb664e59c"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ebd6daf519b9f189f85c479427bbd6e9c9037862cf8fe89ee35503bd209ed902"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:b7b412817be92117ec5ed95f880defe9cf18a832e8cafacf0a22337dc1981b4d"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:fbfd0e5f273877695cb93baf14b185f4878128b250cc9f8e617ea0c025dfb022"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:ffca7aa1d00cf7d6469b988c581598f2259e46215e0140af408966a24cf086ce"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:60627cf07e0d9274338521205899337c5d18249db56865f943cbe753aa96f40f"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:80406c3065e2c55d7f49a9550fe0c49b3f12e5bfff5dedb727e319e1afb9bf99"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:c5b1ccd1239f48b7151a65bc6dd54bcfcc15e028c8ac126d3fada09db0e07ef1"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:d5f7520159cd9c2154eb61eb67548ca05c5774d39e9c2c4339fd793fe7d097b2"},
{file = "cryptography-46.0.7-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:fcd8eac50d9138c1d7fc53a653ba60a2bee81a505f9f8850b6b2888555a45d0e"},
{file = "cryptography-46.0.7-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:65814c60f8cc400c63131584e3e1fad01235edba2614b61fbfbfa954082db0ee"},
{file = "cryptography-46.0.7-cp314-cp314t-win32.whl", hash = "sha256:fdd1736fed309b4300346f88f74cd120c27c56852c3838cab416e7a166f67298"},
{file = "cryptography-46.0.7-cp314-cp314t-win_amd64.whl", hash = "sha256:e06acf3c99be55aa3b516397fe42f5855597f430add9c17fa46bf2e0fb34c9bb"},
{file = "cryptography-46.0.7-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:462ad5cb1c148a22b2e3bcc5ad52504dff325d17daf5df8d88c17dda1f75f2a4"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:84d4cced91f0f159a7ddacad249cc077e63195c36aac40b4150e7a57e84fffe7"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:128c5edfe5e5938b86b03941e94fac9ee793a94452ad1365c9fc3f4f62216832"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5e51be372b26ef4ba3de3c167cd3d1022934bc838ae9eaad7e644986d2a3d163"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:cdf1a610ef82abb396451862739e3fc93b071c844399e15b90726ef7470eeaf2"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1d25aee46d0c6f1a501adcddb2d2fee4b979381346a78558ed13e50aa8a59067"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:cdfbe22376065ffcf8be74dc9a909f032df19bc58a699456a21712d6e5eabfd0"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:abad9dac36cbf55de6eb49badd4016806b3165d396f64925bf2999bcb67837ba"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:935ce7e3cfdb53e3536119a542b839bb94ec1ad081013e9ab9b7cfd478b05006"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:35719dc79d4730d30f1c2b6474bd6acda36ae2dfae1e3c16f2051f215df33ce0"},
{file = "cryptography-46.0.7-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:7bbc6ccf49d05ac8f7d7b5e2e2c33830d4fe2061def88210a126d130d7f71a85"},
{file = "cryptography-46.0.7-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a1529d614f44b863a7b480c6d000fe93b59acee9c82ffa027cfadc77521a9f5e"},
{file = "cryptography-46.0.7-cp38-abi3-win32.whl", hash = "sha256:f247c8c1a1fb45e12586afbb436ef21ff1e80670b2861a90353d9b025583d246"},
{file = "cryptography-46.0.7-cp38-abi3-win_amd64.whl", hash = "sha256:506c4ff91eff4f82bdac7633318a526b1d1309fc07ca76a3ad182cb5b686d6d3"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:fc9ab8856ae6cf7c9358430e49b368f3108f050031442eaeb6b9d87e4dcf4e4f"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:d3b99c535a9de0adced13d159c5a9cf65c325601aa30f4be08afd680643e9c15"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:d02c738dacda7dc2a74d1b2b3177042009d5cab7c7079db74afc19e56ca1b455"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:04959522f938493042d595a736e7dbdff6eb6cc2339c11465b3ff89343b65f65"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:3986ac1dee6def53797289999eabe84798ad7817f3e97779b5061a95b0ee4968"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:258514877e15963bd43b558917bc9f54cf7cf866c38aa576ebf47a77ddbc43a4"},
{file = "cryptography-46.0.7.tar.gz", hash = "sha256:e4cfd68c5f3e0bfdad0d38e023239b96a2fe84146481852dffbcca442c245aa5"},
{file = "cryptography-46.0.6-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:64235194bad039a10bb6d2d930ab3323baaec67e2ce36215fd0952fad0930ca8"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:26031f1e5ca62fcb9d1fcb34b2b60b390d1aacaa15dc8b895a9ed00968b97b30"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:9a693028b9cbe51b5a1136232ee8f2bc242e4e19d456ded3fa7c86e43c713b4a"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:67177e8a9f421aa2d3a170c3e56eca4e0128883cf52a071a7cbf53297f18b175"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:d9528b535a6c4f8ff37847144b8986a9a143585f0540fbcb1a98115b543aa463"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:22259338084d6ae497a19bae5d4c66b7ca1387d3264d1c2c0e72d9e9b6a77b97"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:760997a4b950ff00d418398ad73fbc91aa2894b5c1db7ccb45b4f68b42a63b3c"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:3dfa6567f2e9e4c5dceb8ccb5a708158a2a871052fa75c8b78cb0977063f1507"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:cdcd3edcbc5d55757e5f5f3d330dd00007ae463a7e7aa5bf132d1f22a4b62b19"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:d4e4aadb7fc1f88687f47ca20bb7227981b03afaae69287029da08096853b738"},
{file = "cryptography-46.0.6-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2b417edbe8877cda9022dde3a008e2deb50be9c407eef034aeeb3a8b11d9db3c"},
{file = "cryptography-46.0.6-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:380343e0653b1c9d7e1f55b52aaa2dbb2fdf2730088d48c43ca1c7c0abb7cc2f"},
{file = "cryptography-46.0.6-cp311-abi3-win32.whl", hash = "sha256:bcb87663e1f7b075e48c3be3ecb5f0b46c8fc50b50a97cf264e7f60242dca3f2"},
{file = "cryptography-46.0.6-cp311-abi3-win_amd64.whl", hash = "sha256:6739d56300662c468fddb0e5e291f9b4d084bead381667b9e654c7dd81705124"},
{file = "cryptography-46.0.6-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:2ef9e69886cbb137c2aef9772c2e7138dc581fad4fcbcf13cc181eb5a3ab6275"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7f417f034f91dcec1cb6c5c35b07cdbb2ef262557f701b4ecd803ee8cefed4f4"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d24c13369e856b94892a89ddf70b332e0b70ad4a5c43cf3e9cb71d6d7ffa1f7b"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:aad75154a7ac9039936d50cf431719a2f8d4ed3d3c277ac03f3339ded1a5e707"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:3c21d92ed15e9cfc6eb64c1f5a0326db22ca9c2566ca46d845119b45b4400361"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:4668298aef7cddeaf5c6ecc244c2302a2b8e40f384255505c22875eebb47888b"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:8ce35b77aaf02f3b59c90b2c8a05c73bac12cea5b4e8f3fbece1f5fddea5f0ca"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:c89eb37fae9216985d8734c1afd172ba4927f5a05cfd9bf0e4863c6d5465b013"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:ed418c37d095aeddf5336898a132fba01091f0ac5844e3e8018506f014b6d2c4"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:69cf0056d6947edc6e6760e5f17afe4bea06b56a9ac8a06de9d2bd6b532d4f3a"},
{file = "cryptography-46.0.6-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:8e7304c4f4e9490e11efe56af6713983460ee0780f16c63f219984dab3af9d2d"},
{file = "cryptography-46.0.6-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:b928a3ca837c77a10e81a814a693f2295200adb3352395fad024559b7be7a736"},
{file = "cryptography-46.0.6-cp314-cp314t-win32.whl", hash = "sha256:97c8115b27e19e592a05c45d0dd89c57f81f841cc9880e353e0d3bf25b2139ed"},
{file = "cryptography-46.0.6-cp314-cp314t-win_amd64.whl", hash = "sha256:c797e2517cb7880f8297e2c0f43bb910e91381339336f75d2c1c2cbf811b70b4"},
{file = "cryptography-46.0.6-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:12cae594e9473bca1a7aceb90536060643128bb274fcea0fc459ab90f7d1ae7a"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:639301950939d844a9e1c4464d7e07f902fe9a7f6b215bb0d4f28584729935d8"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ed3775295fb91f70b4027aeba878d79b3e55c0b3e97eaa4de71f8f23a9f2eb77"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:8927ccfbe967c7df312ade694f987e7e9e22b2425976ddbf28271d7e58845290"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:b12c6b1e1651e42ab5de8b1e00dc3b6354fdfd778e7fa60541ddacc27cd21410"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:063b67749f338ca9c5a0b7fe438a52c25f9526b851e24e6c9310e7195aad3b4d"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:02fad249cb0e090b574e30b276a3da6a149e04ee2f049725b1f69e7b8351ec70"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:7e6142674f2a9291463e5e150090b95a8519b2fb6e6aaec8917dd8d094ce750d"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:456b3215172aeefb9284550b162801d62f5f264a081049a3e94307fe20792cfa"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:341359d6c9e68834e204ceaf25936dffeafea3829ab80e9503860dcc4f4dac58"},
{file = "cryptography-46.0.6-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9a9c42a2723999a710445bc0d974e345c32adfd8d2fac6d8a251fa829ad31cfb"},
{file = "cryptography-46.0.6-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6617f67b1606dfd9fe4dbfa354a9508d4a6d37afe30306fe6c101b7ce3274b72"},
{file = "cryptography-46.0.6-cp38-abi3-win32.whl", hash = "sha256:7f6690b6c55e9c5332c0b59b9c8a3fb232ebf059094c17f9019a51e9827df91c"},
{file = "cryptography-46.0.6-cp38-abi3-win_amd64.whl", hash = "sha256:79e865c642cfc5c0b3eb12af83c35c5aeff4fa5c672dc28c43721c2c9fdd2f0f"},
{file = "cryptography-46.0.6-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:2ea0f37e9a9cf0df2952893ad145fd9627d326a59daec9b0802480fa3bcd2ead"},
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:a3e84d5ec9ba01f8fd03802b2147ba77f0c8f2617b2aff254cedd551844209c8"},
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:12f0fa16cc247b13c43d56d7b35287ff1569b5b1f4c5e87e92cc4fcc00cd10c0"},
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:50575a76e2951fe7dbd1f56d181f8c5ceeeb075e9ff88e7ad997d2f42af06e7b"},
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:90e5f0a7b3be5f40c3a0a0eafb32c681d8d2c181fc2a1bdabe9b3f611d9f6b1a"},
{file = "cryptography-46.0.6-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:6728c49e3b2c180ef26f8e9f0a883a2c585638db64cf265b49c9ba10652d430e"},
{file = "cryptography-46.0.6.tar.gz", hash = "sha256:27550628a518c5c6c903d84f637fbecf287f6cb9ced3804838a1295dc1fd0759"},
]
[package.dependencies]
@@ -2571,7 +2571,7 @@ nox = ["nox[uv] (>=2024.4.15)"]
pep8test = ["check-sdist", "click (>=8.0.1)", "mypy (>=1.14)", "ruff (>=0.11.11)"]
sdist = ["build (>=1.0.0)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==46.0.7)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test = ["certifi (>=2024)", "cryptography-vectors (==46.0.6)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]
[[package]]
@@ -2974,7 +2974,7 @@ files = [
[package.dependencies]
autopep8 = "*"
Django = ">=4.2"
gprof2dot = ">=2017.9.19"
gprof2dot = ">=2017.09.19"
sqlparse = "*"
[[package]]
@@ -4582,7 +4582,7 @@ files = [
[package.dependencies]
attrs = ">=22.2.0"
jsonschema-specifications = ">=2023.3.6"
jsonschema-specifications = ">=2023.03.6"
referencing = ">=0.28.4"
rpds-py = ">=0.7.1"
@@ -4790,7 +4790,7 @@ librabbitmq = ["librabbitmq (>=2.0.0) ; python_version < \"3.11\""]
mongodb = ["pymongo (==4.15.3)"]
msgpack = ["msgpack (==1.1.2)"]
pyro = ["pyro4 (==4.82)"]
qpid = ["qpid-python (==1.36.0.post1)", "qpid-tools (==1.36.0.post1)"]
qpid = ["qpid-python (==1.36.0-1)", "qpid-tools (==1.36.0-1)"]
redis = ["redis (>=4.5.2,!=4.5.5,!=5.0.2,<6.5)"]
slmq = ["softlayer_messaging (>=1.0.3)"]
sqlalchemy = ["sqlalchemy (>=1.4.48,<2.1)"]
@@ -4811,7 +4811,7 @@ files = [
]
[package.dependencies]
certifi = ">=14.5.14"
certifi = ">=14.05.14"
durationpy = ">=0.7"
google-auth = ">=1.0.1"
oauthlib = ">=3.2.2"
@@ -6665,7 +6665,7 @@ files = [
[[package]]
name = "prowler"
version = "5.26.0"
version = "5.24.0"
description = "Prowler is an Open Source security tool to perform AWS, GCP and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness. It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, FedRAMP, PCI-DSS, GDPR, HIPAA, FFIEC, SOC2, GXP, AWS Well-Architected Framework Security Pillar, AWS Foundational Technical Review (FTR), ENS (Spanish National Security Scheme) and your custom security frameworks."
optional = false
python-versions = ">=3.10,<3.13"
@@ -6720,7 +6720,7 @@ boto3 = "1.40.61"
botocore = "1.40.61"
cloudflare = "4.3.1"
colorama = "0.4.6"
cryptography = "46.0.7"
cryptography = "46.0.6"
dash = "3.1.1"
dash-bootstrap-components = "2.0.3"
defusedxml = "0.7.1"
@@ -6754,8 +6754,8 @@ uuid6 = "2024.7.10"
[package.source]
type = "git"
url = "https://github.com/prowler-cloud/prowler.git"
reference = "master"
resolved_reference = "16798e293da365965120961e6539e3a9756564f9"
reference = "v5.24"
resolved_reference = "ba5b23245f4805f46d67e67fc059aefd6831f7b3"
[[package]]
name = "psutil"
@@ -7194,7 +7194,7 @@ files = [
]
[package.dependencies]
astroid = ">=3.2.2,<=3.3.0.dev0"
astroid = ">=3.2.2,<=3.3.0-dev0"
colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
dill = [
{version = ">=0.3.7", markers = "python_version >= \"3.12\""},
@@ -7912,26 +7912,26 @@ shaping = ["uharfbuzz"]
[[package]]
name = "requests"
version = "2.33.1"
version = "2.32.5"
description = "Python HTTP for Humans."
optional = false
python-versions = ">=3.10"
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "requests-2.33.1-py3-none-any.whl", hash = "sha256:4e6d1ef462f3626a1f0a0a9c42dd93c63bad33f9f1c1937509b8c5c8718ab56a"},
{file = "requests-2.33.1.tar.gz", hash = "sha256:18817f8c57c6263968bc123d237e3b8b08ac046f5456bd1e307ee8f4250d3517"},
{file = "requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6"},
{file = "requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf"},
]
[package.dependencies]
certifi = ">=2023.5.7"
certifi = ">=2017.4.17"
charset_normalizer = ">=2,<4"
idna = ">=2.5,<4"
PySocks = {version = ">=1.5.6,<1.5.7 || >1.5.7", optional = true, markers = "extra == \"socks\""}
urllib3 = ">=1.26,<3"
urllib3 = ">=1.21.1,<3"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use-chardet-on-py3 = ["chardet (>=3.0.2,<8)"]
use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-file"
@@ -8209,10 +8209,10 @@ files = [
]
[package.dependencies]
botocore = ">=1.37.4,<2.0a0"
botocore = ">=1.37.4,<2.0a.0"
[package.extras]
crt = ["botocore[crt] (>=1.37.4,<2.0a0)"]
crt = ["botocore[crt] (>=1.37.4,<2.0a.0)"]
[[package]]
name = "safety"
@@ -9424,4 +9424,4 @@ files = [
[metadata]
lock-version = "2.1"
python-versions = ">=3.11,<3.13"
content-hash = "a3ab982d11a87d951ff15694d2ca7fd51f1f51a451abb0baa067ccf6966367a8"
content-hash = "5781e74b0692aed541fe445d6713d2dfd792bb226789501420aac4a8cb45aa2a"
+3 -3
@@ -25,7 +25,7 @@ dependencies = [
"defusedxml==0.7.1",
"gunicorn==23.0.0",
"lxml==5.3.2",
"prowler @ git+https://github.com/prowler-cloud/prowler.git@master",
"prowler @ git+https://github.com/prowler-cloud/prowler.git@v5.24",
"psycopg2-binary==2.9.9",
"pytest-celery[redis] (==1.3.0)",
"sentry-sdk[django] (==2.56.0)",
@@ -50,7 +50,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.28.0"
version = "1.25.3"
[project.scripts]
celery = "src.backend.config.settings.celery"
@@ -63,7 +63,6 @@ docker = "7.1.0"
filelock = "3.20.3"
freezegun = "1.5.1"
mypy = "1.10.1"
prek = "0.3.9"
pylint = "3.2.5"
pytest = "9.0.3"
pytest-cov = "5.0.0"
@@ -75,3 +74,4 @@ ruff = "0.5.0"
safety = "3.7.0"
tqdm = "4.67.1"
vulture = "2.14"
prek = "0.3.9"
+2 -2
@@ -52,7 +52,7 @@ class ApiConfig(AppConfig):
"check_and_fix_socialaccount_sites_migration",
]
# Skip eager Neo4j init for tests, some Django commands, and Celery (prefork pool: driver must stay lazy, no post_fork hook)
# Skip Neo4j initialization during tests, some Django commands, and Celery
if getattr(settings, "TESTING", False) or (
len(sys.argv) > 1
and (
@@ -64,7 +64,7 @@ class ApiConfig(AppConfig):
)
):
logger.info(
"Skipping eager Neo4j init: tests, some Django commands, or Celery prefork pool (driver stays lazy)"
"Skipping Neo4j initialization because tests, some Django commands or Celery"
)
else:
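For reference, a minimal, self-contained sketch of the gating this hunk touches. The command names are assumptions; the real check lives in ApiConfig.ready() and reads settings.TESTING plus sys.argv:

import sys

# Commands for which eager Neo4j initialization is skipped (assumed names).
SKIP_COMMANDS = {"migrate", "makemigrations", "collectstatic", "shell", "celery"}

def should_skip_eager_init(testing: bool, argv: list[str]) -> bool:
    # Skip when running tests, a non-serving management command, or a Celery worker.
    return testing or (len(argv) > 1 and argv[1] in SKIP_COMMANDS)

print(should_skip_eager_init(False, ["manage.py", "migrate"]))    # True
print(should_skip_eager_init(False, ["manage.py", "runserver"]))  # False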
+1 -2
@@ -28,7 +28,6 @@ READ_QUERY_TIMEOUT_SECONDS = env.int(
"ATTACK_PATHS_READ_QUERY_TIMEOUT_SECONDS", default=30
)
MAX_CUSTOM_QUERY_NODES = env.int("ATTACK_PATHS_MAX_CUSTOM_QUERY_NODES", default=250)
CONN_ACQUISITION_TIMEOUT = env.int("NEO4J_CONN_ACQUISITION_TIMEOUT", default=15)
READ_EXCEPTION_CODES = [
"Neo.ClientError.Statement.AccessMode",
"Neo.ClientError.Procedure.ProcedureNotFound",
@@ -63,7 +62,7 @@ def init_driver() -> neo4j.Driver:
auth=(config["USER"], config["PASSWORD"]),
keep_alive=True,
max_connection_lifetime=7200,
connection_acquisition_timeout=CONN_ACQUISITION_TIMEOUT,
connection_acquisition_timeout=120,
max_connection_pool_size=50,
)
_driver.verify_connectivity()
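As a reading aid, a minimal sketch of the lazy, thread-safe driver singleton these hunks adjust. The env-var fallbacks (NEO4J_URI, NEO4J_USER, NEO4J_PASSWORD) are assumptions; the real module reads its connection settings from settings.DATABASES["neo4j"]. One side of the hunk reads the acquisition timeout from NEO4J_CONN_ACQUISITION_TIMEOUT (default 15), the other hard-codes 120; the sketch keeps the configurable variant.

import os
import threading

import neo4j

_driver: neo4j.Driver | None = None
_lock = threading.Lock()

# Configurable variant of the acquisition timeout (assumed env handling).
CONN_ACQUISITION_TIMEOUT = int(os.getenv("NEO4J_CONN_ACQUISITION_TIMEOUT", "15"))

def init_driver() -> neo4j.Driver:
    global _driver
    if _driver is None:            # fast path: already initialized
        with _lock:
            if _driver is None:    # double-checked locking: only one thread connects
                _driver = neo4j.GraphDatabase.driver(
                    os.getenv("NEO4J_URI", "bolt://localhost:7687"),
                    auth=(os.getenv("NEO4J_USER", "neo4j"), os.getenv("NEO4J_PASSWORD", "")),
                    keep_alive=True,
                    max_connection_lifetime=7200,
                    connection_acquisition_timeout=CONN_ACQUISITION_TIMEOUT,
                    max_connection_pool_size=50,
                )
                _driver.verify_connectivity()
    return _driver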
@@ -484,8 +484,8 @@ AWS_BEDROCK_PRIVESC_PASSROLE_CODE_INTERPRETER = AttackPathsQueryDefinition(
OR action = '*'
)
// Find roles that trust the Bedrock AgentCore service (can be passed to a code interpreter)
MATCH path_target = (aws)--(target_role:AWSRole)-[:TRUSTS_AWS_PRINCIPAL]->(:AWSPrincipal {{arn: 'bedrock-agentcore.amazonaws.com'}})
// Find roles that trust Bedrock service (can be passed to Bedrock)
MATCH path_target = (aws)--(target_role:AWSRole)-[:TRUSTS_AWS_PRINCIPAL]->(:AWSPrincipal {{arn: 'bedrock.amazonaws.com'}})
WHERE any(resource IN stmt_passrole.resource WHERE
resource = '*'
OR target_role.arn CONTAINS resource
@@ -536,8 +536,8 @@ AWS_BEDROCK_PRIVESC_INVOKE_CODE_INTERPRETER = AttackPathsQueryDefinition(
OR action = '*'
)
// Find roles that trust the Bedrock AgentCore service (already attached to existing code interpreters)
MATCH path_target = (aws)--(target_role:AWSRole)-[:TRUSTS_AWS_PRINCIPAL]->(:AWSPrincipal {{arn: 'bedrock-agentcore.amazonaws.com'}})
// Find roles that trust Bedrock service (already attached to existing code interpreters)
MATCH path_target = (aws)--(target_role:AWSRole)-[:TRUSTS_AWS_PRINCIPAL]->(:AWSPrincipal {{arn: 'bedrock.amazonaws.com'}})
WITH collect(path_principal) + collect(path_target) AS paths
UNWIND paths AS p
+8 -7
@@ -1,6 +1,7 @@
from collections.abc import Iterable, Mapping
from api.models import Provider
from prowler.config.config import get_available_compliance_frameworks
from prowler.lib.check.compliance_models import Compliance
from prowler.lib.check.models import CheckMetadata
@@ -94,12 +95,12 @@ PROWLER_CHECKS = LazyChecksMapping()
def get_compliance_frameworks(provider_type: Provider.ProviderChoices) -> list[str]:
"""List compliance frameworks the API can load for `provider_type`.
"""
Retrieve and cache the list of available compliance frameworks for a specific cloud provider.
The list is sourced from `Compliance.get_bulk` so that the names
returned here are guaranteed to be loadable by the bulk loader. This
prevents downstream key mismatches (e.g. CSV report generation iterating
framework names and looking them up in the bulk dict).
This function lazily loads and caches the available compliance frameworks (e.g., CIS, MITRE, ISO)
for each provider type (AWS, Azure, GCP, etc.) on first access. Subsequent calls for the same
provider will return the cached result.
Args:
provider_type (Provider.ProviderChoices): The cloud provider type for which to retrieve
@@ -111,8 +112,8 @@ def get_compliance_frameworks(provider_type: Provider.ProviderChoices) -> list[s
"""
global AVAILABLE_COMPLIANCE_FRAMEWORKS
if provider_type not in AVAILABLE_COMPLIANCE_FRAMEWORKS:
AVAILABLE_COMPLIANCE_FRAMEWORKS[provider_type] = list(
Compliance.get_bulk(provider_type).keys()
AVAILABLE_COMPLIANCE_FRAMEWORKS[provider_type] = (
get_available_compliance_frameworks(provider_type)
)
return AVAILABLE_COMPLIANCE_FRAMEWORKS[provider_type]
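A standalone sketch of the per-provider caching pattern in this hunk; load_framework_names is a stand-in for the SDK call (get_available_compliance_frameworks or Compliance.get_bulk) and is an assumption:

AVAILABLE_COMPLIANCE_FRAMEWORKS: dict[str, list[str]] = {}

def load_framework_names(provider_type: str) -> list[str]:
    # Placeholder for the Prowler SDK lookup.
    return ["cis_1.4_aws", "mitre_attack_aws"] if provider_type == "aws" else []

def get_compliance_frameworks(provider_type: str) -> list[str]:
    # Load once per provider type, then serve from the module-level cache.
    if provider_type not in AVAILABLE_COMPLIANCE_FRAMEWORKS:
        AVAILABLE_COMPLIANCE_FRAMEWORKS[provider_type] = load_framework_names(provider_type)
    return AVAILABLE_COMPLIANCE_FRAMEWORKS[provider_type]

print(get_compliance_frameworks("aws"))  # first call populates the cache
print(get_compliance_frameworks("aws"))  # second call hits the cache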
-1
@@ -330,7 +330,6 @@ class MembershipFilter(FilterSet):
model = Membership
fields = {
"tenant": ["exact"],
"user": ["exact"],
"role": ["exact"],
"date_joined": ["date", "gte", "lte"],
}
@@ -1,31 +0,0 @@
from functools import partial
from django.db import migrations
from api.db_utils import create_index_on_partitions, drop_index_on_partitions
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0090_attack_paths_cleanup_priority"),
]
operations = [
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="findings",
index_name="gin_find_arrays_idx",
columns="categories, resource_services, resource_regions, resource_types",
method="GIN",
all_partitions=True,
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="findings",
index_name="gin_find_arrays_idx",
),
)
]
@@ -1,73 +0,0 @@
import django.contrib.postgres.indexes
from django.db import migrations
INDEX_NAME = "gin_find_arrays_idx"
PARENT_TABLE = "findings"
def create_parent_and_attach(apps, schema_editor):
with schema_editor.connection.cursor() as cursor:
# Idempotent: the parent index may already exist if it was created
# manually on an environment before this migration ran.
cursor.execute(
f"CREATE INDEX IF NOT EXISTS {INDEX_NAME} ON ONLY {PARENT_TABLE} "
f"USING gin (categories, resource_services, resource_regions, resource_types)"
)
cursor.execute(
"SELECT inhrelid::regclass::text "
"FROM pg_inherits "
"WHERE inhparent = %s::regclass",
[PARENT_TABLE],
)
for (partition,) in cursor.fetchall():
child_idx = f"{partition.replace('.', '_')}_{INDEX_NAME}"
# ALTER INDEX ... ATTACH PARTITION has no IF NOT ATTACHED clause,
# so check pg_inherits first to keep the migration re-runnable.
cursor.execute(
"""
SELECT 1
FROM pg_inherits i
JOIN pg_class p ON p.oid = i.inhparent
JOIN pg_class c ON c.oid = i.inhrelid
WHERE p.relname = %s AND c.relname = %s
""",
[INDEX_NAME, child_idx],
)
if cursor.fetchone() is None:
cursor.execute(f"ALTER INDEX {INDEX_NAME} ATTACH PARTITION {child_idx}")
def drop_parent_index(apps, schema_editor):
with schema_editor.connection.cursor() as cursor:
cursor.execute(f"DROP INDEX IF EXISTS {INDEX_NAME}")
class Migration(migrations.Migration):
dependencies = [
("api", "0091_findings_arrays_gin_index_partitions"),
]
operations = [
migrations.SeparateDatabaseAndState(
state_operations=[
migrations.AddIndex(
model_name="finding",
index=django.contrib.postgres.indexes.GinIndex(
fields=[
"categories",
"resource_services",
"resource_regions",
"resource_types",
],
name=INDEX_NAME,
),
),
],
database_operations=[
migrations.RunPython(
create_parent_and_attach,
reverse_code=drop_parent_index,
),
],
),
]
+1 -57
@@ -595,40 +595,10 @@ class Scan(RowLevelSecurityProtectedModel):
objects = ActiveProviderManager()
all_objects = models.Manager()
_SCOPING_SCANNER_ARG_KEYS_CACHE: tuple[str, ...] | None = None
@classmethod
def get_scoping_scanner_arg_keys(cls) -> tuple[str, ...]:
"""Return the scanner_args keys that mark a scan as scoped.
Derived from ``prowler.lib.scan.scan.Scan.__init__`` so the API stays
in sync with whatever the SDK actually accepts as filters. Cached at
class level — the signature is stable for the process lifetime.
"""
if cls._SCOPING_SCANNER_ARG_KEYS_CACHE is None:
import inspect
from prowler.lib.scan.scan import Scan as ProwlerScan
params = inspect.signature(ProwlerScan.__init__).parameters
cls._SCOPING_SCANNER_ARG_KEYS_CACHE = tuple(
name for name in params if name not in ("self", "provider")
)
return cls._SCOPING_SCANNER_ARG_KEYS_CACHE
class TriggerChoices(models.TextChoices):
SCHEDULED = "scheduled", _("Scheduled")
MANUAL = "manual", _("Manual")
# Trigger values for scans that ran the SDK end-to-end. Imported scans (or
# any future trigger) are intentionally NOT in this set — they may carry
# only a partial slice of resources, so post-scan logic that depends on a
# full-scope sweep (e.g. resetting ephemeral resource findings) must skip
# them by default.
LIVE_SCAN_TRIGGERS = frozenset(
(TriggerChoices.SCHEDULED.value, TriggerChoices.MANUAL.value)
)
id = models.UUIDField(primary_key=True, default=uuid7, editable=False)
name = models.CharField(
blank=True, null=True, max_length=100, validators=[MinLengthValidator(3)]
@@ -711,24 +681,6 @@ class Scan(RowLevelSecurityProtectedModel):
class JSONAPIMeta:
resource_name = "scans"
def is_full_scope(self) -> bool:
"""Return True if this scan ran with no scoping filters at all.
Used to gate post-scan operations (such as resetting the
failed_findings_count of resources missing from the scan) that are only
safe when the scan covered every check, service, and category. Imported
scans are NOT full-scope by definition — they may carry only a partial
slice of resources, so they're rejected via ``trigger`` even before the
scanner_args check.
"""
if self.trigger not in self.LIVE_SCAN_TRIGGERS:
return False
scanner_args = self.scanner_args or {}
for key in self.get_scoping_scanner_arg_keys():
if scanner_args.get(key):
return False
return True
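A self-contained illustration of the signature-derived scoping keys used by the helper in this hunk; DemoScan stands in for prowler.lib.scan.scan.Scan and is an assumption:

import inspect

class DemoScan:
    def __init__(self, provider, checks=None, services=None, categories=None):
        ...

# Every __init__ parameter other than self/provider counts as a scoping filter.
SCOPING_KEYS = tuple(
    name
    for name in inspect.signature(DemoScan.__init__).parameters
    if name not in ("self", "provider")
)

def is_full_scope(scanner_args: dict) -> bool:
    # A scan is "full scope" only if none of the scoping keys carry a value.
    return not any(scanner_args.get(key) for key in SCOPING_KEYS)

print(SCOPING_KEYS)                         # ('checks', 'services', 'categories')
print(is_full_scope({}))                    # True
print(is_full_scope({"services": ["s3"]}))  # False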
class AttackPathsScan(RowLevelSecurityProtectedModel):
objects = ActiveProviderManager()
@@ -946,6 +898,7 @@ class Resource(RowLevelSecurityProtectedModel):
OpClass(Upper("name"), name="gin_trgm_ops"),
name="res_name_trgm_idx",
),
GinIndex(fields=["text_search"], name="gin_resources_search_idx"),
models.Index(fields=["tenant_id", "id"], name="resources_tenant_id_idx"),
models.Index(
fields=["tenant_id", "provider_id"],
@@ -1151,15 +1104,6 @@ class Finding(PostgresPartitionedModel, RowLevelSecurityProtectedModel):
fields=["tenant_id", "scan_id", "check_id"],
name="find_tenant_scan_check_idx",
),
GinIndex(
fields=[
"categories",
"resource_services",
"resource_regions",
"resource_types",
],
name="gin_find_arrays_idx",
),
]
class JSONAPIMeta:
File diff suppressed because it is too large
@@ -12,8 +12,6 @@ from unittest.mock import MagicMock, patch
import neo4j
import pytest
import api.attack_paths.database as db_module
class TestLazyInitialization:
"""Test that Neo4j driver is initialized lazily on first use."""
@@ -21,6 +19,8 @@ class TestLazyInitialization:
@pytest.fixture(autouse=True)
def reset_module_state(self):
"""Reset module-level singleton state before each test."""
import api.attack_paths.database as db_module
original_driver = db_module._driver
db_module._driver = None
@@ -31,6 +31,8 @@ class TestLazyInitialization:
def test_driver_not_initialized_at_import(self):
"""Driver should be None after module import (no eager connection)."""
import api.attack_paths.database as db_module
assert db_module._driver is None
@patch("api.attack_paths.database.settings")
@@ -39,6 +41,8 @@ class TestLazyInitialization:
self, mock_driver_factory, mock_settings
):
"""init_driver() should create connection only when called."""
import api.attack_paths.database as db_module
mock_driver = MagicMock()
mock_driver_factory.return_value = mock_driver
mock_settings.DATABASES = {
@@ -65,6 +69,8 @@ class TestLazyInitialization:
self, mock_driver_factory, mock_settings
):
"""Subsequent calls should return cached driver without reconnecting."""
import api.attack_paths.database as db_module
mock_driver = MagicMock()
mock_driver_factory.return_value = mock_driver
mock_settings.DATABASES = {
@@ -93,6 +99,8 @@ class TestLazyInitialization:
self, mock_driver_factory, mock_settings
):
"""get_driver() should use init_driver() for lazy initialization."""
import api.attack_paths.database as db_module
mock_driver = MagicMock()
mock_driver_factory.return_value = mock_driver
mock_settings.DATABASES = {
@@ -110,50 +118,14 @@ class TestLazyInitialization:
mock_driver_factory.assert_called_once()
class TestConnectionAcquisitionTimeout:
"""Test that the connection acquisition timeout is configurable."""
@pytest.fixture(autouse=True)
def reset_module_state(self):
original_driver = db_module._driver
original_timeout = db_module.CONN_ACQUISITION_TIMEOUT
db_module._driver = None
yield
db_module._driver = original_driver
db_module.CONN_ACQUISITION_TIMEOUT = original_timeout
@patch("api.attack_paths.database.settings")
@patch("api.attack_paths.database.neo4j.GraphDatabase.driver")
def test_driver_receives_configured_timeout(
self, mock_driver_factory, mock_settings
):
"""init_driver() should pass CONN_ACQUISITION_TIMEOUT to the neo4j driver."""
mock_driver_factory.return_value = MagicMock()
mock_settings.DATABASES = {
"neo4j": {
"HOST": "localhost",
"PORT": 7687,
"USER": "neo4j",
"PASSWORD": "password",
}
}
db_module.CONN_ACQUISITION_TIMEOUT = 42
db_module.init_driver()
_, kwargs = mock_driver_factory.call_args
assert kwargs["connection_acquisition_timeout"] == 42
class TestAtexitRegistration:
"""Test that atexit cleanup handler is registered correctly."""
@pytest.fixture(autouse=True)
def reset_module_state(self):
"""Reset module-level singleton state before each test."""
import api.attack_paths.database as db_module
original_driver = db_module._driver
db_module._driver = None
@@ -169,6 +141,8 @@ class TestAtexitRegistration:
self, mock_driver_factory, mock_atexit_register, mock_settings
):
"""atexit.register should be called on first initialization."""
import api.attack_paths.database as db_module
mock_driver_factory.return_value = MagicMock()
mock_settings.DATABASES = {
"neo4j": {
@@ -194,6 +168,8 @@ class TestAtexitRegistration:
The double-checked locking on _driver ensures the atexit registration
block only executes once (when _driver is first created).
"""
import api.attack_paths.database as db_module
mock_driver_factory.return_value = MagicMock()
mock_settings.DATABASES = {
"neo4j": {
@@ -218,6 +194,8 @@ class TestCloseDriver:
@pytest.fixture(autouse=True)
def reset_module_state(self):
"""Reset module-level singleton state before each test."""
import api.attack_paths.database as db_module
original_driver = db_module._driver
db_module._driver = None
@@ -228,6 +206,8 @@ class TestCloseDriver:
def test_close_driver_closes_and_clears_driver(self):
"""close_driver() should close the driver and set it to None."""
import api.attack_paths.database as db_module
mock_driver = MagicMock()
db_module._driver = mock_driver
@@ -238,6 +218,8 @@ class TestCloseDriver:
def test_close_driver_handles_none_driver(self):
"""close_driver() should handle case where driver is None."""
import api.attack_paths.database as db_module
db_module._driver = None
# Should not raise
@@ -247,6 +229,8 @@ class TestCloseDriver:
def test_close_driver_clears_driver_even_on_close_error(self):
"""Driver should be cleared even if close() raises an exception."""
import api.attack_paths.database as db_module
mock_driver = MagicMock()
mock_driver.close.side_effect = Exception("Connection error")
db_module._driver = mock_driver
@@ -262,6 +246,8 @@ class TestExecuteReadQuery:
"""Test read query execution helper."""
def test_execute_read_query_calls_read_session_and_returns_result(self):
import api.attack_paths.database as db_module
tx = MagicMock()
expected_graph = MagicMock()
run_result = MagicMock()
@@ -303,6 +289,8 @@ class TestExecuteReadQuery:
assert result is expected_graph
def test_execute_read_query_defaults_parameters_to_empty_dict(self):
import api.attack_paths.database as db_module
tx = MagicMock()
run_result = MagicMock()
run_result.graph.return_value = MagicMock()
@@ -337,6 +325,8 @@ class TestGetSessionReadOnly:
@pytest.fixture(autouse=True)
def reset_module_state(self):
import api.attack_paths.database as db_module
original_driver = db_module._driver
db_module._driver = None
yield
@@ -351,6 +341,8 @@ class TestGetSessionReadOnly:
)
def test_get_session_raises_write_query_not_allowed(self, neo4j_code):
"""Read-mode Neo4j errors should raise `WriteQueryNotAllowedException`."""
import api.attack_paths.database as db_module
mock_session = MagicMock()
neo4j_error = neo4j.exceptions.Neo4jError._hydrate_neo4j(
code=neo4j_code,
@@ -370,6 +362,8 @@ class TestGetSessionReadOnly:
def test_get_session_raises_generic_exception_for_other_errors(self):
"""Non-read-mode Neo4j errors should raise GraphDatabaseQueryException."""
import api.attack_paths.database as db_module
mock_session = MagicMock()
neo4j_error = neo4j.exceptions.Neo4jError._hydrate_neo4j(
code="Neo.ClientError.Statement.SyntaxError",
@@ -394,6 +388,8 @@ class TestThreadSafety:
@pytest.fixture(autouse=True)
def reset_module_state(self):
"""Reset module-level singleton state before each test."""
import api.attack_paths.database as db_module
original_driver = db_module._driver
db_module._driver = None
@@ -408,6 +404,8 @@ class TestThreadSafety:
self, mock_driver_factory, mock_settings
):
"""Multiple threads calling init_driver() should create only one driver."""
import api.attack_paths.database as db_module
mock_driver = MagicMock()
mock_driver_factory.return_value = mock_driver
mock_settings.DATABASES = {
@@ -450,6 +448,8 @@ class TestHasProviderData:
"""Test has_provider_data helper for checking provider nodes in Neo4j."""
def test_returns_true_when_nodes_exist(self):
import api.attack_paths.database as db_module
mock_session = MagicMock()
mock_result = MagicMock()
mock_result.single.return_value = MagicMock() # non-None record
@@ -468,6 +468,8 @@ class TestHasProviderData:
mock_session.run.assert_called_once()
def test_returns_false_when_no_nodes(self):
import api.attack_paths.database as db_module
mock_session = MagicMock()
mock_result = MagicMock()
mock_result.single.return_value = None
@@ -484,6 +486,8 @@ class TestHasProviderData:
assert db_module.has_provider_data("db-tenant-abc", "provider-123") is False
def test_returns_false_when_database_not_found(self):
import api.attack_paths.database as db_module
session_ctx = MagicMock()
session_ctx.__enter__.side_effect = db_module.GraphDatabaseQueryException(
message="Database does not exist",
@@ -499,6 +503,8 @@ class TestHasProviderData:
)
def test_raises_on_other_errors(self):
import api.attack_paths.database as db_module
session_ctx = MagicMock()
session_ctx.__enter__.side_effect = db_module.GraphDatabaseQueryException(
message="Connection refused",
@@ -1,18 +1,13 @@
from unittest.mock import MagicMock, patch
import pytest
from api import compliance as compliance_module
from api.compliance import (
generate_compliance_overview_template,
generate_scan_compliance,
get_compliance_frameworks,
get_prowler_provider_checks,
get_prowler_provider_compliance,
load_prowler_checks,
)
from api.models import Provider
from prowler.lib.check.compliance_models import Compliance
class TestCompliance:
@@ -255,58 +250,3 @@ class TestCompliance:
}
assert template == expected_template
@pytest.fixture
def reset_compliance_cache():
"""Reset the module-level cache so each test starts cold."""
previous = dict(compliance_module.AVAILABLE_COMPLIANCE_FRAMEWORKS)
compliance_module.AVAILABLE_COMPLIANCE_FRAMEWORKS.clear()
try:
yield
finally:
compliance_module.AVAILABLE_COMPLIANCE_FRAMEWORKS.clear()
compliance_module.AVAILABLE_COMPLIANCE_FRAMEWORKS.update(previous)
class TestGetComplianceFrameworks:
def test_returns_keys_from_compliance_get_bulk(self, reset_compliance_cache):
with patch("api.compliance.Compliance") as mock_compliance:
mock_compliance.get_bulk.return_value = {
"cis_1.4_aws": MagicMock(),
"mitre_attack_aws": MagicMock(),
}
result = get_compliance_frameworks(Provider.ProviderChoices.AWS)
assert sorted(result) == ["cis_1.4_aws", "mitre_attack_aws"]
mock_compliance.get_bulk.assert_called_once_with(Provider.ProviderChoices.AWS)
def test_caches_result_per_provider(self, reset_compliance_cache):
with patch("api.compliance.Compliance") as mock_compliance:
mock_compliance.get_bulk.return_value = {"cis_1.4_aws": MagicMock()}
get_compliance_frameworks(Provider.ProviderChoices.AWS)
get_compliance_frameworks(Provider.ProviderChoices.AWS)
# Cached after first call.
assert mock_compliance.get_bulk.call_count == 1
@pytest.mark.parametrize(
"provider_type",
[choice.value for choice in Provider.ProviderChoices],
)
def test_listing_is_subset_of_bulk(self, reset_compliance_cache, provider_type):
"""Regression for CLOUD-API-40S: every name returned by
``get_compliance_frameworks`` must be loadable via ``Compliance.get_bulk``.
A divergence here is what produced ``KeyError: 'csa_ccm_4.0'`` in
``generate_outputs_task`` after universal/multi-provider compliance
JSONs were introduced at the top-level ``prowler/compliance/`` path.
"""
bulk_keys = set(Compliance.get_bulk(provider_type).keys())
listed = set(get_compliance_frameworks(provider_type))
missing = listed - bulk_keys
assert not missing, (
f"get_compliance_frameworks({provider_type!r}) returned names not "
f"loadable by Compliance.get_bulk: {sorted(missing)}"
)
+28 -387
@@ -32,11 +32,6 @@ from django_celery_results.models import TaskResult
from rest_framework import status
from rest_framework.exceptions import PermissionDenied
from rest_framework.response import Response
from rest_framework_simplejwt.token_blacklist.models import (
BlacklistedToken,
OutstandingToken,
)
from rest_framework_simplejwt.tokens import RefreshToken
from api.attack_paths import (
AttackPathsQueryDefinition,
@@ -52,7 +47,6 @@ from api.models import (
Finding,
Integration,
Invitation,
InvitationRoleRelationship,
LighthouseProviderConfiguration,
LighthouseProviderModels,
LighthouseTenantConfiguration,
@@ -752,39 +746,6 @@ class TestTenantViewSet:
# Test user + 2 extra users for tenant 2
assert len(response.json()["data"]) == 3
def test_tenants_list_memberships_filter_by_user(
self, authenticated_client, tenants_fixture, extra_users
):
_, tenant2, _ = tenants_fixture
_, user3_membership = extra_users
user3, membership3 = user3_membership
response = authenticated_client.get(
reverse("tenant-membership-list", kwargs={"tenant_pk": tenant2.id}),
{"filter[user]": str(user3.id)},
)
assert response.status_code == status.HTTP_200_OK
data = response.json()["data"]
assert len(data) == 1
assert data[0]["id"] == str(membership3.id)
def test_tenants_list_memberships_filter_by_user_no_match(
self, authenticated_client, tenants_fixture, extra_users
):
_, tenant2, _ = tenants_fixture
unrelated_user = User.objects.create_user(
name="unrelated",
password=TEST_PASSWORD,
email="unrelated@gmail.com",
)
response = authenticated_client.get(
reverse("tenant-membership-list", kwargs={"tenant_pk": tenant2.id}),
{"filter[user]": str(unrelated_user.id)},
)
assert response.status_code == status.HTTP_200_OK
assert response.json()["data"] == []
def test_tenants_list_memberships_as_member(
self, authenticated_client, tenants_fixture, extra_users
):
@@ -842,7 +803,6 @@ class TestTenantViewSet:
):
_, tenant2, _ = tenants_fixture
user_membership = Membership.objects.get(tenant=tenant2, user__email=TEST_USER)
user_id = user_membership.user_id
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
@@ -851,127 +811,6 @@ class TestTenantViewSet:
)
assert response.status_code == status.HTTP_403_FORBIDDEN
assert Membership.objects.filter(id=user_membership.id).exists()
assert User.objects.filter(id=user_id).exists()
def test_expel_user_deletes_account_if_last_membership(
self, authenticated_client, tenants_fixture, extra_users
):
# TEST_USER is OWNER of tenant2; user3 is MEMBER only in tenant2
_, tenant2, _ = tenants_fixture
_, user3_membership = extra_users
user3, membership3 = user3_membership
assert Membership.objects.filter(user=user3).count() == 1
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
kwargs={"tenant_pk": tenant2.id, "pk": membership3.id},
)
)
assert response.status_code == status.HTTP_204_NO_CONTENT
assert not Membership.objects.filter(id=membership3.id).exists()
assert not User.objects.filter(id=user3.id).exists()
def test_expel_user_blacklists_refresh_tokens(
self, authenticated_client, tenants_fixture, extra_users
):
_, tenant2, _ = tenants_fixture
_, user3_membership = extra_users
user3, membership3 = user3_membership
# Issue two refresh tokens to simulate active sessions
RefreshToken.for_user(user3)
RefreshToken.for_user(user3)
outstanding_ids = list(
OutstandingToken.objects.filter(user=user3).values_list("id", flat=True)
)
assert len(outstanding_ids) == 2
assert not BlacklistedToken.objects.filter(
token_id__in=outstanding_ids
).exists()
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
kwargs={"tenant_pk": tenant2.id, "pk": membership3.id},
)
)
assert response.status_code == status.HTTP_204_NO_CONTENT
assert (
BlacklistedToken.objects.filter(token_id__in=outstanding_ids).count() == 2
)
def test_expel_user_blacklists_refresh_tokens_is_idempotent(
self, authenticated_client, tenants_fixture, extra_users
):
# Regression test for the bulk blacklisting path: if one of the
# user's refresh tokens is already blacklisted when the expel
# endpoint runs, the remaining tokens must still be blacklisted
# and the already-blacklisted one must not be duplicated.
tenant1, tenant2, _ = tenants_fixture
_, user3_membership = extra_users
user3, membership3 = user3_membership
# Keep the user alive after the expel so the assertions below can
# still query OutstandingToken by user_id.
Membership.objects.create(
user=user3,
tenant=tenant1,
role=Membership.RoleChoices.MEMBER,
)
RefreshToken.for_user(user3)
RefreshToken.for_user(user3)
outstanding_ids = list(
OutstandingToken.objects.filter(user=user3).values_list("id", flat=True)
)
assert len(outstanding_ids) == 2
# Pre-blacklist one of the two tokens to simulate a prior revocation.
BlacklistedToken.objects.create(token_id=outstanding_ids[0])
assert (
BlacklistedToken.objects.filter(token_id__in=outstanding_ids).count() == 1
)
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
kwargs={"tenant_pk": tenant2.id, "pk": membership3.id},
)
)
assert response.status_code == status.HTTP_204_NO_CONTENT
blacklisted = BlacklistedToken.objects.filter(token_id__in=outstanding_ids)
assert blacklisted.count() == 2
assert set(blacklisted.values_list("token_id", flat=True)) == set(
outstanding_ids
)
def test_expel_user_keeps_account_if_has_other_memberships(
self, authenticated_client, tenants_fixture, extra_users
):
tenant1, tenant2, _ = tenants_fixture
_, user3_membership = extra_users
user3, membership3 = user3_membership
# Give user3 an additional membership in tenant1 so they are not orphaned
other_membership = Membership.objects.create(
user=user3,
tenant=tenant1,
role=Membership.RoleChoices.MEMBER,
)
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
kwargs={"tenant_pk": tenant2.id, "pk": membership3.id},
)
)
assert response.status_code == status.HTTP_204_NO_CONTENT
assert not Membership.objects.filter(id=membership3.id).exists()
assert User.objects.filter(id=user3.id).exists()
assert Membership.objects.filter(id=other_membership.id).exists()
def test_tenants_delete_another_membership_as_owner(
self, authenticated_client, tenants_fixture, extra_users
@@ -1043,128 +882,6 @@ class TestTenantViewSet:
assert response.status_code == status.HTTP_404_NOT_FOUND
assert Membership.objects.filter(id=other_membership.id).exists()
def test_delete_membership_cleans_up_orphaned_role_grants(
self, authenticated_client, tenants_fixture
):
"""Test that deleting a membership removes UserRoleRelationship records
for that tenant while preserving grants in other tenants."""
tenant1, tenant2, _ = tenants_fixture
# Create a user with memberships in both tenants
user = User.objects.create_user(
name="Multi-tenant User",
password=TEST_PASSWORD,
email="multitenant@test.com",
)
# Create memberships in both tenants
Membership.objects.create(
user=user, tenant=tenant1, role=Membership.RoleChoices.MEMBER
)
membership2 = Membership.objects.create(
user=user, tenant=tenant2, role=Membership.RoleChoices.MEMBER
)
# Create roles in both tenants
role1 = Role.objects.create(
name="Test Role 1", tenant=tenant1, manage_providers=True
)
role2 = Role.objects.create(
name="Test Role 2", tenant=tenant2, manage_scans=True
)
# Create user role relationships for both tenants
UserRoleRelationship.objects.create(user=user, role=role1, tenant=tenant1)
UserRoleRelationship.objects.create(user=user, role=role2, tenant=tenant2)
# Verify initial state
assert UserRoleRelationship.objects.filter(user=user, tenant=tenant1).exists()
assert UserRoleRelationship.objects.filter(user=user, tenant=tenant2).exists()
assert Role.objects.filter(id=role1.id).exists()
assert Role.objects.filter(id=role2.id).exists()
# Delete membership from tenant2 (authenticated user is owner of tenant2)
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
kwargs={"tenant_pk": tenant2.id, "pk": membership2.id},
)
)
assert response.status_code == status.HTTP_204_NO_CONTENT
# Verify the membership was deleted
assert not Membership.objects.filter(id=membership2.id).exists()
# Verify UserRoleRelationship for tenant2 was deleted
assert not UserRoleRelationship.objects.filter(
user=user, tenant=tenant2
).exists()
# Verify UserRoleRelationship for tenant1 is preserved
assert UserRoleRelationship.objects.filter(user=user, tenant=tenant1).exists()
# Verify orphaned role2 was deleted (no more user or invitation relationships)
assert not Role.objects.filter(id=role2.id).exists()
# Verify role1 is preserved (still has user relationship)
assert Role.objects.filter(id=role1.id).exists()
# Verify the user still exists (has other memberships)
assert User.objects.filter(id=user.id).exists()
def test_delete_membership_preserves_role_with_invitation_relationship(
self, authenticated_client, tenants_fixture
):
"""Test that roles are not deleted if they have invitation relationships."""
_, tenant2, _ = tenants_fixture
# Create a user with membership
user = User.objects.create_user(
name="Test User", password=TEST_PASSWORD, email="testuser@test.com"
)
membership = Membership.objects.create(
user=user, tenant=tenant2, role=Membership.RoleChoices.MEMBER
)
# Create a role and user relationship
role = Role.objects.create(
name="Shared Role", tenant=tenant2, manage_providers=True
)
UserRoleRelationship.objects.create(user=user, role=role, tenant=tenant2)
# Create an invitation with the same role
invitation = Invitation.objects.create(email="pending@test.com", tenant=tenant2)
InvitationRoleRelationship.objects.create(
invitation=invitation, role=role, tenant=tenant2
)
# Verify initial state
assert UserRoleRelationship.objects.filter(user=user, role=role).exists()
assert InvitationRoleRelationship.objects.filter(
invitation=invitation, role=role
).exists()
assert Role.objects.filter(id=role.id).exists()
# Delete the membership
response = authenticated_client.delete(
reverse(
"tenant-membership-detail",
kwargs={"tenant_pk": tenant2.id, "pk": membership.id},
)
)
assert response.status_code == status.HTTP_204_NO_CONTENT
# Verify UserRoleRelationship was deleted
assert not UserRoleRelationship.objects.filter(user=user, role=role).exists()
# Verify role is preserved because invitation relationship exists
assert Role.objects.filter(id=role.id).exists()
assert InvitationRoleRelationship.objects.filter(
invitation=invitation, role=role
).exists()
def test_tenants_list_no_permissions(
self, authenticated_client_no_permissions_rbac, tenants_fixture
):
@@ -3841,14 +3558,9 @@ class TestScanViewSet:
"prowler-output-123_threatscore_report.pdf",
)
presigned_url = (
"https://test-bucket.s3.amazonaws.com/"
"tenant-id/scan-id/threatscore/prowler-output-123_threatscore_report.pdf"
"?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=300"
)
mock_s3_client = Mock()
mock_s3_client.list_objects_v2.return_value = {"Contents": [{"Key": pdf_key}]}
mock_s3_client.generate_presigned_url.return_value = presigned_url
mock_s3_client.get_object.return_value = {"Body": io.BytesIO(b"pdf-bytes")}
mock_env_str.return_value = bucket
mock_get_s3_client.return_value = mock_s3_client
@@ -3857,26 +3569,19 @@ class TestScanViewSet:
url = reverse("scan-threatscore", kwargs={"pk": scan.id})
response = authenticated_client.get(url)
assert response.status_code == status.HTTP_302_FOUND
assert response["Location"] == presigned_url
mock_s3_client.list_objects_v2.assert_called_once()
mock_s3_client.generate_presigned_url.assert_called_once_with(
"get_object",
Params={
"Bucket": bucket,
"Key": pdf_key,
"ResponseContentDisposition": (
'attachment; filename="prowler-output-123_threatscore_report.pdf"'
),
"ResponseContentType": "application/pdf",
},
ExpiresIn=300,
assert response.status_code == status.HTTP_200_OK
assert response["Content-Type"] == "application/pdf"
assert response["Content-Disposition"].endswith(
'"prowler-output-123_threatscore_report.pdf"'
)
assert response.content == b"pdf-bytes"
mock_s3_client.list_objects_v2.assert_called_once()
mock_s3_client.get_object.assert_called_once_with(Bucket=bucket, Key=pdf_key)
def test_report_s3_success(self, authenticated_client, scans_fixture, monkeypatch):
"""
When output_location is an S3 URL and the object exists,
the view should return a 302 redirect to a presigned S3 URL.
When output_location is an S3 URL and the S3 client returns the file successfully,
the view should return the ZIP file with HTTP 200 and proper headers.
"""
scan = scans_fixture[0]
bucket = "test-bucket"
@@ -3890,33 +3595,22 @@ class TestScanViewSet:
type("env", (), {"str": lambda self, *args, **kwargs: "test-bucket"})(),
)
presigned_url = (
"https://test-bucket.s3.amazonaws.com/report.zip"
"?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=300"
)
class FakeS3Client:
def head_object(self, Bucket, Key):
def get_object(self, Bucket, Key):
assert Bucket == bucket
assert Key == key
return {}
def generate_presigned_url(self, ClientMethod, Params, ExpiresIn):
assert ClientMethod == "get_object"
assert Params["Bucket"] == bucket
assert Params["Key"] == key
assert Params["ResponseContentDisposition"] == (
'attachment; filename="report.zip"'
)
assert ExpiresIn == 300
return presigned_url
return {"Body": io.BytesIO(b"s3 zip content")}
monkeypatch.setattr("api.v1.views.get_s3_client", lambda: FakeS3Client())
url = reverse("scan-report", kwargs={"pk": scan.id})
response = authenticated_client.get(url)
assert response.status_code == status.HTTP_302_FOUND
assert response["Location"] == presigned_url
assert response.status_code == 200
expected_filename = os.path.basename("report.zip")
content_disposition = response.get("Content-Disposition")
assert content_disposition.startswith('attachment; filename="')
assert f'filename="{expected_filename}"' in content_disposition
assert response.content == b"s3 zip content"
def test_report_s3_success_no_local_files(
self, authenticated_client, scans_fixture, monkeypatch
@@ -4055,31 +3749,23 @@ class TestScanViewSet:
)
match_key = "path/compliance/mitre_attack_aws.csv"
presigned_url = (
"https://test-bucket.s3.amazonaws.com/path/compliance/mitre_attack_aws.csv"
"?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=300"
)
class FakeS3Client:
def list_objects_v2(self, Bucket, Prefix):
return {"Contents": [{"Key": match_key}]}
def generate_presigned_url(self, ClientMethod, Params, ExpiresIn):
assert ClientMethod == "get_object"
assert Params["Key"] == match_key
assert Params["ResponseContentDisposition"] == (
'attachment; filename="mitre_attack_aws.csv"'
)
assert ExpiresIn == 300
return presigned_url
def get_object(self, Bucket, Key):
return {"Body": io.BytesIO(b"ignored")}
monkeypatch.setattr("api.v1.views.get_s3_client", lambda: FakeS3Client())
framework = match_key.split("/")[-1].split(".")[0]
url = reverse("scan-compliance", kwargs={"pk": scan.id, "name": framework})
resp = authenticated_client.get(url)
assert resp.status_code == status.HTTP_302_FOUND
assert resp["Location"] == presigned_url
assert resp.status_code == status.HTTP_200_OK
cd = resp["Content-Disposition"]
assert cd.startswith('attachment; filename="')
assert cd.endswith('filename="mitre_attack_aws.csv"')
def test_compliance_s3_not_found(
self, authenticated_client, scans_fixture, monkeypatch
@@ -4144,51 +3830,6 @@ class TestScanViewSet:
assert cd.startswith('attachment; filename="')
assert cd.endswith(f'filename="{fname.name}"')
def test_cis_no_output(self, authenticated_client, scans_fixture):
"""CIS PDF endpoint must 404 when the scan has no output_location."""
scan = scans_fixture[0]
scan.state = StateChoices.COMPLETED
scan.output_location = ""
scan.save()
url = reverse("scan-cis", kwargs={"pk": scan.id})
resp = authenticated_client.get(url)
assert resp.status_code == status.HTTP_404_NOT_FOUND
assert (
resp.json()["errors"]["detail"]
== "The scan has no reports, or the CIS report generation task has not started yet."
)
def test_cis_local_file(self, authenticated_client, scans_fixture, monkeypatch):
"""CIS PDF endpoint must serve the latest generated PDF."""
scan = scans_fixture[0]
scan.state = StateChoices.COMPLETED
with tempfile.TemporaryDirectory() as tmp:
tmp_path = Path(tmp)
base = tmp_path / "reports"
cis_dir = base / "cis"
cis_dir.mkdir(parents=True, exist_ok=True)
fname = cis_dir / "prowler-output-aws-20260101000000_cis_report.pdf"
fname.write_bytes(b"%PDF-1.4 fake pdf")
scan.output_location = str(base / "scan.zip")
scan.save()
monkeypatch.setattr(
glob,
"glob",
lambda p: [str(fname)] if p.endswith("*_cis_report.pdf") else [],
)
url = reverse("scan-cis", kwargs={"pk": scan.id})
resp = authenticated_client.get(url)
assert resp.status_code == status.HTTP_200_OK
assert resp["Content-Type"] == "application/pdf"
cd = resp["Content-Disposition"]
assert cd.startswith('attachment; filename="')
assert cd.endswith(f'filename="{fname.name}"')
@patch("api.v1.views.Task.objects.get")
@patch("api.v1.views.TaskSerializer")
def test__get_task_status_returns_none_if_task_not_executing(
@@ -4282,8 +3923,8 @@ class TestScanViewSet:
scan.save()
fake_client = MagicMock()
fake_client.head_object.side_effect = ClientError(
{"Error": {"Code": "NoSuchKey"}}, "HeadObject"
fake_client.get_object.side_effect = ClientError(
{"Error": {"Code": "NoSuchKey"}}, "GetObject"
)
mock_get_s3_client.return_value = fake_client
@@ -4306,8 +3947,8 @@ class TestScanViewSet:
scan.save()
fake_client = MagicMock()
fake_client.head_object.side_effect = ClientError(
{"Error": {"Code": "AccessDenied"}}, "HeadObject"
fake_client.get_object.side_effect = ClientError(
{"Error": {"Code": "AccessDenied"}}, "GetObject"
)
mock_get_s3_client.return_value = fake_client
+58 -299
@@ -4,7 +4,6 @@ import json
import logging
import os
import time
import uuid
from collections import defaultdict
from copy import deepcopy
from datetime import datetime, timedelta, timezone
@@ -17,7 +16,7 @@ from allauth.socialaccount.providers.github.views import GitHubOAuth2Adapter
from allauth.socialaccount.providers.google.views import GoogleOAuth2Adapter
from allauth.socialaccount.providers.saml.views import FinishACSView, LoginView
from botocore.exceptions import ClientError, NoCredentialsError, ParamValidationError
from celery import chain, states
from celery import chain
from celery.result import AsyncResult
from config.custom_logging import BackendLogger
from config.env import env
@@ -54,14 +53,13 @@ from django.db.models import (
)
from django.db.models.fields.json import KeyTextTransform
from django.db.models.functions import Cast, Coalesce, RowNumber
from django.http import HttpResponse, HttpResponseBase, HttpResponseRedirect, QueryDict
from django.http import HttpResponse, QueryDict
from django.shortcuts import redirect
from django.urls import reverse
from django.utils.dateparse import parse_date
from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_control
from django_celery_beat.models import PeriodicTask
from django_celery_results.models import TaskResult
from drf_spectacular.settings import spectacular_settings
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import (
@@ -85,10 +83,6 @@ from rest_framework.permissions import SAFE_METHODS
from rest_framework_json_api import filters as jsonapi_filters
from rest_framework_json_api.views import RelationshipView, Response
from rest_framework_simplejwt.exceptions import InvalidToken, TokenError
from rest_framework_simplejwt.token_blacklist.models import (
BlacklistedToken,
OutstandingToken,
)
from tasks.beat import schedule_provider_scan
from tasks.jobs.attack_paths import db_utils as attack_paths_db_utils
from tasks.jobs.export import get_s3_client
@@ -175,7 +169,6 @@ from api.models import (
FindingGroupDailySummary,
Integration,
Invitation,
InvitationRoleRelationship,
LighthouseConfiguration,
LighthouseProviderConfiguration,
LighthouseProviderModels,
@@ -424,7 +417,7 @@ class SchemaView(SpectacularAPIView):
def get(self, request, *args, **kwargs):
spectacular_settings.TITLE = "Prowler API"
spectacular_settings.VERSION = "1.28.0"
spectacular_settings.VERSION = "1.25.3"
spectacular_settings.DESCRIPTION = (
"Prowler API specification.\n\nThis file is auto-generated."
)
@@ -1337,11 +1330,9 @@ class MembershipViewSet(BaseTenantViewset):
),
destroy=extend_schema(
summary="Delete tenant memberships",
description="Delete a user's membership from a tenant. This action: (1) removes the membership, "
"(2) revokes all refresh tokens for the expelled user, (3) removes their role grants for this tenant, "
"(4) cleans up orphaned roles, and (5) deletes the user account if this was their last membership. "
"You must be a tenant owner to delete another user's membership. The last owner of a tenant cannot "
"delete their own membership.",
description="Delete the membership details of users in a tenant. You need to be one of the owners to delete a "
"membership that is not yours. If you are the last owner of a tenant, you cannot delete your own "
"membership.",
tags=["Tenant"],
),
)
@@ -1350,7 +1341,6 @@ class TenantMembersViewSet(BaseTenantViewset):
http_method_names = ["get", "delete"]
serializer_class = MembershipSerializer
queryset = Membership.objects.none()
filterset_class = MembershipFilter
# Authorization is handled by get_requesting_membership (owner/member checks),
# not by RBAC, since the target tenant differs from the JWT tenant.
required_permissions = []
@@ -1408,84 +1398,7 @@ class TenantMembersViewSet(BaseTenantViewset):
"You do not have permission to delete this membership."
)
user_to_check_id = membership_to_delete.user_id
tenant_id = membership_to_delete.tenant_id
# All writes run on the admin connection so that the uncommitted
# membership delete is visible to the subsequent "other memberships"
# check. Splitting the delete and the check across the default
# (prowler_user, RLS) and admin connections caused the admin side to
# miss the just-deleted row and leave the User row orphaned.
with transaction.atomic(using=MainRouter.admin_db):
Membership.objects.using(MainRouter.admin_db).filter(
id=membership_to_delete.id
).delete()
# Remove role grants for this user in this tenant to prevent
# orphaned permissions that could allow access after expulsion
deleted_role_relationships = UserRoleRelationship.objects.using(
MainRouter.admin_db
).filter(user_id=user_to_check_id, tenant_id=tenant_id)
# Collect role IDs that might become orphaned after deletion
role_ids_to_check = list(
deleted_role_relationships.values_list("role_id", flat=True)
)
# Delete the user role relationships for this tenant
deleted_role_relationships.delete()
# Clean up orphaned roles that have no remaining user or invitation relationships
if role_ids_to_check:
for role_id in role_ids_to_check:
has_user_relationships = (
UserRoleRelationship.objects.using(MainRouter.admin_db)
.filter(role_id=role_id)
.exists()
)
has_invitation_relationships = (
InvitationRoleRelationship.objects.using(MainRouter.admin_db)
.filter(role_id=role_id)
.exists()
)
if not has_user_relationships and not has_invitation_relationships:
Role.objects.using(MainRouter.admin_db).filter(
id=role_id
).delete()
# Revoke any refresh tokens the expelled user still holds so they
# cannot mint fresh access tokens. This must happen before the
# User row is deleted, because OutstandingToken.user is
# on_delete=SET_NULL in djangorestframework-simplejwt 5.5.1
# (see rest_framework_simplejwt/token_blacklist/models.py): once
# the user row is gone, user_id becomes NULL and we can no longer
# look up that user's outstanding tokens. Access tokens already
# issued remain valid until SIMPLE_JWT["ACCESS_TOKEN_LIFETIME"]
# expires.
outstanding_token_ids = list(
OutstandingToken.objects.using(MainRouter.admin_db)
.filter(user_id=user_to_check_id)
.values_list("id", flat=True)
)
if outstanding_token_ids:
BlacklistedToken.objects.using(MainRouter.admin_db).bulk_create(
[
BlacklistedToken(token_id=token_id)
for token_id in outstanding_token_ids
],
ignore_conflicts=True,
)
has_other_memberships = (
Membership.objects.using(MainRouter.admin_db)
.filter(user_id=user_to_check_id)
.exists()
)
if not has_other_memberships:
User.objects.using(MainRouter.admin_db).filter(
id=user_to_check_id
).delete()
membership_to_delete.delete()
return Response(status=status.HTTP_204_NO_CONTENT)
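A compact sketch of the refresh-token revocation step this hunk touches: blacklist every outstanding refresh token a user still holds, idempotently. It mirrors the simplejwt models used above and needs a Django project with the token_blacklist app installed to actually run.

from rest_framework_simplejwt.token_blacklist.models import (
    BlacklistedToken,
    OutstandingToken,
)

def blacklist_all_refresh_tokens(user_id) -> int:
    token_ids = list(
        OutstandingToken.objects.filter(user_id=user_id).values_list("id", flat=True)
    )
    if not token_ids:
        return 0
    # ignore_conflicts keeps the call idempotent if some tokens are already blacklisted.
    BlacklistedToken.objects.bulk_create(
        [BlacklistedToken(token_id=token_id) for token_id in token_ids],
        ignore_conflicts=True,
    )
    return len(token_ids)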
@@ -1928,27 +1841,6 @@ class ProviderViewSet(DisablePaginationMixin, BaseRLSViewSet):
),
},
),
cis=extend_schema(
tags=["Scan"],
summary="Retrieve CIS Benchmark compliance report",
description="Download the CIS Benchmark compliance report as a PDF file. "
"When a provider ships multiple CIS versions, the report is generated "
"for the highest available version.",
request=None,
responses={
200: OpenApiResponse(
description="PDF file containing the CIS compliance report"
),
202: OpenApiResponse(description="The task is in progress"),
401: OpenApiResponse(
description="API key missing or user not Authenticated"
),
403: OpenApiResponse(description="There is a problem with credentials"),
404: OpenApiResponse(
description="The scan has no CIS reports, or the CIS report generation task has not started yet"
),
},
),
)
@method_decorator(CACHE_DECORATOR, name="list")
@method_decorator(CACHE_DECORATOR, name="retrieve")
@@ -2017,9 +1909,6 @@ class ScanViewSet(BaseRLSViewSet):
elif self.action == "csa":
if hasattr(self, "response_serializer_class"):
return self.response_serializer_class
elif self.action == "cis":
if hasattr(self, "response_serializer_class"):
return self.response_serializer_class
return super().get_serializer_class()
def partial_update(self, request, *args, **kwargs):
@@ -2082,38 +1971,24 @@ class ScanViewSet(BaseRLSViewSet):
},
)
def _load_file(
self,
path_pattern,
s3=False,
bucket=None,
list_objects=False,
content_type=None,
):
def _load_file(self, path_pattern, s3=False, bucket=None, list_objects=False):
"""
Resolve a report file location and return the bytes (filesystem) or a redirect (S3).
Loads a binary file (e.g., ZIP or CSV) and returns its content and filename.
Depending on the input parameters, this method supports loading:
- From S3 using a direct key, returns a 302 to a short-lived presigned URL.
- From S3 by listing objects under a prefix and matching suffix, returns a 302 to a short-lived presigned URL.
- From the local filesystem using glob pattern matching, returns the file bytes.
The S3 branch never streams bytes through the worker; this prevents gunicorn
worker timeouts on large reports.
- From S3 using a direct key.
- From S3 by listing objects under a prefix and matching suffix.
- From the local filesystem using glob pattern matching.
Args:
path_pattern (str): The key or glob pattern representing the file location.
s3 (bool, optional): Whether the file is stored in S3. Defaults to False.
bucket (str, optional): The name of the S3 bucket, required if `s3=True`. Defaults to None.
list_objects (bool, optional): If True and `s3=True`, list objects by prefix to find the file. Defaults to False.
content_type (str, optional): On the S3 branch, forwarded as `ResponseContentType`
so the presigned download advertises the same Content-Type the API used to send.
Ignored on the filesystem branch.
Returns:
tuple[bytes, str]: For the filesystem branch, the file content and filename.
HttpResponseRedirect: For the S3 branch on success, a 302 redirect to a presigned `GetObject` URL.
Response: For any error path, a DRF `Response` with an appropriate status and detail.
tuple[bytes, str]: A tuple containing the file content as bytes and the filename if successful.
Response: A DRF `Response` object with an appropriate status and error detail if an error occurs.
"""
if s3:
try:
@@ -2160,45 +2035,25 @@ class ScanViewSet(BaseRLSViewSet):
# path_pattern here is the prefix; for compliance reports the correct suffix check is built beforehand
key = keys[0]
else:
# path_pattern is exact key; HEAD before presigning to preserve the 404 contract.
# path_pattern is exact key
key = path_pattern
try:
client.head_object(Bucket=bucket, Key=key)
except ClientError as e:
code = e.response.get("Error", {}).get("Code")
if code in ("NoSuchKey", "404"):
return Response(
{
"detail": "The scan has no reports, or the report generation task has not started yet."
},
status=status.HTTP_404_NOT_FOUND,
)
try:
s3_obj = client.get_object(Bucket=bucket, Key=key)
except ClientError as e:
code = e.response.get("Error", {}).get("Code")
if code == "NoSuchKey":
return Response(
{"detail": "There is a problem with credentials."},
status=status.HTTP_403_FORBIDDEN,
{
"detail": "The scan has no reports, or the report generation task has not started yet."
},
status=status.HTTP_404_NOT_FOUND,
)
return Response(
{"detail": "There is a problem with credentials."},
status=status.HTTP_403_FORBIDDEN,
)
content = s3_obj["Body"].read()
filename = os.path.basename(key)
# escape quotes and strip CR/LF so a malformed key cannot break out of the header
safe_filename = (
filename.replace("\\", "\\\\")
.replace('"', '\\"')
.replace("\r", "")
.replace("\n", "")
)
params = {
"Bucket": bucket,
"Key": key,
"ResponseContentDisposition": f'attachment; filename="{safe_filename}"',
}
if content_type:
params["ResponseContentType"] = content_type
url = client.generate_presigned_url(
"get_object",
Params=params,
ExpiresIn=300,
)
return HttpResponseRedirect(url)
else:
files = glob.glob(path_pattern)
if not files:
@@ -2241,16 +2096,12 @@ class ScanViewSet(BaseRLSViewSet):
bucket = env.str("DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET", "")
key_prefix = scan.output_location.removeprefix(f"s3://{bucket}/")
loader = self._load_file(
key_prefix,
s3=True,
bucket=bucket,
list_objects=False,
content_type="application/x-zip-compressed",
key_prefix, s3=True, bucket=bucket, list_objects=False
)
else:
loader = self._load_file(scan.output_location, s3=False)
if isinstance(loader, HttpResponseBase):
if isinstance(loader, Response):
return loader
content, filename = loader
@@ -2288,69 +2139,18 @@ class ScanViewSet(BaseRLSViewSet):
prefix = os.path.join(
os.path.dirname(key_prefix), "compliance", f"{name}.csv"
)
loader = self._load_file(
prefix,
s3=True,
bucket=bucket,
list_objects=True,
content_type="text/csv",
)
loader = self._load_file(prefix, s3=True, bucket=bucket, list_objects=True)
else:
base = os.path.dirname(scan.output_location)
pattern = os.path.join(base, "compliance", f"*_{name}.csv")
loader = self._load_file(pattern, s3=False)
if isinstance(loader, HttpResponseBase):
if isinstance(loader, Response):
return loader
content, filename = loader
return self._serve_file(content, filename, "text/csv")
@action(
detail=True,
methods=["get"],
url_name="cis",
)
def cis(self, request, pk=None):
scan = self.get_object()
running_resp = self._get_task_status(scan)
if running_resp:
return running_resp
if not scan.output_location:
return Response(
{
"detail": "The scan has no reports, or the CIS report generation task has not started yet."
},
status=status.HTTP_404_NOT_FOUND,
)
if scan.output_location.startswith("s3://"):
bucket = env.str("DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET", "")
key_prefix = scan.output_location.removeprefix(f"s3://{bucket}/")
prefix = os.path.join(
os.path.dirname(key_prefix),
"cis",
"*_cis_report.pdf",
)
loader = self._load_file(
prefix,
s3=True,
bucket=bucket,
list_objects=True,
content_type="application/pdf",
)
else:
base = os.path.dirname(scan.output_location)
pattern = os.path.join(base, "cis", "*_cis_report.pdf")
loader = self._load_file(pattern, s3=False)
if isinstance(loader, HttpResponseBase):
return loader
content, filename = loader
return self._serve_file(content, filename, "application/pdf")
@action(
detail=True,
methods=["get"],
@@ -2379,19 +2179,13 @@ class ScanViewSet(BaseRLSViewSet):
"threatscore",
"*_threatscore_report.pdf",
)
loader = self._load_file(
prefix,
s3=True,
bucket=bucket,
list_objects=True,
content_type="application/pdf",
)
loader = self._load_file(prefix, s3=True, bucket=bucket, list_objects=True)
else:
base = os.path.dirname(scan.output_location)
pattern = os.path.join(base, "threatscore", "*_threatscore_report.pdf")
loader = self._load_file(pattern, s3=False)
if isinstance(loader, HttpResponseBase):
if isinstance(loader, Response):
return loader
content, filename = loader
@@ -2425,19 +2219,13 @@ class ScanViewSet(BaseRLSViewSet):
"ens",
"*_ens_report.pdf",
)
loader = self._load_file(
prefix,
s3=True,
bucket=bucket,
list_objects=True,
content_type="application/pdf",
)
loader = self._load_file(prefix, s3=True, bucket=bucket, list_objects=True)
else:
base = os.path.dirname(scan.output_location)
pattern = os.path.join(base, "ens", "*_ens_report.pdf")
loader = self._load_file(pattern, s3=False)
if isinstance(loader, HttpResponseBase):
if isinstance(loader, Response):
return loader
content, filename = loader
@@ -2470,19 +2258,13 @@ class ScanViewSet(BaseRLSViewSet):
"nis2",
"*_nis2_report.pdf",
)
loader = self._load_file(
prefix,
s3=True,
bucket=bucket,
list_objects=True,
content_type="application/pdf",
)
loader = self._load_file(prefix, s3=True, bucket=bucket, list_objects=True)
else:
base = os.path.dirname(scan.output_location)
pattern = os.path.join(base, "nis2", "*_nis2_report.pdf")
loader = self._load_file(pattern, s3=False)
if isinstance(loader, HttpResponseBase):
if isinstance(loader, Response):
return loader
content, filename = loader
@@ -2515,19 +2297,13 @@ class ScanViewSet(BaseRLSViewSet):
"csa",
"*_csa_report.pdf",
)
loader = self._load_file(
prefix,
s3=True,
bucket=bucket,
list_objects=True,
content_type="application/pdf",
)
loader = self._load_file(prefix, s3=True, bucket=bucket, list_objects=True)
else:
base = os.path.dirname(scan.output_location)
pattern = os.path.join(base, "csa", "*_csa_report.pdf")
loader = self._load_file(pattern, s3=False)
if isinstance(loader, HttpResponseBase):
if isinstance(loader, Response):
return loader
content, filename = loader
@@ -2536,45 +2312,28 @@ class ScanViewSet(BaseRLSViewSet):
def create(self, request, *args, **kwargs):
input_serializer = self.get_serializer(data=request.data)
input_serializer.is_valid(raise_exception=True)
# Broker publish is deferred to on_commit so the worker cannot read
# Scan before BaseRLSViewSet's dispatch-wide atomic commits.
pre_task_id = str(uuid.uuid4())
with transaction.atomic():
scan = input_serializer.save()
scan.task_id = pre_task_id
scan.save(update_fields=["task_id"])
attack_paths_db_utils.create_attack_paths_scan(
tenant_id=self.request.tenant_id,
scan_id=str(scan.id),
provider_id=str(scan.provider_id),
with transaction.atomic():
task = perform_scan_task.apply_async(
kwargs={
"tenant_id": self.request.tenant_id,
"scan_id": str(scan.id),
"provider_id": str(scan.provider_id),
# Disabled for now
# checks_to_execute=scan.scanner_args.get("checks_to_execute")
},
)
task_result, _ = TaskResult.objects.get_or_create(
task_id=pre_task_id,
defaults={"status": states.PENDING, "task_name": "scan-perform"},
)
prowler_task, _ = Task.objects.update_or_create(
id=pre_task_id,
tenant_id=self.request.tenant_id,
defaults={"task_runner_task": task_result},
)
attack_paths_db_utils.create_attack_paths_scan(
tenant_id=self.request.tenant_id,
scan_id=str(scan.id),
provider_id=str(scan.provider_id),
)
scan_kwargs = {
"tenant_id": self.request.tenant_id,
"scan_id": str(scan.id),
"provider_id": str(scan.provider_id),
# Disabled for now
# checks_to_execute=scan.scanner_args.get("checks_to_execute")
}
transaction.on_commit(
lambda: perform_scan_task.apply_async(
kwargs=scan_kwargs, task_id=pre_task_id
)
)
prowler_task = Task.objects.get(id=task.id)
scan.task_id = task.id
scan.save(update_fields=["task_id"])
self.response_serializer_class = TaskSerializer
output_serializer = self.get_serializer(prowler_task)
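As a side note, a minimal sketch (with a hypothetical model MyModel and Celery task my_task) of the ordering guarantee the on_commit dispatch relies on: callbacks registered with transaction.on_commit inside a nested atomic block only run after the outermost transaction commits, so the task is never published while the new row is still uncommitted.

from django.db import transaction

def create_and_dispatch():
    with transaction.atomic():  # may itself be nested inside a per-request atomic
        obj = MyModel.objects.create()  # hypothetical model
        transaction.on_commit(
            lambda: my_task.apply_async(kwargs={"pk": str(obj.pk)})
        )
    # the callback fires here, or later if an enclosing atomic block is still open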
@@ -120,7 +120,6 @@ sentry_sdk.init(
# see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
before_send=before_send,
send_default_pii=True,
traces_sample_rate=env.float("DJANGO_SENTRY_TRACES_SAMPLE_RATE", default=0.02),
_experiments={
# Set continuous_profiling_auto_start to True
# to automatically start the profiler on when
+4 -4
@@ -14,8 +14,8 @@ from rest_framework import status
from rest_framework.test import APIClient
from tasks.jobs.backfill import (
backfill_resource_scan_summaries,
aggregate_scan_category_summaries,
aggregate_scan_resource_group_summaries,
backfill_scan_category_summaries,
backfill_scan_resource_group_summaries,
)
from api.attack_paths import (
@@ -1445,8 +1445,8 @@ def latest_scan_finding_with_categories(
)
finding.add_resources([resource])
backfill_resource_scan_summaries(tenant_id, str(scan.id))
aggregate_scan_category_summaries(tenant_id, str(scan.id))
aggregate_scan_resource_group_summaries(tenant_id, str(scan.id))
backfill_scan_category_summaries(tenant_id, str(scan.id))
backfill_scan_resource_group_summaries(tenant_id, str(scan.id))
return finding
Binary file not shown.
+1 -43
@@ -49,7 +49,7 @@ def start_aws_ingestion(
}
boto3_session = get_boto3_session(prowler_api_provider, prowler_sdk_provider)
regions: list[str] = resolve_aws_regions(prowler_api_provider, prowler_sdk_provider)
regions: list[str] = list(prowler_sdk_provider._enabled_regions)
requested_syncs = list(cartography_aws.RESOURCE_FUNCTIONS.keys())
sync_args = cartography_aws._build_aws_sync_kwargs(
@@ -226,48 +226,6 @@ def get_boto3_session(
return boto3_session
def resolve_aws_regions(
prowler_api_provider: ProwlerAPIProvider,
prowler_sdk_provider: ProwlerSDKProvider,
) -> list[str]:
"""Resolve the regions to scan, falling back when `_enabled_regions` is `None`.
The SDK silently sets `_enabled_regions` to `None` when `ec2:DescribeRegions`
fails (missing IAM permission, transient error). Without a fallback the
Cartography ingestion crashes with a non-actionable `TypeError`. Try the
user's `audited_regions` next, then the partition's static region list.
Excluded regions are honored on every branch.
"""
if prowler_sdk_provider._enabled_regions is not None:
regions = set(prowler_sdk_provider._enabled_regions)
elif prowler_sdk_provider.identity.audited_regions:
regions = set(prowler_sdk_provider.identity.audited_regions)
else:
partition = prowler_sdk_provider.identity.partition
try:
regions = prowler_sdk_provider.get_available_aws_service_regions(
"ec2", partition
)
except KeyError:
raise RuntimeError(
f"No region data available for partition {partition!r}; "
f"cannot determine regions to scan for "
f"{prowler_api_provider.uid}"
)
logger.warning(
f"Could not enumerate enabled regions for AWS account "
f"{prowler_api_provider.uid}; falling back to all regions in "
f"partition {partition!r}"
)
excluded = set(getattr(prowler_sdk_provider, "_excluded_regions", None) or ())
return sorted(regions - excluded)
def get_aioboto3_session(boto3_session: boto3.Session) -> aioboto3.Session:
return aioboto3.Session(botocore_session=boto3_session._session)
@@ -18,45 +18,28 @@ logger = get_task_logger(__name__)
def cleanup_stale_attack_paths_scans() -> dict:
"""
Mark stale `AttackPathsScan` rows as `FAILED`.
Find `EXECUTING` `AttackPathsScan` scans whose workers are dead or that have
exceeded the stale threshold, and mark them as `FAILED`.
Covers two stuck-state scenarios:
1. `EXECUTING` scans whose workers are dead, or that have exceeded the
stale threshold while alive.
2. `SCHEDULED` scans that never made it to a worker (parent scan
crashed before dispatch, broker lost the message, etc.). Detected by
age plus the parent `Scan` no longer being in flight.
"""
threshold = timedelta(minutes=ATTACK_PATHS_SCAN_STALE_THRESHOLD_MINUTES)
now = datetime.now(tz=timezone.utc)
cutoff = now - threshold
cleaned_up: list[str] = []
cleaned_up.extend(_cleanup_stale_executing_scans(cutoff))
cleaned_up.extend(_cleanup_stale_scheduled_scans(cutoff))
logger.info(
f"Stale `AttackPathsScan` cleanup: {len(cleaned_up)} scan(s) cleaned up"
)
return {"cleaned_up_count": len(cleaned_up), "scan_ids": cleaned_up}
def _cleanup_stale_executing_scans(cutoff: datetime) -> list[str]:
"""
Two-pass detection for `EXECUTING` scans:
Two-pass detection:
1. If `TaskResult.worker` exists, ping the worker.
- Dead worker: cleanup immediately (any age).
- Alive + past threshold: revoke the task, then cleanup.
- Alive + within threshold: skip.
2. If no worker field: fall back to time-based heuristic only.
"""
executing_scans = list(
threshold = timedelta(minutes=ATTACK_PATHS_SCAN_STALE_THRESHOLD_MINUTES)
now = datetime.now(tz=timezone.utc)
cutoff = now - threshold
executing_scans = (
AttackPathsScan.all_objects.using(MainRouter.admin_db)
.filter(state=StateChoices.EXECUTING)
.select_related("task__task_runner_task")
)
# Cache worker liveness so each worker is pinged at most once
executing_scans = list(executing_scans)
workers = {
tr.worker
for scan in executing_scans
@@ -65,7 +48,7 @@ def _cleanup_stale_executing_scans(cutoff: datetime) -> list[str]:
}
worker_alive = {w: _is_worker_alive(w) for w in workers}
cleaned_up: list[str] = []
cleaned_up = []
for scan in executing_scans:
task_result = (
@@ -82,7 +65,9 @@ def _cleanup_stale_executing_scans(cutoff: datetime) -> list[str]:
# Alive but stale — revoke before cleanup
_revoke_task(task_result)
reason = "Scan exceeded stale threshold — cleaned up by periodic task"
reason = (
"Scan exceeded stale threshold — " "cleaned up by periodic task"
)
else:
reason = "Worker dead — cleaned up by periodic task"
else:
@@ -97,57 +82,10 @@ def _cleanup_stale_executing_scans(cutoff: datetime) -> list[str]:
if _cleanup_scan(scan, task_result, reason):
cleaned_up.append(str(scan.id))
return cleaned_up
def _cleanup_stale_scheduled_scans(cutoff: datetime) -> list[str]:
"""
Cleanup `SCHEDULED` scans that never reached a worker.
Detection:
- `state == SCHEDULED`
- `started_at < cutoff`
- parent `Scan` is no longer in flight (terminal state or missing). This
avoids cleaning up rows whose parent Prowler scan is legitimately still
running.
For each match: revoke the queued task (best-effort; harmless if already
consumed), atomically flip to `FAILED`, and mark the `TaskResult`. The
temp Neo4j database is never created while `SCHEDULED`, so no drop is
needed.
"""
scheduled_scans = list(
AttackPathsScan.all_objects.using(MainRouter.admin_db)
.filter(
state=StateChoices.SCHEDULED,
started_at__lt=cutoff,
)
.select_related("task__task_runner_task", "scan")
logger.info(
f"Stale `AttackPathsScan` cleanup: {len(cleaned_up)} scan(s) cleaned up"
)
cleaned_up: list[str] = []
parent_terminal = (
StateChoices.COMPLETED,
StateChoices.FAILED,
StateChoices.CANCELLED,
)
for scan in scheduled_scans:
parent_scan = scan.scan
if parent_scan is not None and parent_scan.state not in parent_terminal:
continue
task_result = (
getattr(scan.task, "task_runner_task", None) if scan.task else None
)
if task_result:
_revoke_task(task_result, terminate=False)
reason = "Scan never started — cleaned up by periodic task"
if _cleanup_scheduled_scan(scan, task_result, reason):
cleaned_up.append(str(scan.id))
return cleaned_up
return {"cleaned_up_count": len(cleaned_up), "scan_ids": cleaned_up}
def _is_worker_alive(worker: str) -> bool:
@@ -160,17 +98,12 @@ def _is_worker_alive(worker: str) -> bool:
return True
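The body of `_is_worker_alive` is mostly elided by this hunk; below is a sketch of one common way to implement such a liveness check with Celery's control API (an assumption for illustration, not the repository's actual code).

def _is_worker_alive(worker: str) -> bool:
    # Ping only the named worker; an empty reply list means no pong came back.
    try:
        replies = current_app.control.ping(destination=[worker], timeout=2)
    except Exception:
        return False
    return bool(replies)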
def _revoke_task(task_result, terminate: bool = True) -> None:
"""Revoke a Celery task. Non-fatal on failure.
`terminate=True` SIGTERMs the worker if the task is mid-execution; use
for EXECUTING cleanup. `terminate=False` only marks the task id revoked
across workers, so any worker pulling the queued message discards it;
use for SCHEDULED cleanup where the task hasn't run yet.
"""
def _revoke_task(task_result) -> None:
"""Send `SIGTERM` to a hung Celery task. Non-fatal on failure."""
try:
kwargs = {"terminate": True, "signal": "SIGTERM"} if terminate else {}
current_app.control.revoke(task_result.task_id, **kwargs)
current_app.control.revoke(
task_result.task_id, terminate=True, signal="SIGTERM"
)
logger.info(f"Revoked task {task_result.task_id}")
except Exception:
logger.exception(f"Failed to revoke task {task_result.task_id}")
@@ -192,64 +125,28 @@ def _cleanup_scan(scan, task_result, reason: str) -> bool:
except Exception:
logger.exception(f"Failed to drop temp database {tmp_db_name}")
fresh_scan = _finalize_failed_scan(scan, StateChoices.EXECUTING, reason)
if fresh_scan is None:
return False
# Mark `TaskResult` as `FAILURE` (not RLS-protected, outside lock)
if task_result:
task_result.status = states.FAILURE
task_result.date_done = datetime.now(tz=timezone.utc)
task_result.save(update_fields=["status", "date_done"])
recover_graph_data_ready(fresh_scan)
logger.info(f"Cleaned up stale scan {scan_id_str}: {reason}")
return True
def _cleanup_scheduled_scan(scan, task_result, reason: str) -> bool:
"""
Clean up a `SCHEDULED` scan that never reached a worker.
Skips the temp Neo4j drop: the database is only created once the worker
enters `EXECUTING`, so dropping it here just produces noisy log output.
Returns `True` if the scan was actually cleaned up, `False` if skipped.
"""
scan_id_str = str(scan.id)
fresh_scan = _finalize_failed_scan(scan, StateChoices.SCHEDULED, reason)
if fresh_scan is None:
return False
if task_result:
task_result.status = states.FAILURE
task_result.date_done = datetime.now(tz=timezone.utc)
task_result.save(update_fields=["status", "date_done"])
logger.info(f"Cleaned up scheduled scan {scan_id_str}: {reason}")
return True
def _finalize_failed_scan(scan, expected_state: str, reason: str):
"""
Atomically lock the row, verify it's still in `expected_state`, and
mark it `FAILED`. Returns the locked row on success, `None` if the
row is gone or has already moved on.
"""
scan_id_str = str(scan.id)
# 2. Lock row, verify still EXECUTING, mark FAILED — all atomic
with rls_transaction(str(scan.tenant_id)):
try:
fresh_scan = AttackPathsScan.objects.select_for_update().get(id=scan.id)
except AttackPathsScan.DoesNotExist:
logger.warning(f"Scan {scan_id_str} no longer exists, skipping")
return None
return False
if fresh_scan.state != expected_state:
if fresh_scan.state != StateChoices.EXECUTING:
logger.info(f"Scan {scan_id_str} is now {fresh_scan.state}, skipping")
return None
return False
_mark_scan_finished(fresh_scan, StateChoices.FAILED, {"global_error": reason})
return fresh_scan
# 3. Mark `TaskResult` as `FAILURE` (not RLS-protected, outside lock)
if task_result:
task_result.status = states.FAILURE
task_result.date_done = datetime.now(tz=timezone.utc)
task_result.save(update_fields=["status", "date_done"])
# 4. Recover graph_data_ready if provider data still exists
recover_graph_data_ready(fresh_scan)
logger.info(f"Cleaned up stale scan {scan_id_str}: {reason}")
return True
@@ -67,52 +67,25 @@ def retrieve_attack_paths_scan(
return None
def set_attack_paths_scan_task_id(
tenant_id: str,
scan_pk: str,
task_id: str,
) -> None:
"""Persist the Celery `task_id` on the `AttackPathsScan` row.
Called at dispatch time (when `apply_async` returns) so the row carries
the task id even while still `SCHEDULED`. This lets the periodic
cleanup revoke queued messages for scans that never reached a worker.
"""
with rls_transaction(tenant_id):
ProwlerAPIAttackPathsScan.objects.filter(id=scan_pk).update(task_id=task_id)
def starting_attack_paths_scan(
attack_paths_scan: ProwlerAPIAttackPathsScan,
task_id: str,
cartography_config: CartographyConfig,
) -> bool:
"""Flip the row from `SCHEDULED` to `EXECUTING` atomically.
Returns `False` if the row is gone or has already moved past
`SCHEDULED` (e.g., periodic cleanup raced ahead and marked it
`FAILED` while the worker message was still in flight).
"""
) -> None:
with rls_transaction(attack_paths_scan.tenant_id):
try:
locked = ProwlerAPIAttackPathsScan.objects.select_for_update().get(
id=attack_paths_scan.id
)
except ProwlerAPIAttackPathsScan.DoesNotExist:
return False
attack_paths_scan.task_id = task_id
attack_paths_scan.state = StateChoices.EXECUTING
attack_paths_scan.started_at = datetime.now(tz=timezone.utc)
attack_paths_scan.update_tag = cartography_config.update_tag
if locked.state != StateChoices.SCHEDULED:
return False
locked.state = StateChoices.EXECUTING
locked.started_at = datetime.now(tz=timezone.utc)
locked.update_tag = cartography_config.update_tag
locked.save(update_fields=["state", "started_at", "update_tag"])
# Keep the in-memory object the caller is holding in sync.
attack_paths_scan.state = locked.state
attack_paths_scan.started_at = locked.started_at
attack_paths_scan.update_tag = locked.update_tag
return True
attack_paths_scan.save(
update_fields=[
"task_id",
"state",
"started_at",
"update_tag",
]
)
def _mark_scan_finished(
@@ -97,19 +97,6 @@ def run(tenant_id: str, scan_id: str, task_id: str) -> dict[str, Any]:
)
attack_paths_scan = db_utils.retrieve_attack_paths_scan(tenant_id, scan_id)
# Idempotency guard: cleanup may have flipped this row to a terminal state
# while the message was still in flight. Bail out before touching state.
if attack_paths_scan and attack_paths_scan.state in (
StateChoices.FAILED,
StateChoices.COMPLETED,
StateChoices.CANCELLED,
):
logger.warning(
f"Attack Paths scan {attack_paths_scan.id} already in terminal "
f"state {attack_paths_scan.state}; skipping execution"
)
return {}
# Checks before starting the scan
if not cartography_ingestion_function:
ingestion_exceptions = {
@@ -127,17 +114,12 @@ def run(tenant_id: str, scan_id: str, task_id: str) -> dict[str, Any]:
else:
if not attack_paths_scan:
# Safety net for in-flight messages or direct task invocations; dispatcher normally pre-creates the row.
logger.warning(
f"No Attack Paths Scan found for scan {scan_id} and tenant {tenant_id}, let's create it then"
)
attack_paths_scan = db_utils.create_attack_paths_scan(
tenant_id, scan_id, prowler_api_provider.id
)
if attack_paths_scan and task_id:
db_utils.set_attack_paths_scan_task_id(
tenant_id, attack_paths_scan.id, task_id
)
tmp_database_name = graph_database.get_database_name(
attack_paths_scan.id, temporary=True
@@ -159,13 +141,9 @@ def run(tenant_id: str, scan_id: str, task_id: str) -> dict[str, Any]:
)
# Starting the Attack Paths scan
if not db_utils.starting_attack_paths_scan(
attack_paths_scan, tenant_cartography_config
):
logger.warning(
f"Attack Paths scan {attack_paths_scan.id} no longer in SCHEDULED state; cleanup likely raced ahead"
)
return {}
db_utils.starting_attack_paths_scan(
attack_paths_scan, task_id, tenant_cartography_config
)
scan_t0 = time.perf_counter()
logger.info(
+26 -46
@@ -297,15 +297,12 @@ def backfill_daily_severity_summaries(tenant_id: str, days: int = None):
}
def aggregate_scan_category_summaries(tenant_id: str, scan_id: str):
def backfill_scan_category_summaries(tenant_id: str, scan_id: str):
"""
Backfill ScanCategorySummary for a completed scan.
Aggregates category counts from all findings in the scan and creates
one ScanCategorySummary row per (category, severity) combination.
Idempotent: re-runs replace the scan's existing rows so counts stay in
sync with `Finding.muted` updates triggered outside scan completion
(e.g. mute rules).
Args:
tenant_id: Target tenant UUID
@@ -315,6 +312,11 @@ def aggregate_scan_category_summaries(tenant_id: str, scan_id: str):
dict: Status indicating whether backfill was performed
"""
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
if ScanCategorySummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).exists():
return {"status": "already backfilled"}
if not Scan.objects.filter(
tenant_id=tenant_id,
id=scan_id,
@@ -335,6 +337,9 @@ def aggregate_scan_category_summaries(tenant_id: str, scan_id: str):
cache=category_counts,
)
if not category_counts:
return {"status": "no categories to backfill"}
category_summaries = [
ScanCategorySummary(
tenant_id=tenant_id,
@@ -348,38 +353,20 @@ def aggregate_scan_category_summaries(tenant_id: str, scan_id: str):
for (category, severity), counts in category_counts.items()
]
if category_summaries:
with rls_transaction(tenant_id):
# Upsert so re-runs (post-mute reaggregation) don't trip
# `unique_category_severity_per_scan`; race-safe under concurrent writers.
ScanCategorySummary.objects.bulk_create(
category_summaries,
batch_size=500,
update_conflicts=True,
unique_fields=["tenant_id", "scan_id", "category", "severity"],
update_fields=[
"total_findings",
"failed_findings",
"new_failed_findings",
],
)
if not category_counts:
return {"status": "no categories to backfill"}
with rls_transaction(tenant_id):
ScanCategorySummary.objects.bulk_create(
category_summaries, batch_size=500, ignore_conflicts=True
)
return {"status": "backfilled", "categories_count": len(category_counts)}
def aggregate_scan_resource_group_summaries(tenant_id: str, scan_id: str):
def backfill_scan_resource_group_summaries(tenant_id: str, scan_id: str):
"""
Backfill ScanGroupSummary for a completed scan.
Aggregates resource group counts from all findings in the scan and creates
one ScanGroupSummary row per (resource_group, severity) combination.
Idempotent: re-runs replace the scan's existing rows so counts stay in
sync with `Finding.muted` updates triggered outside scan completion
(e.g. mute rules) and with resource-inventory views reading from this
table.
Args:
tenant_id: Target tenant UUID
@@ -389,6 +376,11 @@ def aggregate_scan_resource_group_summaries(tenant_id: str, scan_id: str):
dict: Status indicating whether backfill was performed
"""
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
if ScanGroupSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).exists():
return {"status": "already backfilled"}
if not Scan.objects.filter(
tenant_id=tenant_id,
id=scan_id,
@@ -426,6 +418,9 @@ def aggregate_scan_resource_group_summaries(tenant_id: str, scan_id: str):
group_resources_cache=group_resources_cache,
)
if not resource_group_counts:
return {"status": "no resource groups to backfill"}
# Compute group-level resource counts (same value for all severity rows in a group)
group_resource_counts = {
grp: len(uids) for grp, uids in group_resources_cache.items()
@@ -444,25 +439,10 @@ def aggregate_scan_resource_group_summaries(tenant_id: str, scan_id: str):
for (grp, severity), counts in resource_group_counts.items()
]
if resource_group_summaries:
with rls_transaction(tenant_id):
# Upsert so re-runs (post-mute reaggregation) don't trip
# `unique_resource_group_severity_per_scan`; race-safe under concurrent writers.
ScanGroupSummary.objects.bulk_create(
resource_group_summaries,
batch_size=500,
update_conflicts=True,
unique_fields=["tenant_id", "scan_id", "resource_group", "severity"],
update_fields=[
"total_findings",
"failed_findings",
"new_failed_findings",
"resources_count",
],
)
if not resource_group_counts:
return {"status": "no resource groups to backfill"}
with rls_transaction(tenant_id):
ScanGroupSummary.objects.bulk_create(
resource_group_summaries, batch_size=500, ignore_conflicts=True
)
return {"status": "backfilled", "resource_groups_count": len(resource_group_counts)}
-4
@@ -47,9 +47,6 @@ from prowler.lib.outputs.compliance.csa.csa_oraclecloud import OracleCloudCSA
from prowler.lib.outputs.compliance.ens.ens_aws import AWSENS
from prowler.lib.outputs.compliance.ens.ens_azure import AzureENS
from prowler.lib.outputs.compliance.ens.ens_gcp import GCPENS
from prowler.lib.outputs.compliance.asd_essential_eight.asd_essential_eight_aws import (
ASDEssentialEightAWS,
)
from prowler.lib.outputs.compliance.iso27001.iso27001_aws import AWSISO27001
from prowler.lib.outputs.compliance.iso27001.iso27001_azure import AzureISO27001
from prowler.lib.outputs.compliance.iso27001.iso27001_gcp import GCPISO27001
@@ -103,7 +100,6 @@ COMPLIANCE_CLASS_MAP = {
(lambda name: name.startswith("ccc_"), CCC_AWS),
(lambda name: name.startswith("c5_"), AWSC5),
(lambda name: name.startswith("csa_"), AWSCSA),
(lambda name: name == "asd_essential_eight_aws", ASDEssentialEightAWS),
],
"azure": [
(lambda name: name.startswith("cis_"), AzureCIS),
File diff suppressed because it is too large
@@ -17,9 +17,6 @@ from .charts import (
get_chart_color_for_percentage,
)
# Framework-specific generators
from .cis import CISReportGenerator
# Reusable components
# Reusable components: Color helpers, Badge components, Risk component,
# Table components, Section components
@@ -34,12 +31,10 @@ from .components import (
create_section_header,
create_status_badge,
create_summary_table,
escape_html,
get_color_for_compliance,
get_color_for_risk_level,
get_color_for_weight,
get_status_color,
truncate_text,
)
# Framework configuration: Main configuration, Color constants, ENS colors,
@@ -95,6 +90,8 @@ from .config import (
FrameworkConfig,
get_framework_config,
)
# Framework-specific generators
from .csa import CSAReportGenerator
from .ens import ENSReportGenerator
from .nis2 import NIS2ReportGenerator
@@ -112,7 +109,6 @@ __all__ = [
"ENSReportGenerator",
"NIS2ReportGenerator",
"CSAReportGenerator",
"CISReportGenerator",
# Configuration
"FrameworkConfig",
"FRAMEWORK_REGISTRY",
@@ -186,9 +182,6 @@ __all__ = [
# Section components
"create_section_header",
"create_summary_table",
# Text helpers
"truncate_text",
"escape_html",
# Chart functions
"get_chart_color_for_percentage",
"create_vertical_bar_chart",
+75 -288
@@ -1,9 +1,6 @@
import gc
import os
import resource as _resource_module
import time
from abc import ABC, abstractmethod
from contextlib import contextmanager
from dataclasses import dataclass, field
from typing import Any
@@ -44,7 +41,6 @@ from .config import (
COLOR_LIGHT_BLUE,
COLOR_LIGHTER_BLUE,
COLOR_PROWLER_DARK_GREEN,
FINDINGS_TABLE_CHUNK_SIZE,
PADDING_LARGE,
PADDING_SMALL,
FrameworkConfig,
@@ -52,53 +48,6 @@ from .config import (
logger = get_task_logger(__name__)
@contextmanager
def _log_phase(phase: str, *, scan_id: str, framework: str):
"""Log start/end timing and RSS deltas around a report-building section.
Emits structured key=value logs so Grafana/Datadog/CloudWatch queries
can pivot by ``phase``, ``framework`` and ``scan_id`` to find the
slow/heavy section on any given scan. ``getrusage`` returns KB on
Linux and bytes on macOS; the values are still useful in relative
terms even though units differ across platforms.
"""
start = time.perf_counter()
rss_before = _resource_module.getrusage(_resource_module.RUSAGE_SELF).ru_maxrss
logger.info(
"phase_start phase=%s scan_id=%s framework=%s rss_kb=%d",
phase,
scan_id,
framework,
rss_before,
)
try:
yield
except Exception:
elapsed = time.perf_counter() - start
logger.exception(
"phase_failed phase=%s scan_id=%s framework=%s elapsed_s=%.2f",
phase,
scan_id,
framework,
elapsed,
)
raise
else:
elapsed = time.perf_counter() - start
rss_after = _resource_module.getrusage(_resource_module.RUSAGE_SELF).ru_maxrss
logger.info(
"phase_end phase=%s scan_id=%s framework=%s elapsed_s=%.2f "
"rss_kb=%d delta_rss_kb=%d",
phase,
scan_id,
framework,
elapsed,
rss_after,
rss_after - rss_before,
)
# Register fonts (done once at module load)
_fonts_registered: bool = False
@@ -386,7 +335,6 @@ class BaseComplianceReportGenerator(ABC):
provider_obj: Provider | None = None,
requirement_statistics: dict[str, dict[str, int]] | None = None,
findings_cache: dict[str, list[FindingOutput]] | None = None,
prowler_provider: Any | None = None,
**kwargs,
) -> None:
"""Generate the PDF compliance report.
@@ -403,35 +351,23 @@ class BaseComplianceReportGenerator(ABC):
provider_obj: Optional pre-fetched Provider object
requirement_statistics: Optional pre-aggregated statistics
findings_cache: Optional pre-loaded findings cache
prowler_provider: Optional pre-initialized Prowler provider. When
generating multiple reports for the same scan the master
function initializes this once and passes it in to avoid
re-running boto3/Azure-SDK setup per framework.
**kwargs: Additional framework-specific arguments
"""
framework = self.config.display_name
logger.info(
"report_generation_start framework=%s scan_id=%s compliance_id=%s",
framework,
scan_id,
compliance_id,
"Generating %s report for scan %s", self.config.display_name, scan_id
)
try:
# 1. Load compliance data
with _log_phase(
"load_compliance_data", scan_id=scan_id, framework=framework
):
data = self._load_compliance_data(
tenant_id=tenant_id,
scan_id=scan_id,
compliance_id=compliance_id,
provider_id=provider_id,
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
prowler_provider=prowler_provider,
)
data = self._load_compliance_data(
tenant_id=tenant_id,
scan_id=scan_id,
compliance_id=compliance_id,
provider_id=provider_id,
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
)
# 2. Create PDF document
doc = self._create_document(output_path, data)
@@ -441,54 +377,37 @@ class BaseComplianceReportGenerator(ABC):
elements = []
# Cover page (lightweight)
with _log_phase("cover_page", scan_id=scan_id, framework=framework):
elements.extend(self.create_cover_page(data))
elements.append(PageBreak())
elements.extend(self.create_cover_page(data))
elements.append(PageBreak())
# Executive summary (framework-specific)
with _log_phase("executive_summary", scan_id=scan_id, framework=framework):
elements.extend(self.create_executive_summary(data))
elements.extend(self.create_executive_summary(data))
# Body sections (charts + requirements index)
# Override _build_body_sections() in subclasses to change section order
with _log_phase("body_sections", scan_id=scan_id, framework=framework):
elements.extend(self._build_body_sections(data))
elements.extend(self._build_body_sections(data))
# Detailed findings - heaviest section, loads findings on-demand
with _log_phase("detailed_findings", scan_id=scan_id, framework=framework):
elements.extend(self.create_detailed_findings(data, **kwargs))
gc.collect() # Free findings data after processing
logger.info("Building detailed findings section...")
elements.extend(self.create_detailed_findings(data, **kwargs))
gc.collect() # Free findings data after processing
# 4. Build the PDF
logger.info(
"doc_build_about_to_run framework=%s scan_id=%s elements=%d",
framework,
scan_id,
len(elements),
)
with _log_phase("doc_build", scan_id=scan_id, framework=framework):
self._build_pdf(doc, elements, data)
logger.info("Building PDF document with %d elements...", len(elements))
self._build_pdf(doc, elements, data)
# Final cleanup
del elements
gc.collect()
logger.info(
"report_generation_end framework=%s scan_id=%s output_path=%s",
framework,
scan_id,
output_path,
)
logger.info("Successfully generated report at %s", output_path)
except Exception:
# logger.exception captures the full traceback; the contextual
# keys keep production search-by-scan-id viable.
logger.exception(
"report_generation_failed framework=%s scan_id=%s compliance_id=%s",
framework,
scan_id,
compliance_id,
)
except Exception as e:
import traceback
tb_lineno = e.__traceback__.tb_lineno if e.__traceback__ else "unknown"
logger.error("Error generating report, line %s -- %s", tb_lineno, e)
logger.error("Full traceback:\n%s", traceback.format_exc())
raise
def _build_body_sections(self, data: ComplianceData) -> list:
@@ -719,25 +638,15 @@ class BaseComplianceReportGenerator(ABC):
for req in requirements:
check_ids_to_load.extend(req.checks)
# Load findings on-demand only for the checks that will be displayed.
# When ``only_failed`` is active at requirement level, also push the
# FAIL filter down to the finding level: a requirement marked FAIL
# because 1/1000 findings failed must not render a table dominated by
# 999 PASS rows. That hides the actual failure under noise and
# makes the per-check cap truncate the wrong rows.
# ``total_counts`` is populated with the pre-cap total per check_id
# (FAIL-only when only_failed is active) so the "Showing first N of
# M" banner uses the same denominator the reader cares about.
# Load findings on-demand only for the checks that will be displayed
# Uses the shared findings cache to avoid duplicate queries across reports
logger.info("Loading findings on-demand for %d requirements", len(requirements))
total_counts: dict[str, int] = {}
findings_by_check_id = _load_findings_for_requirement_checks(
data.tenant_id,
data.scan_id,
check_ids_to_load,
data.prowler_provider,
data.findings_by_check_id, # Pass the cache to update it
total_counts_out=total_counts,
only_failed_findings=only_failed,
)
for req in requirements:
@@ -769,31 +678,9 @@ class BaseComplianceReportGenerator(ABC):
)
)
else:
# Surface truncation BEFORE the tables so readers see it
# at the same scroll position as the data itself, not
# after thousands of rendered rows.
loaded = len(findings)
total = total_counts.get(check_id, loaded)
if total > loaded:
kind = "failed findings" if only_failed else "findings"
elements.append(
Paragraph(
f"<b>&#9888; Showing first {loaded:,} of "
f"{total:,} {kind} for this check.</b> "
f"Use the CSV or JSON export for the full "
f"list. The PDF caps detail rows to keep "
f"the report readable and bounded in size.",
self.styles["normal"],
)
)
elements.append(Spacer(1, 0.05 * inch))
# Create chunked findings tables to prevent OOM when a
# single check has thousands of findings (ReportLab
# resolves layout per Flowable, so many small tables
# render contiguously with a bounded memory peak).
findings_tables = self._create_findings_tables(findings)
elements.extend(findings_tables)
# Create findings table
findings_table = self._create_findings_table(findings)
elements.append(findings_table)
elements.append(Spacer(1, 0.1 * inch))
@@ -848,7 +735,6 @@ class BaseComplianceReportGenerator(ABC):
provider_obj: Provider | None,
requirement_statistics: dict | None,
findings_cache: dict | None,
prowler_provider: Any | None = None,
) -> ComplianceData:
"""Load and aggregate compliance data from the database.
@@ -860,9 +746,6 @@ class BaseComplianceReportGenerator(ABC):
provider_obj: Optional pre-fetched Provider
requirement_statistics: Optional pre-aggregated statistics
findings_cache: Optional pre-loaded findings
prowler_provider: Optional pre-initialized Prowler provider. When
the master function initializes it once and passes it in,
we skip the per-report ``initialize_prowler_provider`` call.
Returns:
Aggregated ComplianceData object
@@ -872,8 +755,7 @@ class BaseComplianceReportGenerator(ABC):
if provider_obj is None:
provider_obj = Provider.objects.get(id=provider_id)
if prowler_provider is None:
prowler_provider = initialize_prowler_provider(provider_obj)
prowler_provider = initialize_prowler_provider(provider_obj)
provider_type = provider_obj.provider
# Load compliance framework
@@ -941,32 +823,13 @@ class BaseComplianceReportGenerator(ABC):
) -> SimpleDocTemplate:
"""Create the PDF document template.
Validates that ``output_path`` is a filesystem path string with an
existing parent directory. SimpleDocTemplate technically accepts a
BytesIO too, but we want every report to land on disk so the
Celery worker doesn't hold the full PDF in memory while uploading
to S3.
Args:
output_path: Path for the output PDF
data: Compliance data for metadata
Returns:
Configured SimpleDocTemplate
Raises:
TypeError: ``output_path`` is not a string.
FileNotFoundError: The parent directory does not exist.
"""
if not isinstance(output_path, str):
raise TypeError(
"output_path must be a filesystem path string; "
f"got {type(output_path).__name__}"
)
parent_dir = os.path.dirname(output_path)
if parent_dir and not os.path.isdir(parent_dir):
raise FileNotFoundError(f"Output directory does not exist: {parent_dir}")
return SimpleDocTemplate(
output_path,
pagesize=letter,
@@ -1013,10 +876,47 @@ class BaseComplianceReportGenerator(ABC):
onLaterPages=add_footer,
)
# Column layout shared by all findings sub-tables. Defined as a method so
# subclasses can override it without re-implementing the chunking logic.
def _findings_table_columns(self) -> list[ColumnConfig]:
return [
def _create_findings_table(self, findings: list[FindingOutput]) -> Any:
"""Create a findings table.
Args:
findings: List of finding objects
Returns:
ReportLab Table element
"""
def get_finding_title(f):
metadata = getattr(f, "metadata", None)
if metadata:
return getattr(metadata, "CheckTitle", getattr(f, "check_id", ""))
return getattr(f, "check_id", "")
def get_resource_name(f):
name = getattr(f, "resource_name", "")
if not name:
name = getattr(f, "resource_uid", "")
return name
def get_severity(f):
metadata = getattr(f, "metadata", None)
if metadata:
return getattr(metadata, "Severity", "").capitalize()
return ""
# Convert findings to dicts for the table
data = []
for f in findings:
item = {
"title": get_finding_title(f),
"resource_name": get_resource_name(f),
"severity": get_severity(f),
"status": getattr(f, "status", "").upper(),
"region": getattr(f, "region", "global"),
}
data.append(item)
columns = [
ColumnConfig("Finding", 2.5 * inch, "title"),
ColumnConfig("Resource", 3 * inch, "resource_name"),
ColumnConfig("Severity", 0.9 * inch, "severity"),
@@ -1024,122 +924,9 @@ class BaseComplianceReportGenerator(ABC):
ColumnConfig("Region", 0.9 * inch, "region"),
]
@staticmethod
def _finding_to_row(f: FindingOutput) -> dict[str, str]:
"""Project a FindingOutput onto the row dict the table expects.
Kept defensive: missing metadata or attributes return empty strings
rather than raising, so a single malformed finding never breaks the
whole report.
"""
metadata = getattr(f, "metadata", None)
title = (
getattr(metadata, "CheckTitle", getattr(f, "check_id", ""))
if metadata
else getattr(f, "check_id", "")
)
resource_name = getattr(f, "resource_name", "") or getattr(
f, "resource_uid", ""
)
severity = getattr(metadata, "Severity", "").capitalize() if metadata else ""
return {
"title": title,
"resource_name": resource_name,
"severity": severity,
"status": getattr(f, "status", "").upper(),
"region": getattr(f, "region", "global"),
}
def _create_findings_tables(
self,
findings: list[FindingOutput],
chunk_size: int | None = None,
) -> list[Any]:
"""Build a list of small findings tables to keep ``doc.build()`` memory bounded.
ReportLab resolves layout (column widths, row heights, page-breaks)
per Flowable. A single ``LongTable`` of 15k rows forces all of that
to be computed at once and reliably OOMs the worker on large scans.
Splitting into chunks of ``chunk_size`` rows produces an equivalent-
looking PDF (LongTable repeats headers; chunks render contiguously)
with a bounded memory peak per chunk.
Args:
findings: List of finding objects for a single check.
chunk_size: Rows per sub-table. ``None`` uses
``FINDINGS_TABLE_CHUNK_SIZE`` from config.
Returns:
List of ReportLab flowables (interleaved ``Table``/``LongTable``
and small ``Spacer`` between chunks). Empty list when there are
no findings.
"""
if not findings:
return []
chunk_size = chunk_size or FINDINGS_TABLE_CHUNK_SIZE
# Build all rows first so we can chunk without re-walking the
# FindingOutput list. Malformed findings are skipped with a logged
# exception, never enough to abort the entire report.
rows: list[dict[str, str]] = []
for f in findings:
try:
rows.append(self._finding_to_row(f))
except Exception:
logger.exception(
"Skipping malformed finding while building table for check %s",
getattr(f, "check_id", "unknown"),
)
if not rows:
return []
columns = self._findings_table_columns()
flowables: list = []
total = len(rows)
for start in range(0, total, chunk_size):
chunk = rows[start : start + chunk_size]
flowables.append(
create_data_table(
data=chunk,
columns=columns,
header_color=self.config.primary_color,
normal_style=self.styles["normal_center"],
)
)
# A tiny spacer between chunks keeps them visually contiguous
# without forcing a page-break (KeepTogether would negate the
# memory benefit of chunking).
if start + chunk_size < total:
flowables.append(Spacer(1, 0.05 * inch))
if total > chunk_size:
logger.debug(
"Built %d findings sub-tables (chunk_size=%d, total_findings=%d)",
(total + chunk_size - 1) // chunk_size,
chunk_size,
total,
)
return flowables
def _create_findings_table(self, findings: list[FindingOutput]) -> Any:
"""Deprecated alias kept for backwards compatibility.
Returns the first chunk produced by ``_create_findings_tables``.
New callers MUST use ``_create_findings_tables``, which returns a
list of flowables and is what ``create_detailed_findings`` invokes.
"""
flowables = self._create_findings_tables(findings)
if flowables:
return flowables[0]
# Empty input → return an empty (header-only) table so callers that
# used to receive a Table never get None.
return create_data_table(
data=[],
columns=self._findings_table_columns(),
data=data,
columns=columns,
header_color=self.config.primary_color,
normal_style=self.styles["normal_center"],
)
@@ -1,11 +1,9 @@
import gc
import io
import math
import time
from typing import Callable
import matplotlib
from celery.utils.log import get_task_logger
# Use non-interactive Agg backend for memory efficiency in server environments
# This MUST be set before importing pyplot
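The import ordering that comment refers to would look like the sketch below; the module's own lines sit outside this hunk, so this is illustrative only.

import matplotlib

matplotlib.use("Agg")  # select the non-interactive backend before pyplot is imported
import matplotlib.pyplot as plt  # noqa: E402 -- deliberately placed after use()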
@@ -22,26 +20,6 @@ from .config import ( # noqa: E402
CHART_DPI_DEFAULT,
)
logger = get_task_logger(__name__)
def _log_chart_built(name: str, dpi: int, buffer: io.BytesIO, started: float) -> None:
"""Emit a structured DEBUG line summarising a chart render.
Centralised so the formatting stays consistent across all chart helpers
and so we never accidentally pay for buffer.getbuffer().nbytes when
debug logging is disabled.
"""
if logger.isEnabledFor(10): # logging.DEBUG
logger.debug(
"chart_built name=%s dpi=%d bytes=%d elapsed_s=%.2f",
name,
dpi,
buffer.getbuffer().nbytes,
time.perf_counter() - started,
)
# Use centralized DPI setting from config
DEFAULT_CHART_DPI = CHART_DPI_DEFAULT
@@ -99,7 +77,6 @@ def create_vertical_bar_chart(
Returns:
BytesIO buffer containing the PNG image
"""
_started = time.perf_counter()
if color_func is None:
color_func = get_chart_color_for_percentage
@@ -145,7 +122,6 @@ def create_vertical_bar_chart(
plt.close(fig)
gc.collect() # Force garbage collection after heavy matplotlib operation
_log_chart_built("vertical_bar", dpi, buffer, _started)
return buffer
@@ -180,7 +156,6 @@ def create_horizontal_bar_chart(
Returns:
BytesIO buffer containing the PNG image
"""
_started = time.perf_counter()
if color_func is None:
color_func = get_chart_color_for_percentage
@@ -232,7 +207,6 @@ def create_horizontal_bar_chart(
plt.close(fig)
gc.collect() # Force garbage collection after heavy matplotlib operation
_log_chart_built("horizontal_bar", dpi, buffer, _started)
return buffer
@@ -265,7 +239,6 @@ def create_radar_chart(
Returns:
BytesIO buffer containing the PNG image
"""
_started = time.perf_counter()
num_vars = len(labels)
angles = [n / float(num_vars) * 2 * math.pi for n in range(num_vars)]
@@ -302,7 +275,6 @@ def create_radar_chart(
plt.close(fig)
gc.collect() # Force garbage collection after heavy matplotlib operation
_log_chart_built("radar", dpi, buffer, _started)
return buffer
@@ -331,7 +303,6 @@ def create_pie_chart(
Returns:
BytesIO buffer containing the PNG image
"""
_started = time.perf_counter()
fig, ax = plt.subplots(figsize=figsize)
_, _, autotexts = ax.pie(
@@ -359,7 +330,6 @@ def create_pie_chart(
plt.close(fig)
gc.collect() # Force garbage collection after heavy matplotlib operation
_log_chart_built("pie", dpi, buffer, _started)
return buffer
@@ -392,7 +362,6 @@ def create_stacked_bar_chart(
Returns:
BytesIO buffer containing the PNG image
"""
_started = time.perf_counter()
fig, ax = plt.subplots(figsize=figsize)
# Default colors if not provided
@@ -432,5 +401,4 @@ def create_stacked_bar_chart(
plt.close(fig)
gc.collect() # Force garbage collection after heavy matplotlib operation
_log_chart_built("stacked_bar", dpi, buffer, _started)
return buffer
-755
@@ -1,755 +0,0 @@
import os
import re
from collections import defaultdict
from typing import Any
from reportlab.lib.units import inch
from reportlab.platypus import Image, PageBreak, Paragraph, Spacer, Table, TableStyle
from api.models import StatusChoices
from .base import (
BaseComplianceReportGenerator,
ComplianceData,
RequirementData,
get_requirement_metadata,
)
from .charts import (
create_horizontal_bar_chart,
create_pie_chart,
create_stacked_bar_chart,
get_chart_color_for_percentage,
)
from .components import ColumnConfig, create_data_table, escape_html, truncate_text
from .config import (
CHART_COLOR_GREEN_1,
CHART_COLOR_RED,
CHART_COLOR_YELLOW,
COLOR_BG_BLUE,
COLOR_BLUE,
COLOR_BORDER_GRAY,
COLOR_DARK_GRAY,
COLOR_GRAY,
COLOR_GRID_GRAY,
COLOR_HIGH_RISK,
COLOR_LIGHT_BLUE,
COLOR_SAFE,
COLOR_WHITE,
)
# Ordered buckets used both in the executive summary tables and the charts
# section. Exposed as module constants so the two call sites never drift.
_PROFILE_BUCKET_ORDER: tuple[str, ...] = ("L1", "L2", "Other")
_ASSESSMENT_BUCKET_ORDER: tuple[str, ...] = ("Automated", "Manual")
# Anchored matchers for profile normalization — substring checks on "L1"/"L2"
# would happily match unrelated tokens like "CL2 Worker" or "HL2" coming from
# future CIS profile enum values.
_LEVEL_2_RE = re.compile(r"(?:\bLevel\s*2\b|\bL2\b|Level_2)")
_LEVEL_1_RE = re.compile(r"(?:\bLevel\s*1\b|\bL1\b|Level_1)")
def _normalize_profile(profile: Any) -> str:
"""Bucket a CIS Profile enum/string into one of: ``L1``, ``L2``, ``Other``.
The ``CIS_Requirement_Attribute_Profile`` enum has values like
``"Level 1"``, ``"Level 2"``, ``"E3 Level 1"``, ``"E5 Level 2"``. We
collapse them into three buckets to keep charts and badges readable
across CIS variants, using anchored regex matches so that future enum
values cannot accidentally promote e.g. ``"CL2 Worker"`` into ``L2``.
Args:
profile: The profile value (enum member, string, or ``None``).
Returns:
One of ``"L1"``, ``"L2"``, ``"Other"``.
"""
if profile is None:
return "Other"
value = getattr(profile, "value", None) or str(profile)
if _LEVEL_2_RE.search(value):
return "L2"
if _LEVEL_1_RE.search(value):
return "L1"
return "Other"
def _profile_badge_text(bucket: str) -> str:
"""Map a normalized profile bucket (L1/L2/Other) to a short badge label."""
return {"L1": "Level 1", "L2": "Level 2"}.get(bucket, "Other")
# =============================================================================
# CIS Report Generator
# =============================================================================
class CISReportGenerator(BaseComplianceReportGenerator):
"""
PDF report generator for CIS (Center for Internet Security) Benchmarks.
CIS differs from single-version frameworks (ENS, NIS2, CSA) in that:
- Each provider has multiple CIS versions (e.g. AWS: 1.4, 1.5, ..., 6.0).
- Section names differ across versions and providers and MUST be derived
at runtime from the loaded compliance data.
- Requirements carry Profile (Level 1/Level 2) and AssessmentStatus
(Automated/Manual) attributes that drive the executive summary and
charts.
This generator produces:
- Cover page with Prowler logo and dynamic CIS version/provider metadata
- Executive summary with overall compliance score, counts, and breakdowns
by Profile and AssessmentStatus
- Charts: overall status pie, pass rate by section (horizontal bar),
Level 1 vs Level 2 pass/fail distribution (stacked bar)
- Requirements index grouped by dynamic section
- Detailed findings for FAIL requirements with CIS-specific audit /
remediation / rationale details
"""
# Per-run memoization cache for ``_compute_statistics``. ``generate()``
# is the public entry point and is called once per PDF, so scoping the
# cache to the last seen ComplianceData instance is enough to avoid the
# double computation between executive summary and charts section.
_stats_cache_key: int | None = None
_stats_cache_value: dict | None = None
# Body section ordering — ensure every top-level section starts on its
# own clean page. The base class only puts a PageBreak AFTER Charts and
# Requirements Index, so Executive Summary and Charts end up sharing a
# page. This override prepends a PageBreak so Compliance Analysis always
# begins on a fresh page.
def _build_body_sections(self, data: ComplianceData) -> list:
return [PageBreak(), *super()._build_body_sections(data)]
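The `_compute_statistics` body is not part of this excerpt; a minimal sketch of how an instance-scoped memoization matching the cache attributes above could look (hypothetical, with `_build_statistics` standing in for the real computation):

def _compute_statistics(self, data: ComplianceData) -> dict:
    # generate() builds one ComplianceData per PDF, so keying on the object's
    # identity caches exactly one computation per report run.
    if self._stats_cache_key == id(data) and self._stats_cache_value is not None:
        return self._stats_cache_value
    stats = self._build_statistics(data)  # hypothetical helper
    self._stats_cache_key = id(data)
    self._stats_cache_value = stats
    return stats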
# -------------------------------------------------------------------------
# Cover page override — shows dynamic CIS version + provider in the title
# -------------------------------------------------------------------------
def create_cover_page(self, data: ComplianceData) -> list:
"""Create the CIS report cover page with Prowler + CIS logos side by side."""
elements = []
# Create logos side by side (same pattern as NIS2 / ENS)
prowler_logo_path = os.path.join(
os.path.dirname(__file__), "../../assets/img/prowler_logo.png"
)
cis_logo_path = os.path.join(
os.path.dirname(__file__), "../../assets/img/cis_logo.png"
)
if os.path.exists(cis_logo_path):
prowler_logo = Image(prowler_logo_path, width=3.5 * inch, height=0.7 * inch)
cis_logo = Image(cis_logo_path, width=2.3 * inch, height=1.1 * inch)
logos_table = Table(
[[prowler_logo, cis_logo]], colWidths=[4 * inch, 2.5 * inch]
)
logos_table.setStyle(
TableStyle(
[
("ALIGN", (0, 0), (0, 0), "LEFT"),
("ALIGN", (1, 0), (1, 0), "RIGHT"),
("VALIGN", (0, 0), (0, 0), "MIDDLE"),
("VALIGN", (1, 0), (1, 0), "MIDDLE"),
]
)
)
elements.append(logos_table)
elif os.path.exists(prowler_logo_path):
# Fallback: only the Prowler logo if the CIS asset is missing
elements.append(Image(prowler_logo_path, width=5 * inch, height=1 * inch))
elements.append(Spacer(1, 0.5 * inch))
# Dynamic title: "CIS Benchmark v5.0 — AWS Compliance Report"
provider_label = ""
if data.provider_obj:
provider_label = f"{data.provider_obj.provider.upper()}"
title_text = (
f"CIS Benchmark v{data.version}{provider_label}<br/>Compliance Report"
)
elements.append(Paragraph(title_text, self.styles["title"]))
elements.append(Spacer(1, 0.5 * inch))
# Metadata table via base class helper
info_rows = self._build_info_rows(data, language=self.config.language)
metadata_data = []
for label, value in info_rows:
if label in ("Name:", "Description:") and value:
metadata_data.append(
[label, Paragraph(str(value), self.styles["normal_center"])]
)
else:
metadata_data.append([label, value])
metadata_table = Table(metadata_data, colWidths=[2 * inch, 4 * inch])
metadata_table.setStyle(
TableStyle(
[
("BACKGROUND", (0, 0), (0, -1), COLOR_BLUE),
("TEXTCOLOR", (0, 0), (0, -1), COLOR_WHITE),
("FONTNAME", (0, 0), (0, -1), "FiraCode"),
("BACKGROUND", (1, 0), (1, -1), COLOR_BG_BLUE),
("TEXTCOLOR", (1, 0), (1, -1), COLOR_GRAY),
("FONTNAME", (1, 0), (1, -1), "PlusJakartaSans"),
("ALIGN", (0, 0), (-1, -1), "LEFT"),
("VALIGN", (0, 0), (-1, -1), "TOP"),
("FONTSIZE", (0, 0), (-1, -1), 11),
("GRID", (0, 0), (-1, -1), 1, COLOR_BORDER_GRAY),
("LEFTPADDING", (0, 0), (-1, -1), 10),
("RIGHTPADDING", (0, 0), (-1, -1), 10),
("TOPPADDING", (0, 0), (-1, -1), 8),
("BOTTOMPADDING", (0, 0), (-1, -1), 8),
]
)
)
elements.append(metadata_table)
return elements
# -------------------------------------------------------------------------
# Executive Summary
# -------------------------------------------------------------------------
def create_executive_summary(self, data: ComplianceData) -> list:
"""Create the CIS executive summary section."""
elements = []
elements.append(Paragraph("Executive Summary", self.styles["h1"]))
elements.append(Spacer(1, 0.1 * inch))
stats = self._compute_statistics(data)
# --- Summary metrics table ---
summary_data = [
["Metric", "Value"],
["Total Requirements", str(stats["total"])],
["Passed", str(stats["passed"])],
["Failed", str(stats["failed"])],
["Manual", str(stats["manual"])],
["Overall Compliance", f"{stats['overall_compliance']:.1f}%"],
]
summary_table = Table(summary_data, colWidths=[3 * inch, 2 * inch])
summary_table.setStyle(
TableStyle(
[
("BACKGROUND", (0, 0), (-1, 0), COLOR_BLUE),
("TEXTCOLOR", (0, 0), (-1, 0), COLOR_WHITE),
("BACKGROUND", (0, 2), (0, 2), COLOR_SAFE),
("TEXTCOLOR", (0, 2), (0, 2), COLOR_WHITE),
("BACKGROUND", (0, 3), (0, 3), COLOR_HIGH_RISK),
("TEXTCOLOR", (0, 3), (0, 3), COLOR_WHITE),
("BACKGROUND", (0, 4), (0, 4), COLOR_DARK_GRAY),
("TEXTCOLOR", (0, 4), (0, 4), COLOR_WHITE),
("FONTNAME", (0, 0), (-1, 0), "PlusJakartaSans"),
("FONTSIZE", (0, 0), (-1, 0), 12),
("FONTSIZE", (0, 1), (-1, -1), 10),
("ALIGN", (0, 0), (-1, -1), "CENTER"),
("VALIGN", (0, 0), (-1, -1), "MIDDLE"),
("GRID", (0, 0), (-1, -1), 0.5, COLOR_BORDER_GRAY),
("BOTTOMPADDING", (0, 0), (-1, 0), 10),
(
"ROWBACKGROUNDS",
(1, 1),
(1, -1),
[COLOR_WHITE, COLOR_BG_BLUE],
),
]
)
)
elements.append(summary_table)
elements.append(Spacer(1, 0.25 * inch))
# --- Profile breakdown table ---
elements.append(Paragraph("Breakdown by Profile", self.styles["h2"]))
elements.append(Spacer(1, 0.1 * inch))
profile_counts = stats["profile_counts"]
profile_table_data = [["Profile", "Passed", "Failed", "Manual", "Total"]]
for bucket in _PROFILE_BUCKET_ORDER:
counts = profile_counts.get(bucket, {"passed": 0, "failed": 0, "manual": 0})
total = counts["passed"] + counts["failed"] + counts["manual"]
if total == 0:
continue
profile_table_data.append(
[
_profile_badge_text(bucket),
str(counts["passed"]),
str(counts["failed"]),
str(counts["manual"]),
str(total),
]
)
profile_table = Table(
profile_table_data,
colWidths=[1.5 * inch, 1 * inch, 1 * inch, 1 * inch, 1 * inch],
)
profile_table.setStyle(
TableStyle(
[
("BACKGROUND", (0, 0), (-1, 0), COLOR_BLUE),
("TEXTCOLOR", (0, 0), (-1, 0), COLOR_WHITE),
("FONTNAME", (0, 0), (-1, 0), "FiraCode"),
("FONTSIZE", (0, 0), (-1, 0), 10),
("ALIGN", (0, 0), (-1, -1), "CENTER"),
("VALIGN", (0, 0), (-1, -1), "MIDDLE"),
("FONTSIZE", (0, 1), (-1, -1), 9),
("GRID", (0, 0), (-1, -1), 0.5, COLOR_GRID_GRAY),
(
"ROWBACKGROUNDS",
(0, 1),
(-1, -1),
[COLOR_WHITE, COLOR_BG_BLUE],
),
]
)
)
elements.append(profile_table)
elements.append(Spacer(1, 0.25 * inch))
# --- Assessment status breakdown ---
elements.append(Paragraph("Breakdown by Assessment Status", self.styles["h2"]))
elements.append(Spacer(1, 0.1 * inch))
assessment_counts = stats["assessment_counts"]
assessment_table_data = [["Assessment", "Passed", "Failed", "Manual", "Total"]]
for bucket in _ASSESSMENT_BUCKET_ORDER:
counts = assessment_counts.get(
bucket, {"passed": 0, "failed": 0, "manual": 0}
)
total = counts["passed"] + counts["failed"] + counts["manual"]
if total == 0:
continue
assessment_table_data.append(
[
bucket,
str(counts["passed"]),
str(counts["failed"]),
str(counts["manual"]),
str(total),
]
)
assessment_table = Table(
assessment_table_data,
colWidths=[1.5 * inch, 1 * inch, 1 * inch, 1 * inch, 1 * inch],
)
assessment_table.setStyle(
TableStyle(
[
("BACKGROUND", (0, 0), (-1, 0), COLOR_LIGHT_BLUE),
("TEXTCOLOR", (0, 0), (-1, 0), COLOR_WHITE),
("FONTNAME", (0, 0), (-1, 0), "FiraCode"),
("FONTSIZE", (0, 0), (-1, 0), 10),
("ALIGN", (0, 0), (-1, -1), "CENTER"),
("VALIGN", (0, 0), (-1, -1), "MIDDLE"),
("FONTSIZE", (0, 1), (-1, -1), 9),
("GRID", (0, 0), (-1, -1), 0.5, COLOR_GRID_GRAY),
(
"ROWBACKGROUNDS",
(0, 1),
(-1, -1),
[COLOR_WHITE, COLOR_BG_BLUE],
),
]
)
)
elements.append(assessment_table)
elements.append(Spacer(1, 0.25 * inch))
# --- Top 5 failing sections ---
top_failing = stats["top_failing_sections"]
if top_failing:
elements.append(
Paragraph("Top Sections with Lowest Compliance", self.styles["h2"])
)
elements.append(Spacer(1, 0.1 * inch))
top_table_data = [["Section", "Passed", "Failed", "Compliance"]]
for section_label, section_stats in top_failing:
passed = section_stats["passed"]
failed = section_stats["failed"]
total = passed + failed
pct = (passed / total * 100) if total > 0 else 100
top_table_data.append(
[
truncate_text(section_label, 55),
str(passed),
str(failed),
f"{pct:.1f}%",
]
)
top_table = Table(
top_table_data,
colWidths=[3.5 * inch, 0.9 * inch, 0.9 * inch, 1.2 * inch],
)
top_table.setStyle(
TableStyle(
[
("BACKGROUND", (0, 0), (-1, 0), COLOR_HIGH_RISK),
("TEXTCOLOR", (0, 0), (-1, 0), COLOR_WHITE),
("FONTNAME", (0, 0), (-1, 0), "FiraCode"),
("FONTSIZE", (0, 0), (-1, 0), 10),
("ALIGN", (0, 0), (-1, -1), "CENTER"),
("VALIGN", (0, 0), (-1, -1), "MIDDLE"),
("FONTSIZE", (0, 1), (-1, -1), 9),
("GRID", (0, 0), (-1, -1), 0.5, COLOR_GRID_GRAY),
(
"ROWBACKGROUNDS",
(0, 1),
(-1, -1),
[COLOR_WHITE, COLOR_BG_BLUE],
),
]
)
)
elements.append(top_table)
return elements
# -------------------------------------------------------------------------
# Charts section
# -------------------------------------------------------------------------
def create_charts_section(self, data: ComplianceData) -> list:
"""Create the CIS charts section."""
elements = []
elements.append(Paragraph("Compliance Analysis", self.styles["h1"]))
elements.append(Spacer(1, 0.1 * inch))
# --- Pie chart: overall Pass / Fail / Manual ---
stats = self._compute_statistics(data)
pie_labels = []
pie_values = []
pie_colors = []
if stats["passed"] > 0:
pie_labels.append(f"Pass ({stats['passed']})")
pie_values.append(stats["passed"])
pie_colors.append(CHART_COLOR_GREEN_1)
if stats["failed"] > 0:
pie_labels.append(f"Fail ({stats['failed']})")
pie_values.append(stats["failed"])
pie_colors.append(CHART_COLOR_RED)
if stats["manual"] > 0:
pie_labels.append(f"Manual ({stats['manual']})")
pie_values.append(stats["manual"])
pie_colors.append(CHART_COLOR_YELLOW)
if pie_values:
elements.append(Paragraph("Overall Status Distribution", self.styles["h2"]))
elements.append(Spacer(1, 0.1 * inch))
pie_buffer = create_pie_chart(
labels=pie_labels,
values=pie_values,
colors=pie_colors,
)
pie_buffer.seek(0)
elements.append(Image(pie_buffer, width=4.5 * inch, height=4.5 * inch))
elements.append(Spacer(1, 0.2 * inch))
# --- Horizontal bar: pass rate by section ---
section_stats = stats["section_stats"]
if section_stats:
elements.append(PageBreak())
elements.append(Paragraph("Compliance by Section", self.styles["h1"]))
elements.append(Spacer(1, 0.1 * inch))
elements.append(
Paragraph(
"The following chart shows compliance percentage for each CIS "
"section based on automated checks:",
self.styles["normal_center"],
)
)
elements.append(Spacer(1, 0.1 * inch))
# Sort sections by pass rate descending for readability
sorted_sections = sorted(
section_stats.items(),
key=lambda item: (
(item[1]["passed"] / (item[1]["passed"] + item[1]["failed"]) * 100)
if (item[1]["passed"] + item[1]["failed"]) > 0
else 100
),
reverse=True,
)
bar_labels = []
bar_values = []
for section_label, section_data in sorted_sections:
total = section_data["passed"] + section_data["failed"]
if total == 0:
continue
pct = (section_data["passed"] / total) * 100
bar_labels.append(truncate_text(section_label, 60))
bar_values.append(pct)
if bar_values:
bar_buffer = create_horizontal_bar_chart(
labels=bar_labels,
values=bar_values,
xlabel="Compliance (%)",
color_func=get_chart_color_for_percentage,
label_fontsize=9,
)
bar_buffer.seek(0)
elements.append(Image(bar_buffer, width=6.5 * inch, height=5 * inch))
# --- Stacked bar: Level 1 vs Level 2 pass/fail ---
profile_counts = stats["profile_counts"]
has_profile_data = any(
(counts["passed"] + counts["failed"]) > 0
for counts in profile_counts.values()
)
if has_profile_data:
elements.append(PageBreak())
elements.append(Paragraph("Profile Breakdown", self.styles["h1"]))
elements.append(Spacer(1, 0.1 * inch))
elements.append(
Paragraph(
"Distribution of Pass / Fail / Manual across CIS profile levels.",
self.styles["normal_center"],
)
)
elements.append(Spacer(1, 0.1 * inch))
profile_labels = []
pass_series = []
fail_series = []
manual_series = []
for bucket in _PROFILE_BUCKET_ORDER:
counts = profile_counts.get(bucket)
if not counts:
continue
total = counts["passed"] + counts["failed"] + counts["manual"]
if total == 0:
continue
profile_labels.append(_profile_badge_text(bucket))
pass_series.append(counts["passed"])
fail_series.append(counts["failed"])
manual_series.append(counts["manual"])
if profile_labels:
stacked_buffer = create_stacked_bar_chart(
labels=profile_labels,
data_series={
"Pass": pass_series,
"Fail": fail_series,
"Manual": manual_series,
},
xlabel="Profile",
ylabel="Requirements",
)
stacked_buffer.seek(0)
elements.append(Image(stacked_buffer, width=6 * inch, height=4 * inch))
return elements
# -------------------------------------------------------------------------
# Requirements Index
# -------------------------------------------------------------------------
def create_requirements_index(self, data: ComplianceData) -> list:
"""Create the CIS requirements index grouped by dynamic section."""
elements = []
elements.append(Paragraph("Requirements Index", self.styles["h1"]))
elements.append(Spacer(1, 0.1 * inch))
sections = self._derive_sections(data)
by_section: dict[str, list[dict]] = defaultdict(list)
for req in data.requirements:
meta = get_requirement_metadata(req.id, data.attributes_by_requirement_id)
section = "Other"
profile_bucket = "Other"
assessment = ""
if meta:
section = getattr(meta, "Section", "Other") or "Other"
profile_bucket = _normalize_profile(getattr(meta, "Profile", None))
assessment_enum = getattr(meta, "AssessmentStatus", None)
assessment = getattr(assessment_enum, "value", None) or str(
assessment_enum or ""
)
by_section[section].append(
{
"id": req.id,
"description": truncate_text(req.description, 80),
"profile": _profile_badge_text(profile_bucket),
"assessment": assessment or "-",
"status": (req.status or "").upper(),
}
)
columns = [
ColumnConfig("ID", 0.9 * inch, "id", align="LEFT"),
ColumnConfig("Description", 3.0 * inch, "description", align="LEFT"),
ColumnConfig("Profile", 0.9 * inch, "profile"),
ColumnConfig("Assessment", 1 * inch, "assessment"),
ColumnConfig("Status", 0.9 * inch, "status"),
]
for section in sections:
rows = by_section.get(section, [])
if not rows:
continue
elements.append(Paragraph(truncate_text(section, 90), self.styles["h2"]))
elements.append(Spacer(1, 0.05 * inch))
table = create_data_table(
data=rows,
columns=columns,
header_color=self.config.primary_color,
normal_style=self.styles["normal_center"],
)
elements.append(table)
elements.append(Spacer(1, 0.15 * inch))
return elements
# -------------------------------------------------------------------------
# Detailed findings hook — inject CIS-specific rationale / audit content
# -------------------------------------------------------------------------
def _render_requirement_detail_extras(
self, req: RequirementData, data: ComplianceData
) -> list:
"""Render CIS rationale, impact, audit, remediation and references."""
extras = []
meta = get_requirement_metadata(req.id, data.attributes_by_requirement_id)
if meta is None:
return extras
field_map = [
("Rationale", "RationaleStatement"),
("Impact", "ImpactStatement"),
("Audit Procedure", "AuditProcedure"),
("Remediation", "RemediationProcedure"),
("References", "References"),
]
for label, attr_name in field_map:
value = getattr(meta, attr_name, None)
if not value:
continue
text = str(value).strip()
if not text:
continue
extras.append(Paragraph(f"<b>{label}:</b>", self.styles["h3"]))
extras.append(Paragraph(escape_html(text), self.styles["normal"]))
extras.append(Spacer(1, 0.08 * inch))
return extras
# -------------------------------------------------------------------------
# Private helpers
# -------------------------------------------------------------------------
def _derive_sections(self, data: ComplianceData) -> list[str]:
"""Extract ordered unique Section names from loaded compliance data."""
seen: dict[str, bool] = {}
for req in data.requirements:
meta = get_requirement_metadata(req.id, data.attributes_by_requirement_id)
if meta is None:
continue
section = getattr(meta, "Section", None) or "Other"
if section not in seen:
seen[section] = True
return list(seen.keys())
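# Illustrative behaviour (hypothetical section names, not from the diff):
# requirements tagged Identity, Logging, Identity, Networking yield
# ["Identity", "Logging", "Networking"]: first-seen order, no duplicates.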
def _compute_statistics(self, data: ComplianceData) -> dict:
"""Aggregate all statistics needed for summary and charts.
Memoized per-``ComplianceData`` instance via ``_stats_cache_*``: the
executive summary and the charts section both need the same numbers,
so they would otherwise re-iterate the requirements twice. We key on
``id(data)`` because ``ComplianceData`` is a dataclass and its
instances are not hashable.
Returns a dict with:
- total, passed, failed, manual: int
- overall_compliance: float (percentage)
- profile_counts: {"L1": {"passed", "failed", "manual"}, ...}
- assessment_counts: {"Automated": {...}, "Manual": {...}}
- section_stats: {section_name: {"passed", "failed", "manual"}, ...}
- top_failing_sections: list[(section_name, stats)] (up to 5)
"""
cache_key = id(data)
if self._stats_cache_key == cache_key and self._stats_cache_value is not None:
return self._stats_cache_value
stats = self._compute_statistics_uncached(data)
self._stats_cache_key = cache_key
self._stats_cache_value = stats
return stats
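# Illustrative effect of the memoization (names are hypothetical, not from the
# diff): calling report._compute_statistics(data) twice with the same object
# computes once and returns the cached dict on the second call; a different
# ComplianceData instance misses the cache because id() differs.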
def _compute_statistics_uncached(self, data: ComplianceData) -> dict:
"""Actual aggregation kernel; call ``_compute_statistics`` instead."""
total = len(data.requirements)
passed = sum(1 for r in data.requirements if r.status == StatusChoices.PASS)
failed = sum(1 for r in data.requirements if r.status == StatusChoices.FAIL)
manual = sum(1 for r in data.requirements if r.status == StatusChoices.MANUAL)
evaluated = passed + failed
overall_compliance = (passed / evaluated * 100) if evaluated > 0 else 100.0
profile_counts: dict[str, dict[str, int]] = {
"L1": {"passed": 0, "failed": 0, "manual": 0},
"L2": {"passed": 0, "failed": 0, "manual": 0},
"Other": {"passed": 0, "failed": 0, "manual": 0},
}
assessment_counts: dict[str, dict[str, int]] = {
"Automated": {"passed": 0, "failed": 0, "manual": 0},
"Manual": {"passed": 0, "failed": 0, "manual": 0},
}
section_stats: dict[str, dict[str, int]] = defaultdict(
lambda: {"passed": 0, "failed": 0, "manual": 0}
)
for req in data.requirements:
meta = get_requirement_metadata(req.id, data.attributes_by_requirement_id)
if meta is None:
continue
profile_bucket = _normalize_profile(getattr(meta, "Profile", None))
assessment_enum = getattr(meta, "AssessmentStatus", None)
assessment_value = getattr(assessment_enum, "value", None) or str(
assessment_enum or ""
)
assessment_bucket = (
"Automated" if assessment_value == "Automated" else "Manual"
)
section = getattr(meta, "Section", None) or "Other"
status_key = {
StatusChoices.PASS: "passed",
StatusChoices.FAIL: "failed",
StatusChoices.MANUAL: "manual",
}.get(req.status)
if status_key is None:
continue
profile_counts[profile_bucket][status_key] += 1
assessment_counts[assessment_bucket][status_key] += 1
section_stats[section][status_key] += 1
# Top 5 sections with lowest pass rate (only sections with evaluated reqs)
def _section_rate(item):
_, stats_ = item
evaluated_ = stats_["passed"] + stats_["failed"]
if evaluated_ == 0:
return 101 # sort evaluated=0 to the bottom
return stats_["passed"] / evaluated_ * 100
top_failing_sections = sorted(
(
item
for item in section_stats.items()
if (item[1]["passed"] + item[1]["failed"]) > 0
),
key=_section_rate,
)[:5]
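# Illustrative ordering (hypothetical counts, not from the diff): a section
# with 2 passed / 8 failed scores 20.0 and sorts ahead of one with 9 passed /
# 1 failed (90.0), so the five sections with the lowest pass rate surface first.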
return {
"total": total,
"passed": passed,
"failed": failed,
"manual": manual,
"overall_compliance": overall_compliance,
"profile_counts": profile_counts,
"assessment_counts": assessment_counts,
"section_stats": dict(section_stats),
"top_failing_sections": top_failing_sections,
}
@@ -26,52 +26,6 @@ from .config import (
)
def truncate_text(text: str, max_len: int) -> str:
"""Truncate ``text`` to ``max_len`` characters, appending an ellipsis if cut.
Used by report generators that need to squeeze long descriptions, section
titles or finding titles into a fixed-width table cell.
Args:
text: Source string. ``None`` and non-string values are treated as empty.
max_len: Maximum output length including the ellipsis. Values < 4 are
clamped so the result never grows beyond ``max_len``.
Returns:
The original string if short enough, otherwise ``text[: max_len - 3] + "..."``.
When ``max_len < 4`` a plain substring of length ``max_len`` is returned
so callers never get a string longer than they asked for.
"""
if not text:
return ""
text = str(text)
if len(text) <= max_len:
return text
if max_len < 4:
return text[:max_len]
return text[: max_len - 3] + "..."
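# Illustrative usage (hypothetical inputs, not from the diff):
#   truncate_text("Ensure MFA is enabled for all IAM users", 20)  # -> "Ensure MFA is ena..."
#   truncate_text("short", 20)                                    # -> "short"
#   truncate_text("abcdef", 3)                                    # -> "abc" (max_len < 4)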
def escape_html(text: str) -> str:
"""Escape the minimal HTML entities required for safe ReportLab Paragraph rendering.
ReportLab's ``Paragraph`` parses a small HTML subset, so raw ``<``, ``>``
and ``&`` in user-provided content (rationale, remediation, etc.) would
break layout or be interpreted as tags. This helper mirrors
``html.escape`` but avoids pulling in the stdlib dependency and keeps the
output deterministic.
Args:
text: Untrusted source string.
Returns:
A string safe to embed inside a ReportLab Paragraph.
"""
return (
str(text or "").replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
)
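# Illustrative usage (hypothetical resource name, not from the diff):
#   escape_html("<bucket> & logs")  # -> "&lt;bucket&gt; &amp; logs"
# Replacing "&" first keeps the later "<"/">" substitutions from being
# double-escaped.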
def get_color_for_risk_level(risk_level: int) -> colors.Color:
"""
Get color based on risk level.
@@ -475,15 +429,8 @@ def create_data_table(
else:
value = item.get(col.field, "")
# Wrap every string cell in Paragraph so the data rows keep the
# caller-supplied font/colour/alignment. Skipping Paragraph for
# short cells (a tempting micro-optimisation) breaks visual
# consistency: ReportLab Table falls back to Helvetica/black for
# raw strings, mixing fonts within the same table.
# ``escape_html`` keeps ``<``/``>``/``&`` in resource names from
# breaking Paragraph's mini-HTML parser.
if normal_style and isinstance(value, str):
value = Paragraph(escape_html(value), normal_style)
value = Paragraph(value, normal_style)
row.append(value)
table_data.append(row)
@@ -515,26 +462,17 @@ def create_data_table(
for idx, col in enumerate(columns):
styles.append(("ALIGN", (idx, 0), (idx, -1), col.align))
# Alternate row backgrounds: single O(1) ROWBACKGROUNDS style entry.
# The previous implementation appended N per-row BACKGROUND commands,
# which scaled the TableStyle list linearly with row count. ReportLab
# cycles through the colour list row-by-row so the visual is identical.
# The ALTERNATE_ROWS_MAX_SIZE cap is preserved to mirror legacy
# behaviour (very large tables stay plain), but the memory cost of the
# styles list is now constant regardless of row count.
# Alternate row backgrounds - skip for very large tables as it adds memory overhead
if (
alternate_rows
and len(table_data) > 1
and len(table_data) <= ALTERNATE_ROWS_MAX_SIZE
):
styles.append(
(
"ROWBACKGROUNDS",
(0, 1),
(-1, -1),
[colors.white, colors.Color(0.98, 0.98, 0.98)],
)
)
for i in range(1, len(table_data)):
if i % 2 == 0:
styles.append(
("BACKGROUND", (0, i), (-1, i), colors.Color(0.98, 0.98, 0.98))
)
table.setStyle(TableStyle(styles))
return table
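# Illustrative equivalence (not part of the diff): for a table with 4 data rows,
#   ("ROWBACKGROUNDS", (0, 1), (-1, -1), [colors.white, colors.Color(0.98, 0.98, 0.98)])
# paints rows 1-4 white, light, white, light, matching what the old loop produced
# with one ("BACKGROUND", (0, i), (-1, i), ...) entry per even row, while keeping
# the styles list a single command regardless of row count.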
@@ -1,4 +1,3 @@
import os
from dataclasses import dataclass, field
from reportlab.lib import colors
@@ -24,47 +23,6 @@ ALTERNATE_ROWS_MAX_SIZE = 200
# Larger = fewer queries but more memory per batch
FINDINGS_BATCH_SIZE = 2000
# Maximum rows per findings sub-table. ReportLab resolves layout per Flowable;
# splitting a huge findings list into multiple smaller tables keeps the peak
# memory of doc.build() bounded. A single 15k-row LongTable would force
# ReportLab to compute all column widths/row heights/page-breaks at once and
# OOM the worker; 300-row chunks are rendered contiguously with negligible
# visual impact.
FINDINGS_TABLE_CHUNK_SIZE = 300
# Maximum findings rendered per check in the detailed-findings section.
#
# Product behaviour: compliance PDFs render at most ``MAX_FINDINGS_PER_CHECK``
# **failed** findings per check (PASS rows are excluded at SQL level by the
# ``only_failed`` flag that all four list-rendering frameworks default to:
# ThreatScore, NIS2, CSA, CIS; ENS does not render finding tables). Above
# this cap each affected check renders an in-PDF banner
# ("Showing first 100 of N failed findings for this check. Use the CSV
# or JSON export for the full list") so the reader knows the table is
# truncated and where to find the full data.
#
# Why a cap exists at all:
# * ``FindingOutput.transform_api_finding`` is O(N) per finding (Pydantic
# v1 validation + nested model construction).
# * ReportLab resolves layout per Flowable; thousands of sub-tables make
# ``doc.build()`` very slow and grow the PDF unboundedly.
# * A human-readable executive/auditor PDF does not need 12,000 rows for
# one check; that is forensic data and lives in the CSV/JSON exports.
#
# Why 100 specifically:
# * Covers ~99% of real scans without truncation (most checks emit far
# fewer than 100 findings even in enterprise estates).
# * Worst-case rendered rows = 100 × ~500 checks = 50k rows across all
# frameworks, which keeps RSS bounded and a 5-framework run completes
# in minutes instead of hours.
#
# Override at runtime via ``DJANGO_PDF_MAX_FINDINGS_PER_CHECK``:
# * Set to ``0`` to disable the cap entirely (load every finding; only
# advisable for small scans).
# * Set to a larger value (e.g. ``500``) for forensic detail in big runs;
# watch RSS in the Celery worker.
MAX_FINDINGS_PER_CHECK = int(os.environ.get("DJANGO_PDF_MAX_FINDINGS_PER_CHECK", "100"))
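# Illustrative override (hypothetical values; the variable name is the one read
# above): exporting DJANGO_PDF_MAX_FINDINGS_PER_CHECK=500 in the Celery worker
# environment raises the cap to 500 failed findings per check, and =0 disables it.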
# =============================================================================
# Base colors
@@ -355,32 +313,6 @@ FRAMEWORK_REGISTRY: dict[str, FrameworkConfig] = {
has_niveles=False,
has_weight=False,
),
"cis": FrameworkConfig(
name="cis",
display_name="CIS Benchmark",
logo_filename=None,
primary_color=COLOR_BLUE,
secondary_color=COLOR_LIGHT_BLUE,
bg_color=COLOR_BG_BLUE,
attribute_fields=[
"Section",
"SubSection",
"Profile",
"AssessmentStatus",
"Description",
"RationaleStatement",
"ImpactStatement",
"RemediationProcedure",
"AuditProcedure",
"References",
],
sections=None, # Derived dynamically per CIS variant (section names differ across versions/providers)
language="en",
has_risk_levels=False,
has_dimensions=False,
has_niveles=False,
has_weight=False,
),
}
@@ -404,7 +336,5 @@ def get_framework_config(compliance_id: str) -> FrameworkConfig | None:
return FRAMEWORK_REGISTRY["nis2"]
if "csa" in compliance_lower or "ccm" in compliance_lower:
return FRAMEWORK_REGISTRY["csa_ccm"]
if compliance_lower.startswith("cis_") or "cis" in compliance_lower:
return FRAMEWORK_REGISTRY["cis"]
return None
@@ -10,29 +10,16 @@ from typing import Any
import sentry_sdk
from celery.utils.log import get_task_logger
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE
from config.env import env
from config.settings.celery import CELERY_DEADLOCK_ATTEMPTS
from django.db import IntegrityError, OperationalError
from django.db.models import (
Case,
Count,
Exists,
IntegerField,
Max,
Min,
OuterRef,
Prefetch,
Q,
Sum,
When,
)
from django.db.models import Case, Count, IntegerField, Max, Min, Prefetch, Q, Sum, When
from django.utils import timezone as django_timezone
from tasks.jobs.queries import (
COMPLIANCE_UPSERT_PROVIDER_SCORE_SQL,
COMPLIANCE_UPSERT_TENANT_SUMMARY_SQL,
)
from tasks.utils import CustomEncoder, batched
from tasks.utils import CustomEncoder
from api.compliance import PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE
from api.constants import SEVERITY_ORDER
@@ -202,9 +189,8 @@ def _get_attack_surface_mapping_from_provider(provider_type: str) -> dict:
"iam_inline_policy_allows_privilege_escalation",
},
"ec2-imdsv1": {
"ec2_instance_imdsv2_enabled",
"ec2_instance_account_imdsv2_enabled",
}, # AWS only - instance-level IMDSv1 exposure and account IMDS defaults
"ec2_instance_imdsv2_enabled"
}, # AWS only - IMDSv1 enabled findings
}
for category_name, check_ids in attack_surface_check_mappings.items():
if check_ids is None:
@@ -1211,39 +1197,11 @@ def aggregate_findings(tenant_id: str, scan_id: str):
muted_changed=agg["muted_changed"],
)
for agg in aggregation
if agg["resources__service"] is not None
and agg["resources__region"] is not None
}
# Upsert so re-runs (post-mute reaggregation) don't trip
# `unique_scan_summary`; race-safe under concurrent writers.
ScanSummary.objects.bulk_create(
scan_aggregations,
batch_size=3000,
update_conflicts=True,
unique_fields=[
"tenant",
"scan",
"check_id",
"service",
"severity",
"region",
],
update_fields=[
"_pass",
"fail",
"muted",
"total",
"new",
"changed",
"unchanged",
"fail_new",
"fail_changed",
"pass_new",
"pass_changed",
"muted_new",
"muted_changed",
],
)
# Delete first so re-runs (e.g. post-mute reaggregation) don't hit
# the `unique_scan_summary` constraint.
ScanSummary.objects.filter(tenant_id=tenant_id, scan_id=scan_id).delete()
ScanSummary.objects.bulk_create(scan_aggregations, batch_size=3000)
def _aggregate_findings_by_region(
@@ -1588,24 +1546,13 @@ def aggregate_attack_surface(tenant_id: str, scan_id: str):
)
)
# Bulk create overview records
if overview_objects:
with rls_transaction(tenant_id):
# Upsert so re-runs (post-mute reaggregation) don't trip
# `unique_attack_surface_per_scan`; race-safe under concurrent writers.
AttackSurfaceOverview.objects.bulk_create(
overview_objects,
batch_size=500,
update_conflicts=True,
unique_fields=["tenant_id", "scan_id", "attack_surface_type"],
update_fields=[
"total_findings",
"failed_findings",
"muted_failed_findings",
],
AttackSurfaceOverview.objects.bulk_create(overview_objects, batch_size=500)
logger.info(
f"Created {len(overview_objects)} attack surface overview records for scan {scan_id}"
)
logger.info(
f"Upserted {len(overview_objects)} attack surface overview records for scan {scan_id}"
)
else:
logger.info(f"No attack surface overview records created for scan {scan_id}")
@@ -2083,169 +2030,3 @@ def aggregate_finding_group_summaries(tenant_id: str, scan_id: str):
"created": created_count,
"updated": updated_count,
}
def reset_ephemeral_resource_findings_count(tenant_id: str, scan_id: str) -> dict:
"""Zero failed_findings_count for resources missing from a completed full-scope scan.
Resources that exist in the database for the scan's provider but were not
touched by this scan are treated as ephemeral. We keep their historical
findings, but reset the denormalized counter that drives the Resources page
sort so they stop ranking at the top.
Skipped (no-op) when:
- The scan is not in COMPLETED state.
- The scan ran with any scoping filter in scanner_args (partial scope).
Query design (must scale to 500k+ resources per provider):
Phase 1: collect ephemeral IDs with one anti-join read.
Outer filter ``(tenant_id, provider_id, failed_findings_count > 0)``
uses ``resources_tenant_provider_idx``. The correlated
``NOT EXISTS`` subquery hits the implicit unique index
``(tenant_id, scan_id, resource_id)`` on ``ResourceScanSummary``.
``NOT EXISTS`` (vs ``NOT IN``) is null-safe and lets the planner
choose between hash anti-join and indexed nested-loop anti-join.
``.iterator(chunk_size=...)`` skips the queryset cache so memory
stays bounded while streaming UUIDs.
Phase 2: UPDATE in fixed-size batches.
One large UPDATE would hold row-exclusive locks for seconds and
create a WAL spike. Batched UPDATEs by ``id__in`` (~1k rows each)
hit the primary key, keep each lock window ~50ms, bound WAL chunks,
and let other writers proceed between batches.
``failed_findings_count__gt=0`` in the UPDATE is idempotent under
concurrent scans and skips no-op rewrites.
Reads use the primary DB, not the replica: ``ResourceScanSummary`` rows
were written by the same scan task that triggered this one, so replica
lag could falsely classify scanned resources as ephemeral.
Scope detection (``Scan.is_full_scope()``) derives the set of scoping
scanner_args from ``prowler.lib.scan.scan.Scan.__init__`` via
introspection, so the API can never drift from the SDK's filter
contract. Imported scans are also rejected by trigger: they may only
cover a partial slice of resources.
"""
with rls_transaction(tenant_id):
scan = Scan.objects.filter(tenant_id=tenant_id, id=scan_id).first()
if scan is None:
logger.warning(f"Scan {scan_id} not found")
return {"status": "skipped", "reason": "scan not found"}
if scan.state != StateChoices.COMPLETED:
logger.info(f"Scan {scan_id} not completed; skipping ephemeral reset")
return {"status": "skipped", "reason": "scan not completed"}
if not scan.is_full_scope():
logger.info(
f"Scan {scan_id} ran with scoping filters; skipping ephemeral reset"
)
return {"status": "skipped", "reason": "partial scan scope"}
# Race protection: if a newer completed full-scope scan exists for this
# provider, our ResourceScanSummary set is stale relative to the resources'
# current failed_findings_count values (which the newer scan already
# refreshed). Wiping based on the older scan would zero counts the newer
# scan just set. Skip and let the newer scan's reset task do the work; if
# this task was delayed in the queue, that's the correct outcome.
# `completed_at__isnull=False` is required: Postgres orders NULL first in
# DESC, so a sibling COMPLETED scan with a missing completed_at would sort
# as "newest" and incorrectly cause us to skip.
with rls_transaction(tenant_id):
latest_full_scope_scan_id = (
Scan.objects.filter(
tenant_id=tenant_id,
provider_id=scan.provider_id,
state=StateChoices.COMPLETED,
completed_at__isnull=False,
)
.order_by("-completed_at", "-inserted_at")
.values_list("id", flat=True)
.first()
)
if latest_full_scope_scan_id != scan.id:
logger.info(
f"Scan {scan_id} is not the latest completed scan for provider "
f"{scan.provider_id}; skipping ephemeral reset"
)
return {"status": "skipped", "reason": "newer scan exists"}
# Defensive gate: ResourceScanSummary rows are written by perform_prowler_scan
# via best-effort bulk_create. If those writes failed silently (or the scan
# genuinely produced resources but no summaries were persisted), the
# ~Exists(in_scan) anti-join below would classify EVERY resource for this
# provider as ephemeral and zero their counts. Bail loudly instead.
with rls_transaction(tenant_id):
summaries_present = ResourceScanSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).exists()
if scan.unique_resource_count > 0 and not summaries_present:
logger.error(
f"Scan {scan_id} reports {scan.unique_resource_count} unique "
f"resources but no ResourceScanSummary rows are persisted; "
f"skipping ephemeral reset to avoid wiping valid counts"
)
return {"status": "skipped", "reason": "summaries missing"}
# Stays on the primary DB intentionally. ResourceScanSummary rows are
# written by perform_prowler_scan in the same chain that triggered this
# task, so replica lag could return an empty/partial summary set; a stale
# read here would classify every Resource as ephemeral and wipe valid
# failed_findings_count values on the primary. Same rationale as
# update_provider_compliance_scores below in this module.
# Materializing the ID list (rather than streaming the iterator into
# batched UPDATEs) is intentional: it lets the UPDATEs run in their own
# short rls_transactions instead of one long transaction holding row locks
# on every batch. At 500k UUIDs the peak memory is ~40 MB — acceptable for
# a Celery worker — and is the better trade-off versus a multi-second
# write-lock window blocking concurrent scans.
with rls_transaction(tenant_id):
in_scan = ResourceScanSummary.objects.filter(
tenant_id=tenant_id,
scan_id=scan_id,
resource_id=OuterRef("pk"),
)
ephemeral_ids = list(
Resource.objects.filter(
tenant_id=tenant_id,
provider_id=scan.provider_id,
failed_findings_count__gt=0,
)
.filter(~Exists(in_scan))
.values_list("id", flat=True)
.iterator(chunk_size=DJANGO_FINDINGS_BATCH_SIZE)
)
if not ephemeral_ids:
logger.info(f"No ephemeral resources for scan {scan_id}")
return {
"status": "completed",
"scan_id": str(scan_id),
"provider_id": str(scan.provider_id),
"reset": 0,
}
total_updated = 0
for batch, _ in batched(ephemeral_ids, DJANGO_FINDINGS_BATCH_SIZE):
# batched() always yields a final tuple, which is empty when the input
# length is an exact multiple of the batch size. Skip it so we don't
# issue a no-op UPDATE ... WHERE id IN ().
if not batch:
continue
with rls_transaction(tenant_id):
total_updated += Resource.objects.filter(
tenant_id=tenant_id,
id__in=batch,
failed_findings_count__gt=0,
).update(failed_findings_count=0)
logger.info(
f"Ephemeral resource reset for scan {scan_id}: "
f"{total_updated} resources zeroed for provider {scan.provider_id}"
)
return {
"status": "completed",
"scan_id": str(scan_id),
"provider_id": str(scan.provider_id),
"reset": total_updated,
}
@@ -1,8 +1,6 @@
from celery.utils.log import get_task_logger
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE
from django.db.models import Count, F, Q, Window
from django.db.models.functions import RowNumber
from tasks.jobs.reports.config import MAX_FINDINGS_PER_CHECK
from django.db.models import Count, Q
from api.db_router import READ_REPLICA_ALIAS
from api.db_utils import rls_transaction
@@ -156,8 +154,6 @@ def _load_findings_for_requirement_checks(
check_ids: list[str],
prowler_provider,
findings_cache: dict[str, list[FindingOutput]] | None = None,
total_counts_out: dict[str, int] | None = None,
only_failed_findings: bool = False,
) -> dict[str, list[FindingOutput]]:
"""
Load findings for specific check IDs on-demand with optional caching.
@@ -182,23 +178,6 @@ def _load_findings_for_requirement_checks(
prowler_provider: The initialized Prowler provider instance.
findings_cache (dict, optional): Cache of already loaded findings.
If provided, checks are first looked up in cache before querying database.
total_counts_out (dict, optional): If provided, populated with
``{check_id: total_findings_in_db}`` BEFORE any per-check cap is
applied. Lets callers render a "Showing first N of M" banner for
truncated checks. Only populated for ``check_ids`` actually
queried (cache hits keep whatever value the caller already had).
When ``only_failed_findings=True`` the total is FAIL-only.
only_failed_findings (bool): When True, push the ``status=FAIL``
filter down into the SQL query so PASS rows are never loaded
from the DB nor pydantic-transformed. This matches the
``only_failed`` requirement-level filter applied at PDF render
time: a requirement marked FAIL because 1/1000 findings failed
shouldn't render a table of 999 PASS rows. That hides the
actual failure under noise and wastes the per-check cap on
irrelevant data. NOTE: the findings cache stores whatever the
first caller asked for, so all callers in a single
``generate_compliance_reports`` run MUST pass the same flag
(which they do: it threads from ``only_failed`` defaults).
Returns:
dict[str, list[FindingOutput]]: Dictionary mapping check_id to list of FindingOutput objects.
@@ -243,70 +222,17 @@ def _load_findings_for_requirement_checks(
)
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
base_qs = Finding.all_objects.filter(
tenant_id=tenant_id,
scan_id=scan_id,
check_id__in=check_ids_to_load,
# Use iterator with chunk_size for memory-efficient streaming
# chunk_size controls how many rows Django fetches from DB at once
findings_queryset = (
Finding.all_objects.filter(
tenant_id=tenant_id,
scan_id=scan_id,
check_id__in=check_ids_to_load,
)
.order_by("check_id", "uid")
.iterator(chunk_size=DJANGO_FINDINGS_BATCH_SIZE)
)
if only_failed_findings:
# Push the FAIL filter down into SQL: DB returns ~N×FAIL
# rows instead of N×ALL, and we never spend pydantic CPU on
# PASS findings the PDF would never render.
base_qs = base_qs.filter(status=StatusChoices.FAIL)
# Aggregate totals once so we (a) know which checks need capping
# and (b) can surface "Showing first N of M" in the PDF banner.
# Cheap: a single COUNT grouped by check_id.
totals: dict[str, int] = {
row["check_id"]: row["total"]
for row in base_qs.values("check_id").annotate(total=Count("id"))
}
if total_counts_out is not None:
total_counts_out.update(totals)
cap = MAX_FINDINGS_PER_CHECK
checks_over_cap = (
{cid for cid, n in totals.items() if n > cap} if cap > 0 else set()
)
# Use iterator with chunk_size for memory-efficient streaming.
# FindingOutput.transform_api_finding (prowler/lib/outputs/finding.py)
# reads finding.resources.first() and resource.tags.all() per
# finding, which without prefetch generates 2N queries per chunk.
# prefetch_related runs once per iterator chunk (Django >=4.1) and
# collapses that into a constant 2 extra queries per chunk.
if checks_over_cap:
# Top-N per check via a window function: PostgreSQL only
# materialises ``cap * |checks_over_cap| + sum(uncapped)``
# rows, vs the full table scan the previous path did.
ranked = base_qs.annotate(
rn=Window(
expression=RowNumber(),
partition_by=[F("check_id")],
order_by=F("uid").asc(),
)
)
findings_queryset = (
Finding.all_objects.filter(
id__in=ranked.filter(rn__lte=cap).values("id")
)
.prefetch_related("resources", "resources__tags")
.order_by("check_id", "uid")
.iterator(chunk_size=DJANGO_FINDINGS_BATCH_SIZE)
)
logger.info(
"Per-check cap=%d active for %d checks (max %d each); "
"skipping transform for surplus rows",
cap,
len(checks_over_cap),
cap,
)
else:
findings_queryset = (
base_qs.prefetch_related("resources", "resources__tags")
.order_by("check_id", "uid")
.iterator(chunk_size=DJANGO_FINDINGS_BATCH_SIZE)
)
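# Illustrative SQL shape of the ranked subquery above (assumed, for intuition
# only; table and column names are simplified):
#   SELECT id, ROW_NUMBER() OVER (PARTITION BY check_id ORDER BY uid) AS rn
#   FROM findings WHERE tenant_id = ... AND scan_id = ... AND check_id IN (...)
# Only ids with rn <= cap are re-selected, so surplus rows are never fetched
# with their resources nor transformed by FindingOutput.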
# Pre-initialize empty lists for all check_ids to load
# This avoids repeated dict lookups and 'if not in' checks
@@ -322,11 +248,7 @@ def _load_findings_for_requirement_checks(
findings_count += 1
logger.info(
"Loaded %d findings for %d checks (truncated %d checks total=%d)",
findings_count,
len(check_ids_to_load),
len(checks_over_cap),
sum(totals.values()),
f"Loaded {findings_count} findings for {len(check_ids_to_load)} checks"
)
# Build result dict using cache references (no data duplication)
@@ -336,40 +258,3 @@ def _load_findings_for_requirement_checks(
}
return result
def _get_compliance_check_ids(compliance_obj) -> set[str]:
"""Return the union of all check_ids referenced by a compliance framework.
Used by the master report orchestrator to know which checks each
framework consumes from the shared ``findings_cache``, so that once a
framework finishes, the entries no other pending framework needs can be
evicted from the cache (PROWLER-1733).
Args:
compliance_obj: A loaded Compliance framework object exposing a
``Requirements`` iterable, each requirement carrying ``Checks``.
``None`` is treated as "no checks" rather than raising, so the
caller can pass ``frameworks_bulk.get(...)`` directly without
an extra existence check.
Returns:
Set of check_id strings (empty if ``compliance_obj`` is ``None``).
"""
if compliance_obj is None:
return set()
checks: set[str] = set()
requirements = getattr(compliance_obj, "Requirements", None) or []
try:
# Defensive: Mock objects (used in unit tests) return another Mock
# for any attribute access, which is truthy but not iterable. Treat
# any non-iterable Requirements value as "no checks".
for req in requirements:
req_checks = getattr(req, "Checks", None) or []
try:
checks.update(req_checks)
except TypeError:
continue
except TypeError:
return set()
return checks
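# Illustrative usage (hypothetical check ids, not from the diff): a framework
# whose requirements carry Checks=["iam_mfa_enabled"] and
# Checks=["iam_mfa_enabled", "s3_bucket_public_access"] yields
# {"iam_mfa_enabled", "s3_bucket_public_access"}; passing None returns set().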
@@ -20,8 +20,8 @@ from tasks.jobs.backfill import (
backfill_finding_group_summaries,
backfill_provider_compliance_scores,
backfill_resource_scan_summaries,
aggregate_scan_category_summaries,
aggregate_scan_resource_group_summaries,
backfill_scan_category_summaries,
backfill_scan_resource_group_summaries,
)
from tasks.jobs.connection import (
check_integration_connection,
@@ -46,11 +46,7 @@ from tasks.jobs.lighthouse_providers import (
refresh_lighthouse_provider_models,
)
from tasks.jobs.muting import mute_historical_findings
from tasks.jobs.report import (
STALE_TMP_OUTPUT_MAX_AGE_HOURS,
_cleanup_stale_tmp_output_directories,
generate_compliance_reports_job,
)
from tasks.jobs.report import generate_compliance_reports_job
from tasks.jobs.scan import (
aggregate_attack_surface,
aggregate_daily_severity,
@@ -58,7 +54,6 @@ from tasks.jobs.scan import (
aggregate_findings,
create_compliance_requirements,
perform_prowler_scan,
reset_ephemeral_resource_findings_count,
update_provider_compliance_scores,
)
from tasks.utils import (
@@ -78,7 +73,6 @@ from prowler.lib.check.compliance_models import Compliance
from prowler.lib.outputs.compliance.generic.generic import GenericCompliance
from prowler.lib.outputs.finding import Finding as FindingOutput
logger = get_task_logger(__name__)
@@ -160,13 +154,6 @@ def _perform_scan_complete_tasks(tenant_id: str, scan_id: str, provider_id: str)
generate_outputs_task.si(
scan_id=scan_id, provider_id=provider_id, tenant_id=tenant_id
),
# post-scan task — runs in the parallel group so a
# failure cannot cascade into reports or integrations. Its only
# prerequisite is that perform_prowler_scan has committed
# ResourceScanSummary, which is true by the time this chain fires.
reset_ephemeral_resource_findings_count_task.si(
tenant_id=tenant_id, scan_id=scan_id
),
),
group(
# Use optimized task that generates both reports with shared queries
@@ -182,25 +169,10 @@ def _perform_scan_complete_tasks(tenant_id: str, scan_id: str, provider_id: str)
).apply_async()
if can_provider_run_attack_paths_scan(tenant_id, provider_id):
# Row is normally created upstream, so this is a safeguard so we can attach the task id below
attack_paths_scan = attack_paths_db_utils.retrieve_attack_paths_scan(
tenant_id, scan_id
)
if attack_paths_scan is None:
attack_paths_scan = attack_paths_db_utils.create_attack_paths_scan(
tenant_id, scan_id, provider_id
)
# Persist the Celery task id so the periodic cleanup can revoke scans stuck in SCHEDULED
result = perform_attack_paths_scan_task.apply_async(
perform_attack_paths_scan_task.apply_async(
kwargs={"tenant_id": tenant_id, "scan_id": scan_id}
)
if attack_paths_scan and result:
attack_paths_db_utils.set_attack_paths_scan_task_id(
tenant_id, attack_paths_scan.id, result.task_id
)
@shared_task(base=RLSTask, name="provider-connection-check")
@set_tenant
@@ -402,8 +374,7 @@ class AttackPathsScanRLSTask(RLSTask):
SDK initialization, or Neo4j configuration errors during setup).
"""
def on_failure(self, exc, task_id, args, kwargs, _einfo): # noqa: ARG002
del args # Required by Celery's Task.on_failure signature; not used.
def on_failure(self, exc, task_id, args, kwargs, _einfo):
tenant_id = kwargs.get("tenant_id")
scan_id = kwargs.get("scan_id")
@@ -469,19 +440,6 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
scan_id (str): The scan identifier.
provider_id (str): The provider_id id to be used in generating outputs.
"""
try:
_cleanup_stale_tmp_output_directories(
DJANGO_TMP_OUTPUT_DIRECTORY,
max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS,
exclude_scan=(tenant_id, scan_id),
)
except Exception as error:
logger.warning(
"Skipping stale tmp cleanup before output generation for scan %s: %s",
scan_id,
error,
)
# Check if the scan has findings
if not ScanSummary.objects.filter(scan_id=scan_id).exists():
logger.info(f"No findings found for scan {scan_id}")
@@ -701,9 +659,9 @@ def backfill_finding_group_summaries_task(tenant_id: str, days: int = None):
return backfill_finding_group_summaries(tenant_id=tenant_id, days=days)
@shared_task(name="scan-category-summaries", queue="overview")
@shared_task(name="backfill-scan-category-summaries", queue="backfill")
@handle_provider_deletion
def aggregate_scan_category_summaries_task(tenant_id: str, scan_id: str):
def backfill_scan_category_summaries_task(tenant_id: str, scan_id: str):
"""
Backfill ScanCategorySummary for a completed scan.
@@ -713,12 +671,12 @@ def aggregate_scan_category_summaries_task(tenant_id: str, scan_id: str):
tenant_id (str): The tenant identifier.
scan_id (str): The scan identifier.
"""
return aggregate_scan_category_summaries(tenant_id=tenant_id, scan_id=scan_id)
return backfill_scan_category_summaries(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(name="scan-resource-group-summaries", queue="overview")
@shared_task(name="backfill-scan-resource-group-summaries", queue="backfill")
@handle_provider_deletion
def aggregate_scan_resource_group_summaries_task(tenant_id: str, scan_id: str):
def backfill_scan_resource_group_summaries_task(tenant_id: str, scan_id: str):
"""
Backfill ScanGroupSummary for a completed scan.
@@ -728,7 +686,7 @@ def aggregate_scan_resource_group_summaries_task(tenant_id: str, scan_id: str):
tenant_id (str): The tenant identifier.
scan_id (str): The scan identifier.
"""
return aggregate_scan_resource_group_summaries(tenant_id=tenant_id, scan_id=scan_id)
return backfill_scan_resource_group_summaries(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(name="backfill-provider-compliance-scores", queue="backfill")
@@ -800,32 +758,6 @@ def aggregate_daily_severity_task(tenant_id: str, scan_id: str):
return aggregate_daily_severity(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(name="scan-reset-ephemeral-resources", queue="overview")
@handle_provider_deletion
def reset_ephemeral_resource_findings_count_task(tenant_id: str, scan_id: str):
"""Reset failed_findings_count for resources missing from a completed full-scope scan.
Failures are swallowed and returned as a status: this task lives inside the
post-scan group, and Celery propagates group-member exceptions into the next
chain step, meaning a crash here would block compliance reports and
integrations. The reset is purely cosmetic (UI sort optimization), so a
bad run is logged and absorbed rather than allowed to cascade.
"""
try:
return reset_ephemeral_resource_findings_count(
tenant_id=tenant_id, scan_id=scan_id
)
except Exception as exc: # noqa: BLE001 — intentionally broad
logger.exception(
f"reset_ephemeral_resource_findings_count failed for scan {scan_id}: {exc}"
)
return {
"status": "failed",
"scan_id": str(scan_id),
"reason": str(exc),
}
@shared_task(base=RLSTask, name="scan-finding-group-summaries", queue="overview")
@set_tenant(keep_tenant=True)
@handle_provider_deletion
@@ -846,16 +778,12 @@ def reaggregate_all_finding_group_summaries_task(tenant_id: str):
limit. To keep the pre-aggregated tables consistent with that update,
this task re-runs the same per-scan aggregation pipeline that scan
completion runs on the latest completed scan of every (provider, day)
pair, rebuilding the tables that power the read endpoints:
pair, rebuilding the three tables that power the read endpoints:
- `ScanSummary` and `DailySeveritySummary` -> `/overviews/findings`,
`/overviews/findings-severity`, `/overviews/services`.
- `FindingGroupDailySummary` -> `/finding-groups` and
`/finding-groups/latest`.
- `ScanGroupSummary` -> `/overviews/resource-groups` (resource
inventory).
- `ScanCategorySummary` -> `/overviews/categories`.
- `AttackSurfaceOverview` -> `/overviews/attack-surfaces`.
Per-scan pipelines are dispatched in parallel via a Celery group so
wallclock scales with the worker pool.
@@ -887,8 +815,8 @@ def reaggregate_all_finding_group_summaries_task(tenant_id: str):
len(scan_ids),
)
# DailySeveritySummary reads from ScanSummary, so ScanSummary must be
# recomputed first; the other aggregators read Finding directly and
# can run in parallel with the severity step.
# recomputed first; FindingGroupDailySummary reads from Finding
# directly and can run in parallel with the severity step.
group(
chain(
perform_scan_summary_task.si(tenant_id=tenant_id, scan_id=scan_id),
@@ -899,15 +827,6 @@ def reaggregate_all_finding_group_summaries_task(tenant_id: str):
aggregate_finding_group_summaries_task.si(
tenant_id=tenant_id, scan_id=scan_id
),
aggregate_scan_resource_group_summaries_task.si(
tenant_id=tenant_id, scan_id=scan_id
),
aggregate_scan_category_summaries_task.si(
tenant_id=tenant_id, scan_id=scan_id
),
aggregate_attack_surface_task.si(
tenant_id=tenant_id, scan_id=scan_id
),
),
)
for scan_id in scan_ids
@@ -1081,17 +1000,13 @@ def jira_integration_task(
@handle_provider_deletion
def generate_compliance_reports_task(tenant_id: str, scan_id: str, provider_id: str):
"""
Optimized task to generate ThreatScore, ENS, NIS2, CSA CCM and CIS reports with shared queries.
Optimized task to generate ThreatScore, ENS, NIS2, and CSA CCM reports with shared queries.
This task is more efficient than running separate report tasks because it reuses database queries:
- Provider object fetched once (instead of multiple times)
- Requirement statistics aggregated once (instead of multiple times)
- Can reduce database load by up to 50-70%
CIS emits a single PDF per run: the one matching the highest CIS version
available for the scan's provider, picked dynamically from
``Compliance.get_bulk`` (no hard-coded provider version mapping).
Args:
tenant_id (str): The tenant identifier.
scan_id (str): The scan identifier.
@@ -1108,7 +1023,6 @@ def generate_compliance_reports_task(tenant_id: str, scan_id: str, provider_id:
generate_ens=True,
generate_nis2=True,
generate_csa=True,
generate_cis=True,
)
@@ -135,7 +135,7 @@ class TestAttackPathsRun:
assert result == ingestion_result
mock_retrieve_scan.assert_called_once_with(str(tenant.id), str(scan.id))
mock_starting.assert_called_once()
config = mock_starting.call_args[0][1]
config = mock_starting.call_args[0][2]
assert config.neo4j_database == "tenant-db"
mock_get_db_name.assert_has_calls(
[call(attack_paths_scan.id, temporary=True), call(provider.tenant_id)]
@@ -2732,143 +2732,3 @@ class TestCleanupStaleAttackPathsScans:
assert result["cleaned_up_count"] == 2
# Worker should be pinged exactly once — cache prevents second ping
mock_alive.assert_called_once_with("shared-worker@host")
# `SCHEDULED` state cleanup
def _create_scheduled_scan(
self,
tenant,
provider,
*,
age_minutes,
parent_state,
with_task=True,
):
"""Create a SCHEDULED AttackPathsScan with a parent Scan in `parent_state`.
`age_minutes` controls how far in the past `started_at` is set, so
callers can place rows safely past the cleanup cutoff.
"""
parent_scan = Scan.objects.create(
name="Parent Prowler scan",
provider=provider,
trigger=Scan.TriggerChoices.MANUAL,
state=parent_state,
tenant_id=tenant.id,
)
ap_scan = AttackPathsScan.objects.create(
tenant_id=tenant.id,
provider=provider,
scan=parent_scan,
state=StateChoices.SCHEDULED,
started_at=datetime.now(tz=timezone.utc) - timedelta(minutes=age_minutes),
)
task_result = None
if with_task:
task_result = TaskResult.objects.create(
task_id=str(ap_scan.id),
task_name="attack-paths-scan-perform",
status="PENDING",
)
task = Task.objects.create(
id=task_result.task_id,
task_runner_task=task_result,
tenant_id=tenant.id,
)
ap_scan.task = task
ap_scan.save(update_fields=["task_id"])
return ap_scan, task_result
@patch("tasks.jobs.attack_paths.cleanup.recover_graph_data_ready")
@patch("tasks.jobs.attack_paths.cleanup.graph_database.drop_database")
@patch(
"tasks.jobs.attack_paths.cleanup.rls_transaction",
new=lambda *args, **kwargs: nullcontext(),
)
@patch("tasks.jobs.attack_paths.cleanup._revoke_task")
def test_cleans_up_scheduled_scan_when_parent_is_terminal(
self,
mock_revoke,
mock_drop_db,
mock_recover,
tenants_fixture,
providers_fixture,
):
from tasks.jobs.attack_paths.cleanup import cleanup_stale_attack_paths_scans
tenant = tenants_fixture[0]
provider = providers_fixture[0]
provider.provider = Provider.ProviderChoices.AWS
provider.save()
ap_scan, task_result = self._create_scheduled_scan(
tenant,
provider,
age_minutes=24 * 60 * 3, # 3 days, safely past any threshold
parent_state=StateChoices.FAILED,
)
result = cleanup_stale_attack_paths_scans()
assert result["cleaned_up_count"] == 1
assert str(ap_scan.id) in result["scan_ids"]
ap_scan.refresh_from_db()
assert ap_scan.state == StateChoices.FAILED
assert ap_scan.progress == 100
assert ap_scan.completed_at is not None
assert ap_scan.ingestion_exceptions == {
"global_error": "Scan never started — cleaned up by periodic task"
}
# SCHEDULED revoke must NOT terminate a running worker
mock_revoke.assert_called_once()
assert mock_revoke.call_args.kwargs == {"terminate": False}
# Temp DB never created for SCHEDULED, so no drop attempted
mock_drop_db.assert_not_called()
# Tenant Neo4j data is untouched in this path
mock_recover.assert_not_called()
task_result.refresh_from_db()
assert task_result.status == "FAILURE"
assert task_result.date_done is not None
@patch("tasks.jobs.attack_paths.cleanup.recover_graph_data_ready")
@patch("tasks.jobs.attack_paths.cleanup.graph_database.drop_database")
@patch(
"tasks.jobs.attack_paths.cleanup.rls_transaction",
new=lambda *args, **kwargs: nullcontext(),
)
@patch("tasks.jobs.attack_paths.cleanup._revoke_task")
def test_skips_scheduled_scan_when_parent_still_in_flight(
self,
mock_revoke,
mock_drop_db,
mock_recover,
tenants_fixture,
providers_fixture,
):
from tasks.jobs.attack_paths.cleanup import cleanup_stale_attack_paths_scans
tenant = tenants_fixture[0]
provider = providers_fixture[0]
provider.provider = Provider.ProviderChoices.AWS
provider.save()
ap_scan, _ = self._create_scheduled_scan(
tenant,
provider,
age_minutes=24 * 60 * 3,
parent_state=StateChoices.EXECUTING,
)
result = cleanup_stale_attack_paths_scans()
assert result["cleaned_up_count"] == 0
ap_scan.refresh_from_db()
assert ap_scan.state == StateChoices.SCHEDULED
mock_revoke.assert_not_called()
@@ -7,8 +7,8 @@ from tasks.jobs.backfill import (
backfill_compliance_summaries,
backfill_provider_compliance_scores,
backfill_resource_scan_summaries,
aggregate_scan_category_summaries,
aggregate_scan_resource_group_summaries,
backfill_scan_category_summaries,
backfill_scan_resource_group_summaries,
)
from api.models import (
@@ -183,10 +183,6 @@ class TestBackfillComplianceSummaries:
def test_backfill_creates_compliance_summaries(
self, tenants_fixture, scans_fixture, compliance_requirements_overviews_fixture
):
# Fixture seeds compliance rows the backfill aggregates over; pytest
# injects it by parameter name, so we reference it explicitly here
# to keep static analysers from flagging it as unused.
del compliance_requirements_overviews_fixture
tenant = tenants_fixture[0]
scan = scans_fixture[0]
@@ -231,86 +227,22 @@ class TestBackfillComplianceSummaries:
@pytest.mark.django_db
class TestBackfillScanCategorySummaries:
def test_rerun_with_no_findings_is_noop(self, scan_category_summary_fixture):
"""When the scan has no findings, the backfill is a no-op: it
reports `no categories to backfill` and leaves the table
untouched. The upsert path cannot drop rows it does not produce,
so any pre-existing row survives (matching the scan-completion
writer that used `ignore_conflicts=True`)."""
def test_already_backfilled(self, scan_category_summary_fixture):
tenant_id = scan_category_summary_fixture.tenant_id
scan_id = scan_category_summary_fixture.scan_id
result = aggregate_scan_category_summaries(str(tenant_id), str(scan_id))
result = backfill_scan_category_summaries(str(tenant_id), str(scan_id))
assert result == {"status": "no categories to backfill"}
assert ScanCategorySummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id, category="existing-category"
).exists()
def test_rerun_upserts_without_duplicating(self, findings_with_categories_fixture):
"""Calling the backfill twice upserts rather than raising on
`unique_category_severity_per_scan`; rows are updated in place
(same primary keys)."""
finding = findings_with_categories_fixture
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
aggregate_scan_category_summaries(tenant_id, scan_id)
first_ids = set(
ScanCategorySummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).values_list("id", flat=True)
)
aggregate_scan_category_summaries(tenant_id, scan_id)
second_ids = set(
ScanCategorySummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).values_list("id", flat=True)
)
assert first_ids == second_ids
assert len(first_ids) == 2 # 2 categories x 1 severity
def test_rerun_reflects_mute_between_runs(self, findings_with_categories_fixture):
"""Muting a finding between two backfill runs must move counters:
`failed_findings` and `new_failed_findings` drop to zero (muted
findings are excluded from those totals). Guards against a
regression where the upsert keeps stale counts from the first run."""
finding = findings_with_categories_fixture
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
aggregate_scan_category_summaries(tenant_id, scan_id)
before = list(
ScanCategorySummary.objects.filter(tenant_id=tenant_id, scan_id=scan_id)
)
assert all(s.failed_findings == 1 for s in before)
assert all(s.new_failed_findings == 1 for s in before)
assert all(s.total_findings == 1 for s in before)
Finding.all_objects.filter(pk=finding.pk).update(muted=True)
aggregate_scan_category_summaries(tenant_id, scan_id)
after = list(
ScanCategorySummary.objects.filter(tenant_id=tenant_id, scan_id=scan_id)
)
assert {s.id for s in after} == {s.id for s in before}
assert all(s.failed_findings == 0 for s in after)
assert all(s.new_failed_findings == 0 for s in after)
assert all(s.total_findings == 0 for s in after)
assert result == {"status": "already backfilled"}
def test_not_completed_scan(self, get_not_completed_scans):
for scan in get_not_completed_scans:
result = aggregate_scan_category_summaries(
str(scan.tenant_id), str(scan.id)
)
result = backfill_scan_category_summaries(str(scan.tenant_id), str(scan.id))
assert result == {"status": "scan is not completed"}
def test_no_categories_to_backfill(self, scans_fixture):
scan = scans_fixture[1] # Failed scan with no findings
result = aggregate_scan_category_summaries(str(scan.tenant_id), str(scan.id))
result = backfill_scan_category_summaries(str(scan.tenant_id), str(scan.id))
assert result == {"status": "no categories to backfill"}
def test_successful_backfill(self, findings_with_categories_fixture):
@@ -318,7 +250,7 @@ class TestBackfillScanCategorySummaries:
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
result = aggregate_scan_category_summaries(tenant_id, scan_id)
result = backfill_scan_category_summaries(tenant_id, scan_id)
# 2 categories × 1 severity = 2 rows
assert result == {"status": "backfilled", "categories_count": 2}
@@ -379,87 +311,24 @@ def scan_resource_group_summary_fixture(scans_fixture):
@pytest.mark.django_db
class TestBackfillScanGroupSummaries:
def test_rerun_with_no_findings_is_noop(self, scan_resource_group_summary_fixture):
"""When the scan has no findings, the backfill is a no-op: it
reports `no resource groups to backfill` and leaves the table
untouched. The upsert path cannot drop rows it does not produce,
so any pre-existing row survives (matching the scan-completion
writer that used `ignore_conflicts=True`)."""
def test_already_backfilled(self, scan_resource_group_summary_fixture):
tenant_id = scan_resource_group_summary_fixture.tenant_id
scan_id = scan_resource_group_summary_fixture.scan_id
result = aggregate_scan_resource_group_summaries(str(tenant_id), str(scan_id))
result = backfill_scan_resource_group_summaries(str(tenant_id), str(scan_id))
assert result == {"status": "no resource groups to backfill"}
assert ScanGroupSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id, resource_group="existing-group"
).exists()
def test_rerun_upserts_without_duplicating(self, findings_with_group_fixture):
"""Calling the backfill twice upserts rather than raising on
`unique_resource_group_severity_per_scan`; rows are updated in
place (same primary keys)."""
finding = findings_with_group_fixture
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
aggregate_scan_resource_group_summaries(tenant_id, scan_id)
first_ids = set(
ScanGroupSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).values_list("id", flat=True)
)
aggregate_scan_resource_group_summaries(tenant_id, scan_id)
second_ids = set(
ScanGroupSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).values_list("id", flat=True)
)
assert first_ids == second_ids
assert len(first_ids) == 1 # 1 resource group x 1 severity
def test_rerun_reflects_mute_between_runs(self, findings_with_group_fixture):
"""Muting a finding between two backfill runs must move counters:
`failed_findings` and `new_failed_findings` drop to zero (muted
findings are excluded from those totals). Guards against a
regression where the upsert keeps stale counts from the first run."""
finding = findings_with_group_fixture
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
aggregate_scan_resource_group_summaries(tenant_id, scan_id)
before = list(
ScanGroupSummary.objects.filter(tenant_id=tenant_id, scan_id=scan_id)
)
assert len(before) == 1
assert before[0].failed_findings == 1
assert before[0].new_failed_findings == 1
assert before[0].total_findings == 1
Finding.all_objects.filter(pk=finding.pk).update(muted=True)
aggregate_scan_resource_group_summaries(tenant_id, scan_id)
after = list(
ScanGroupSummary.objects.filter(tenant_id=tenant_id, scan_id=scan_id)
)
assert {s.id for s in after} == {s.id for s in before}
assert after[0].failed_findings == 0
assert after[0].new_failed_findings == 0
assert after[0].total_findings == 0
assert result == {"status": "already backfilled"}
def test_not_completed_scan(self, get_not_completed_scans):
for scan in get_not_completed_scans:
result = aggregate_scan_resource_group_summaries(
result = backfill_scan_resource_group_summaries(
str(scan.tenant_id), str(scan.id)
)
assert result == {"status": "scan is not completed"}
def test_no_resource_groups_to_backfill(self, scans_fixture):
scan = scans_fixture[1] # Failed scan with no findings
result = aggregate_scan_resource_group_summaries(
result = backfill_scan_resource_group_summaries(
str(scan.tenant_id), str(scan.id)
)
assert result == {"status": "no resource groups to backfill"}
@@ -469,7 +338,7 @@ class TestBackfillScanGroupSummaries:
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
result = aggregate_scan_resource_group_summaries(tenant_id, scan_id)
result = backfill_scan_resource_group_summaries(tenant_id, scan_id)
# 1 resource group × 1 severity = 1 row
assert result == {"status": "backfilled", "resource_groups_count": 1}
File diff suppressed because it is too large
@@ -1269,48 +1269,6 @@ class TestComponentEdgeCases:
# Should be a LongTable for large datasets
assert isinstance(table, LongTable)
def test_zebra_uses_rowbackgrounds_not_per_row_background(self, monkeypatch):
"""The styles list must contain exactly one ROWBACKGROUNDS entry
regardless of row count, never N per-row BACKGROUND entries.
"""
captured: dict = {}
# Capture the list passed to TableStyle. create_data_table builds a
# list of style tuples and wraps it in a TableStyle exactly once;
# by patching TableStyle we intercept that list.
import tasks.jobs.reports.components as comp_mod
original_table_style = comp_mod.TableStyle
def _capture_table_style(style_list):
captured["styles"] = list(style_list)
return original_table_style(style_list)
monkeypatch.setattr(comp_mod, "TableStyle", _capture_table_style)
data = [{"name": f"Item {i}"} for i in range(60)]
columns = [ColumnConfig("Name", 2 * inch, "name")]
comp_mod.create_data_table(data, columns, alternate_rows=True)
styles = captured["styles"]
# Count by command name.
names = [s[0] for s in styles if isinstance(s, tuple) and s]
# Exactly one ROWBACKGROUNDS entry.
assert names.count("ROWBACKGROUNDS") == 1
# Zero per-row BACKGROUND entries on data rows. (The header row
# BACKGROUND command is intentional and lives at coords (0,0)/(-1,0).)
data_row_bg = [
s
for s in styles
if isinstance(s, tuple)
and s[0] == "BACKGROUND"
and not (s[1] == (0, 0) and s[2] == (-1, 0))
]
assert data_row_bg == [], (
f"expected no per-row BACKGROUND entries on data rows; "
f"got {len(data_row_bg)}"
)
def test_create_risk_component_zero_values(self):
"""Test risk component with zero values."""
component = create_risk_component(risk_level=0, weight=0, score=0)
@@ -1386,194 +1344,3 @@ class TestFrameworkConfigEdgeCases:
assert get_framework_config("my_custom_threatscore_compliance") is not None
assert get_framework_config("ens_something_else") is not None
assert get_framework_config("nis2_gcp") is not None
# =============================================================================
# Findings Table Chunking Tests (PROWLER-1733)
# =============================================================================
#
# These tests guard the OOM-prevention behaviour added in PROWLER-1733:
# ``_create_findings_tables`` must split a list of findings into multiple
# small sub-tables instead of producing one giant Table, which would force
# ReportLab to resolve layout for all rows at once and OOM the worker on
# scans with thousands of findings per check.
class _DummyMetadata:
"""Lightweight stand-in for FindingOutput.metadata used in chunking tests."""
def __init__(self, check_title: str = "Title", severity: str = "high"):
self.CheckTitle = check_title
self.Severity = severity
class _DummyFinding:
"""Lightweight stand-in for FindingOutput used in chunking tests.
The chunking code only reads a small set of attributes via ``getattr``,
so a duck-typed object is enough and lets the tests run without touching
the DB or pydantic deserialisation.
"""
def __init__(
self,
check_id: str = "aws_check",
resource_name: str = "res-1",
resource_uid: str = "",
status: str = "FAIL",
region: str = "us-east-1",
with_metadata: bool = True,
):
self.check_id = check_id
self.resource_name = resource_name
self.resource_uid = resource_uid
self.status = status
self.region = region
if with_metadata:
self.metadata = _DummyMetadata()
else:
self.metadata = None
def _make_concrete_generator():
"""Return a minimal concrete subclass of BaseComplianceReportGenerator."""
class _Concrete(BaseComplianceReportGenerator):
def create_executive_summary(self, data):
return []
def create_charts_section(self, data):
return []
def create_requirements_index(self, data):
return []
return _Concrete(FrameworkConfig(name="test", display_name="Test"))
class TestFindingsTableChunking:
"""Tests for ``_create_findings_tables`` (PROWLER-1733)."""
def test_chunking_produces_expected_number_of_subtables(self):
"""5000 findings @ chunk_size=300 → 17 sub-tables + 16 spacers."""
generator = _make_concrete_generator()
findings = [_DummyFinding(check_id="c1") for _ in range(5000)]
flowables = generator._create_findings_tables(findings, chunk_size=300)
tables = [f for f in flowables if isinstance(f, (Table, LongTable))]
spacers = [f for f in flowables if isinstance(f, Spacer)]
# ceil(5000 / 300) == 17
assert len(tables) == 17
# Spacer between every pair of contiguous tables, not after the last
assert len(spacers) == 16
def test_chunk_size_param_overrides_default(self):
"""250 findings @ chunk_size=100 → 3 sub-tables."""
generator = _make_concrete_generator()
findings = [_DummyFinding(check_id="c2") for _ in range(250)]
flowables = generator._create_findings_tables(findings, chunk_size=100)
tables = [f for f in flowables if isinstance(f, (Table, LongTable))]
assert len(tables) == 3
def test_empty_findings_returns_empty_list(self):
"""No findings → no flowables. Callers can extend(...) safely."""
generator = _make_concrete_generator()
assert generator._create_findings_tables([]) == []
def test_single_chunk_has_no_spacer(self):
"""A single sub-table must not emit a trailing spacer."""
generator = _make_concrete_generator()
findings = [_DummyFinding(check_id="c3") for _ in range(10)]
flowables = generator._create_findings_tables(findings, chunk_size=300)
assert len(flowables) == 1
assert isinstance(flowables[0], (Table, LongTable))
def test_malformed_finding_is_skipped(self):
"""A broken finding must not abort the report; it is logged and skipped."""
generator = _make_concrete_generator()
class _Broken:
# No attributes at all; getattr() defaults will mostly cope, but
# we force an explicit error by making the metadata attribute
# itself raise on access.
@property
def metadata(self):
raise RuntimeError("boom")
check_id = "broken"
findings = [
_DummyFinding(check_id="c4"),
_Broken(),
_DummyFinding(check_id="c4"),
]
flowables = generator._create_findings_tables(findings, chunk_size=300)
# Two good rows → one sub-table containing them; the broken one is
# logged and dropped, not propagated.
tables = [f for f in flowables if isinstance(f, (Table, LongTable))]
assert len(tables) == 1
def test_create_findings_table_alias_returns_first_chunk(self):
"""The deprecated alias must keep returning a single Table flowable."""
generator = _make_concrete_generator()
findings = [_DummyFinding(check_id="c5") for _ in range(700)]
first = generator._create_findings_table(findings)
assert isinstance(first, (Table, LongTable))
def test_create_findings_table_alias_empty(self):
"""Alias on empty input returns an empty (header-only) Table, not None."""
generator = _make_concrete_generator()
result = generator._create_findings_table([])
# The legacy alias never returned None; an empty header-only table
# is a strict superset of that contract.
assert isinstance(result, (Table, LongTable))
# =============================================================================
# Logging Context Manager Tests (PROWLER-1733)
# =============================================================================
class TestLogPhaseContextManager:
"""Tests for ``_log_phase`` (PROWLER-1733).
The context manager emits structured ``phase_start`` / ``phase_end``
logs with ``scan_id``, ``framework`` and ``elapsed_s``, so Datadog/
CloudWatch queries can pivot by scan and find the slow section.
"""
def test_emits_start_and_end_with_elapsed_and_rss(self, caplog):
from tasks.jobs.reports.base import _log_phase
caplog.set_level("INFO", logger="tasks.jobs.reports.base")
with _log_phase("unit_test_phase", scan_id="s-1", framework="Test FW"):
pass
messages = [r.getMessage() for r in caplog.records]
starts = [m for m in messages if "phase_start" in m]
ends = [m for m in messages if "phase_end" in m]
assert len(starts) == 1 and len(ends) == 1
assert "phase=unit_test_phase" in starts[0]
assert "scan_id=s-1" in starts[0]
assert "framework=Test FW" in starts[0]
assert "elapsed_s=" in ends[0]
assert "rss_kb=" in ends[0]
assert "delta_rss_kb=" in ends[0]
def test_failure_logs_phase_failed_and_reraises(self, caplog):
from tasks.jobs.reports.base import _log_phase
caplog.set_level("INFO", logger="tasks.jobs.reports.base")
with pytest.raises(RuntimeError, match="boom"):
with _log_phase("failing_phase", scan_id="s-2", framework="FW"):
raise RuntimeError("boom")
messages = [r.getMessage() for r in caplog.records]
assert any("phase_failed" in m and "failing_phase" in m for m in messages)
# No phase_end on the failure path.
assert not any("phase_end" in m for m in messages)
@@ -1,532 +0,0 @@
from unittest.mock import Mock, patch
import pytest
from reportlab.platypus import Image, LongTable, Paragraph, Table
from tasks.jobs.reports import FRAMEWORK_REGISTRY, ComplianceData, RequirementData
from tasks.jobs.reports.cis import (
CISReportGenerator,
_normalize_profile,
_profile_badge_text,
)
from api.models import StatusChoices
# =============================================================================
# Fixtures
# =============================================================================
@pytest.fixture
def cis_generator():
"""Create a CISReportGenerator instance for testing."""
config = FRAMEWORK_REGISTRY["cis"]
return CISReportGenerator(config)
def _make_attr(
section: str,
profile_value: str = "Level 1",
assessment_value: str = "Automated",
sub_section: str = "",
**extras,
) -> Mock:
"""Build a mock CIS_Requirement_Attribute with duck-typed fields."""
attr = Mock()
attr.Section = section
attr.SubSection = sub_section
# CIS enums have `.value`. Use a simple Mock that exposes `.value`.
attr.Profile = Mock(value=profile_value)
attr.AssessmentStatus = Mock(value=assessment_value)
attr.Description = extras.get("description", "desc")
attr.RationaleStatement = extras.get("rationale", "the rationale")
attr.ImpactStatement = extras.get("impact", "the impact")
attr.RemediationProcedure = extras.get("remediation", "the remediation")
attr.AuditProcedure = extras.get("audit", "the audit")
attr.AdditionalInformation = ""
attr.DefaultValue = ""
attr.References = extras.get("references", "https://example.com")
return attr
@pytest.fixture
def basic_cis_compliance_data():
"""Create basic ComplianceData for CIS testing (no requirements)."""
return ComplianceData(
tenant_id="tenant-123",
scan_id="scan-456",
provider_id="provider-789",
compliance_id="cis_5.0_aws",
framework="CIS",
name="CIS Amazon Web Services Foundations Benchmark v5.0.0",
version="5.0",
description="Center for Internet Security AWS Foundations Benchmark",
)
@pytest.fixture
def populated_cis_compliance_data(basic_cis_compliance_data):
"""CIS data with mixed requirements across 2 sections, Profile L1/L2, Pass/Fail/Manual."""
data = basic_cis_compliance_data
data.requirements = [
RequirementData(
id="1.1",
description="Maintain current contact details",
status=StatusChoices.PASS,
passed_findings=5,
failed_findings=0,
total_findings=5,
checks=["aws_check_1"],
),
RequirementData(
id="1.2",
description="Ensure root account has no access keys",
status=StatusChoices.FAIL,
passed_findings=0,
failed_findings=3,
total_findings=3,
checks=["aws_check_2"],
),
RequirementData(
id="1.3",
description="Ensure MFA is enabled for all IAM users",
status=StatusChoices.MANUAL,
checks=[],
),
RequirementData(
id="2.1",
description="Ensure S3 Buckets are logging",
status=StatusChoices.PASS,
passed_findings=2,
failed_findings=0,
total_findings=2,
checks=["aws_check_3"],
),
RequirementData(
id="2.2",
description="Ensure encryption at rest is enabled",
status=StatusChoices.FAIL,
passed_findings=0,
failed_findings=4,
total_findings=4,
checks=["aws_check_4"],
),
]
data.attributes_by_requirement_id = {
"1.1": {
"attributes": {
"req_attributes": [
_make_attr(
"1 Identity and Access Management",
profile_value="Level 1",
assessment_value="Automated",
)
],
"checks": ["aws_check_1"],
}
},
"1.2": {
"attributes": {
"req_attributes": [
_make_attr(
"1 Identity and Access Management",
profile_value="Level 1",
assessment_value="Automated",
)
],
"checks": ["aws_check_2"],
}
},
"1.3": {
"attributes": {
"req_attributes": [
_make_attr(
"1 Identity and Access Management",
profile_value="Level 2",
assessment_value="Manual",
)
],
"checks": [],
}
},
"2.1": {
"attributes": {
"req_attributes": [
_make_attr(
"2 Storage",
profile_value="Level 2",
assessment_value="Automated",
)
],
"checks": ["aws_check_3"],
}
},
"2.2": {
"attributes": {
"req_attributes": [
_make_attr(
"2 Storage",
profile_value="Level 1",
assessment_value="Automated",
)
],
"checks": ["aws_check_4"],
}
},
}
return data
# =============================================================================
# Helper function tests
# =============================================================================
class TestNormalizeProfile:
"""Test suite for _normalize_profile helper."""
def test_level_1_string(self):
assert _normalize_profile(Mock(value="Level 1")) == "L1"
def test_level_2_string(self):
assert _normalize_profile(Mock(value="Level 2")) == "L2"
def test_e3_level_1(self):
assert _normalize_profile(Mock(value="E3 Level 1")) == "L1"
def test_e5_level_2(self):
assert _normalize_profile(Mock(value="E5 Level 2")) == "L2"
def test_none_returns_other(self):
assert _normalize_profile(None) == "Other"
def test_substring_trap_rejected(self):
"""Unrelated tokens containing the literal ``L2`` must NOT map to L2."""
# A future enum value like "CL2 Kubernetes Worker" would be silently
# misclassified by a naive substring check.
assert _normalize_profile(Mock(value="CL2 Worker")) == "Other"
assert _normalize_profile(Mock(value="HL2 Legacy")) == "Other"
def test_raw_string_level_1(self):
# Mock without .value falls back to str(profile); use a real string
class NoValue:
def __str__(self):
return "Level 1"
assert _normalize_profile(NoValue()) == "L1"
def test_unknown_profile_returns_other(self):
assert _normalize_profile(Mock(value="Custom Profile")) == "Other"
class TestProfileBadgeText:
def test_l1_label(self):
assert _profile_badge_text("L1") == "Level 1"
def test_l2_label(self):
assert _profile_badge_text("L2") == "Level 2"
def test_other_label(self):
assert _profile_badge_text("Other") == "Other"
# =============================================================================
# Generator initialization
# =============================================================================
class TestCISGeneratorInitialization:
def test_generator_created(self, cis_generator):
assert cis_generator is not None
assert cis_generator.config.name == "cis"
def test_generator_language(self, cis_generator):
assert cis_generator.config.language == "en"
def test_generator_sections_dynamic(self, cis_generator):
# CIS sections differ per variant so config.sections MUST be None
assert cis_generator.config.sections is None
def test_attribute_fields_contain_cis_specific(self, cis_generator):
for field in ("Profile", "AssessmentStatus", "RationaleStatement"):
assert field in cis_generator.config.attribute_fields
# =============================================================================
# _derive_sections
# =============================================================================
class TestDeriveSections:
def test_preserves_first_seen_order(
self, cis_generator, populated_cis_compliance_data
):
sections = cis_generator._derive_sections(populated_cis_compliance_data)
assert sections == [
"1 Identity and Access Management",
"2 Storage",
]
def test_deduplicates_sections(self, cis_generator, basic_cis_compliance_data):
basic_cis_compliance_data.requirements = [
RequirementData(id="1.1", description="a", status=StatusChoices.PASS),
RequirementData(id="1.2", description="b", status=StatusChoices.PASS),
]
attr = _make_attr("1 IAM")
basic_cis_compliance_data.attributes_by_requirement_id = {
"1.1": {"attributes": {"req_attributes": [attr], "checks": []}},
"1.2": {"attributes": {"req_attributes": [attr], "checks": []}},
}
assert cis_generator._derive_sections(basic_cis_compliance_data) == ["1 IAM"]
def test_empty_data_returns_empty(self, cis_generator, basic_cis_compliance_data):
basic_cis_compliance_data.requirements = []
basic_cis_compliance_data.attributes_by_requirement_id = {}
assert cis_generator._derive_sections(basic_cis_compliance_data) == []
# =============================================================================
# _compute_statistics
# =============================================================================
class TestComputeStatistics:
def test_totals(self, cis_generator, populated_cis_compliance_data):
stats = cis_generator._compute_statistics(populated_cis_compliance_data)
assert stats["total"] == 5
assert stats["passed"] == 2
assert stats["failed"] == 2
assert stats["manual"] == 1
def test_overall_compliance_excludes_manual(
self, cis_generator, populated_cis_compliance_data
):
stats = cis_generator._compute_statistics(populated_cis_compliance_data)
# 2 passed / 4 evaluated (pass + fail) = 50%
assert stats["overall_compliance"] == pytest.approx(50.0)
def test_overall_compliance_all_manual(
self, cis_generator, basic_cis_compliance_data
):
basic_cis_compliance_data.requirements = [
RequirementData(id="x", description="d", status=StatusChoices.MANUAL),
]
attr = _make_attr("1 IAM", profile_value="Level 1", assessment_value="Manual")
basic_cis_compliance_data.attributes_by_requirement_id = {
"x": {"attributes": {"req_attributes": [attr], "checks": []}},
}
stats = cis_generator._compute_statistics(basic_cis_compliance_data)
# No evaluated requirements → defaults to 100%
assert stats["overall_compliance"] == 100.0
def test_profile_counts(self, cis_generator, populated_cis_compliance_data):
stats = cis_generator._compute_statistics(populated_cis_compliance_data)
profile = stats["profile_counts"]
# From fixture:
# L1: 1.1 (PASS, Auto), 1.2 (FAIL, Auto), 2.2 (FAIL, Auto) → pass=1, fail=2, manual=0
# L2: 1.3 (MANUAL, Manual), 2.1 (PASS, Auto) → pass=1, fail=0, manual=1
assert profile["L1"] == {"passed": 1, "failed": 2, "manual": 0}
assert profile["L2"] == {"passed": 1, "failed": 0, "manual": 1}
def test_assessment_counts(self, cis_generator, populated_cis_compliance_data):
stats = cis_generator._compute_statistics(populated_cis_compliance_data)
assessment = stats["assessment_counts"]
# Automated: 1.1 PASS, 1.2 FAIL, 2.1 PASS, 2.2 FAIL → pass=2, fail=2, manual=0
# Manual: 1.3 MANUAL → pass=0, fail=0, manual=1
assert assessment["Automated"] == {"passed": 2, "failed": 2, "manual": 0}
assert assessment["Manual"] == {"passed": 0, "failed": 0, "manual": 1}
def test_top_failing_sections_includes_all_evaluated(
self, cis_generator, populated_cis_compliance_data
):
stats = cis_generator._compute_statistics(populated_cis_compliance_data)
top = stats["top_failing_sections"]
# Both sections have 1 PASS + 1 FAIL evaluated → tied at 50%. The
# sort is stable, so both must appear; the list itself is capped at
# 5 entries.
assert len(top) == 2
section_names = {name for name, _ in top}
assert section_names == {
"1 Identity and Access Management",
"2 Storage",
}
def test_compute_statistics_is_memoized(
self, cis_generator, populated_cis_compliance_data
):
"""Calling ``_compute_statistics`` twice with the same data must
reuse the cached value and not re-run the uncached kernel."""
with patch.object(
CISReportGenerator,
"_compute_statistics_uncached",
wraps=cis_generator._compute_statistics_uncached,
) as spy:
cis_generator._compute_statistics(populated_cis_compliance_data)
cis_generator._compute_statistics(populated_cis_compliance_data)
assert spy.call_count == 1
# =============================================================================
# Executive summary
# =============================================================================
class TestCISExecutiveSummary:
def test_title_present(self, cis_generator, populated_cis_compliance_data):
elements = cis_generator.create_executive_summary(populated_cis_compliance_data)
paragraphs = [e for e in elements if isinstance(e, Paragraph)]
text = " ".join(str(p.text) for p in paragraphs)
assert "Executive Summary" in text
def test_tables_rendered(self, cis_generator, populated_cis_compliance_data):
elements = cis_generator.create_executive_summary(populated_cis_compliance_data)
tables = [e for e in elements if isinstance(e, Table)]
# Exact count: Summary, Profile, Assessment, Top Failing Sections = 4.
assert len(tables) == 4
def test_no_requirements(self, cis_generator, basic_cis_compliance_data):
basic_cis_compliance_data.requirements = []
basic_cis_compliance_data.attributes_by_requirement_id = {}
elements = cis_generator.create_executive_summary(basic_cis_compliance_data)
# With no requirements: Summary table always renders, and both Profile
# and Assessment breakdown tables render with a 0-filled default row,
# but Top Failing Sections is suppressed → exactly 3 tables.
tables = [e for e in elements if isinstance(e, Table)]
assert len(tables) == 3
# =============================================================================
# Charts section
# =============================================================================
class TestCISChartsSection:
def test_charts_rendered(self, cis_generator, populated_cis_compliance_data):
elements = cis_generator.create_charts_section(populated_cis_compliance_data)
# At least 1 image for the pie + 1 for section bar + 1 for stacked
images = [e for e in elements if isinstance(e, Image)]
assert len(images) >= 1
def test_charts_no_data_no_crash(self, cis_generator, basic_cis_compliance_data):
basic_cis_compliance_data.requirements = []
basic_cis_compliance_data.attributes_by_requirement_id = {}
elements = cis_generator.create_charts_section(basic_cis_compliance_data)
# Must not raise; may or may not have any Image
assert isinstance(elements, list)
# =============================================================================
# Requirements index
# =============================================================================
class TestCISRequirementsIndex:
def test_title_present(self, cis_generator, populated_cis_compliance_data):
elements = cis_generator.create_requirements_index(
populated_cis_compliance_data
)
paragraphs = [e for e in elements if isinstance(e, Paragraph)]
text = " ".join(str(p.text) for p in paragraphs)
assert "Requirements Index" in text
def test_groups_by_section(self, cis_generator, populated_cis_compliance_data):
elements = cis_generator.create_requirements_index(
populated_cis_compliance_data
)
paragraphs = [e for e in elements if isinstance(e, Paragraph)]
text = " ".join(str(p.text) for p in paragraphs)
assert "1 Identity and Access Management" in text
assert "2 Storage" in text
def test_renders_tables_per_section(
self, cis_generator, populated_cis_compliance_data
):
elements = cis_generator.create_requirements_index(
populated_cis_compliance_data
)
# One table per section with requirements. ``create_data_table``
# returns a LongTable when the row count exceeds its threshold and a
# plain Table otherwise — both are valid.
tables = [e for e in elements if isinstance(e, (Table, LongTable))]
assert len(tables) == 2
# =============================================================================
# Detailed findings extras hook
# =============================================================================
class TestRenderRequirementDetailExtras:
def test_inserts_all_fields(self, cis_generator, populated_cis_compliance_data):
req = populated_cis_compliance_data.requirements[1] # 1.2 FAIL
extras = cis_generator._render_requirement_detail_extras(
req, populated_cis_compliance_data
)
text = " ".join(str(p.text) for p in extras if isinstance(p, Paragraph))
assert "Rationale" in text
assert "Impact" in text
assert "Audit Procedure" in text
assert "Remediation" in text
assert "References" in text
def test_missing_metadata_returns_empty(
self, cis_generator, basic_cis_compliance_data
):
basic_cis_compliance_data.attributes_by_requirement_id = {}
req = RequirementData(id="99", description="unknown", status=StatusChoices.FAIL)
extras = cis_generator._render_requirement_detail_extras(
req, basic_cis_compliance_data
)
assert extras == []
def test_escapes_html_chars(self, cis_generator, basic_cis_compliance_data):
attr = _make_attr(
"1 IAM",
rationale="<script>alert('x')</script>",
)
basic_cis_compliance_data.attributes_by_requirement_id = {
"1.1": {"attributes": {"req_attributes": [attr], "checks": []}}
}
req = RequirementData(id="1.1", description="d", status=StatusChoices.FAIL)
extras = cis_generator._render_requirement_detail_extras(
req, basic_cis_compliance_data
)
text = " ".join(str(p.text) for p in extras if isinstance(p, Paragraph))
assert "<script>" not in text
assert "&lt;script&gt;" in text
# =============================================================================
# Cover page
# =============================================================================
class TestCISCoverPage:
@patch("tasks.jobs.reports.cis.Image")
def test_cover_page_has_logo(
self, mock_image, cis_generator, basic_cis_compliance_data
):
elements = cis_generator.create_cover_page(basic_cis_compliance_data)
assert len(elements) > 0
assert mock_image.call_count >= 1
def test_cover_page_title_includes_version(
self, cis_generator, basic_cis_compliance_data
):
elements = cis_generator.create_cover_page(basic_cis_compliance_data)
paragraphs = [e for e in elements if isinstance(e, Paragraph)]
content = " ".join(str(p.text) for p in paragraphs)
assert "CIS Benchmark" in content
assert "5.0" in content
def test_cover_page_title_includes_provider_when_set(
self, cis_generator, basic_cis_compliance_data
):
provider = Mock()
provider.provider = "aws"
provider.uid = "123456789012"
provider.alias = "test-account"
basic_cis_compliance_data.provider_obj = provider
elements = cis_generator.create_cover_page(basic_cis_compliance_data)
paragraphs = [e for e in elements if isinstance(e, Paragraph)]
content = " ".join(str(p.text) for p in paragraphs)
assert "AWS" in content
@@ -24,7 +24,6 @@ from tasks.jobs.scan import (
aggregate_findings,
create_compliance_requirements,
perform_prowler_scan,
reset_ephemeral_resource_findings_count,
update_provider_compliance_scores,
)
from tasks.utils import CustomEncoder
@@ -36,7 +35,6 @@ from api.models import (
MuteRule,
Provider,
Resource,
ResourceScanSummary,
Scan,
ScanSummary,
StateChoices,
@@ -3361,85 +3359,6 @@ class TestAggregateFindings:
regions = {s.region for s in summaries}
assert regions == {"us-east-1", "us-west-2"}
@patch("tasks.jobs.scan.Finding.objects.filter")
@patch("tasks.jobs.scan.ScanSummary.objects.bulk_create")
@patch("tasks.jobs.scan.rls_transaction")
def test_aggregate_findings_skips_rows_with_null_service_or_region(
self, mock_rls_transaction, mock_bulk_create, mock_findings_filter
):
"""Aggregation rows with NULL service or region (orphan Findings whose
ResourceFindingMapping is missing) must be dropped before
``bulk_create`` so the NOT NULL constraints on ``scan_summaries`` are
not violated. Valid rows in the same batch must still be persisted."""
tenant_id = str(uuid.uuid4())
scan_id = str(uuid.uuid4())
base_counts = {
"fail": 1,
"_pass": 0,
"muted_count": 0,
"total": 1,
"new": 0,
"changed": 0,
"unchanged": 1,
"fail_new": 0,
"fail_changed": 0,
"pass_new": 0,
"pass_changed": 0,
"muted_new": 0,
"muted_changed": 0,
}
mock_queryset = MagicMock()
mock_queryset.values.return_value = mock_queryset
mock_queryset.annotate.return_value = [
{
"check_id": "check_valid",
"resources__service": "s3",
"severity": "high",
"resources__region": "us-east-1",
**base_counts,
},
{
"check_id": "check_null_service",
"resources__service": None,
"severity": "high",
"resources__region": "us-east-1",
**base_counts,
},
{
"check_id": "check_null_region",
"resources__service": "ec2",
"severity": "low",
"resources__region": None,
**base_counts,
},
{
"check_id": "check_null_both",
"resources__service": None,
"severity": "medium",
"resources__region": None,
**base_counts,
},
]
ctx = MagicMock()
ctx.__enter__.return_value = None
ctx.__exit__.return_value = False
mock_rls_transaction.return_value = ctx
mock_findings_filter.return_value = mock_queryset
aggregate_findings(tenant_id, scan_id)
mock_bulk_create.assert_called_once()
args, _ = mock_bulk_create.call_args
summaries = list(args[0])
assert len(summaries) == 1
assert summaries[0].check_id == "check_valid"
assert summaries[0].service == "s3"
assert summaries[0].region == "us-east-1"
def test_aggregate_findings_is_idempotent_on_rerun(
self,
tenants_fixture,
@@ -3447,24 +3366,14 @@ class TestAggregateFindings:
findings_fixture,
):
"""Re-running `aggregate_findings` for the same scan must not violate
the `unique_scan_summary` constraint. The post-mute reaggregation
pipeline re-dispatches `perform_scan_summary_task` against scans
whose summaries already exist; upsert must update existing rows in
place (same primary keys) rather than inserting duplicates."""
the `unique_scan_summary` constraint, and the resulting row set for
the scan must match the single-run output. This is exercised by the
post-mute reaggregation pipeline, which re-dispatches
`perform_scan_summary_task` against scans whose summaries already
exist."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
value_columns = (
"check_id",
"service",
"severity",
"region",
"fail",
"_pass",
"muted",
"total",
)
aggregate_findings(str(tenant.id), str(scan.id))
first_run_ids = set(
ScanSummary.all_objects.filter(
@@ -3473,11 +3382,19 @@ class TestAggregateFindings:
)
first_run_rows = list(
ScanSummary.all_objects.filter(tenant_id=tenant.id, scan_id=scan.id).values(
*value_columns
"check_id",
"service",
"severity",
"region",
"fail",
"_pass",
"muted",
"total",
)
)
# Second invocation must not raise and must not duplicate rows.
# Second invocation must not raise and must replace the rows without
# leaving duplicates behind.
aggregate_findings(str(tenant.id), str(scan.id))
second_run_ids = set(
ScanSummary.all_objects.filter(
@@ -3486,49 +3403,19 @@ class TestAggregateFindings:
)
second_run_rows = list(
ScanSummary.all_objects.filter(tenant_id=tenant.id, scan_id=scan.id).values(
*value_columns
"check_id",
"service",
"severity",
"region",
"fail",
"_pass",
"muted",
"total",
)
)
# Upsert preserves the original row identities; values stay stable
# because the underlying Finding set is unchanged between runs.
assert second_run_rows == first_run_rows
assert first_run_ids == second_run_ids
def test_aggregate_findings_reflects_mute_between_runs(
self,
tenants_fixture,
scans_fixture,
findings_fixture,
):
"""Re-running `aggregate_findings` after a finding is muted between
runs must move counters: the matching ScanSummary row's `fail`
decrements and `muted` increments. Guards against a regression where
upsert silently keeps stale values from the first run."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
finding1, _ = findings_fixture # finding1 is FAIL and not muted.
aggregate_findings(str(tenant.id), str(scan.id))
before = ScanSummary.all_objects.get(
tenant_id=tenant.id,
scan_id=scan.id,
check_id=finding1.check_id,
service="ec2",
severity=finding1.severity,
region="us-east-1",
)
assert before.fail == 1
assert before.muted == 0
Finding.all_objects.filter(pk=finding1.pk).update(muted=True)
aggregate_findings(str(tenant.id), str(scan.id))
after = ScanSummary.all_objects.get(pk=before.pk)
assert after.fail == 0
assert after.muted == 1
assert after.total == before.total
assert first_run_ids.isdisjoint(second_run_ids)
@pytest.mark.django_db
@@ -3853,7 +3740,6 @@ class TestAggregateAttackSurface:
in result["privilege-escalation"]
)
assert "ec2_instance_imdsv2_enabled" in result["ec2-imdsv1"]
assert "ec2_instance_account_imdsv2_enabled" in result["ec2-imdsv1"]
@patch("tasks.jobs.scan.AttackSurfaceOverview.objects.bulk_create")
@patch("tasks.jobs.scan.Finding.all_objects.filter")
@@ -4338,315 +4224,3 @@ class TestUpdateProviderComplianceScores:
assert any("provider_compliance_scores" in c for c in calls)
assert any("tenant_compliance_summaries" in c for c in calls)
assert any("pg_advisory_xact_lock" in c for c in calls)
class TestScanIsFullScope:
def _live_trigger(self):
return Scan.TriggerChoices.MANUAL
@pytest.mark.parametrize(
"scanner_args",
[
{},
{"unrelated": "value"},
{"checks": None},
{"services": []},
{"severities": ""},
],
)
def test_full_scope_when_no_filters_present(self, scanner_args):
scan = Scan(scanner_args=scanner_args, trigger=self._live_trigger())
assert scan.is_full_scope() is True
def test_full_scope_covers_every_sdk_kwarg(self):
# Lock the predicate to whatever ProwlerScan's __init__ exposes today.
# If the SDK adds a new filter, this test still passes via the
# introspection-driven derivation; if it adds a non-filter kwarg
# (e.g. provider-like), keep the exclusion list in sync in models.py.
from prowler.lib.scan.scan import Scan as ProwlerScan
import inspect
expected = tuple(
name
for name in inspect.signature(ProwlerScan.__init__).parameters
if name not in ("self", "provider")
)
assert Scan.get_scoping_scanner_arg_keys() == expected
# Spot-check a few well-known filters survive the introspection.
assert "checks" in expected
assert "services" in expected
assert "severities" in expected
def test_partial_scope_for_each_sdk_filter(self):
for key in Scan.get_scoping_scanner_arg_keys():
scan = Scan(scanner_args={key: ["x"]}, trigger=self._live_trigger())
assert scan.is_full_scope() is False, f"{key} should mark scan as partial"
def test_imported_scan_is_never_full_scope(self):
# Forward-defensive: any trigger outside LIVE_SCAN_TRIGGERS (e.g. a
# future "imported" trigger) must never qualify, even with empty args.
scan = Scan(scanner_args={}, trigger="imported")
assert scan.is_full_scope() is False
def test_handles_none_scanner_args(self):
scan = Scan(scanner_args=None, trigger=self._live_trigger())
assert scan.is_full_scope() is True
@pytest.mark.django_db
class TestResetEphemeralResourceFindingsCount:
def _make_scan_summary(self, tenant_id, scan_id, resource):
return ResourceScanSummary.objects.create(
tenant_id=tenant_id,
scan_id=scan_id,
resource_id=resource.id,
service=resource.service,
region=resource.region,
resource_type=resource.type,
)
def test_resets_only_resources_missing_from_full_scope_scan(
self, tenants_fixture, scans_fixture, providers_fixture, resources_fixture
):
tenant, *_ = tenants_fixture
scan1, scan2, *_ = scans_fixture
resource1, resource2, resource3 = resources_fixture
Resource.objects.filter(id=resource1.id).update(failed_findings_count=3)
Resource.objects.filter(id=resource2.id).update(failed_findings_count=5)
Resource.objects.filter(id=resource3.id).update(failed_findings_count=7)
# Only resource1 was scanned in scan1; resource2 is ephemeral.
self._make_scan_summary(tenant.id, scan1.id, resource1)
result = reset_ephemeral_resource_findings_count(
tenant_id=str(tenant.id), scan_id=str(scan1.id)
)
assert result["status"] == "completed"
assert result["reset"] == 1
resource1.refresh_from_db()
resource2.refresh_from_db()
resource3.refresh_from_db()
assert resource1.failed_findings_count == 3
assert resource2.failed_findings_count == 0
# Other provider's resource is never touched.
assert resource3.failed_findings_count == 7
def test_skips_when_scan_not_completed(
self, tenants_fixture, scans_fixture, resources_fixture
):
tenant, *_ = tenants_fixture
scan1, *_ = scans_fixture
resource1, resource2, _ = resources_fixture
Scan.objects.filter(id=scan1.id).update(state=StateChoices.EXECUTING)
Resource.objects.filter(id=resource2.id).update(failed_findings_count=5)
result = reset_ephemeral_resource_findings_count(
tenant_id=str(tenant.id), scan_id=str(scan1.id)
)
assert result["status"] == "skipped"
assert result["reason"] == "scan not completed"
resource2.refresh_from_db()
assert resource2.failed_findings_count == 5
def test_skips_when_scan_has_scoping_filters(
self, tenants_fixture, scans_fixture, resources_fixture
):
tenant, *_ = tenants_fixture
scan1, *_ = scans_fixture
_, resource2, _ = resources_fixture
Scan.objects.filter(id=scan1.id).update(scanner_args={"checks": ["check1"]})
Resource.objects.filter(id=resource2.id).update(failed_findings_count=5)
result = reset_ephemeral_resource_findings_count(
tenant_id=str(tenant.id), scan_id=str(scan1.id)
)
assert result["status"] == "skipped"
assert result["reason"] == "partial scan scope"
resource2.refresh_from_db()
assert resource2.failed_findings_count == 5
def test_skips_when_scan_not_found(self, tenants_fixture):
tenant, *_ = tenants_fixture
result = reset_ephemeral_resource_findings_count(
tenant_id=str(tenant.id), scan_id=str(uuid.uuid4())
)
assert result["status"] == "skipped"
assert result["reason"] == "scan not found"
def test_skips_when_newer_scan_completed_for_same_provider(
self, tenants_fixture, scans_fixture, providers_fixture, resources_fixture
):
# If a newer completed scan exists for the same provider, our
# ResourceScanSummary set is stale relative to the resources' current
# counts, and applying the diff would corrupt them.
from datetime import timedelta
tenant, *_ = tenants_fixture
scan1, *_ = scans_fixture
provider, *_ = providers_fixture
_, resource2, _ = resources_fixture
Resource.objects.filter(id=resource2.id).update(failed_findings_count=5)
# Create a newer COMPLETED scan for the same provider, with an
# explicit completed_at strictly after scan1's so ordering is
# deterministic regardless of clock resolution.
newer_completed_at = scan1.completed_at + timedelta(minutes=5)
Scan.objects.create(
name="Newer Scan",
provider=provider,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant_id=tenant.id,
started_at=newer_completed_at,
completed_at=newer_completed_at,
)
result = reset_ephemeral_resource_findings_count(
tenant_id=str(tenant.id), scan_id=str(scan1.id)
)
assert result["status"] == "skipped"
assert result["reason"] == "newer scan exists"
resource2.refresh_from_db()
assert resource2.failed_findings_count == 5
def test_does_not_touch_other_providers_resources(
self, tenants_fixture, scans_fixture, providers_fixture, resources_fixture
):
tenant, *_ = tenants_fixture
scan1, *_ = scans_fixture
_, _, resource3 = resources_fixture
# resource3 belongs to provider2 with failed_findings_count > 0 and is
# not in scan1's summary. It MUST NOT be reset.
Resource.objects.filter(id=resource3.id).update(failed_findings_count=9)
result = reset_ephemeral_resource_findings_count(
tenant_id=str(tenant.id), scan_id=str(scan1.id)
)
assert result["status"] == "completed"
assert result["reset"] == 0
resource3.refresh_from_db()
assert resource3.failed_findings_count == 9
def test_resources_already_zero_are_not_rewritten(
self, tenants_fixture, scans_fixture, resources_fixture
):
tenant, *_ = tenants_fixture
scan1, *_ = scans_fixture
resource1, resource2, _ = resources_fixture
# Both resources already at 0, neither in summary -> nothing to update.
Resource.objects.filter(id=resource1.id).update(failed_findings_count=0)
Resource.objects.filter(id=resource2.id).update(failed_findings_count=0)
result = reset_ephemeral_resource_findings_count(
tenant_id=str(tenant.id), scan_id=str(scan1.id)
)
assert result["status"] == "completed"
assert result["reset"] == 0
def test_skips_when_summaries_missing_for_scan_with_resources(
self, tenants_fixture, scans_fixture, resources_fixture
):
# Catastrophic guard: if a scan reports unique_resource_count > 0 but
# no ResourceScanSummary rows are persisted (e.g. bulk_create silently
# failed), the anti-join would classify EVERY resource as ephemeral
# and zero their counts. The gate must skip and preserve the data.
tenant, *_ = tenants_fixture
scan1, *_ = scans_fixture
resource1, resource2, _ = resources_fixture
Scan.objects.filter(id=scan1.id).update(unique_resource_count=10)
Resource.objects.filter(id=resource1.id).update(failed_findings_count=3)
Resource.objects.filter(id=resource2.id).update(failed_findings_count=5)
result = reset_ephemeral_resource_findings_count(
tenant_id=str(tenant.id), scan_id=str(scan1.id)
)
assert result["status"] == "skipped"
assert result["reason"] == "summaries missing"
resource1.refresh_from_db()
resource2.refresh_from_db()
assert resource1.failed_findings_count == 3
assert resource2.failed_findings_count == 5
def test_ignores_sibling_scan_with_null_completed_at(
self, tenants_fixture, scans_fixture, providers_fixture, resources_fixture
):
# Postgres orders NULL first in DESC; a sibling COMPLETED scan with a
# missing completed_at must not be treated as the latest scan and
# cause us to incorrectly skip the reset.
tenant, *_ = tenants_fixture
scan1, *_ = scans_fixture
provider, *_ = providers_fixture
resource1, resource2, _ = resources_fixture
Resource.objects.filter(id=resource2.id).update(failed_findings_count=5)
self._make_scan_summary(tenant.id, scan1.id, resource1)
Scan.objects.create(
name="Ghost Scan",
provider=provider,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant_id=tenant.id,
started_at=scan1.completed_at,
completed_at=None,
)
result = reset_ephemeral_resource_findings_count(
tenant_id=str(tenant.id), scan_id=str(scan1.id)
)
assert result["status"] == "completed"
assert result["reset"] == 1
resource2.refresh_from_db()
assert resource2.failed_findings_count == 0
def test_batches_updates_when_many_ephemeral_resources(
self, tenants_fixture, scans_fixture, resources_fixture
):
# Forces multiple batches to confirm the chunked UPDATE path executes
# cleanly and the count is the sum across batches.
tenant, *_ = tenants_fixture
scan1, *_ = scans_fixture
resource1, resource2, _ = resources_fixture
Resource.objects.filter(id=resource1.id).update(failed_findings_count=2)
Resource.objects.filter(id=resource2.id).update(failed_findings_count=4)
# No ResourceScanSummary -> both resource1 and resource2 are ephemeral.
# Force a 1-row batch via the shared findings batch size knob.
with patch("tasks.jobs.scan.DJANGO_FINDINGS_BATCH_SIZE", 1):
result = reset_ephemeral_resource_findings_count(
tenant_id=str(tenant.id), scan_id=str(scan1.id)
)
assert result["status"] == "completed"
assert result["reset"] == 2
resource1.refresh_from_db()
resource2.refresh_from_db()
assert resource1.failed_findings_count == 0
assert resource2.failed_findings_count == 0
@@ -13,8 +13,6 @@ from tasks.jobs.lighthouse_providers import (
_extract_bedrock_credentials,
)
from tasks.tasks import (
DJANGO_TMP_OUTPUT_DIRECTORY,
STALE_TMP_OUTPUT_MAX_AGE_HOURS,
_cleanup_orphan_scheduled_scans,
_perform_scan_complete_tasks,
check_integrations_task,
@@ -238,8 +236,7 @@ class TestGenerateOutputs:
self.provider_id = str(uuid.uuid4())
self.tenant_id = str(uuid.uuid4())
@patch("tasks.tasks._cleanup_stale_tmp_output_directories")
def test_no_findings_returns_early(self, mock_cleanup_stale_tmp_output_directories):
def test_no_findings_returns_early(self):
with patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter:
mock_filter.return_value.exists.return_value = False
@@ -251,34 +248,6 @@ class TestGenerateOutputs:
assert result == {"upload": False}
mock_filter.assert_called_once_with(scan_id=self.scan_id)
mock_cleanup_stale_tmp_output_directories.assert_called_once_with(
DJANGO_TMP_OUTPUT_DIRECTORY,
max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS,
exclude_scan=(self.tenant_id, self.scan_id),
)
@patch(
"tasks.tasks._cleanup_stale_tmp_output_directories",
side_effect=RuntimeError("cleanup boom"),
)
def test_cleanup_exception_does_not_break_no_findings_flow(
self, mock_cleanup_stale_tmp_output_directories
):
with patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter:
mock_filter.return_value.exists.return_value = False
result = generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
assert result == {"upload": False}
mock_cleanup_stale_tmp_output_directories.assert_called_once_with(
DJANGO_TMP_OUTPUT_DIRECTORY,
max_age_hours=STALE_TMP_OUTPUT_MAX_AGE_HOURS,
exclude_scan=(self.tenant_id, self.scan_id),
)
@patch("tasks.tasks._upload_to_s3")
@patch("tasks.tasks._compress_output_files")
@@ -340,7 +309,7 @@ class TestGenerateOutputs:
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda _x: True, MagicMock(name="CSVCompliance"))]},
{"aws": [(lambda x: True, MagicMock(name="CSVCompliance"))]},
),
patch(
"tasks.tasks._generate_output_directory",
@@ -392,7 +361,7 @@ class TestGenerateOutputs:
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda _x: True, MagicMock())]},
{"aws": [(lambda x: True, MagicMock())]},
),
patch("tasks.tasks._compress_output_files", return_value="/tmp/compressed"),
patch("tasks.tasks._upload_to_s3", return_value=None),
@@ -472,7 +441,7 @@ class TestGenerateOutputs:
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda _x: True, mock_compliance_class)]},
{"aws": [(lambda x: True, mock_compliance_class)]},
),
):
mock_filter.return_value.exists.return_value = True
@@ -501,10 +470,6 @@ class TestGenerateOutputs:
class TrackingWriter:
def __init__(self, findings, file_path, file_extension, from_cli):
self.findings = findings
self.file_path = file_path
self.file_extension = file_extension
self.from_cli = from_cli
self.transform_called = 0
self.batch_write_data_to_file = MagicMock()
self._data = []
@@ -613,13 +578,13 @@ class TestGenerateOutputs:
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch(
"tasks.tasks.FindingOutput.transform_api_finding",
side_effect=lambda f, _prov: f,
side_effect=lambda f, prov: f,
),
patch("tasks.tasks._compress_output_files", return_value="outdir.zip"),
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/outdir.zip"),
patch(
"tasks.tasks.Scan.all_objects.filter",
return_value=MagicMock(update=lambda **_kw: None),
return_value=MagicMock(update=lambda **kw: None),
),
patch("tasks.tasks.batched", return_value=two_batches),
patch("tasks.tasks.OUTPUT_FORMATS_MAPPING", {}),
@@ -701,7 +666,7 @@ class TestGenerateOutputs:
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda _x: True, mock_compliance_class)]},
{"aws": [(lambda x: True, mock_compliance_class)]},
),
):
mock_filter.return_value.exists.return_value = True
@@ -783,7 +748,7 @@ class TestScanCompleteTasks:
@patch("tasks.tasks.can_provider_run_attack_paths_scan", return_value=False)
def test_scan_complete_tasks(
self,
_mock_can_run_attack_paths,
mock_can_run_attack_paths,
mock_attack_paths_task,
mock_check_integrations_task,
mock_compliance_reports_task,
@@ -842,72 +807,6 @@ class TestScanCompleteTasks:
# Attack Paths task should be skipped when provider cannot run it
mock_attack_paths_task.assert_not_called()
@pytest.mark.parametrize(
"row_pre_existing",
[True, False],
ids=["row-pre-existing", "row-missing-fallback"],
)
@patch("tasks.tasks.aggregate_attack_surface_task.apply_async")
@patch("tasks.tasks.chain")
@patch("tasks.tasks.create_compliance_requirements_task.si")
@patch("tasks.tasks.update_provider_compliance_scores_task.si")
@patch("tasks.tasks.perform_scan_summary_task.si")
@patch("tasks.tasks.generate_outputs_task.si")
@patch("tasks.tasks.generate_compliance_reports_task.si")
@patch("tasks.tasks.check_integrations_task.si")
@patch("tasks.tasks.attack_paths_db_utils.set_attack_paths_scan_task_id")
@patch("tasks.tasks.attack_paths_db_utils.create_attack_paths_scan")
@patch("tasks.tasks.attack_paths_db_utils.retrieve_attack_paths_scan")
@patch("tasks.tasks.perform_attack_paths_scan_task.apply_async")
@patch("tasks.tasks.can_provider_run_attack_paths_scan", return_value=True)
def test_scan_complete_dispatches_attack_paths_scan(
self,
_mock_can_run_attack_paths,
mock_attack_paths_task,
mock_retrieve,
mock_create,
mock_set_task_id,
mock_check_integrations_task,
mock_compliance_reports_task,
mock_outputs_task,
mock_scan_summary_task,
mock_update_compliance_scores_task,
mock_compliance_requirements_task,
mock_chain,
mock_attack_surface_task,
row_pre_existing,
):
"""When a provider can run Attack Paths, dispatch must:
1. Reuse the existing row or create one if missing.
2. Call apply_async on the Attack Paths task.
3. Persist the returned Celery task id on the row.
"""
existing_row = MagicMock(id="ap-scan-id")
if row_pre_existing:
mock_retrieve.return_value = existing_row
else:
mock_retrieve.return_value = None
mock_create.return_value = existing_row
async_result = MagicMock(task_id="celery-task-id")
mock_attack_paths_task.return_value = async_result
_perform_scan_complete_tasks("tenant-id", "scan-id", "provider-id")
mock_retrieve.assert_called_once_with("tenant-id", "scan-id")
if row_pre_existing:
mock_create.assert_not_called()
else:
mock_create.assert_called_once_with("tenant-id", "scan-id", "provider-id")
mock_attack_paths_task.assert_called_once_with(
kwargs={"tenant_id": "tenant-id", "scan_id": "scan-id"}
)
mock_set_task_id.assert_called_once_with(
"tenant-id", "ap-scan-id", "celery-task-id"
)
class TestAttackPathsTasks:
@staticmethod
@@ -1095,7 +994,7 @@ class TestCheckIntegrationsTask:
@patch("tasks.tasks.rmtree")
def test_generate_outputs_with_asff_for_aws_with_security_hub(
self,
_mock_rmtree,
mock_rmtree,
mock_scan_update,
mock_upload,
mock_compress,
@@ -1223,7 +1122,7 @@ class TestCheckIntegrationsTask:
@patch("tasks.tasks.rmtree")
def test_generate_outputs_no_asff_for_aws_without_security_hub(
self,
_mock_rmtree,
mock_rmtree,
mock_scan_update,
mock_upload,
mock_compress,
@@ -1346,7 +1245,7 @@ class TestCheckIntegrationsTask:
@patch("tasks.tasks.rmtree")
def test_generate_outputs_no_asff_for_non_aws_provider(
self,
_mock_rmtree,
mock_rmtree,
mock_scan_update,
mock_upload,
mock_compress,
@@ -2462,9 +2361,6 @@ class TestReaggregateAllFindingGroupSummaries:
@patch("tasks.tasks.chain")
@patch("tasks.tasks.group")
@patch("tasks.tasks.aggregate_attack_surface_task")
@patch("tasks.tasks.aggregate_scan_category_summaries_task")
@patch("tasks.tasks.aggregate_scan_resource_group_summaries_task")
@patch("tasks.tasks.aggregate_finding_group_summaries_task")
@patch("tasks.tasks.aggregate_daily_severity_task")
@patch("tasks.tasks.perform_scan_summary_task")
@@ -2475,9 +2371,6 @@ class TestReaggregateAllFindingGroupSummaries:
mock_scan_summary_task,
mock_daily_severity_task,
mock_finding_group_task,
mock_resource_group_task,
mock_category_task,
mock_attack_surface_task,
mock_group,
mock_chain,
):
@@ -2490,8 +2383,8 @@ class TestReaggregateAllFindingGroupSummaries:
yesterday = today - timedelta(days=1)
mock_outer_group_result = MagicMock()
# The first `group()` call wraps the inner parallel step; subsequent
# calls wrap the outer per-scan generator.
# The first `group()` call wraps the inner (severity, finding-group)
# parallel step; subsequent calls wrap the outer per-scan generator.
mock_group.side_effect = lambda *args, **kwargs: (
list(args[0]) if args and hasattr(args[0], "__iter__") else None,
mock_outer_group_result,
@@ -2527,9 +2420,6 @@ class TestReaggregateAllFindingGroupSummaries:
mock_scan_summary_task,
mock_daily_severity_task,
mock_finding_group_task,
mock_resource_group_task,
mock_category_task,
mock_attack_surface_task,
):
assert task_mock.si.call_count == 3
dispatched = {
@@ -2543,9 +2433,6 @@ class TestReaggregateAllFindingGroupSummaries:
@patch("tasks.tasks.chain")
@patch("tasks.tasks.group")
@patch("tasks.tasks.aggregate_attack_surface_task")
@patch("tasks.tasks.aggregate_scan_category_summaries_task")
@patch("tasks.tasks.aggregate_scan_resource_group_summaries_task")
@patch("tasks.tasks.aggregate_finding_group_summaries_task")
@patch("tasks.tasks.aggregate_daily_severity_task")
@patch("tasks.tasks.perform_scan_summary_task")
@@ -2556,9 +2443,6 @@ class TestReaggregateAllFindingGroupSummaries:
mock_scan_summary_task,
mock_daily_severity_task,
mock_finding_group_task,
mock_resource_group_task,
mock_category_task,
mock_attack_surface_task,
mock_group,
mock_chain,
):
@@ -2597,9 +2481,6 @@ class TestReaggregateAllFindingGroupSummaries:
mock_scan_summary_task,
mock_daily_severity_task,
mock_finding_group_task,
mock_resource_group_task,
mock_category_task,
mock_attack_surface_task,
):
task_mock.si.assert_called_once_with(
tenant_id=self.tenant_id, scan_id=str(latest_scan_today)
@@ -1,335 +0,0 @@
# AWS Inventory Connectivity Graph
A community-contributed tool that generates interactive connectivity graphs from Prowler AWS scans, visualizing relationships between AWS resources with zero additional API calls.
## Overview
This tool extends Prowler by producing two artifacts after a scan completes:
- **`<output>.inventory.json`**: Machine-readable graph (nodes + edges)
- **`<output>.inventory.html`**: Interactive D3.js force-directed visualization
### Why?
Prowler's existing outputs (CSV, ASFF, OCSF, HTML) report individual check findings but provide no cross-service topology view. Security engineers need to understand **how** resources are connected—which Lambda functions sit inside which VPC, which IAM roles can be assumed by which services, which event sources trigger which functions—before they can reason about attack paths, blast-radius, or lateral-movement risk.
This tool fills that gap by building a connectivity graph from the service clients that are already loaded during a Prowler scan.
## Features
### Supported AWS Services
The tool currently extracts connectivity information from:
- **Lambda**: Functions, VPC/subnet/SG edges, event source mappings, layers, DLQ, KMS
- **EC2**: Instances, security groups, subnet/VPC edges
- **VPC**: VPCs, subnets, peering connections
- **RDS**: DB instances, VPC/SG/cluster/KMS edges
- **ELBv2**: ALB/NLB load balancers, SG and VPC edges
- **S3**: Buckets, replication targets, logging buckets, KMS keys
- **IAM**: Roles, trust-relationship edges (who can assume what)
### Edge Semantic Types
Edges are typed for downstream filtering and attack-path analysis:
- `network`: Resources share a network path (VPC/subnet/SG)
- `iam`: IAM trust or permission relationship
- `triggers`: One resource can invoke another (event source → Lambda)
- `data_flow`: Data is written/read (Lambda → SQS dead-letter queue)
- `depends_on`: Soft dependency (Lambda layer, subnet belongs to VPC)
- `routes_to`: Traffic routing (LB → target)
- `replicates_to`: S3 replication
- `encrypts`: KMS key encrypts the resource
- `logs_to`: Logging relationship
### Interactive HTML Graph Features
- Force-directed layout with drag-and-drop node pinning
- Zoom / pan (mouse wheel + click-drag on background)
- Per-service color-coded nodes with a legend
- Hover tooltips showing ARN + all metadata properties
- Service filter dropdown (show only Lambda, EC2, RDS, etc.)
- Adjustable link-distance and charge-strength physics sliders
- Edge labels on every arrow
## Installation
### Prerequisites
- Python 3.9.1 or higher
- Prowler installed and configured (see [Prowler documentation](https://docs.prowler.com/))
### Setup
1. Clone or download this directory to your local machine
2. Ensure Prowler is installed and working
3. No additional dependencies required beyond Prowler's existing requirements
## Usage
### Basic Usage
Run Prowler with your desired checks, then use the inventory graph script:
```bash
# Run Prowler scan (example)
prowler aws --output-formats csv
# Generate inventory graph from the scan
python contrib/inventory-graph/inventory_graph.py --output-directory ./output
```
### Command-Line Options
```bash
python contrib/inventory-graph/inventory_graph.py [OPTIONS]
Options:
--output-directory DIR Directory to save output files (default: ./output)
--output-filename NAME Base filename without extension (default: prowler-inventory-<timestamp>)
--help Show this help message and exit
```
### Example Workflow
```bash
# 1. Run a Prowler scan on your AWS account
prowler aws --profile my-aws-profile --output-formats csv html
# 2. Generate the inventory graph
python contrib/inventory-graph/inventory_graph.py \
--output-directory ./output \
--output-filename my-aws-inventory
# 3. Open the HTML file in your browser
open output/my-aws-inventory.inventory.html
```
### Integration with Prowler Scans
The tool reads from already-loaded AWS service clients in memory (via `sys.modules`). This means:
- **Zero extra AWS API calls**: Uses data already collected during the Prowler scan
- **Graceful degradation**: Services not scanned are silently skipped
- **Flexible**: Works with any subset of Prowler checks
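Conceptually, the builder just walks a registry of known Prowler client modules and runs the matching extractor for each one it finds in `sys.modules`. The sketch below is illustrative only: the module keys are the ones used in the examples later in this README, but the registry layout and the `ConnectivityGraph` constructor are assumptions, not the literal contents of `lib/graph_builder.py`:
```python
import sys

from lib.extractors import ec2_extractor, lambda_extractor, vpc_extractor
from lib.models import ConnectivityGraph

_SERVICE_REGISTRY = (
    # (sys.modules key of the Prowler service client, client attribute, extractor module)
    (
        "prowler.providers.aws.services.awslambda.awslambda_client",
        "awslambda_client",
        lambda_extractor,
    ),
    ("prowler.providers.aws.services.ec2.ec2_client", "ec2_client", ec2_extractor),
    ("prowler.providers.aws.services.vpc.vpc_client", "vpc_client", vpc_extractor),
)


def build_graph() -> ConnectivityGraph:
    nodes, edges = [], []
    for module_path, client_attr, extractor in _SERVICE_REGISTRY:
        module = sys.modules.get(module_path)
        if module is None:
            continue  # the service was never scanned, so skip it silently
        service_nodes, service_edges = extractor.extract(getattr(module, client_attr))
        nodes.extend(service_nodes)
        edges.extend(service_edges)
    return ConnectivityGraph(nodes=nodes, edges=edges)
```
Adding a new service then amounts to one new extractor module plus one new entry in the registry (see Extending below).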
## Output Files
### JSON Output (`*.inventory.json`)
Machine-readable graph structure:
```json
{
"generated_at": "2026-03-19T12:34:56Z",
"nodes": [
{
"id": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
"type": "lambda_function",
"name": "my-function",
"service": "lambda",
"region": "us-east-1",
"account_id": "123456789012",
"properties": {
"runtime": "python3.9",
"vpc_id": "vpc-abc123"
}
}
],
"edges": [
{
"source_id": "arn:aws:lambda:...",
"target_id": "arn:aws:ec2:...:vpc/vpc-abc123",
"edge_type": "network",
"label": "in-vpc"
}
],
"stats": {
"node_count": 42,
"edge_count": 87
}
}
```
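Because the JSON is self-describing, downstream tooling can consume it directly. As a minimal illustration (the file name is the one from the example workflow above; the snippet is not part of the tool itself), this filters the graph down to IAM trust edges for attack-path style analysis:
```python
import json

# Load a previously generated graph (path from the example workflow above)
with open("output/my-aws-inventory.inventory.json") as f:
    graph = json.load(f)

# Keep only IAM trust/permission relationships
iam_edges = [edge for edge in graph["edges"] if edge["edge_type"] == "iam"]
for edge in iam_edges:
    print(f"{edge['source_id']} --[{edge['label']}]--> {edge['target_id']}")
```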
### HTML Output (`*.inventory.html`)
Self-contained interactive visualization that opens in any modern browser. No server or build step required.
## Architecture
### Design Decisions
| Decision | Rationale |
|----------|-----------|
| **Read from sys.modules** | Zero extra AWS API calls; services not scanned are silently skipped |
| **Self-contained HTML** | D3.js v7 via CDN; no server, no build step; opens in any browser |
| **One extractor per service** | Each extractor is independently testable; adding a new service = one new file + one line in the registry |
| **Typed edges** | Semantic types allow downstream consumers (attack-path tools, Neo4j import) to filter by relationship class |
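As a rough sketch of the Neo4j use case (not a supported output format; the node label and property names below are arbitrary choices), the same JSON can be turned into Cypher `MERGE` statements:
```python
import json

with open("output/my-aws-inventory.inventory.json") as f:
    graph = json.load(f)

statements = []
for node in graph["nodes"]:
    # One labelled node per resource; the ARN doubles as the unique id
    statements.append(
        f'MERGE (:Resource {{id: "{node["id"]}", service: "{node["service"]}"}})'
    )
for edge in graph["edges"]:
    rel = edge["edge_type"].upper()  # e.g. "network" -> NETWORK
    statements.append(
        f'MATCH (a:Resource {{id: "{edge["source_id"]}"}}), '
        f'(b:Resource {{id: "{edge["target_id"]}"}}) '
        f"MERGE (a)-[:{rel}]->(b)"
    )
print("\n".join(statements))
```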
### Project Structure
```
contrib/inventory-graph/
├── README.md # This file
├── inventory_graph.py # Main entry point script
├── lib/
│ ├── __init__.py
│ ├── models.py # ResourceNode, ResourceEdge, ConnectivityGraph dataclasses
│ ├── graph_builder.py # Reads loaded service clients from sys.modules
│ ├── inventory_output.py # write_json(), write_html()
│ └── extractors/
│ ├── __init__.py
│ ├── lambda_extractor.py # Lambda functions → VPC/subnet/SG/event-sources/layers/DLQ/KMS
│ ├── ec2_extractor.py # EC2 instances + security groups → subnet/VPC
│ ├── vpc_extractor.py # VPCs, subnets, peering connections
│ ├── rds_extractor.py # RDS instances → VPC/SG/cluster/KMS
│ ├── elbv2_extractor.py # ALB/NLB load balancers → SG/VPC
│ ├── s3_extractor.py # S3 buckets → replication targets/logging buckets/KMS keys
│ └── iam_extractor.py # IAM roles + trust-relationship edges
└── examples/
└── sample_output.html # Example output (optional)
```
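For orientation, the data model in `lib/models.py` looks roughly like the following; the field names are inferred from the JSON output and the extractor code in this directory, so treat it as a sketch rather than the exact implementation:
```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class ResourceNode:
    id: str                     # ARN, used as the unique node identifier
    type: str                   # e.g. "lambda_function", "ec2_instance"
    name: str
    service: str                # e.g. "lambda", "ec2", "vpc"
    region: str
    account_id: str
    properties: Dict[str, Any] = field(default_factory=dict)


@dataclass
class ResourceEdge:
    source_id: str
    target_id: str
    edge_type: str              # one of the semantic edge types listed above
    label: str = ""


@dataclass
class ConnectivityGraph:
    nodes: List[ResourceNode] = field(default_factory=list)
    edges: List[ResourceEdge] = field(default_factory=list)
```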
## Testing
### Smoke Test (No AWS Credentials Needed)
```python
import sys
from unittest.mock import MagicMock
# Wire a fake Lambda client
mock_module = MagicMock()
mock_fn = MagicMock()
mock_fn.arn = "arn:aws:lambda:us-east-1:123:function:test"
mock_fn.name = "test"
mock_fn.region = "us-east-1"
mock_fn.vpc_id = "vpc-abc"
mock_fn.security_groups = ["sg-111"]
mock_fn.subnet_ids = {"subnet-aaa"}
mock_fn.environment = None
mock_fn.kms_key_arn = None
mock_fn.layers = []
mock_fn.dead_letter_config = None
mock_fn.event_source_mappings = []
mock_module.awslambda_client.functions = {mock_fn.arn: mock_fn}
mock_module.awslambda_client.audited_account = "123"
sys.modules["prowler.providers.aws.services.awslambda.awslambda_client"] = mock_module
from lib.graph_builder import build_graph
from lib.inventory_output import write_json, write_html
graph = build_graph()
write_json(graph, "/tmp/test.inventory.json")
write_html(graph, "/tmp/test.inventory.html")
# Open /tmp/test.inventory.html in a browser
```
## Extending
### Adding a New Service
1. Create a new extractor file in `lib/extractors/` (e.g., `dynamodb_extractor.py`)
2. Implement the `extract(client)` function that returns `(nodes, edges)`
3. Register it in `lib/graph_builder.py` in the `_SERVICE_REGISTRY` tuple
Example extractor template:
```python
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""Extract DynamoDB tables and their relationships."""
nodes = []
edges = []
for table in client.tables:
nodes.append(
ResourceNode(
id=table.arn,
type="dynamodb_table",
name=table.name,
service="dynamodb",
region=table.region,
account_id=client.audited_account,
properties={"billing_mode": table.billing_mode}
)
)
# Add edges for KMS encryption, streams, etc.
if table.kms_key_arn:
edges.append(
ResourceEdge(
source_id=table.kms_key_arn,
target_id=table.arn,
edge_type="encrypts",
label="encrypts"
)
)
return nodes, edges
```
## Troubleshooting
### No nodes discovered
**Problem:** The tool reports "no nodes discovered" after running.
**Solution:** Ensure you've run a Prowler scan first. The tool reads from in-memory service clients loaded during the scan. If no services were scanned, no nodes will be discovered.
### Missing services in the graph
**Problem:** Some AWS services are not appearing in the graph.
**Solution:** The tool only includes services that have been scanned by Prowler. Run Prowler with the services you want to include, or run without service filters to scan all available services.
### HTML file doesn't display properly
**Problem:** The HTML visualization doesn't load or shows errors.
**Solution:**
- Ensure you're opening the file in a modern browser (Chrome, Firefox, Safari, Edge)
- Check your browser's console for JavaScript errors
- Verify the file was generated completely (check file size > 0)
- The HTML requires internet access to load D3.js from the CDN
## Roadmap
Potential future enhancements:
- [ ] Support for additional AWS services (DynamoDB, SQS, SNS, etc.)
- [ ] Export to Neo4j / Cartography format
- [ ] Attack path analysis integration
- [ ] Multi-account/multi-region aggregation
- [ ] Custom edge type filtering in HTML UI
- [ ] Graph diff between two scans
## Contributing
This is a community contribution. If you'd like to enhance it:
1. Fork the Prowler repository
2. Make your changes in `contrib/inventory-graph/`
3. Test thoroughly
4. Submit a pull request with a clear description
## License
This tool is part of the Prowler project and is licensed under the Apache License 2.0.
## Credits
- **Author:** [@sandiyochristan](https://github.com/sandiyochristan)
- **Related PR:** [#10382](https://github.com/prowler-cloud/prowler/pull/10382)
- **Prowler Project:** [prowler-cloud/prowler](https://github.com/prowler-cloud/prowler)
## Support
For issues or questions:
- Open an issue in the [Prowler repository](https://github.com/prowler-cloud/prowler/issues)
- Join the [Prowler Community Slack](https://goto.prowler.com/slack)
- Tag your issue with `contrib:inventory-graph`
@@ -1,181 +0,0 @@
#!/usr/bin/env python3
"""
Example: Generate AWS Inventory Graph with Mock Data
This example demonstrates how to use the inventory graph tool with mock AWS data.
No AWS credentials required.
"""
import sys
from pathlib import Path
from unittest.mock import MagicMock
# Add parent directory to path
sys.path.insert(0, str(Path(__file__).parent.parent))
from lib.graph_builder import build_graph
from lib.inventory_output import write_json, write_html
def create_mock_lambda_client():
"""Create a mock Lambda client with sample data."""
mock_module = MagicMock()
# Create a mock Lambda function
mock_fn = MagicMock()
mock_fn.arn = "arn:aws:lambda:us-east-1:123456789012:function:my-test-function"
mock_fn.name = "my-test-function"
mock_fn.region = "us-east-1"
mock_fn.vpc_id = "vpc-abc123"
mock_fn.security_groups = ["sg-111222"]
mock_fn.subnet_ids = {"subnet-aaa111", "subnet-bbb222"}
mock_fn.environment = {"Variables": {"ENV": "production"}}
mock_fn.kms_key_arn = (
"arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
)
mock_fn.layers = []
mock_fn.dead_letter_config = None
mock_fn.event_source_mappings = []
mock_module.awslambda_client.functions = {mock_fn.arn: mock_fn}
mock_module.awslambda_client.audited_account = "123456789012"
return mock_module
def create_mock_ec2_client():
"""Create a mock EC2 client with sample data."""
mock_module = MagicMock()
# Create a mock EC2 instance
mock_instance = MagicMock()
mock_instance.arn = (
"arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0"
)
mock_instance.id = "i-1234567890abcdef0"
mock_instance.region = "us-east-1"
mock_instance.vpc_id = "vpc-abc123"
mock_instance.subnet_id = "subnet-aaa111"
mock_instance.security_groups = ["sg-111222"]
mock_instance.state = "running"
mock_instance.type = "t3.micro"
mock_instance.tags = [{"Key": "Name", "Value": "test-instance"}]
# Create a mock security group
mock_sg = MagicMock()
mock_sg.arn = "arn:aws:ec2:us-east-1:123456789012:security-group/sg-111222"
mock_sg.id = "sg-111222"
mock_sg.name = "test-security-group"
mock_sg.region = "us-east-1"
mock_sg.vpc_id = "vpc-abc123"
mock_module.ec2_client.instances = [mock_instance]
mock_module.ec2_client.security_groups = {mock_sg.arn: mock_sg}
mock_module.ec2_client.audited_account = "123456789012"
return mock_module
def create_mock_vpc_client():
"""Create a mock VPC client with sample data."""
mock_module = MagicMock()
# Create a mock VPC
mock_vpc = MagicMock()
mock_vpc.arn = "arn:aws:ec2:us-east-1:123456789012:vpc/vpc-abc123"
mock_vpc.id = "vpc-abc123"
mock_vpc.region = "us-east-1"
mock_vpc.cidr_block = "10.0.0.0/16"
mock_vpc.tags = [{"Key": "Name", "Value": "test-vpc"}]
# Create mock subnets
mock_subnet1 = MagicMock()
mock_subnet1.arn = "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-aaa111"
mock_subnet1.id = "subnet-aaa111"
mock_subnet1.region = "us-east-1"
mock_subnet1.vpc_id = "vpc-abc123"
mock_subnet1.cidr_block = "10.0.1.0/24"
mock_subnet1.availability_zone = "us-east-1a"
mock_subnet2 = MagicMock()
mock_subnet2.arn = "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-bbb222"
mock_subnet2.id = "subnet-bbb222"
mock_subnet2.region = "us-east-1"
mock_subnet2.vpc_id = "vpc-abc123"
mock_subnet2.cidr_block = "10.0.2.0/24"
mock_subnet2.availability_zone = "us-east-1b"
mock_module.vpc_client.vpcs = {mock_vpc.id: mock_vpc}
mock_module.vpc_client.vpc_subnets = {mock_subnet1.id: mock_subnet1, mock_subnet2.id: mock_subnet2}
mock_module.vpc_client.vpc_peering_connections = {}
mock_module.vpc_client.audited_account = "123456789012"
return mock_module
def main():
"""Main function to demonstrate the inventory graph generation."""
print("=" * 70)
print("AWS Inventory Graph - Mock Data Example")
print("=" * 70)
print()
# Create mock clients and inject them into sys.modules
print("Creating mock AWS service clients...")
sys.modules["prowler.providers.aws.services.awslambda.awslambda_client"] = (
create_mock_lambda_client()
)
sys.modules["prowler.providers.aws.services.ec2.ec2_client"] = (
create_mock_ec2_client()
)
sys.modules["prowler.providers.aws.services.vpc.vpc_client"] = (
create_mock_vpc_client()
)
print("✓ Mock clients created")
print()
# Build the graph
print("Building connectivity graph...")
graph = build_graph()
print(f"✓ Graph built: {len(graph.nodes)} nodes, {len(graph.edges)} edges")
print()
# Display discovered nodes
print("Discovered nodes:")
for node in graph.nodes:
print(f" - {node.type}: {node.name} ({node.region})")
print()
# Display discovered edges
print("Discovered edges:")
for edge in graph.edges:
source_node = next((n for n in graph.nodes if n.id == edge.source_id), None)
target_node = next((n for n in graph.nodes if n.id == edge.target_id), None)
source_name = source_node.name if source_node else edge.source_id
target_name = target_node.name if target_node else edge.target_id
print(f" - {source_name} --[{edge.edge_type}]--> {target_name}")
print()
# Write outputs
output_dir = Path(__file__).parent
json_path = output_dir / "example_output.inventory.json"
html_path = output_dir / "example_output.inventory.html"
print("Writing output files...")
write_json(graph, str(json_path))
write_html(graph, str(html_path))
print(f"✓ JSON written to: {json_path}")
print(f"✓ HTML written to: {html_path}")
print()
print("=" * 70)
print("✓ Example complete!")
print("=" * 70)
print()
print(f"Open the HTML file to view the interactive graph:")
print(f" open {html_path}")
print()
if __name__ == "__main__":
main()
@@ -1,158 +0,0 @@
#!/usr/bin/env python3
"""
AWS Inventory Connectivity Graph Generator
A standalone tool that generates interactive connectivity graphs from Prowler AWS scans.
This tool reads from already-loaded AWS service clients in memory and produces:
- JSON graph (nodes + edges)
- Interactive HTML visualization
Usage:
python inventory_graph.py --output-directory ./output --output-filename my-inventory
For more information, see README.md
"""
import argparse
import sys
from datetime import datetime
from pathlib import Path
# Add this tool's directory to the path so we can import the lib modules
CONTRIB_DIR = Path(__file__).parent
sys.path.insert(0, str(CONTRIB_DIR))
from lib.graph_builder import build_graph
from lib.inventory_output import write_json, write_html
def parse_arguments():
"""Parse command-line arguments."""
parser = argparse.ArgumentParser(
description="Generate AWS inventory connectivity graph from Prowler scan data",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Generate graph with default settings
python inventory_graph.py
# Specify custom output directory and filename
python inventory_graph.py --output-directory ./my-output --output-filename aws-inventory
# After running a Prowler scan
prowler aws --profile my-profile
python inventory_graph.py --output-directory ./output
For more information, see README.md
""",
)
parser.add_argument(
"--output-directory",
"-o",
default="./output",
help="Directory to save output files (default: ./output)",
)
parser.add_argument(
"--output-filename",
"-f",
default=None,
help="Base filename without extension (default: prowler-inventory-<timestamp>)",
)
parser.add_argument(
"--verbose",
"-v",
action="store_true",
help="Enable verbose output",
)
return parser.parse_args()
def main():
"""Main entry point for the inventory graph generator."""
args = parse_arguments()
# Set up output paths
output_dir = Path(args.output_directory)
output_dir.mkdir(parents=True, exist_ok=True)
# Generate filename with timestamp if not provided
if args.output_filename:
base_filename = args.output_filename
else:
timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
base_filename = f"prowler-inventory-{timestamp}"
json_path = output_dir / f"{base_filename}.inventory.json"
html_path = output_dir / f"{base_filename}.inventory.html"
print("=" * 70)
print("AWS Inventory Connectivity Graph Generator")
print("=" * 70)
print()
# Build the graph from loaded service clients
if args.verbose:
print("Building connectivity graph from loaded AWS service clients...")
graph = build_graph()
# Check if any nodes were discovered
if not graph.nodes:
print("⚠️ WARNING: No nodes discovered!")
print()
print("This usually means:")
print(" 1. No Prowler scan has been run yet in this Python session")
print(" 2. No AWS service clients are loaded in memory")
print()
print("To fix this:")
print(" 1. Run a Prowler scan first: prowler aws --output-formats csv")
print(" 2. Then run this script in the same session")
print()
print(
"Alternatively, integrate this tool directly into Prowler's output pipeline."
)
sys.exit(1)
print(f"✓ Discovered {len(graph.nodes)} nodes and {len(graph.edges)} edges")
print()
# Write outputs
if args.verbose:
print(f"Writing JSON output to: {json_path}")
write_json(graph, str(json_path))
if args.verbose:
print(f"Writing HTML output to: {html_path}")
write_html(graph, str(html_path))
print()
print("=" * 70)
print("✓ Graph generation complete!")
print("=" * 70)
print()
print(f"📄 JSON: {json_path}")
print(f"🌐 HTML: {html_path}")
print()
print(f"Open the HTML file in your browser to explore the interactive graph:")
print(f" open {html_path}")
print()
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
print("\n\nInterrupted by user. Exiting...")
sys.exit(130)
except Exception as e:
print(f"\n❌ Error: {e}", file=sys.stderr)
if "--verbose" in sys.argv or "-v" in sys.argv:
import traceback
traceback.print_exc()
sys.exit(1)
@@ -1,94 +0,0 @@
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract EC2 instance and security-group nodes with their edges.
Edges produced:
- instance → security-group [network]
- instance → subnet [network]
- security-group → VPC [network]
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
# EC2 Instances
for instance in client.instances:
name = instance.id
for tag in instance.tags or []:
if tag.get("Key") == "Name":
name = tag["Value"]
break
props = {
"instance_type": getattr(instance, "type", None),
"state": getattr(instance, "state", None),
"vpc_id": getattr(instance, "vpc_id", None),
"subnet_id": getattr(instance, "subnet_id", None),
"public_ip": getattr(instance, "public_ip_address", None),
"private_ip": getattr(instance, "private_ip_address", None),
}
nodes.append(
ResourceNode(
id=instance.arn,
type="ec2_instance",
name=name,
service="ec2",
region=instance.region,
account_id=client.audited_account,
properties={k: v for k, v in props.items() if v is not None},
)
)
for sg_id in instance.security_groups or []:
edges.append(
ResourceEdge(
source_id=instance.arn,
target_id=sg_id,
edge_type="network",
label="sg",
)
)
if instance.subnet_id:
edges.append(
ResourceEdge(
source_id=instance.arn,
target_id=instance.subnet_id,
edge_type="network",
label="subnet",
)
)
# Security Groups
for sg in client.security_groups.values():
name = (
sg.name if hasattr(sg, "name") else sg.id if hasattr(sg, "id") else sg.arn
)
nodes.append(
ResourceNode(
id=sg.arn,
type="security_group",
name=name,
service="ec2",
region=sg.region,
account_id=client.audited_account,
properties={"vpc_id": sg.vpc_id},
)
)
if sg.vpc_id:
edges.append(
ResourceEdge(
source_id=sg.arn,
target_id=sg.vpc_id,
edge_type="network",
label="in-vpc",
)
)
return nodes, edges
@@ -1,60 +0,0 @@
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract ELBv2 (ALB/NLB) load balancer nodes and their edges.
Edges produced:
- load_balancer → security-group [network]
- load_balancer → VPC [network]
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
for lb in client.loadbalancersv2.values():
props = {
"type": getattr(lb, "type", None),
"scheme": getattr(lb, "scheme", None),
"dns_name": getattr(lb, "dns", None),
"vpc_id": getattr(lb, "vpc_id", None),
}
name = getattr(lb, "name", lb.arn.split("/")[-2] if "/" in lb.arn else lb.arn)
nodes.append(
ResourceNode(
id=lb.arn,
type="load_balancer",
name=name,
service="elbv2",
region=lb.region,
account_id=client.audited_account,
properties={k: v for k, v in props.items() if v is not None},
)
)
for sg_id in lb.security_groups or []:
edges.append(
ResourceEdge(
source_id=lb.arn,
target_id=sg_id,
edge_type="network",
label="sg",
)
)
vpc_id = getattr(lb, "vpc_id", None)
if vpc_id:
edges.append(
ResourceEdge(
source_id=lb.arn,
target_id=vpc_id,
edge_type="network",
label="in-vpc",
)
)
return nodes, edges
@@ -1,84 +0,0 @@
import json
from typing import Any, Dict, List, Tuple
from prowler.lib.logger import logger
from lib.models import ResourceEdge, ResourceNode
def _parse_trust_principals(assume_role_policy: Any) -> List[str]:
"""
Return a flat list of principal strings from an IAM assume-role policy document.
The policy may be a dict already or a JSON string.
"""
if not assume_role_policy:
return []
if isinstance(assume_role_policy, str):
try:
assume_role_policy = json.loads(assume_role_policy)
except (json.JSONDecodeError, ValueError):
return []
principals = []
for statement in assume_role_policy.get("Statement", []):
principal = statement.get("Principal", {})
if isinstance(principal, str):
principals.append(principal)
elif isinstance(principal, dict):
for v in principal.values():
if isinstance(v, list):
principals.extend(v)
else:
principals.append(v)
elif isinstance(principal, list):
principals.extend(principal)
return principals
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract IAM role nodes and their trust-relationship edges.
Edges produced:
- trusted-principal → role [iam] (who can assume this role)
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
for role in client.roles:
props: Dict[str, Any] = {
"path": getattr(role, "path", None),
"create_date": str(getattr(role, "create_date", "") or ""),
}
nodes.append(
ResourceNode(
id=role.arn,
type="iam_role",
name=role.name,
service="iam",
region="global",
account_id=client.audited_account,
properties={k: v for k, v in props.items() if v},
)
)
# Trust-relationship edges: principal → role (principal CAN assume role)
try:
for principal in _parse_trust_principals(role.assume_role_policy):
if principal and principal != "*":
edges.append(
ResourceEdge(
source_id=principal,
target_id=role.arn,
edge_type="iam",
label="can-assume",
)
)
except Exception as e:
logger.debug(
f"inventory iam_extractor: could not parse trust policy for {role.arn}: {e}"
)
return nodes, edges
@@ -1,118 +0,0 @@
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract Lambda function nodes and their edges from an awslambda_client.
Edges produced:
- lambda → VPC [network]
- lambda → subnet [network]
- lambda → sg [network]
- lambda ← event-source [triggers] (from EventSourceMapping)
- lambda → layer ARN [depends_on]
- lambda → DLQ target [data_flow]
- lambda ← KMS key [encrypts]
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
for fn in client.functions.values():
props = {
"runtime": fn.runtime,
"vpc_id": fn.vpc_id,
}
if fn.environment:
props["has_env_vars"] = True
if fn.kms_key_arn:
props["kms_key_arn"] = fn.kms_key_arn
nodes.append(
ResourceNode(
id=fn.arn,
type="lambda_function",
name=fn.name,
service="lambda",
region=fn.region,
account_id=client.audited_account,
properties=props,
)
)
# Network edges → VPC, subnets, security groups
if fn.vpc_id:
edges.append(
ResourceEdge(
source_id=fn.arn,
target_id=fn.vpc_id,
edge_type="network",
label="in-vpc",
)
)
for sg_id in fn.security_groups or []:
edges.append(
ResourceEdge(
source_id=fn.arn,
target_id=sg_id,
edge_type="network",
label="sg",
)
)
for subnet_id in fn.subnet_ids or set():
edges.append(
ResourceEdge(
source_id=fn.arn,
target_id=subnet_id,
edge_type="network",
label="subnet",
)
)
# Trigger edges from event source mappings
for esm in getattr(fn, "event_source_mappings", []):
edges.append(
ResourceEdge(
source_id=esm.event_source_arn,
target_id=fn.arn,
edge_type="triggers",
label=f"esm:{esm.state}",
)
)
# Layer dependency edges
for layer in getattr(fn, "layers", []):
edges.append(
ResourceEdge(
source_id=fn.arn,
target_id=layer.arn,
edge_type="depends_on",
label="layer",
)
)
# Dead-letter queue data-flow edge
dlq = getattr(fn, "dead_letter_config", None)
if dlq and dlq.target_arn:
edges.append(
ResourceEdge(
source_id=fn.arn,
target_id=dlq.target_arn,
edge_type="data_flow",
label="dlq",
)
)
# KMS encryption edge
if fn.kms_key_arn:
edges.append(
ResourceEdge(
source_id=fn.kms_key_arn,
target_id=fn.arn,
edge_type="encrypts",
label="kms",
)
)
return nodes, edges
@@ -1,86 +0,0 @@
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract RDS DB instance nodes and their edges.
Edges produced:
- db_instance → security-group [network]
- db_instance → VPC [network]
- db_instance → cluster [depends_on]
- db_instance ← KMS key [encrypts]
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
for db in client.db_instances.values():
props = {
"engine": getattr(db, "engine", None),
"engine_version": getattr(db, "engine_version", None),
"instance_class": getattr(db, "db_instance_class", None),
"vpc_id": getattr(db, "vpc_id", None),
"multi_az": getattr(db, "multi_az", None),
"publicly_accessible": getattr(db, "publicly_accessible", None),
"storage_encrypted": getattr(db, "storage_encrypted", None),
}
nodes.append(
ResourceNode(
id=db.arn,
type="rds_instance",
name=db.id,
service="rds",
region=db.region,
account_id=client.audited_account,
properties={k: v for k, v in props.items() if v is not None},
)
)
for sg in getattr(db, "security_groups", []):
sg_id = sg if isinstance(sg, str) else getattr(sg, "id", str(sg))
edges.append(
ResourceEdge(
source_id=db.arn,
target_id=sg_id,
edge_type="network",
label="sg",
)
)
vpc_id = getattr(db, "vpc_id", None)
if vpc_id:
edges.append(
ResourceEdge(
source_id=db.arn,
target_id=vpc_id,
edge_type="network",
label="in-vpc",
)
)
cluster_arn = getattr(db, "cluster_arn", None)
if cluster_arn:
edges.append(
ResourceEdge(
source_id=db.arn,
target_id=cluster_arn,
edge_type="depends_on",
label="cluster-member",
)
)
kms_key_id = getattr(db, "kms_key_id", None)
if kms_key_id:
edges.append(
ResourceEdge(
source_id=kms_key_id,
target_id=db.arn,
edge_type="encrypts",
label="kms",
)
)
return nodes, edges
@@ -1,92 +0,0 @@
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract S3 bucket nodes and their edges.
Edges produced:
- bucket → replication-target bucket [replicates_to]
- bucket ← KMS key [encrypts]
- bucket → logging bucket [logs_to]
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
for bucket in client.buckets.values():
encryption = getattr(bucket, "encryption", None)
versioning = getattr(bucket, "versioning_enabled", None)
logging = getattr(bucket, "logging", None)
public = getattr(bucket, "public_access_block", None)
props = {}
if versioning is not None:
props["versioning"] = versioning
if encryption:
enc_type = getattr(encryption, "type", str(encryption))
props["encryption"] = enc_type
nodes.append(
ResourceNode(
id=bucket.arn,
type="s3_bucket",
name=bucket.name,
service="s3",
region=bucket.region,
account_id=client.audited_account,
properties=props,
)
)
# Replication edges
for rule in getattr(bucket, "replication_rules", None) or []:
dest_bucket = getattr(rule, "destination_bucket", None)
if dest_bucket:
dest_arn = (
dest_bucket
if dest_bucket.startswith("arn:")
else f"arn:aws:s3:::{dest_bucket}"
)
edges.append(
ResourceEdge(
source_id=bucket.arn,
target_id=dest_arn,
edge_type="replicates_to",
label="s3-replication",
)
)
# Logging edges
if logging:
target_bucket = getattr(logging, "target_bucket", None)
if target_bucket:
target_arn = (
target_bucket
if target_bucket.startswith("arn:")
else f"arn:aws:s3:::{target_bucket}"
)
edges.append(
ResourceEdge(
source_id=bucket.arn,
target_id=target_arn,
edge_type="logs_to",
label="access-logs",
)
)
# KMS encryption edges
if encryption:
kms_arn = getattr(encryption, "kms_master_key_id", None)
if kms_arn:
edges.append(
ResourceEdge(
source_id=kms_arn,
target_id=bucket.arn,
edge_type="encrypts",
label="kms",
)
)
return nodes, edges
@@ -1,92 +0,0 @@
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract VPC and subnet nodes with their edges.
Edges produced:
- subnet → VPC [depends_on]
- peering connection between VPCs [network]
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
# VPCs
for vpc in client.vpcs.values():
name = vpc.id if hasattr(vpc, "id") else vpc.arn
for tag in vpc.tags or []:
if isinstance(tag, dict) and tag.get("Key") == "Name":
name = tag["Value"]
break
nodes.append(
ResourceNode(
id=vpc.arn,
type="vpc",
name=name,
service="vpc",
region=vpc.region,
account_id=client.audited_account,
properties={
"cidr_block": getattr(vpc, "cidr_block", None),
"is_default": getattr(vpc, "is_default", None),
},
)
)
# VPC Subnets
for subnet in client.vpc_subnets.values():
name = subnet.id if hasattr(subnet, "id") else subnet.arn
for tag in getattr(subnet, "tags", None) or []:
if isinstance(tag, dict) and tag.get("Key") == "Name":
name = tag["Value"]
break
nodes.append(
ResourceNode(
id=subnet.arn,
type="subnet",
name=name,
service="vpc",
region=subnet.region,
account_id=client.audited_account,
properties={
"vpc_id": getattr(subnet, "vpc_id", None),
"cidr_block": getattr(subnet, "cidr_block", None),
"availability_zone": getattr(subnet, "availability_zone", None),
"public": getattr(subnet, "public", None),
},
)
)
vpc_id = getattr(subnet, "vpc_id", None)
if vpc_id:
# Find the VPC ARN for this vpc_id
vpc_arn = next(
(v.arn for v in client.vpcs.values() if v.id == vpc_id),
vpc_id,
)
edges.append(
ResourceEdge(
source_id=subnet.arn,
target_id=vpc_arn,
edge_type="depends_on",
label="subnet-of",
)
)
# VPC Peering Connections
for peering in getattr(client, "vpc_peering_connections", {}).values():
edges.append(
ResourceEdge(
source_id=peering.arn,
target_id=getattr(peering, "accepter_vpc_id", peering.arn),
edge_type="network",
label="vpc-peer",
)
)
return nodes, edges

Some files were not shown because too many files have changed in this diff.