Compare commits

..

14 Commits

Author SHA1 Message Date
Prowler Bot 4ec493ec5c fix(api): correct service principal for Bedrock AgentCore attack paths (#11152)
Co-authored-by: Rubén De la Torre Vico <ruben@prowler.com>
2026-05-13 12:41:10 +02:00
Prowler Bot 85c1b85852 fix(ui): render inline code without literal backticks in finding drawer (#11155)
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2026-05-13 10:47:13 +01:00
Prowler Bot 8f50c6d684 fix(m365): surface AuditLog.Read.All permission errors instead of false positives (#11146)
Co-authored-by: abdou <b-abderrahmane@outlook.com>
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
Co-authored-by: Hugo P.Brito <hugopbrit@gmail.com>
2026-05-12 18:42:21 +01:00
Prowler Bot 1fb6c6a0f0 chore(api): Bump version to v1.27.2 (#11135)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-05-12 15:08:06 +02:00
Prowler Bot fc3a25d7a8 chore(sdk): Bump version to v5.26.2 (#11133)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-05-12 15:07:49 +02:00
Prowler Bot 1a56087ea0 chore(ui): Bump version to v5.26.2 (#11134)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-05-12 15:07:33 +02:00
Prowler Bot 2fdc480beb fix(ui): fix role cancel and select dropdown scroll (#11128)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-05-12 13:21:18 +02:00
Prowler Bot 8bfc1d85f5 chore(changelog): prepare changelog for v5.26.1 (#11130)
Co-authored-by: Daniel Barranquero <74871504+danibarranqueroo@users.noreply.github.com>
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-05-12 13:18:44 +02:00
Prowler Bot 57501e1864 fix(api): defer scan broker publish until transaction commits (#11123)
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-05-12 12:12:40 +02:00
Prowler Bot 02a83adfd4 fix(m365): exclude disabled guest users from entra_users_mfa_capable (#11119)
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2026-05-12 08:50:44 +01:00
Prowler Bot 98a1bca403 chore(api): Bump version to v1.27.1 (#11114)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-05-11 15:36:28 +02:00
Prowler Bot 8aade7f024 chore(sdk): Bump version to v5.26.1 (#11111)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-05-11 15:36:17 +02:00
Prowler Bot 43b50c4d6f chore(ui): Bump version to v5.26.1 (#11113)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-05-11 15:35:40 +02:00
Prowler Bot 578c354a69 chore(api): Update prowler dependency to v5.26 for release 5.26.0 (#11106)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-05-11 13:23:19 +02:00
175 changed files with 5577 additions and 15856 deletions
+1 -1
View File
@@ -145,7 +145,7 @@ SENTRY_RELEASE=local
NEXT_PUBLIC_SENTRY_ENVIRONMENT=${SENTRY_ENVIRONMENT}
#### Prowler release version ####
-NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.27.0
+NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.26.2
# Social login credentials
SOCIAL_GOOGLE_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/google"
+9 -30
View File
@@ -1,14 +1,14 @@
name: "UI: Tests"
name: 'UI: Tests'
on:
push:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
pull_request:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -16,7 +16,7 @@ concurrency:
env:
UI_WORKING_DIR: ./ui
NODE_VERSION: "24.13.0"
NODE_VERSION: '24.13.0'
permissions: {}
@@ -42,9 +42,6 @@ jobs:
fonts.gstatic.com:443
api.github.com:443
release-assets.githubusercontent.com:443
-cdn.playwright.dev:443
-objects.githubusercontent.com:443
-playwright.download.prss.microsoft.com:443
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
@@ -136,7 +133,7 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true' && steps.critical-changes.outputs.any_changed == 'true'
run: |
echo "Critical paths changed - running ALL unit tests"
-pnpm run test:unit
+pnpm run test:run
- name: Run unit tests (related to changes only)
if: steps.check-changes.outputs.any_changed == 'true' && steps.critical-changes.outputs.any_changed != 'true' && steps.changed-source.outputs.all_changed_files != ''
@@ -145,7 +142,7 @@ jobs:
echo "${STEPS_CHANGED_SOURCE_OUTPUTS_ALL_CHANGED_FILES}"
# Convert space-separated to vitest related format (remove ui/ prefix for relative paths)
CHANGED_FILES=$(echo "${STEPS_CHANGED_SOURCE_OUTPUTS_ALL_CHANGED_FILES}" | tr ' ' '\n' | sed 's|^ui/||' | tr '\n' ' ')
-pnpm exec vitest related $CHANGED_FILES --run --project unit
+pnpm exec vitest related $CHANGED_FILES --run
env:
STEPS_CHANGED_SOURCE_OUTPUTS_ALL_CHANGED_FILES: ${{ steps.changed-source.outputs.all_changed_files }}
@@ -153,25 +150,7 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true' && steps.critical-changes.outputs.any_changed != 'true' && steps.changed-source.outputs.all_changed_files == ''
run: |
echo "Only test files changed - running ALL unit tests"
-pnpm run test:unit
-- name: Cache Playwright browsers
-if: steps.check-changes.outputs.any_changed == 'true'
-id: playwright-cache
-uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
-with:
-path: ~/.cache/ms-playwright
-key: ${{ runner.os }}-playwright-chromium-${{ hashFiles('ui/pnpm-lock.yaml') }}
-restore-keys: |
-${{ runner.os }}-playwright-chromium-
-- name: Install Playwright Chromium browser
-if: steps.check-changes.outputs.any_changed == 'true' && steps.playwright-cache.outputs.cache-hit != 'true'
-run: pnpm exec playwright install chromium
-- name: Run browser tests
-if: steps.check-changes.outputs.any_changed == 'true'
-run: pnpm run test:browser
+pnpm run test:run
- name: Build application
if: steps.check-changes.outputs.any_changed == 'true'
+1 -3
View File
@@ -44,9 +44,7 @@ repos:
rev: v1.24.1
hooks:
- id: zizmor
-# zizmor only audits workflows, composite actions and dependabot
-# config; broader paths trip exit 3 ("no audit was performed").
-files: ^\.github/(workflows|actions)/.+\.ya?ml$|^\.github/dependabot\.ya?ml$
+files: ^\.github/
priority: 30
## BASH
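
The two `files:` patterns in the hunk above differ sharply in scope. A quick way to see which paths each selects, given that pre-commit matches `files` as a regex (via `re.search`) against the repo-relative path; the example paths below are illustrative, not taken from the Prowler repo:

import re

narrow = re.compile(r"^\.github/(workflows|actions)/.+\.ya?ml$|^\.github/dependabot\.ya?ml$")
broad = re.compile(r"^\.github/")

for path in [
    ".github/workflows/ui-tests.yml",   # both patterns match
    ".github/dependabot.yaml",          # both patterns match
    ".github/ISSUE_TEMPLATE/bug.md",    # only the broad pattern matches
]:
    print(path, bool(narrow.search(path)), bool(broad.search(path)))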
+3 -11
View File
@@ -15,7 +15,7 @@ Use these skills for detailed patterns on-demand:
|-------|-------------|-----|
| `typescript` | Const types, flat interfaces, utility types | [SKILL.md](skills/typescript/SKILL.md) |
| `react-19` | No useMemo/useCallback, React Compiler | [SKILL.md](skills/react-19/SKILL.md) |
-| `nextjs-16` | App Router, Server Actions, proxy.ts, streaming | [SKILL.md](skills/nextjs-16/SKILL.md) |
+| `nextjs-15` | App Router, Server Actions, streaming | [SKILL.md](skills/nextjs-15/SKILL.md) |
| `tailwind-4` | cn() utility, no var() in className | [SKILL.md](skills/tailwind-4/SKILL.md) |
| `playwright` | Page Object Model, MCP workflow, selectors | [SKILL.md](skills/playwright/SKILL.md) |
| `pytest` | Fixtures, mocking, markers, parametrize | [SKILL.md](skills/pytest/SKILL.md) |
@@ -60,14 +60,11 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
|--------|-------|
| Add changelog entry for a PR or feature | `prowler-changelog` |
| Adding DRF pagination or permissions | `django-drf` |
-| Adding a compliance output formatter (per-provider class + table dispatcher) | `prowler-compliance` |
| Adding indexes or constraints to database tables | `django-migration-psql` |
| Adding new providers | `prowler-provider` |
-| Adding privilege escalation detection queries | `prowler-attack-paths-query` |
| Adding services to existing providers | `prowler-provider` |
| After creating/modifying a skill | `skill-sync` |
-| App Router / Server Actions | `nextjs-16` |
-| Auditing check-to-requirement mappings as a cloud auditor | `prowler-compliance` |
+| App Router / Server Actions | `nextjs-15` |
| Building AI chat features | `ai-sdk-5` |
| Committing changes | `prowler-commit` |
| Configuring MCP servers in agentic workflows | `gh-aw` |
@@ -81,7 +78,6 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Creating a git commit | `prowler-commit` |
| Creating new checks | `prowler-sdk-check` |
| Creating new skills | `skill-creator` |
-| Creating or reviewing Django migrations | `django-migration-psql` |
| Creating/modifying Prowler UI components | `prowler-ui` |
| Creating/modifying models, views, serializers | `prowler-api` |
| Creating/updating compliance frameworks | `prowler-compliance` |
@@ -89,7 +85,6 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Debugging gh-aw compilation errors | `gh-aw` |
| Fill .github/pull_request_template.md (Context/Description/Steps to review/Checklist) | `prowler-pr` |
| Fixing bug | `tdd` |
-| Fixing compliance JSON bugs (duplicate IDs, empty Section, stale refs) | `prowler-compliance` |
| General Prowler development questions | `prowler` |
| Implementing JSON:API endpoints | `django-drf` |
| Implementing feature | `tdd` |
@@ -107,8 +102,6 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Review changelog format and conventions | `prowler-changelog` |
| Reviewing JSON:API compliance | `jsonapi` |
| Reviewing compliance framework PRs | `prowler-compliance-review` |
-| Running makemigrations or pgmakemigrations | `django-migration-psql` |
-| Syncing compliance framework with upstream catalog | `prowler-compliance` |
| Testing RLS tenant isolation | `prowler-test-api` |
| Testing hooks or utilities | `vitest` |
| Troubleshoot why a skill is missing from AGENTS.md auto-invoke | `skill-sync` |
@@ -136,7 +129,6 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Writing React components | `react-19` |
| Writing TypeScript types/interfaces | `typescript` |
| Writing Vitest tests | `vitest` |
-| Writing data backfill or data migration | `django-migration-psql` |
| Writing documentation | `prowler-docs` |
| Writing unit tests for UI | `vitest` |
@@ -150,7 +142,7 @@ Prowler is an open-source cloud security assessment tool supporting AWS, Azure,
|-----------|----------|------------|
| SDK | `prowler/` | Python 3.10+, Poetry 2.3+ |
| API | `api/` | Django 5.1, DRF, Celery |
-| UI | `ui/` | Next.js 16, React 19, Tailwind 4 |
+| UI | `ui/` | Next.js 15, React 19, Tailwind 4 |
| MCP Server | `mcp_server/` | FastMCP, Python 3.12+ |
| Dashboard | `dashboard/` | Dash, Plotly |
+6 -29
View File
@@ -1,34 +1,11 @@
# Do you want to learn on how to...
-- [Contribute with your code or fixes to Prowler](https://docs.prowler.com/developer-guide/introduction)
-- [Create a new provider](https://docs.prowler.com/developer-guide/provider)
-- [Create a new service](https://docs.prowler.com/developer-guide/services)
-- [Create a new check for a provider](https://docs.prowler.com/developer-guide/checks)
-- [Create a new security compliance framework](https://docs.prowler.com/developer-guide/security-compliance-framework)
-- [Add a custom output format](https://docs.prowler.com/developer-guide/outputs)
-- [Add a new integration](https://docs.prowler.com/developer-guide/integrations)
-- [Contribute with documentation](https://docs.prowler.com/developer-guide/documentation)
-- [Write unit tests](https://docs.prowler.com/developer-guide/unit-testing)
-- [Write integration tests](https://docs.prowler.com/developer-guide/integration-testing)
-- [Write end-to-end tests](https://docs.prowler.com/developer-guide/end2end-testing)
-- [Debug Prowler](https://docs.prowler.com/developer-guide/debugging)
-- [Configure checks](https://docs.prowler.com/developer-guide/configurable-checks)
-- [Rename checks](https://docs.prowler.com/developer-guide/renaming-checks)
-- [Follow the check metadata guidelines](https://docs.prowler.com/developer-guide/check-metadata-guidelines)
-- [Extend the MCP server](https://docs.prowler.com/developer-guide/mcp-server)
-- [Extend Lighthouse AI](https://docs.prowler.com/developer-guide/lighthouse-architecture)
-- [Add AI skills](https://docs.prowler.com/developer-guide/ai-skills)
-Provider-specific developer notes:
-- [AWS](https://docs.prowler.com/developer-guide/aws-details)
-- [Azure](https://docs.prowler.com/developer-guide/azure-details)
-- [Google Cloud](https://docs.prowler.com/developer-guide/gcp-details)
-- [Alibaba Cloud](https://docs.prowler.com/developer-guide/alibabacloud-details)
-- [Kubernetes](https://docs.prowler.com/developer-guide/kubernetes-details)
-- [Microsoft 365](https://docs.prowler.com/developer-guide/m365-details)
-- [GitHub](https://docs.prowler.com/developer-guide/github-details)
-- [LLM](https://docs.prowler.com/developer-guide/llm-details)
+- Contribute with your code or fixes to Prowler
+- Create a new check for a provider
+- Create a new security compliance framework
+- Add a custom output format
+- Add a new integration
+- Contribute with documentation
Want some swag as appreciation for your contribution?
-16
View File
@@ -2,19 +2,6 @@
All notable changes to the **Prowler API** are documented in this file.
-## [1.28.0] (Prowler UNRELEASED)
-### 🚀 Added
-- GIN index on `findings(categories, resource_services, resource_regions, resource_types)` to speed up `/api/v1/finding-groups` array filters [(#11001)](https://github.com/prowler-cloud/prowler/pull/11001)
-### 🔄 Changed
-- Remove orphaned `gin_resources_search_idx` declaration from `Resource.Meta.indexes` (DB index dropped in `0072_drop_unused_indexes`) [(#11001)](https://github.com/prowler-cloud/prowler/pull/11001)
-- PDF compliance reports cap detail tables at 100 failed findings per check (configurable via `DJANGO_PDF_MAX_FINDINGS_PER_CHECK`) to bound worker memory on large scans [(#11160)](https://github.com/prowler-cloud/prowler/pull/11160)
----
## [1.27.2] (Prowler UNRELEASED)
### 🐞 Fixed
@@ -36,9 +23,6 @@ All notable changes to the **Prowler API** are documented in this file.
### 🚀 Added
- `scan-reset-ephemeral-resources` post-scan task zeroes `failed_findings_count` for resources missing from the latest full-scope scan, keeping ephemeral resources from polluting the Resources page sort [(#10929)](https://github.com/prowler-cloud/prowler/pull/10929)
### 🔄 Changed
-- ASD Essential Eight (AWS) compliance framework support [(#10982)](https://github.com/prowler-cloud/prowler/pull/10982)
### 🔐 Security
+3 -3
View File
@@ -6754,8 +6754,8 @@ uuid6 = "2024.7.10"
[package.source]
type = "git"
url = "https://github.com/prowler-cloud/prowler.git"
reference = "master"
resolved_reference = "16798e293da365965120961e6539e3a9756564f9"
reference = "v5.26"
resolved_reference = "02cdcb29dbcd8eb5ed442c1cd03830000324fb0f"
[[package]]
name = "psutil"
@@ -9424,4 +9424,4 @@ files = [
[metadata]
lock-version = "2.1"
python-versions = ">=3.11,<3.13"
content-hash = "a3ab982d11a87d951ff15694d2ca7fd51f1f51a451abb0baa067ccf6966367a8"
content-hash = "24f7a92f6c72a8207ab15f75c813a5a244c018afb0a582a5abf8c96e2c7faf12"
+2 -2
View File
@@ -25,7 +25,7 @@ dependencies = [
"defusedxml==0.7.1",
"gunicorn==23.0.0",
"lxml==5.3.2",
"prowler @ git+https://github.com/prowler-cloud/prowler.git@master",
"prowler @ git+https://github.com/prowler-cloud/prowler.git@v5.26",
"psycopg2-binary==2.9.9",
"pytest-celery[redis] (==1.3.0)",
"sentry-sdk[django] (==2.56.0)",
@@ -50,7 +50,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.28.0"
version = "1.27.2"
[project.scripts]
celery = "src.backend.config.settings.celery"
@@ -1,31 +0,0 @@
from functools import partial
from django.db import migrations
from api.db_utils import create_index_on_partitions, drop_index_on_partitions
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0090_attack_paths_cleanup_priority"),
]
operations = [
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="findings",
index_name="gin_find_arrays_idx",
columns="categories, resource_services, resource_regions, resource_types",
method="GIN",
all_partitions=True,
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="findings",
index_name="gin_find_arrays_idx",
),
)
]
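
The helper this deleted migration leans on, `create_index_on_partitions` from `api.db_utils`, is not shown in the diff. As a rough sketch of what a partition-aware index helper of this shape typically does — the body below is an assumption, not Prowler's actual implementation; only the signature is inferred from the `partial(...)` call above:

def create_index_on_partitions(apps, schema_editor, *, parent_table,
                               index_name, columns, method="GIN",
                               all_partitions=True):
    # Hypothetical body: enumerate the partitions of parent_table via
    # pg_inherits, then build one index per partition so each
    # CREATE INDEX holds its lock on a single partition at a time.
    # (all_partitions is accepted but ignored in this sketch.)
    with schema_editor.connection.cursor() as cursor:
        cursor.execute(
            "SELECT inhrelid::regclass::text FROM pg_inherits "
            "WHERE inhparent = %s::regclass",
            [parent_table],
        )
        for (partition,) in cursor.fetchall():
            child_index = f"{partition.replace('.', '_')}_{index_name}"
            cursor.execute(
                f"CREATE INDEX IF NOT EXISTS {child_index} "
                f"ON {partition} USING {method} ({columns})"
            )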
@@ -1,73 +0,0 @@
import django.contrib.postgres.indexes
from django.db import migrations
INDEX_NAME = "gin_find_arrays_idx"
PARENT_TABLE = "findings"
def create_parent_and_attach(apps, schema_editor):
with schema_editor.connection.cursor() as cursor:
# Idempotent: the parent index may already exist if it was created
# manually on an environment before this migration ran.
cursor.execute(
f"CREATE INDEX IF NOT EXISTS {INDEX_NAME} ON ONLY {PARENT_TABLE} "
f"USING gin (categories, resource_services, resource_regions, resource_types)"
)
cursor.execute(
"SELECT inhrelid::regclass::text "
"FROM pg_inherits "
"WHERE inhparent = %s::regclass",
[PARENT_TABLE],
)
for (partition,) in cursor.fetchall():
child_idx = f"{partition.replace('.', '_')}_{INDEX_NAME}"
# ALTER INDEX ... ATTACH PARTITION has no IF NOT ATTACHED clause,
# so check pg_inherits first to keep the migration re-runnable.
cursor.execute(
"""
SELECT 1
FROM pg_inherits i
JOIN pg_class p ON p.oid = i.inhparent
JOIN pg_class c ON c.oid = i.inhrelid
WHERE p.relname = %s AND c.relname = %s
""",
[INDEX_NAME, child_idx],
)
if cursor.fetchone() is None:
cursor.execute(f"ALTER INDEX {INDEX_NAME} ATTACH PARTITION {child_idx}")
def drop_parent_index(apps, schema_editor):
with schema_editor.connection.cursor() as cursor:
cursor.execute(f"DROP INDEX IF EXISTS {INDEX_NAME}")
class Migration(migrations.Migration):
dependencies = [
("api", "0091_findings_arrays_gin_index_partitions"),
]
operations = [
migrations.SeparateDatabaseAndState(
state_operations=[
migrations.AddIndex(
model_name="finding",
index=django.contrib.postgres.indexes.GinIndex(
fields=[
"categories",
"resource_services",
"resource_regions",
"resource_types",
],
name=INDEX_NAME,
),
),
],
database_operations=[
migrations.RunPython(
create_parent_and_attach,
reverse_code=drop_parent_index,
),
],
),
]
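
Two details of this migration are worth spelling out: `CREATE INDEX ... ON ONLY` builds the parent index without touching any partition, and `ATTACH PARTITION` then wires in each pre-built child index, with the `pg_inherits` lookup standing in for the `IF NOT ATTACHED` clause PostgreSQL lacks. A helper to list children that still need attaching could look like this (a sketch, reusing the catalog-query pattern from the migration above):

def unattached_child_indexes(schema_editor, parent_index: str) -> list[str]:
    # Child indexes follow the "<partition>_<parent_index>" naming used
    # above; a child counts as attached once pg_inherits links it to the
    # parent partitioned index.
    with schema_editor.connection.cursor() as cursor:
        cursor.execute(
            """
            SELECT c.relname
            FROM pg_class c
            WHERE c.relkind = 'i'
              AND c.relname LIKE %s
              AND NOT EXISTS (
                  SELECT 1
                  FROM pg_inherits i
                  JOIN pg_class p ON p.oid = i.inhparent
                  WHERE i.inhrelid = c.oid AND p.relname = %s
              )
            """,
            ["%_" + parent_index, parent_index],
        )
        return [name for (name,) in cursor.fetchall()]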
+1 -9
View File
@@ -946,6 +946,7 @@ class Resource(RowLevelSecurityProtectedModel):
OpClass(Upper("name"), name="gin_trgm_ops"),
name="res_name_trgm_idx",
),
GinIndex(fields=["text_search"], name="gin_resources_search_idx"),
models.Index(fields=["tenant_id", "id"], name="resources_tenant_id_idx"),
models.Index(
fields=["tenant_id", "provider_id"],
@@ -1151,15 +1152,6 @@ class Finding(PostgresPartitionedModel, RowLevelSecurityProtectedModel):
fields=["tenant_id", "scan_id", "check_id"],
name="find_tenant_scan_check_idx",
),
-GinIndex(
-fields=[
-"categories",
-"resource_services",
-"resource_regions",
-"resource_types",
-],
-name="gin_find_arrays_idx",
-),
]
class JSONAPIMeta:
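
For context on what `gin_find_arrays_idx` buys (per the changelog hunk above, it speeds up `/api/v1/finding-groups` array filters): ArrayField containment lookups compile to PostgreSQL's `@>` operator, which a GIN index over those columns can serve, and a multicolumn GIN can be used even when only a subset of its columns appears in the predicate. Roughly, with illustrative filter values:

from api.models import Finding  # import path as used elsewhere in this diff

qs = Finding.objects.filter(
    resource_services__contains=["s3"],        # -> resource_services @> ARRAY['s3']
    resource_regions__contains=["eu-west-1"],  # -> resource_regions @> ARRAY['eu-west-1']
)
print(qs.query)  # inspect the generated SQL; EXPLAIN should show the GIN index on master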
+1 -1
View File
@@ -1,7 +1,7 @@
openapi: 3.0.3
info:
title: Prowler API
-version: 1.28.0
+version: 1.27.2
description: |-
Prowler API specification.
+1 -1
View File
@@ -424,7 +424,7 @@ class SchemaView(SpectacularAPIView):
def get(self, request, *args, **kwargs):
spectacular_settings.TITLE = "Prowler API"
spectacular_settings.VERSION = "1.28.0"
spectacular_settings.VERSION = "1.27.2"
spectacular_settings.DESCRIPTION = (
"Prowler API specification.\n\nThis file is auto-generated."
)
+10 -145
View File
@@ -20,15 +20,11 @@ from tasks.jobs.reports import (
ThreatScoreReportGenerator,
)
from tasks.jobs.threatscore import compute_threatscore_metrics
-from tasks.jobs.threatscore_utils import (
-_aggregate_requirement_statistics_from_database,
-_get_compliance_check_ids,
-)
+from tasks.jobs.threatscore_utils import _aggregate_requirement_statistics_from_database
from api.db_router import READ_REPLICA_ALIAS, MainRouter
from api.db_utils import rls_transaction
from api.models import Provider, Scan, ScanSummary, StateChoices, ThreatScoreSnapshot
from api.utils import initialize_prowler_provider
from prowler.lib.check.compliance_models import Compliance
from prowler.lib.outputs.finding import Finding as FindingOutput
@@ -431,7 +427,6 @@ def generate_threatscore_report(
provider_obj: Provider | None = None,
requirement_statistics: dict[str, dict[str, int]] | None = None,
findings_cache: dict[str, list[FindingOutput]] | None = None,
prowler_provider=None,
) -> None:
"""
Generate a PDF compliance report based on Prowler ThreatScore framework.
@@ -460,7 +455,6 @@ def generate_threatscore_report(
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
prowler_provider=prowler_provider,
only_failed=only_failed,
)
@@ -475,7 +469,6 @@ def generate_ens_report(
provider_obj: Provider | None = None,
requirement_statistics: dict[str, dict[str, int]] | None = None,
findings_cache: dict[str, list[FindingOutput]] | None = None,
prowler_provider=None,
) -> None:
"""
Generate a PDF compliance report for ENS RD2022 framework.
@@ -502,7 +495,6 @@ def generate_ens_report(
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
prowler_provider=prowler_provider,
include_manual=include_manual,
)
@@ -518,7 +510,6 @@ def generate_nis2_report(
provider_obj: Provider | None = None,
requirement_statistics: dict[str, dict[str, int]] | None = None,
findings_cache: dict[str, list[FindingOutput]] | None = None,
prowler_provider=None,
) -> None:
"""
Generate a PDF compliance report for NIS2 Directive (EU) 2022/2555.
@@ -546,7 +537,6 @@ def generate_nis2_report(
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
prowler_provider=prowler_provider,
only_failed=only_failed,
include_manual=include_manual,
)
@@ -563,7 +553,6 @@ def generate_csa_report(
provider_obj: Provider | None = None,
requirement_statistics: dict[str, dict[str, int]] | None = None,
findings_cache: dict[str, list[FindingOutput]] | None = None,
prowler_provider=None,
) -> None:
"""
Generate a PDF compliance report for CSA Cloud Controls Matrix (CCM) v4.0.
@@ -591,7 +580,6 @@ def generate_csa_report(
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
prowler_provider=prowler_provider,
only_failed=only_failed,
include_manual=include_manual,
)
@@ -608,7 +596,6 @@ def generate_cis_report(
provider_obj: Provider | None = None,
requirement_statistics: dict[str, dict[str, int]] | None = None,
findings_cache: dict[str, list[FindingOutput]] | None = None,
prowler_provider=None,
) -> None:
"""
Generate a PDF compliance report for a specific CIS Benchmark variant.
@@ -640,7 +627,6 @@ def generate_cis_report(
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
prowler_provider=prowler_provider,
only_failed=only_failed,
include_manual=include_manual,
)
@@ -785,17 +771,6 @@ def generate_compliance_reports(
results["csa"] = {"upload": False, "path": ""}
generate_csa = False
# Load the framework definitions for this provider once. We use this map
# both to pick the latest CIS variant and to precompute the set of
# check_ids each framework consumes (for findings_cache eviction).
frameworks_bulk: dict = {}
try:
frameworks_bulk = Compliance.get_bulk(provider_type)
except Exception as e:
logger.error("Error loading compliance frameworks for %s: %s", provider_type, e)
# Fall through; individual frameworks will still try and fail
# gracefully if their compliance_id is missing.
# For CIS we do NOT pre-check the provider against a hard-coded whitelist
# (that list drifts the moment a new CIS JSON ships). Instead, we inspect
# the dynamically loaded framework map and pick the latest available CIS
@@ -803,6 +778,7 @@ def generate_compliance_reports(
latest_cis: str | None = None
if generate_cis:
try:
frameworks_bulk = Compliance.get_bulk(provider_type)
latest_cis = _pick_latest_cis_variant(
name for name in frameworks_bulk.keys() if name.startswith("cis_")
)
@@ -839,84 +815,10 @@ def generate_compliance_reports(
tenant_id, scan_id
)
# Initialize the Prowler provider once for the whole report batch. Each
# generator used to re-init this in _load_compliance_data, paying the
# boto3/Azure-SDK construction cost 5 times per scan. The instance is
# only used by FindingOutput.transform_api_finding to enrich findings,
# so a single shared instance is correct.
logger.info("Initializing prowler_provider once for all reports (scan %s)", scan_id)
try:
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
prowler_provider = initialize_prowler_provider(provider_obj)
except Exception as init_error:
# If init fails the generators will fall back to lazy init in
# _load_compliance_data; we just log and continue.
logger.warning(
"Could not pre-initialize prowler_provider for scan %s: %s",
scan_id,
init_error,
)
prowler_provider = None
# Create shared findings cache up front so the eviction closure below
# can reference it. Defined BEFORE the closure to avoid the UnboundLocalError
# trap if an early-return is later inserted between the closure and its
# first use.
findings_cache: dict[str, list[FindingOutput]] = {}
# Create shared findings cache
findings_cache = {}
logger.info("Created shared findings cache for all reports")
# Precompute the set of check_ids each framework consumes. After a
# framework finishes, every check_id that no remaining framework still
# needs is evicted from findings_cache so the dict does not keep
# growing through the batch (PROWLER-1733).
pending_checks_by_framework: dict[str, set[str]] = {}
if generate_threatscore:
pending_checks_by_framework["threatscore"] = _get_compliance_check_ids(
frameworks_bulk.get(f"prowler_threatscore_{provider_type}")
)
if generate_ens:
pending_checks_by_framework["ens"] = _get_compliance_check_ids(
frameworks_bulk.get(f"ens_rd2022_{provider_type}")
)
if generate_nis2:
pending_checks_by_framework["nis2"] = _get_compliance_check_ids(
frameworks_bulk.get(f"nis2_{provider_type}")
)
if generate_csa:
pending_checks_by_framework["csa"] = _get_compliance_check_ids(
frameworks_bulk.get(f"csa_ccm_4.0_{provider_type}")
)
if generate_cis and latest_cis:
pending_checks_by_framework["cis"] = _get_compliance_check_ids(
frameworks_bulk.get(latest_cis)
)
def _evict_after_framework(done_key: str) -> int:
"""Drop from findings_cache every check_id no pending framework still needs."""
done = pending_checks_by_framework.pop(done_key, set())
still_needed: set[str] = (
set().union(*pending_checks_by_framework.values())
if pending_checks_by_framework
else set()
)
exclusive = done - still_needed
evicted = 0
for cid in exclusive:
if findings_cache.pop(cid, None) is not None:
evicted += 1
if evicted:
logger.info(
"Evicted %d exclusive check entries from findings_cache after %s "
"(remaining cache size: %d)",
evicted,
done_key,
len(findings_cache),
)
# Release the lists' memory now instead of waiting for the next
# gc cycle; FindingOutput instances retain quite a bit of state.
gc.collect()
return evicted
generated_report_keys: list[str] = []
output_paths: dict[str, str] = {}
out_dir: str | None = None
@@ -1005,7 +907,6 @@ def generate_compliance_reports(
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
prowler_provider=prowler_provider,
)
# Compute and store ThreatScore metrics snapshot
@@ -1083,15 +984,9 @@ def generate_compliance_reports(
logger.warning("ThreatScore report saved locally at %s", out_dir)
except Exception as e:
logger.exception(
"compliance_report_failed framework=threatscore scan_id=%s tenant_id=%s",
scan_id,
tenant_id,
)
logger.error("Error generating ThreatScore report: %s", e)
results["threatscore"] = {"upload": False, "path": "", "error": str(e)}
_evict_after_framework("threatscore")
# Generate ENS report
if generate_ens:
generated_report_keys.append("ens")
@@ -1111,7 +1006,6 @@ def generate_compliance_reports(
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
prowler_provider=prowler_provider,
)
upload_uri_ens = _upload_to_s3(
@@ -1126,15 +1020,9 @@ def generate_compliance_reports(
logger.warning("ENS report saved locally at %s", out_dir)
except Exception as e:
logger.exception(
"compliance_report_failed framework=ens scan_id=%s tenant_id=%s",
scan_id,
tenant_id,
)
logger.error("Error generating ENS report: %s", e)
results["ens"] = {"upload": False, "path": "", "error": str(e)}
_evict_after_framework("ens")
# Generate NIS2 report
if generate_nis2:
generated_report_keys.append("nis2")
@@ -1155,7 +1043,6 @@ def generate_compliance_reports(
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
prowler_provider=prowler_provider,
)
upload_uri_nis2 = _upload_to_s3(
@@ -1170,15 +1057,9 @@ def generate_compliance_reports(
logger.warning("NIS2 report saved locally at %s", out_dir)
except Exception as e:
logger.exception(
"compliance_report_failed framework=nis2 scan_id=%s tenant_id=%s",
scan_id,
tenant_id,
)
logger.error("Error generating NIS2 report: %s", e)
results["nis2"] = {"upload": False, "path": "", "error": str(e)}
_evict_after_framework("nis2")
# Generate CSA CCM report
if generate_csa:
generated_report_keys.append("csa")
@@ -1199,7 +1080,6 @@ def generate_compliance_reports(
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
prowler_provider=prowler_provider,
)
upload_uri_csa = _upload_to_s3(
@@ -1214,15 +1094,9 @@ def generate_compliance_reports(
logger.warning("CSA CCM report saved locally at %s", out_dir)
except Exception as e:
logger.exception(
"compliance_report_failed framework=csa scan_id=%s tenant_id=%s",
scan_id,
tenant_id,
)
logger.error("Error generating CSA CCM report: %s", e)
results["csa"] = {"upload": False, "path": "", "error": str(e)}
_evict_after_framework("csa")
# Generate CIS Benchmark report for the latest available version only.
# CIS ships multiple versions per provider (e.g. cis_1.4_aws, cis_5.0_aws,
# cis_6.0_aws); we dynamically pick the highest semantic version at run
@@ -1245,7 +1119,6 @@ def generate_compliance_reports(
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
prowler_provider=prowler_provider,
)
upload_uri_cis = _upload_to_s3(
@@ -1274,22 +1147,14 @@ def generate_compliance_reports(
)
except Exception as e:
logger.exception(
"compliance_report_failed framework=cis variant=%s scan_id=%s tenant_id=%s",
latest_cis,
scan_id,
tenant_id,
)
logger.error("Error generating CIS report %s: %s", latest_cis, e)
results["cis"] = {
"upload": False,
"path": "",
"error": str(e),
}
finally:
# Free ReportLab/matplotlib memory before moving on. CIS is
# always the last framework, so evicting its entries clears the
# cache entirely (subject to its check_ids set).
_evict_after_framework("cis")
# Free ReportLab/matplotlib memory before moving on.
gc.collect()
# Clean up temporary files only if all generated reports were
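
The heart of the removed orchestration is `_evict_after_framework`: once a framework's report finishes, every check_id that no still-pending framework needs is dropped from the shared cache. Its set arithmetic, distilled into a standalone sketch (framework and check names illustrative):

def evict_after_framework(done_key, pending_checks_by_framework, findings_cache):
    # Checks consumed by the finished framework...
    done = pending_checks_by_framework.pop(done_key, set())
    # ...minus checks that any still-pending framework will read later.
    still_needed = (
        set().union(*pending_checks_by_framework.values())
        if pending_checks_by_framework
        else set()
    )
    for check_id in done - still_needed:
        findings_cache.pop(check_id, None)

pending = {"threatscore": {"check_a", "check_b"}, "cis": {"check_b"}}
cache = {"check_a": ["f1"], "check_b": ["f2"]}
evict_after_framework("threatscore", pending, cache)
print(sorted(cache))  # ['check_b'] -- check_a was exclusive to threatscore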
+75 -288
View File
@@ -1,9 +1,6 @@
import gc
import os
import resource as _resource_module
import time
from abc import ABC, abstractmethod
from contextlib import contextmanager
from dataclasses import dataclass, field
from typing import Any
@@ -44,7 +41,6 @@ from .config import (
COLOR_LIGHT_BLUE,
COLOR_LIGHTER_BLUE,
COLOR_PROWLER_DARK_GREEN,
FINDINGS_TABLE_CHUNK_SIZE,
PADDING_LARGE,
PADDING_SMALL,
FrameworkConfig,
@@ -52,53 +48,6 @@ from .config import (
logger = get_task_logger(__name__)
@contextmanager
def _log_phase(phase: str, *, scan_id: str, framework: str):
"""Log start/end timing and RSS deltas around a report-building section.
Emits structured key=value logs so Grafana/Datadog/CloudWatch queries
can pivot by ``phase``, ``framework`` and ``scan_id`` to find the
slow/heavy section on any given scan. ``getrusage`` returns KB on
Linux and bytes on macOS; the values are still useful in relative
terms even though units differ across platforms.
"""
start = time.perf_counter()
rss_before = _resource_module.getrusage(_resource_module.RUSAGE_SELF).ru_maxrss
logger.info(
"phase_start phase=%s scan_id=%s framework=%s rss_kb=%d",
phase,
scan_id,
framework,
rss_before,
)
try:
yield
except Exception:
elapsed = time.perf_counter() - start
logger.exception(
"phase_failed phase=%s scan_id=%s framework=%s elapsed_s=%.2f",
phase,
scan_id,
framework,
elapsed,
)
raise
else:
elapsed = time.perf_counter() - start
rss_after = _resource_module.getrusage(_resource_module.RUSAGE_SELF).ru_maxrss
logger.info(
"phase_end phase=%s scan_id=%s framework=%s elapsed_s=%.2f "
"rss_kb=%d delta_rss_kb=%d",
phase,
scan_id,
framework,
elapsed,
rss_after,
rss_after - rss_before,
)
# Register fonts (done once at module load)
_fonts_registered: bool = False
@@ -386,7 +335,6 @@ class BaseComplianceReportGenerator(ABC):
provider_obj: Provider | None = None,
requirement_statistics: dict[str, dict[str, int]] | None = None,
findings_cache: dict[str, list[FindingOutput]] | None = None,
prowler_provider: Any | None = None,
**kwargs,
) -> None:
"""Generate the PDF compliance report.
@@ -403,35 +351,23 @@ class BaseComplianceReportGenerator(ABC):
provider_obj: Optional pre-fetched Provider object
requirement_statistics: Optional pre-aggregated statistics
findings_cache: Optional pre-loaded findings cache
prowler_provider: Optional pre-initialized Prowler provider. When
generating multiple reports for the same scan the master
function initializes this once and passes it in to avoid
re-running boto3/Azure-SDK setup per framework.
**kwargs: Additional framework-specific arguments
"""
-framework = self.config.display_name
logger.info(
-"report_generation_start framework=%s scan_id=%s compliance_id=%s",
-framework,
-scan_id,
-compliance_id,
+"Generating %s report for scan %s", self.config.display_name, scan_id
)
try:
# 1. Load compliance data
with _log_phase(
"load_compliance_data", scan_id=scan_id, framework=framework
):
data = self._load_compliance_data(
tenant_id=tenant_id,
scan_id=scan_id,
compliance_id=compliance_id,
provider_id=provider_id,
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
prowler_provider=prowler_provider,
)
data = self._load_compliance_data(
tenant_id=tenant_id,
scan_id=scan_id,
compliance_id=compliance_id,
provider_id=provider_id,
provider_obj=provider_obj,
requirement_statistics=requirement_statistics,
findings_cache=findings_cache,
)
# 2. Create PDF document
doc = self._create_document(output_path, data)
@@ -441,54 +377,37 @@ class BaseComplianceReportGenerator(ABC):
elements = []
# Cover page (lightweight)
with _log_phase("cover_page", scan_id=scan_id, framework=framework):
elements.extend(self.create_cover_page(data))
elements.append(PageBreak())
elements.extend(self.create_cover_page(data))
elements.append(PageBreak())
# Executive summary (framework-specific)
with _log_phase("executive_summary", scan_id=scan_id, framework=framework):
elements.extend(self.create_executive_summary(data))
elements.extend(self.create_executive_summary(data))
# Body sections (charts + requirements index)
# Override _build_body_sections() in subclasses to change section order
with _log_phase("body_sections", scan_id=scan_id, framework=framework):
elements.extend(self._build_body_sections(data))
elements.extend(self._build_body_sections(data))
# Detailed findings - heaviest section, loads findings on-demand
with _log_phase("detailed_findings", scan_id=scan_id, framework=framework):
elements.extend(self.create_detailed_findings(data, **kwargs))
gc.collect() # Free findings data after processing
logger.info("Building detailed findings section...")
elements.extend(self.create_detailed_findings(data, **kwargs))
gc.collect() # Free findings data after processing
# 4. Build the PDF
logger.info(
"doc_build_about_to_run framework=%s scan_id=%s elements=%d",
framework,
scan_id,
len(elements),
)
with _log_phase("doc_build", scan_id=scan_id, framework=framework):
self._build_pdf(doc, elements, data)
logger.info("Building PDF document with %d elements...", len(elements))
self._build_pdf(doc, elements, data)
# Final cleanup
del elements
gc.collect()
logger.info(
"report_generation_end framework=%s scan_id=%s output_path=%s",
framework,
scan_id,
output_path,
)
logger.info("Successfully generated report at %s", output_path)
except Exception:
# logger.exception captures the full traceback; the contextual
# keys keep production search-by-scan-id viable.
logger.exception(
"report_generation_failed framework=%s scan_id=%s compliance_id=%s",
framework,
scan_id,
compliance_id,
)
except Exception as e:
import traceback
tb_lineno = e.__traceback__.tb_lineno if e.__traceback__ else "unknown"
logger.error("Error generating report, line %s -- %s", tb_lineno, e)
logger.error("Full traceback:\n%s", traceback.format_exc())
raise
def _build_body_sections(self, data: ComplianceData) -> list:
@@ -719,25 +638,15 @@ class BaseComplianceReportGenerator(ABC):
for req in requirements:
check_ids_to_load.extend(req.checks)
# Load findings on-demand only for the checks that will be displayed.
# When ``only_failed`` is active at requirement level, also push the
# FAIL filter down to the finding level: a requirement marked FAIL
# because 1/1000 findings failed must not render a table dominated by
# 999 PASS rows. That hides the actual failure under noise and
# makes the per-check cap truncate the wrong rows.
# ``total_counts`` is populated with the pre-cap total per check_id
# (FAIL-only when only_failed is active) so the "Showing first N of
# M" banner uses the same denominator the reader cares about.
# Load findings on-demand only for the checks that will be displayed
# Uses the shared findings cache to avoid duplicate queries across reports
logger.info("Loading findings on-demand for %d requirements", len(requirements))
total_counts: dict[str, int] = {}
findings_by_check_id = _load_findings_for_requirement_checks(
data.tenant_id,
data.scan_id,
check_ids_to_load,
data.prowler_provider,
data.findings_by_check_id, # Pass the cache to update it
total_counts_out=total_counts,
only_failed_findings=only_failed,
)
for req in requirements:
@@ -769,31 +678,9 @@ class BaseComplianceReportGenerator(ABC):
)
)
else:
# Surface truncation BEFORE the tables so readers see it
# at the same scroll position as the data itself, not
# after thousands of rendered rows.
loaded = len(findings)
total = total_counts.get(check_id, loaded)
if total > loaded:
kind = "failed findings" if only_failed else "findings"
elements.append(
Paragraph(
f"<b>&#9888; Showing first {loaded:,} of "
f"{total:,} {kind} for this check.</b> "
f"Use the CSV or JSON export for the full "
f"list. The PDF caps detail rows to keep "
f"the report readable and bounded in size.",
self.styles["normal"],
)
)
elements.append(Spacer(1, 0.05 * inch))
# Create chunked findings tables to prevent OOM when a
# single check has thousands of findings (ReportLab
# resolves layout per Flowable, so many small tables
# render contiguously with a bounded memory peak).
findings_tables = self._create_findings_tables(findings)
elements.extend(findings_tables)
# Create findings table
findings_table = self._create_findings_table(findings)
elements.append(findings_table)
elements.append(Spacer(1, 0.1 * inch))
@@ -848,7 +735,6 @@ class BaseComplianceReportGenerator(ABC):
provider_obj: Provider | None,
requirement_statistics: dict | None,
findings_cache: dict | None,
prowler_provider: Any | None = None,
) -> ComplianceData:
"""Load and aggregate compliance data from the database.
@@ -860,9 +746,6 @@ class BaseComplianceReportGenerator(ABC):
provider_obj: Optional pre-fetched Provider
requirement_statistics: Optional pre-aggregated statistics
findings_cache: Optional pre-loaded findings
prowler_provider: Optional pre-initialized Prowler provider. When
the master function initializes it once and passes it in,
we skip the per-report ``initialize_prowler_provider`` call.
Returns:
Aggregated ComplianceData object
@@ -872,8 +755,7 @@ class BaseComplianceReportGenerator(ABC):
if provider_obj is None:
provider_obj = Provider.objects.get(id=provider_id)
-if prowler_provider is None:
-prowler_provider = initialize_prowler_provider(provider_obj)
+prowler_provider = initialize_prowler_provider(provider_obj)
provider_type = provider_obj.provider
# Load compliance framework
@@ -941,32 +823,13 @@ class BaseComplianceReportGenerator(ABC):
) -> SimpleDocTemplate:
"""Create the PDF document template.
Validates that ``output_path`` is a filesystem path string with an
existing parent directory. SimpleDocTemplate technically accepts a
BytesIO too, but we want every report to land on disk so the
Celery worker doesn't hold the full PDF in memory while uploading
to S3.
Args:
output_path: Path for the output PDF
data: Compliance data for metadata
Returns:
Configured SimpleDocTemplate
Raises:
TypeError: ``output_path`` is not a string.
FileNotFoundError: The parent directory does not exist.
"""
if not isinstance(output_path, str):
raise TypeError(
"output_path must be a filesystem path string; "
f"got {type(output_path).__name__}"
)
parent_dir = os.path.dirname(output_path)
if parent_dir and not os.path.isdir(parent_dir):
raise FileNotFoundError(f"Output directory does not exist: {parent_dir}")
return SimpleDocTemplate(
output_path,
pagesize=letter,
@@ -1013,10 +876,47 @@ class BaseComplianceReportGenerator(ABC):
onLaterPages=add_footer,
)
# Column layout shared by all findings sub-tables. Defined as a method so
# subclasses can override it without re-implementing the chunking logic.
def _findings_table_columns(self) -> list[ColumnConfig]:
return [
def _create_findings_table(self, findings: list[FindingOutput]) -> Any:
"""Create a findings table.
Args:
findings: List of finding objects
Returns:
ReportLab Table element
"""
def get_finding_title(f):
metadata = getattr(f, "metadata", None)
if metadata:
return getattr(metadata, "CheckTitle", getattr(f, "check_id", ""))
return getattr(f, "check_id", "")
def get_resource_name(f):
name = getattr(f, "resource_name", "")
if not name:
name = getattr(f, "resource_uid", "")
return name
def get_severity(f):
metadata = getattr(f, "metadata", None)
if metadata:
return getattr(metadata, "Severity", "").capitalize()
return ""
# Convert findings to dicts for the table
data = []
for f in findings:
item = {
"title": get_finding_title(f),
"resource_name": get_resource_name(f),
"severity": get_severity(f),
"status": getattr(f, "status", "").upper(),
"region": getattr(f, "region", "global"),
}
data.append(item)
columns = [
ColumnConfig("Finding", 2.5 * inch, "title"),
ColumnConfig("Resource", 3 * inch, "resource_name"),
ColumnConfig("Severity", 0.9 * inch, "severity"),
@@ -1024,122 +924,9 @@ class BaseComplianceReportGenerator(ABC):
ColumnConfig("Region", 0.9 * inch, "region"),
]
@staticmethod
def _finding_to_row(f: FindingOutput) -> dict[str, str]:
"""Project a FindingOutput onto the row dict the table expects.
Kept defensive: missing metadata or attributes return empty strings
rather than raising, so a single malformed finding never breaks the
whole report.
"""
metadata = getattr(f, "metadata", None)
title = (
getattr(metadata, "CheckTitle", getattr(f, "check_id", ""))
if metadata
else getattr(f, "check_id", "")
)
resource_name = getattr(f, "resource_name", "") or getattr(
f, "resource_uid", ""
)
severity = getattr(metadata, "Severity", "").capitalize() if metadata else ""
return {
"title": title,
"resource_name": resource_name,
"severity": severity,
"status": getattr(f, "status", "").upper(),
"region": getattr(f, "region", "global"),
}
def _create_findings_tables(
self,
findings: list[FindingOutput],
chunk_size: int | None = None,
) -> list[Any]:
"""Build a list of small findings tables to keep ``doc.build()`` memory bounded.
ReportLab resolves layout (column widths, row heights, page-breaks)
per Flowable. A single ``LongTable`` of 15k rows forces all of that
to be computed at once and reliably OOMs the worker on large scans.
Splitting into chunks of ``chunk_size`` rows produces an equivalent-
looking PDF (LongTable repeats headers; chunks render contiguously)
with a bounded memory peak per chunk.
Args:
findings: List of finding objects for a single check.
chunk_size: Rows per sub-table. ``None`` uses
``FINDINGS_TABLE_CHUNK_SIZE`` from config.
Returns:
List of ReportLab flowables (interleaved ``Table``/``LongTable``
and small ``Spacer`` between chunks). Empty list when there are
no findings.
"""
if not findings:
return []
chunk_size = chunk_size or FINDINGS_TABLE_CHUNK_SIZE
# Build all rows first so we can chunk without re-walking the
# FindingOutput list. Malformed findings are skipped with a logged
# exception, never enough to abort the entire report.
rows: list[dict[str, str]] = []
for f in findings:
try:
rows.append(self._finding_to_row(f))
except Exception:
logger.exception(
"Skipping malformed finding while building table for check %s",
getattr(f, "check_id", "unknown"),
)
if not rows:
return []
columns = self._findings_table_columns()
flowables: list = []
total = len(rows)
for start in range(0, total, chunk_size):
chunk = rows[start : start + chunk_size]
flowables.append(
create_data_table(
data=chunk,
columns=columns,
header_color=self.config.primary_color,
normal_style=self.styles["normal_center"],
)
)
# A tiny spacer between chunks keeps them visually contiguous
# without forcing a page-break (KeepTogether would negate the
# memory benefit of chunking).
if start + chunk_size < total:
flowables.append(Spacer(1, 0.05 * inch))
if total > chunk_size:
logger.debug(
"Built %d findings sub-tables (chunk_size=%d, total_findings=%d)",
(total + chunk_size - 1) // chunk_size,
chunk_size,
total,
)
return flowables
def _create_findings_table(self, findings: list[FindingOutput]) -> Any:
"""Deprecated alias kept for backwards compatibility.
Returns the first chunk produced by ``_create_findings_tables``.
New callers MUST use ``_create_findings_tables``, which returns a
list of flowables and is what ``create_detailed_findings`` invokes.
"""
flowables = self._create_findings_tables(findings)
if flowables:
return flowables[0]
# Empty input → return an empty (header-only) table so callers that
# used to receive a Table never get None.
return create_data_table(
-data=[],
-columns=self._findings_table_columns(),
+data=data,
+columns=columns,
header_color=self.config.primary_color,
normal_style=self.styles["normal_center"],
)
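
The removed `_create_findings_tables` exists because ReportLab resolves layout per Flowable: one 15k-row LongTable computes every column width, row height, and page-break in a single pass, while many 300-row tables bound the peak per chunk. The chunking itself is plain slicing; a minimal runnable sketch:

from reportlab.lib.units import inch
from reportlab.platypus import Spacer, Table

def findings_tables(rows, chunk_size=300):
    # One small Table per chunk, with a tiny spacer between chunks so
    # they render contiguously (KeepTogether would negate the memory
    # benefit of chunking by forcing whole-chunk layout at once).
    flowables = []
    for start in range(0, len(rows), chunk_size):
        flowables.append(Table(rows[start : start + chunk_size]))
        if start + chunk_size < len(rows):
            flowables.append(Spacer(1, 0.05 * inch))
    return flowables

print(len(findings_tables([["a", "b"]] * 1000)))  # 4 tables + 3 spacers = 7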
@@ -1,11 +1,9 @@
import gc
import io
import math
import time
from typing import Callable
import matplotlib
from celery.utils.log import get_task_logger
# Use non-interactive Agg backend for memory efficiency in server environments
# This MUST be set before importing pyplot
@@ -22,26 +20,6 @@ from .config import ( # noqa: E402
CHART_DPI_DEFAULT,
)
logger = get_task_logger(__name__)
def _log_chart_built(name: str, dpi: int, buffer: io.BytesIO, started: float) -> None:
"""Emit a structured DEBUG line summarising a chart render.
Centralised so the formatting stays consistent across all chart helpers
and so we never accidentally pay for buffer.getbuffer().nbytes when
debug logging is disabled.
"""
if logger.isEnabledFor(10): # logging.DEBUG
logger.debug(
"chart_built name=%s dpi=%d bytes=%d elapsed_s=%.2f",
name,
dpi,
buffer.getbuffer().nbytes,
time.perf_counter() - started,
)
# Use centralized DPI setting from config
DEFAULT_CHART_DPI = CHART_DPI_DEFAULT
@@ -99,7 +77,6 @@ def create_vertical_bar_chart(
Returns:
BytesIO buffer containing the PNG image
"""
_started = time.perf_counter()
if color_func is None:
color_func = get_chart_color_for_percentage
@@ -145,7 +122,6 @@ def create_vertical_bar_chart(
plt.close(fig)
gc.collect() # Force garbage collection after heavy matplotlib operation
_log_chart_built("vertical_bar", dpi, buffer, _started)
return buffer
@@ -180,7 +156,6 @@ def create_horizontal_bar_chart(
Returns:
BytesIO buffer containing the PNG image
"""
_started = time.perf_counter()
if color_func is None:
color_func = get_chart_color_for_percentage
@@ -232,7 +207,6 @@ def create_horizontal_bar_chart(
plt.close(fig)
gc.collect() # Force garbage collection after heavy matplotlib operation
_log_chart_built("horizontal_bar", dpi, buffer, _started)
return buffer
@@ -265,7 +239,6 @@ def create_radar_chart(
Returns:
BytesIO buffer containing the PNG image
"""
_started = time.perf_counter()
num_vars = len(labels)
angles = [n / float(num_vars) * 2 * math.pi for n in range(num_vars)]
@@ -302,7 +275,6 @@ def create_radar_chart(
plt.close(fig)
gc.collect() # Force garbage collection after heavy matplotlib operation
_log_chart_built("radar", dpi, buffer, _started)
return buffer
@@ -331,7 +303,6 @@ def create_pie_chart(
Returns:
BytesIO buffer containing the PNG image
"""
_started = time.perf_counter()
fig, ax = plt.subplots(figsize=figsize)
_, _, autotexts = ax.pie(
@@ -359,7 +330,6 @@ def create_pie_chart(
plt.close(fig)
gc.collect() # Force garbage collection after heavy matplotlib operation
_log_chart_built("pie", dpi, buffer, _started)
return buffer
@@ -392,7 +362,6 @@ def create_stacked_bar_chart(
Returns:
BytesIO buffer containing the PNG image
"""
_started = time.perf_counter()
fig, ax = plt.subplots(figsize=figsize)
# Default colors if not provided
@@ -432,5 +401,4 @@ def create_stacked_bar_chart(
plt.close(fig)
gc.collect() # Force garbage collection after heavy matplotlib operation
_log_chart_built("stacked_bar", dpi, buffer, _started)
return buffer
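
The removed `_log_chart_built` guards its body behind `logger.isEnabledFor(10)` precisely so that `buffer.getbuffer().nbytes` is never computed when DEBUG is off (10 is `logging.DEBUG`). The pattern in isolation:

import io
import logging

logger = logging.getLogger(__name__)

def log_chart_built(name: str, dpi: int, buffer: io.BytesIO, elapsed_s: float) -> None:
    # Short-circuit before touching the buffer: getbuffer().nbytes is
    # only paid for when DEBUG logging is actually enabled.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug(
            "chart_built name=%s dpi=%d bytes=%d elapsed_s=%.2f",
            name, dpi, buffer.getbuffer().nbytes, elapsed_s,
        )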
@@ -475,15 +475,8 @@ def create_data_table(
else:
value = item.get(col.field, "")
# Wrap every string cell in Paragraph so the data rows keep the
# caller-supplied font/colour/alignment. Skipping Paragraph for
# short cells (a tempting micro-optimisation) breaks visual
# consistency: ReportLab Table falls back to Helvetica/black for
# raw strings, mixing fonts within the same table.
# ``escape_html`` keeps ``<``/``>``/``&`` in resource names from
# breaking Paragraph's mini-HTML parser.
if normal_style and isinstance(value, str):
value = Paragraph(escape_html(value), normal_style)
value = Paragraph(value, normal_style)
row.append(value)
table_data.append(row)
@@ -515,26 +508,17 @@ def create_data_table(
for idx, col in enumerate(columns):
styles.append(("ALIGN", (idx, 0), (idx, -1), col.align))
# Alternate row backgrounds: single O(1) ROWBACKGROUNDS style entry.
# The previous implementation appended N per-row BACKGROUND commands,
# which scaled the TableStyle list linearly with row count. ReportLab
# cycles through the colour list row-by-row so the visual is identical.
# The ALTERNATE_ROWS_MAX_SIZE cap is preserved to mirror legacy
# behaviour (very large tables stay plain), but the memory cost of the
# styles list is now constant regardless of row count.
# Alternate row backgrounds - skip for very large tables as it adds memory overhead
if (
alternate_rows
and len(table_data) > 1
and len(table_data) <= ALTERNATE_ROWS_MAX_SIZE
):
styles.append(
(
"ROWBACKGROUNDS",
(0, 1),
(-1, -1),
[colors.white, colors.Color(0.98, 0.98, 0.98)],
)
)
for i in range(1, len(table_data)):
if i % 2 == 0:
styles.append(
("BACKGROUND", (0, i), (-1, i), colors.Color(0.98, 0.98, 0.98))
)
table.setStyle(TableStyle(styles))
return table
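
The two striping strategies in this hunk produce the same rendered output; the difference is the size of the style list. A side-by-side sketch in ReportLab terms (table contents illustrative):

from reportlab.lib import colors
from reportlab.platypus import Table, TableStyle

rows = [["Header A", "Header B"]] + [[str(i), str(i * i)] for i in range(1, 8)]

# Master's form: a single O(1) command; ReportLab cycles the colour
# list row by row, so the visual result is identical.
constant = [("ROWBACKGROUNDS", (0, 1), (-1, -1),
             [colors.white, colors.Color(0.98, 0.98, 0.98)])]

# The v5.26 form: one BACKGROUND command per even data row, so the
# style list grows linearly with the row count.
linear = [("BACKGROUND", (0, i), (-1, i), colors.Color(0.98, 0.98, 0.98))
          for i in range(1, len(rows)) if i % 2 == 0]

table = Table(rows)
table.setStyle(TableStyle(constant))  # or TableStyle(linear); same rendering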
@@ -1,4 +1,3 @@
import os
from dataclasses import dataclass, field
from reportlab.lib import colors
@@ -24,47 +23,6 @@ ALTERNATE_ROWS_MAX_SIZE = 200
# Larger = fewer queries but more memory per batch
FINDINGS_BATCH_SIZE = 2000
# Maximum rows per findings sub-table. ReportLab resolves layout per Flowable;
# splitting a huge findings list into multiple smaller tables keeps the peak
# memory of doc.build() bounded. A single 15k-row LongTable would force
# ReportLab to compute all column widths/row heights/page-breaks at once and
# OOM the worker; 300-row chunks are rendered contiguously with negligible
# visual impact.
FINDINGS_TABLE_CHUNK_SIZE = 300
# Maximum findings rendered per check in the detailed-findings section.
#
# Product behaviour: compliance PDFs render at most ``MAX_FINDINGS_PER_CHECK``
# **failed** findings per check (PASS rows are excluded at SQL level by the
# ``only_failed`` flag that all four list-rendering frameworks default to:
# ThreatScore, NIS2, CSA, CIS; ENS does not render finding tables). Above
# this cap each affected check renders an in-PDF banner
# ("Showing first 100 of N failed findings for this check. Use the CSV
# or JSON export for the full list") so the reader knows the table is
# truncated and where to find the full data.
#
# Why a cap exists at all:
# * ``FindingOutput.transform_api_finding`` is O(N) per finding (Pydantic
# v1 validation + nested model construction).
# * ReportLab resolves layout per Flowable; thousands of sub-tables make
# ``doc.build()`` very slow and grow the PDF unboundedly.
# * A human-readable executive/auditor PDF does not need 12,000 rows for
# one check; that is forensic data and lives in the CSV/JSON exports.
#
# Why 100 specifically:
# * Covers ~99% of real scans without truncation (most checks emit far
# fewer than 100 findings even in enterprise estates).
# * Worst-case rendered rows = 100 × ~500 checks = 50k rows across all
# frameworks, which keeps RSS bounded and a 5-framework run completes
# in minutes instead of hours.
#
# Override at runtime via ``DJANGO_PDF_MAX_FINDINGS_PER_CHECK``:
# * Set to ``0`` to disable the cap entirely (load every finding; only
# advisable for small scans).
# * Set to a larger value (e.g. ``500``) for forensic detail in big runs;
# watch RSS in the Celery worker.
MAX_FINDINGS_PER_CHECK = int(os.environ.get("DJANGO_PDF_MAX_FINDINGS_PER_CHECK", "100"))
# =============================================================================
# Base colors
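
`MAX_FINDINGS_PER_CHECK` above is read from the environment once at import time, and the removed commentary documents `0` as "cap disabled". The guard that gives `0` that meaning lives in the query layer removed further down (`if cap > 0 else set()`); in isolation, with illustrative check names:

import os

cap = int(os.environ.get("DJANGO_PDF_MAX_FINDINGS_PER_CHECK", "100"))

def checks_over_cap(totals: dict[str, int]) -> set[str]:
    # cap == 0 disables truncation entirely, matching the documented override.
    return {check_id for check_id, n in totals.items() if n > cap} if cap > 0 else set()

print(checks_over_cap({"iam_mfa": 40, "s3_public": 5000}))  # {'s3_public'} with the default 100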
+12 -127
View File
@@ -1,8 +1,6 @@
from celery.utils.log import get_task_logger
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE
from django.db.models import Count, F, Q, Window
from django.db.models.functions import RowNumber
from tasks.jobs.reports.config import MAX_FINDINGS_PER_CHECK
from django.db.models import Count, Q
from api.db_router import READ_REPLICA_ALIAS
from api.db_utils import rls_transaction
@@ -156,8 +154,6 @@ def _load_findings_for_requirement_checks(
check_ids: list[str],
prowler_provider,
findings_cache: dict[str, list[FindingOutput]] | None = None,
total_counts_out: dict[str, int] | None = None,
only_failed_findings: bool = False,
) -> dict[str, list[FindingOutput]]:
"""
Load findings for specific check IDs on-demand with optional caching.
@@ -182,23 +178,6 @@ def _load_findings_for_requirement_checks(
prowler_provider: The initialized Prowler provider instance.
findings_cache (dict, optional): Cache of already loaded findings.
If provided, checks are first looked up in cache before querying database.
total_counts_out (dict, optional): If provided, populated with
``{check_id: total_findings_in_db}`` BEFORE any per-check cap is
applied. Lets callers render a "Showing first N of M" banner for
truncated checks. Only populated for ``check_ids`` actually
queried (cache hits keep whatever value the caller already had).
When ``only_failed_findings=True`` the total is FAIL-only.
only_failed_findings (bool): When True, push the ``status=FAIL``
filter down into the SQL query so PASS rows are never loaded
from the DB nor pydantic-transformed. This matches the
``only_failed`` requirement-level filter applied at PDF render
time: a requirement marked FAIL because 1/1000 findings failed
shouldn't render a table of 999 PASS rows. That hides the
actual failure under noise and wastes the per-check cap on
irrelevant data. NOTE: the findings cache stores whatever the
first caller asked for, so all callers in a single
``generate_compliance_reports`` run MUST pass the same flag
(which they do: it threads from ``only_failed`` defaults).
Returns:
dict[str, list[FindingOutput]]: Dictionary mapping check_id to list of FindingOutput objects.
@@ -243,70 +222,17 @@ def _load_findings_for_requirement_checks(
)
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
base_qs = Finding.all_objects.filter(
tenant_id=tenant_id,
scan_id=scan_id,
check_id__in=check_ids_to_load,
# Use iterator with chunk_size for memory-efficient streaming
# chunk_size controls how many rows Django fetches from DB at once
findings_queryset = (
Finding.all_objects.filter(
tenant_id=tenant_id,
scan_id=scan_id,
check_id__in=check_ids_to_load,
)
.order_by("check_id", "uid")
.iterator(chunk_size=DJANGO_FINDINGS_BATCH_SIZE)
)
if only_failed_findings:
# Push the FAIL filter down into SQL: DB returns ~N×FAIL
# rows instead of N×ALL, and we never spend pydantic CPU on
# PASS findings the PDF would never render.
base_qs = base_qs.filter(status=StatusChoices.FAIL)
# Aggregate totals once so we (a) know which checks need capping
# and (b) can surface "Showing first N of M" in the PDF banner.
# Cheap: a single COUNT grouped by check_id.
totals: dict[str, int] = {
row["check_id"]: row["total"]
for row in base_qs.values("check_id").annotate(total=Count("id"))
}
if total_counts_out is not None:
total_counts_out.update(totals)
cap = MAX_FINDINGS_PER_CHECK
checks_over_cap = (
{cid for cid, n in totals.items() if n > cap} if cap > 0 else set()
)
# Use iterator with chunk_size for memory-efficient streaming.
# FindingOutput.transform_api_finding (prowler/lib/outputs/finding.py)
# reads finding.resources.first() and resource.tags.all() per
# finding, which without prefetch generates 2N queries per chunk.
# prefetch_related runs once per iterator chunk (Django >=4.1) and
# collapses that into a constant 2 extra queries per chunk.
if checks_over_cap:
# Top-N per check via a window function: PostgreSQL only
# materialises ``cap * |checks_over_cap| + sum(uncapped)``
# rows, vs the full table scan the previous path did.
ranked = base_qs.annotate(
rn=Window(
expression=RowNumber(),
partition_by=[F("check_id")],
order_by=F("uid").asc(),
)
)
findings_queryset = (
Finding.all_objects.filter(
id__in=ranked.filter(rn__lte=cap).values("id")
)
.prefetch_related("resources", "resources__tags")
.order_by("check_id", "uid")
.iterator(chunk_size=DJANGO_FINDINGS_BATCH_SIZE)
)
logger.info(
"Per-check cap=%d active for %d checks (max %d each); "
"skipping transform for surplus rows",
cap,
len(checks_over_cap),
cap,
)
else:
findings_queryset = (
base_qs.prefetch_related("resources", "resources__tags")
.order_by("check_id", "uid")
.iterator(chunk_size=DJANGO_FINDINGS_BATCH_SIZE)
)
# Pre-initialize empty lists for all check_ids to load
# This avoids repeated dict lookups and 'if not in' checks
@@ -322,11 +248,7 @@ def _load_findings_for_requirement_checks(
findings_count += 1
logger.info(
"Loaded %d findings for %d checks (truncated %d checks total=%d)",
findings_count,
len(check_ids_to_load),
len(checks_over_cap),
sum(totals.values()),
f"Loaded {findings_count} findings for {len(check_ids_to_load)} checks"
)
# Build result dict using cache references (no data duplication)
@@ -336,40 +258,3 @@ def _load_findings_for_requirement_checks(
}
return result
def _get_compliance_check_ids(compliance_obj) -> set[str]:
"""Return the union of all check_ids referenced by a compliance framework.
Used by the master report orchestrator to know which checks each
framework consumes from the shared ``findings_cache``, so that once a
framework finishes, the entries no other pending framework needs can be
evicted from the cache (PROWLER-1733).
Args:
compliance_obj: A loaded Compliance framework object exposing a
``Requirements`` iterable, each requirement carrying ``Checks``.
``None`` is treated as "no checks" rather than raising, so the
caller can pass ``frameworks_bulk.get(...)`` directly without
an extra existence check.
Returns:
Set of check_id strings (empty if ``compliance_obj`` is ``None``).
"""
if compliance_obj is None:
return set()
checks: set[str] = set()
requirements = getattr(compliance_obj, "Requirements", None) or []
try:
# Defensive: Mock objects (used in unit tests) return another Mock
# for any attribute access, which is truthy but not iterable. Treat
# any non-iterable Requirements value as "no checks".
for req in requirements:
req_checks = getattr(req, "Checks", None) or []
try:
checks.update(req_checks)
except TypeError:
continue
except TypeError:
return set()
return checks
-488
View File
@@ -44,8 +44,6 @@ from api.models import (
Finding,
Resource,
ResourceFindingMapping,
ResourceTag,
ResourceTagMapping,
StateChoices,
StatusChoices,
)
@@ -369,317 +367,6 @@ class TestLoadFindingsForChecks:
assert result == {}
def test_prefetch_avoids_n_plus_one(self, tenants_fixture, scans_fixture):
"""Loading N findings must NOT execute O(N) extra queries for resources/tags.
Regression test for PROWLER-1733. ``FindingOutput.transform_api_finding``
reads ``finding.resources.first()`` and ``resource.tags.all()`` per
finding. Without ``prefetch_related`` that's 2N additional queries;
with prefetch it collapses to a small constant per iterator chunk.
"""
from django.test.utils import CaptureQueriesContext
from django.db import connections
tenant = tenants_fixture[0]
scan = scans_fixture[0]
# Build N findings, each linked to one resource that owns 2 tags.
N = 20
for i in range(N):
finding = Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"f-prefetch-{i}",
check_id="aws_check_prefetch",
status=StatusChoices.FAIL,
severity=Severity.high,
impact=Severity.high,
check_metadata={
"provider": "aws",
"checkid": "aws_check_prefetch",
"checktitle": "t",
"checktype": [],
"servicename": "s",
"subservicename": "",
"severity": "high",
"resourcetype": "r",
"description": "",
"risk": "",
"relatedurl": "",
"remediation": {
"recommendation": {"text": "", "url": ""},
"code": {
"nativeiac": "",
"terraform": "",
"cli": "",
"other": "",
},
},
"resourceidtemplate": "",
"categories": [],
"dependson": [],
"relatedto": [],
"notes": "",
},
raw_result={},
)
resource = Resource.objects.create(
tenant_id=tenant.id,
provider=scan.provider,
uid=f"r-prefetch-{i}",
name=f"r-prefetch-{i}",
metadata="{}",
details="",
region="us-east-1",
service="s",
type="t::r",
)
ResourceFindingMapping.objects.create(
tenant_id=tenant.id, finding=finding, resource=resource
)
for k in ("env", "owner"):
tag, _ = ResourceTag.objects.get_or_create(
tenant_id=tenant.id, key=k, value=f"v-{i}-{k}"
)
ResourceTagMapping.objects.create(
tenant_id=tenant.id, resource=resource, tag=tag
)
mock_provider = Mock()
mock_provider.type = "aws"
mock_provider.identity.account = "test"
# Patch transform_api_finding to a no-op so the test isolates queries
# to the queryset/prefetch path (transform itself is exercised by
# the integration tests above and not by this regression check).
with patch(
"tasks.jobs.threatscore_utils.FindingOutput.transform_api_finding",
side_effect=lambda model, provider: Mock(check_id=model.check_id),
):
with CaptureQueriesContext(
connections["default_read_replica"]
if "default_read_replica" in connections.databases
else connections["default"]
) as ctx:
_load_findings_for_requirement_checks(
str(tenant.id),
str(scan.id),
["aws_check_prefetch"],
mock_provider,
)
# Expected: a small constant number of queries irrespective of N.
# Pre-fix this would be ~1 + 2*N. We give some slack for RLS SET
# LOCAL statements that the rls_transaction emits.
assert len(ctx.captured_queries) < N, (
f"Expected O(1) queries with prefetch_related; got "
f"{len(ctx.captured_queries)} for N={N} (N+1 regression?)"
)
def test_max_findings_per_check_cap(self, tenants_fixture, scans_fixture):
"""When a check exceeds ``MAX_FINDINGS_PER_CHECK``, only ``cap`` rows
are loaded AND ``total_counts_out`` reports the pre-cap total.
Guards the PROWLER-1733 truncation knob: prevents both runaway memory
and silent data loss in the PDF (the banner relies on knowing the
real total).
"""
from unittest.mock import patch as _patch
tenant = tenants_fixture[0]
scan = scans_fixture[0]
# Create 12 findings for a single check; cap to 5.
check_id = "aws_check_cap_test"
for i in range(12):
finding = Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"f-cap-{i:02d}",
check_id=check_id,
status=StatusChoices.FAIL,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
)
resource = Resource.objects.create(
tenant_id=tenant.id,
provider=scan.provider,
uid=f"r-cap-{i:02d}",
name=f"r-cap-{i:02d}",
metadata="{}",
details="",
region="us-east-1",
service="s",
type="t::r",
)
ResourceFindingMapping.objects.create(
tenant_id=tenant.id, finding=finding, resource=resource
)
mock_provider = Mock(type="aws")
mock_provider.identity.account = "test"
totals: dict = {}
# Patch the cap to a small value AND skip the heavy transform so we
# only assert on row counts and totals.
with (
_patch("tasks.jobs.threatscore_utils.MAX_FINDINGS_PER_CHECK", 5),
_patch(
"tasks.jobs.threatscore_utils.FindingOutput.transform_api_finding",
side_effect=lambda model, provider: Mock(check_id=model.check_id),
),
):
result = _load_findings_for_requirement_checks(
str(tenant.id),
str(scan.id),
[check_id],
mock_provider,
total_counts_out=totals,
)
assert len(result[check_id]) == 5, (
f"cap=5 should yield exactly 5 loaded findings, got {len(result[check_id])}"
)
assert totals[check_id] == 12, (
f"total_counts_out should report the pre-cap total (12), got {totals[check_id]}"
)
def test_only_failed_findings_pushes_down_to_sql(
self, tenants_fixture, scans_fixture
):
"""When ``only_failed_findings=True``, PASS rows are excluded by the
DB filter, not just visually hidden afterwards.
Regression for the consistency fix: previously the requirement-level
``only_failed`` flag filtered which requirements appeared, but inside
each rendered requirement the table still showed PASS rows mixed
with FAIL, which combined with ``MAX_FINDINGS_PER_CHECK`` could
truncate to 1000 PASS findings and hide the actual failure.
"""
from unittest.mock import patch as _patch
tenant = tenants_fixture[0]
scan = scans_fixture[0]
check_id = "aws_check_only_failed_test"
# Mix PASS and FAIL so the filter has something to drop.
for i in range(6):
status = StatusChoices.FAIL if i % 2 == 0 else StatusChoices.PASS
finding = Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"f-of-{i:02d}",
check_id=check_id,
status=status,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
)
resource = Resource.objects.create(
tenant_id=tenant.id,
provider=scan.provider,
uid=f"r-of-{i:02d}",
name=f"r-of-{i:02d}",
metadata="{}",
details="",
region="us-east-1",
service="s",
type="t::r",
)
ResourceFindingMapping.objects.create(
tenant_id=tenant.id, finding=finding, resource=resource
)
mock_provider = Mock(type="aws")
mock_provider.identity.account = "test"
totals: dict = {}
with _patch(
"tasks.jobs.threatscore_utils.FindingOutput.transform_api_finding",
side_effect=lambda model, provider: Mock(
check_id=model.check_id, status=model.status
),
):
result = _load_findings_for_requirement_checks(
str(tenant.id),
str(scan.id),
[check_id],
mock_provider,
total_counts_out=totals,
only_failed_findings=True,
)
# 3 FAIL + 3 PASS in DB; FAIL-only filter should load just 3.
loaded = result[check_id]
assert len(loaded) == 3, f"expected 3 FAIL findings, got {len(loaded)}"
statuses = {getattr(f, "status", None) for f in loaded}
assert statuses == {StatusChoices.FAIL}, (
f"expected all loaded findings to be FAIL; got statuses {statuses}"
)
# total_counts must reflect the FAIL-only total, not the global total.
assert totals[check_id] == 3, (
f"total_counts should be FAIL-only (3), got {totals[check_id]}"
)
def test_max_findings_per_check_disabled(self, tenants_fixture, scans_fixture):
"""``MAX_FINDINGS_PER_CHECK=0`` disables the cap; load all rows."""
from unittest.mock import patch as _patch
tenant = tenants_fixture[0]
scan = scans_fixture[0]
check_id = "aws_check_uncapped"
for i in range(8):
f = Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"f-unc-{i:02d}",
check_id=check_id,
status=StatusChoices.FAIL,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
)
r = Resource.objects.create(
tenant_id=tenant.id,
provider=scan.provider,
uid=f"r-unc-{i:02d}",
name=f"r-unc-{i:02d}",
metadata="{}",
details="",
region="us-east-1",
service="s",
type="t::r",
)
ResourceFindingMapping.objects.create(
tenant_id=tenant.id, finding=f, resource=r
)
mock_provider = Mock(type="aws")
mock_provider.identity.account = "test"
totals: dict = {}
with (
_patch("tasks.jobs.threatscore_utils.MAX_FINDINGS_PER_CHECK", 0),
_patch(
"tasks.jobs.threatscore_utils.FindingOutput.transform_api_finding",
side_effect=lambda model, provider: Mock(check_id=model.check_id),
),
):
result = _load_findings_for_requirement_checks(
str(tenant.id),
str(scan.id),
[check_id],
mock_provider,
total_counts_out=totals,
)
assert len(result[check_id]) == 8
assert totals[check_id] == 8
class TestCleanupStaleTmpOutputDirectories:
"""Unit tests for opportunistic stale cleanup under tmp output root."""
@@ -1168,181 +855,6 @@ class TestGenerateComplianceReportsOptimized:
assert result["cis"] == {"upload": False, "path": ""}
mock_cis.assert_not_called()
@patch("api.utils.initialize_prowler_provider")
@patch("tasks.jobs.report.rmtree")
@patch("tasks.jobs.report._upload_to_s3")
@patch("tasks.jobs.report.generate_cis_report")
@patch("tasks.jobs.report.generate_csa_report")
@patch("tasks.jobs.report.generate_nis2_report")
@patch("tasks.jobs.report.generate_ens_report")
@patch("tasks.jobs.report.generate_threatscore_report")
@patch("tasks.jobs.report._generate_compliance_output_directory")
@patch("tasks.jobs.report._aggregate_requirement_statistics_from_database")
@patch("tasks.jobs.report.Compliance.get_bulk")
@patch("tasks.jobs.report.Provider.objects.get")
@patch("tasks.jobs.report.ScanSummary.objects.filter")
def test_findings_cache_eviction_after_framework(
self,
mock_scan_summary_filter,
mock_provider_get,
mock_get_bulk,
mock_aggregate_stats,
mock_generate_output_dir,
mock_threatscore,
mock_ens,
mock_nis2,
mock_csa,
mock_cis,
mock_upload_to_s3,
mock_rmtree,
mock_init_provider,
):
"""After each framework finishes, exclusive entries are evicted.
Threat scenario for PROWLER-1733: the shared ``findings_cache`` used
to grow monotonically through all 5 frameworks. With the new
eviction logic, check_ids only used by ThreatScore are dropped when
ThreatScore finishes, before ENS runs.
"""
from types import SimpleNamespace
from tasks.jobs import report as report_mod
mock_scan_summary_filter.return_value.exists.return_value = True
mock_provider_get.return_value = Mock(uid="provider-uid", provider="aws")
# ThreatScore consumes {tsc_only, shared}; ENS consumes {ens_only,
# shared}. After ThreatScore evicts, tsc_only must be gone but
# shared and ens_only must remain.
mock_get_bulk.return_value = {
"prowler_threatscore_aws": SimpleNamespace(
Requirements=[SimpleNamespace(Checks=["tsc_only", "shared"])]
),
"ens_rd2022_aws": SimpleNamespace(
Requirements=[SimpleNamespace(Checks=["ens_only", "shared"])]
),
}
mock_aggregate_stats.return_value = {}
mock_generate_output_dir.return_value = "/tmp/tenant/scan/x/prowler-out"
mock_upload_to_s3.return_value = "s3://bucket/tenant/scan/x/report.pdf"
mock_init_provider.return_value = Mock(name="prowler_provider")
# Seed the cache as if both frameworks had already loaded their
# findings. We mutate it indirectly: each generator wrapper is a
Mock; we make ThreatScore populate the cache, and have ENS observe
# the state at call time so we can introspect post-eviction.
observed_state: dict = {}
def _threatscore_side_effect(**kwargs):
cache = kwargs["findings_cache"]
cache["tsc_only"] = ["tsc-finding"]
cache["shared"] = ["shared-finding"]
def _ens_side_effect(**kwargs):
# ENS runs AFTER threatscore's _evict_after_framework("threatscore").
observed_state["cache_keys_when_ens_runs"] = set(
kwargs["findings_cache"].keys()
)
kwargs["findings_cache"]["ens_only"] = ["ens-finding"]
mock_threatscore.side_effect = _threatscore_side_effect
mock_ens.side_effect = _ens_side_effect
report_mod.generate_compliance_reports(
tenant_id=str(uuid.uuid4()),
scan_id=str(uuid.uuid4()),
provider_id=str(uuid.uuid4()),
generate_threatscore=True,
generate_ens=True,
generate_nis2=False,
generate_csa=False,
generate_cis=False,
)
# ``tsc_only`` was exclusive to ThreatScore → evicted before ENS ran.
# ``shared`` is still pending for ENS → must remain.
assert "tsc_only" not in observed_state["cache_keys_when_ens_runs"], (
"tsc_only should have been evicted before ENS ran"
)
assert "shared" in observed_state["cache_keys_when_ens_runs"], (
"shared must remain in cache because ENS still needs it"
)
@patch("api.utils.initialize_prowler_provider")
@patch("tasks.jobs.report.rmtree")
@patch("tasks.jobs.report._upload_to_s3")
@patch("tasks.jobs.report.generate_cis_report")
@patch("tasks.jobs.report.generate_csa_report")
@patch("tasks.jobs.report.generate_nis2_report")
@patch("tasks.jobs.report.generate_ens_report")
@patch("tasks.jobs.report.generate_threatscore_report")
@patch("tasks.jobs.report._generate_compliance_output_directory")
@patch("tasks.jobs.report._aggregate_requirement_statistics_from_database")
@patch("tasks.jobs.report.Compliance.get_bulk")
@patch("tasks.jobs.report.Provider.objects.get")
@patch("tasks.jobs.report.ScanSummary.objects.filter")
def test_prowler_provider_initialized_once(
self,
mock_scan_summary_filter,
mock_provider_get,
mock_get_bulk,
mock_aggregate_stats,
mock_generate_output_dir,
mock_threatscore,
mock_ens,
mock_nis2,
mock_csa,
mock_cis,
mock_upload_to_s3,
mock_rmtree,
mock_init_provider,
):
"""``initialize_prowler_provider`` must be called exactly once for
the whole batch (PROWLER-1733). Previously each generator re-init'd
the SDK provider in ``_load_compliance_data``: 5 inits per scan.
"""
mock_scan_summary_filter.return_value.exists.return_value = True
mock_provider_get.return_value = Mock(uid="provider-uid", provider="aws")
# CIS variant discovery needs at least one cis_* key.
mock_get_bulk.return_value = {"cis_6.0_aws": Mock()}
mock_aggregate_stats.return_value = {}
mock_generate_output_dir.return_value = "/tmp/tenant/scan/x/prowler-out"
mock_upload_to_s3.return_value = "s3://bucket/tenant/scan/x/report.pdf"
mock_init_provider.return_value = Mock(name="prowler_provider")
generate_compliance_reports(
tenant_id=str(uuid.uuid4()),
scan_id=str(uuid.uuid4()),
provider_id=str(uuid.uuid4()),
generate_threatscore=True,
generate_ens=True,
generate_nis2=True,
generate_csa=True,
generate_cis=True,
)
# All 5 wrappers were invoked once each…
mock_threatscore.assert_called_once()
mock_ens.assert_called_once()
mock_nis2.assert_called_once()
mock_csa.assert_called_once()
mock_cis.assert_called_once()
# …but the SDK provider was initialized only once.
assert mock_init_provider.call_count == 1, (
f"expected 1 init, got {mock_init_provider.call_count} "
f"(prowler_provider must be shared across reports)"
)
# The shared instance must reach every wrapper as kwargs.
shared = mock_init_provider.return_value
for mock_wrapper in (
mock_threatscore,
mock_ens,
mock_nis2,
mock_csa,
mock_cis,
):
_, call_kwargs = mock_wrapper.call_args
assert call_kwargs.get("prowler_provider") is shared
@patch("tasks.jobs.report.rmtree")
@patch("tasks.jobs.report._upload_to_s3")
@patch("tasks.jobs.report.generate_threatscore_report")
@@ -1269,48 +1269,6 @@ class TestComponentEdgeCases:
# Should be a LongTable for large datasets
assert isinstance(table, LongTable)
def test_zebra_uses_rowbackgrounds_not_per_row_background(self, monkeypatch):
"""The styles list must contain exactly one ROWBACKGROUNDS entry
regardless of row count, never N per-row BACKGROUND entries.
"""
captured: dict = {}
# Capture the list passed to TableStyle. create_data_table builds a
# list of style tuples and wraps it in a TableStyle exactly once;
# by patching TableStyle we intercept that list.
import tasks.jobs.reports.components as comp_mod
original_table_style = comp_mod.TableStyle
def _capture_table_style(style_list):
captured["styles"] = list(style_list)
return original_table_style(style_list)
monkeypatch.setattr(comp_mod, "TableStyle", _capture_table_style)
data = [{"name": f"Item {i}"} for i in range(60)]
columns = [ColumnConfig("Name", 2 * inch, "name")]
comp_mod.create_data_table(data, columns, alternate_rows=True)
styles = captured["styles"]
# Count by command name.
names = [s[0] for s in styles if isinstance(s, tuple) and s]
# Exactly one ROWBACKGROUNDS entry.
assert names.count("ROWBACKGROUNDS") == 1
# Zero per-row BACKGROUND entries on data rows. (The header row
# BACKGROUND command is intentional and lives at coords (0,0)/(-1,0).)
data_row_bg = [
s
for s in styles
if isinstance(s, tuple)
and s[0] == "BACKGROUND"
and not (s[1] == (0, 0) and s[2] == (-1, 0))
]
assert data_row_bg == [], (
f"expected no per-row BACKGROUND entries on data rows; "
f"got {len(data_row_bg)}"
)
def test_create_risk_component_zero_values(self):
"""Test risk component with zero values."""
component = create_risk_component(risk_level=0, weight=0, score=0)
@@ -1386,194 +1344,3 @@ class TestFrameworkConfigEdgeCases:
assert get_framework_config("my_custom_threatscore_compliance") is not None
assert get_framework_config("ens_something_else") is not None
assert get_framework_config("nis2_gcp") is not None
# =============================================================================
# Findings Table Chunking Tests (PROWLER-1733)
# =============================================================================
#
# These tests guard the OOM-prevention behaviour added in PROWLER-1733:
# ``_create_findings_tables`` must split a list of findings into multiple
# small sub-tables instead of producing one giant Table, which would force
# ReportLab to resolve layout for all rows at once and OOM the worker on
# scans with thousands of findings per check.
class _DummyMetadata:
"""Lightweight stand-in for FindingOutput.metadata used in chunking tests."""
def __init__(self, check_title: str = "Title", severity: str = "high"):
self.CheckTitle = check_title
self.Severity = severity
class _DummyFinding:
"""Lightweight stand-in for FindingOutput used in chunking tests.
The chunking code only reads a small set of attributes via ``getattr``,
so a duck-typed object is enough and lets the tests run without touching
the DB or pydantic deserialisation.
"""
def __init__(
self,
check_id: str = "aws_check",
resource_name: str = "res-1",
resource_uid: str = "",
status: str = "FAIL",
region: str = "us-east-1",
with_metadata: bool = True,
):
self.check_id = check_id
self.resource_name = resource_name
self.resource_uid = resource_uid
self.status = status
self.region = region
if with_metadata:
self.metadata = _DummyMetadata()
else:
self.metadata = None
def _make_concrete_generator():
"""Return a minimal concrete subclass of BaseComplianceReportGenerator."""
class _Concrete(BaseComplianceReportGenerator):
def create_executive_summary(self, data):
return []
def create_charts_section(self, data):
return []
def create_requirements_index(self, data):
return []
return _Concrete(FrameworkConfig(name="test", display_name="Test"))
class TestFindingsTableChunking:
"""Tests for ``_create_findings_tables`` (PROWLER-1733)."""
def test_chunking_produces_expected_number_of_subtables(self):
"""5000 findings @ chunk_size=300 → 17 sub-tables + 16 spacers."""
generator = _make_concrete_generator()
findings = [_DummyFinding(check_id="c1") for _ in range(5000)]
flowables = generator._create_findings_tables(findings, chunk_size=300)
tables = [f for f in flowables if isinstance(f, (Table, LongTable))]
spacers = [f for f in flowables if isinstance(f, Spacer)]
# ceil(5000 / 300) == 17
assert len(tables) == 17
# Spacer between every pair of contiguous tables, not after the last
assert len(spacers) == 16
def test_chunk_size_param_overrides_default(self):
"""250 findings @ chunk_size=100 → 3 sub-tables."""
generator = _make_concrete_generator()
findings = [_DummyFinding(check_id="c2") for _ in range(250)]
flowables = generator._create_findings_tables(findings, chunk_size=100)
tables = [f for f in flowables if isinstance(f, (Table, LongTable))]
assert len(tables) == 3
def test_empty_findings_returns_empty_list(self):
"""No findings → no flowables. Callers can extend(...) safely."""
generator = _make_concrete_generator()
assert generator._create_findings_tables([]) == []
def test_single_chunk_has_no_spacer(self):
"""A single sub-table must not emit a trailing spacer."""
generator = _make_concrete_generator()
findings = [_DummyFinding(check_id="c3") for _ in range(10)]
flowables = generator._create_findings_tables(findings, chunk_size=300)
assert len(flowables) == 1
assert isinstance(flowables[0], (Table, LongTable))
def test_malformed_finding_is_skipped(self):
"""A broken finding must not abort the report; it is logged and skipped."""
generator = _make_concrete_generator()
class _Broken:
# No attributes at all; getattr() defaults will mostly cope, but
# we force an explicit error by making the metadata attribute
# itself raise on access.
@property
def metadata(self):
raise RuntimeError("boom")
check_id = "broken"
findings = [
_DummyFinding(check_id="c4"),
_Broken(),
_DummyFinding(check_id="c4"),
]
flowables = generator._create_findings_tables(findings, chunk_size=300)
# Two good rows → one sub-table containing them; the broken one is
# logged and dropped, not propagated.
tables = [f for f in flowables if isinstance(f, (Table, LongTable))]
assert len(tables) == 1
def test_create_findings_table_alias_returns_first_chunk(self):
"""The deprecated alias must keep returning a single Table flowable."""
generator = _make_concrete_generator()
findings = [_DummyFinding(check_id="c5") for _ in range(700)]
first = generator._create_findings_table(findings)
assert isinstance(first, (Table, LongTable))
def test_create_findings_table_alias_empty(self):
"""Alias on empty input returns an empty (header-only) Table, not None."""
generator = _make_concrete_generator()
result = generator._create_findings_table([])
# The legacy alias never returned None; an empty header-only table
# is a strict superset of that contract.
assert isinstance(result, (Table, LongTable))
# =============================================================================
# Logging Context Manager Tests (PROWLER-1733)
# =============================================================================
class TestLogPhaseContextManager:
"""Tests for ``_log_phase`` (PROWLER-1733).
The context manager emits structured ``phase_start`` / ``phase_end``
logs with ``scan_id``, ``framework`` and ``elapsed_s``, so Datadog/
CloudWatch queries can pivot by scan and find the slow section.
"""
def test_emits_start_and_end_with_elapsed_and_rss(self, caplog):
from tasks.jobs.reports.base import _log_phase
caplog.set_level("INFO", logger="tasks.jobs.reports.base")
with _log_phase("unit_test_phase", scan_id="s-1", framework="Test FW"):
pass
messages = [r.getMessage() for r in caplog.records]
starts = [m for m in messages if "phase_start" in m]
ends = [m for m in messages if "phase_end" in m]
assert len(starts) == 1 and len(ends) == 1
assert "phase=unit_test_phase" in starts[0]
assert "scan_id=s-1" in starts[0]
assert "framework=Test FW" in starts[0]
assert "elapsed_s=" in ends[0]
assert "rss_kb=" in ends[0]
assert "delta_rss_kb=" in ends[0]
def test_failure_logs_phase_failed_and_reraises(self, caplog):
from tasks.jobs.reports.base import _log_phase
caplog.set_level("INFO", logger="tasks.jobs.reports.base")
with pytest.raises(RuntimeError, match="boom"):
with _log_phase("failing_phase", scan_id="s-2", framework="FW"):
raise RuntimeError("boom")
messages = [r.getMessage() for r in caplog.records]
assert any("phase_failed" in m and "failing_phase" in m for m in messages)
# No phase_end on the failure path.
assert not any("phase_end" in m for m in messages)
-335
View File
@@ -1,335 +0,0 @@
# AWS Inventory Connectivity Graph
A community-contributed tool that generates interactive connectivity graphs from Prowler AWS scans, visualizing relationships between AWS resources with zero additional API calls.
## Overview
This tool extends Prowler by producing two artifacts after a scan completes:
- **`<output>.inventory.json`**: Machine-readable graph (nodes + edges)
- **`<output>.inventory.html`**: Interactive D3.js force-directed visualization
### Why?
Prowler's existing outputs (CSV, ASFF, OCSF, HTML) report individual check findings but provide no cross-service topology view. Security engineers need to understand **how** resources are connected—which Lambda functions sit inside which VPC, which IAM roles can be assumed by which services, which event sources trigger which functions—before they can reason about attack paths, blast radius, or lateral-movement risk.
This tool fills that gap by building a connectivity graph from the service clients that are already loaded during a Prowler scan.
## Features
### Supported AWS Services
The tool currently extracts connectivity information from:
- **Lambda**: Functions, VPC/subnet/SG edges, event source mappings, layers, DLQ, KMS
- **EC2**: Instances, security groups, subnet/VPC edges
- **VPC**: VPCs, subnets, peering connections
- **RDS**: DB instances, VPC/SG/cluster/KMS edges
- **ELBv2**: ALB/NLB load balancers, SG and VPC edges
- **S3**: Buckets, replication targets, logging buckets, KMS keys
- **IAM**: Roles, trust-relationship edges (who can assume what)
### Edge Semantic Types
Edges are typed for downstream filtering and attack-path analysis:
- `network`: Resources share a network path (VPC/subnet/SG)
- `iam`: IAM trust or permission relationship
- `triggers`: One resource can invoke another (event source → Lambda)
- `data_flow`: Data is written/read (Lambda → SQS dead-letter queue)
- `depends_on`: Soft dependency (Lambda layer, subnet belongs to VPC)
- `routes_to`: Traffic routing (LB → target)
- `replicates_to`: S3 replication
- `encrypts`: KMS key encrypts the resource
- `logs_to`: Logging relationship
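For instance, a downstream consumer can filter the graph by edge class before doing any attack-path reasoning. A minimal sketch in Python, assuming the `*.inventory.json` schema shown under Output Files below (the file path is illustrative):

```python
import json

# Load a previously generated inventory graph (path is illustrative).
with open("output/my-aws-inventory.inventory.json") as f:
    graph = json.load(f)

# Keep only the edge classes most relevant to attack-path analysis.
interesting = {"iam", "triggers"}
for edge in graph["edges"]:
    if edge["edge_type"] in interesting:
        print(f'{edge["source_id"]} --[{edge["label"]}]--> {edge["target_id"]}')
```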
### Interactive HTML Graph Features
- Force-directed layout with drag-and-drop node pinning
- Zoom / pan (mouse wheel + click-drag on background)
- Per-service color-coded nodes with a legend
- Hover tooltips showing ARN + all metadata properties
- Service filter dropdown (show only Lambda, EC2, RDS, etc.)
- Adjustable link-distance and charge-strength physics sliders
- Edge labels on every arrow
## Installation
### Prerequisites
- Python 3.9.1 or higher
- Prowler installed and configured (see [Prowler documentation](https://docs.prowler.com/))
### Setup
1. Clone or download this directory to your local machine
2. Ensure Prowler is installed and working
3. No additional dependencies required beyond Prowler's existing requirements
## Usage
### Basic Usage
Run Prowler with your desired checks, then use the inventory graph script:
```bash
# Run Prowler scan (example)
prowler aws --output-formats csv
# Generate inventory graph from the scan
python contrib/inventory-graph/inventory_graph.py --output-directory ./output
```
### Command-Line Options
```bash
python contrib/inventory-graph/inventory_graph.py [OPTIONS]
Options:
--output-directory DIR Directory to save output files (default: ./output)
--output-filename NAME Base filename without extension (default: prowler-inventory-<timestamp>)
--help Show this help message and exit
```
### Example Workflow
```bash
# 1. Run a Prowler scan on your AWS account
prowler aws --profile my-aws-profile --output-formats csv html
# 2. Generate the inventory graph
python contrib/inventory-graph/inventory_graph.py \
--output-directory ./output \
--output-filename my-aws-inventory
# 3. Open the HTML file in your browser
open output/my-aws-inventory.inventory.html
```
### Integration with Prowler Scans
The tool reads from already-loaded AWS service clients in memory (via `sys.modules`). This means:
- **Zero extra AWS API calls**: Uses data already collected during the Prowler scan
- **Graceful degradation**: Services not scanned are silently skipped
- **Flexible**: Works with any subset of Prowler checks
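A minimal sketch of that lookup pattern (the helper name is illustrative; the module key and attribute name match the registry described under Architecture):

```python
import sys

def get_loaded_client(module_key: str, attr: str):
    """Return a Prowler service client only if its module is already imported."""
    module = sys.modules.get(module_key)
    # A service that was never scanned has no client module loaded: skip it.
    return getattr(module, attr, None) if module is not None else None

client = get_loaded_client(
    "prowler.providers.aws.services.awslambda.awslambda_client",
    "awslambda_client",
)
if client is None:
    print("Lambda was not scanned in this session; skipping.")
```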
## Output Files
### JSON Output (`*.inventory.json`)
Machine-readable graph structure:
```json
{
"generated_at": "2026-03-19T12:34:56Z",
"nodes": [
{
"id": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
"type": "lambda_function",
"name": "my-function",
"service": "lambda",
"region": "us-east-1",
"account_id": "123456789012",
"properties": {
"runtime": "python3.9",
"vpc_id": "vpc-abc123"
}
}
],
"edges": [
{
"source_id": "arn:aws:lambda:...",
"target_id": "arn:aws:ec2:...:vpc/vpc-abc123",
"edge_type": "network",
"label": "in-vpc"
}
],
"stats": {
"node_count": 42,
"edge_count": 87
}
}
```
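For programmatic consumption, a short sketch that tallies nodes per service from this structure (field names as shown above; the file path is illustrative):

```python
import json
from collections import Counter

with open("output/my-aws-inventory.inventory.json") as f:
    graph = json.load(f)

# Tally nodes per AWS service using the "service" field shown above.
by_service = Counter(node["service"] for node in graph["nodes"])
for service, count in by_service.most_common():
    print(f"{service}: {count} nodes")

stats = graph["stats"]
print(f'total: {stats["node_count"]} nodes, {stats["edge_count"]} edges')
```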
### HTML Output (`*.inventory.html`)
Self-contained interactive visualization that opens in any modern browser. No server or build step required.
## Architecture
### Design Decisions
| Decision | Rationale |
|----------|-----------|
| **Read from sys.modules** | Zero extra AWS API calls; services not scanned are silently skipped |
| **Self-contained HTML** | D3.js v7 via CDN; no server, no build step; opens in any browser |
| **One extractor per service** | Each extractor is independently testable; adding a new service = one new file + one line in the registry |
| **Typed edges** | Semantic types allow downstream consumers (attack-path tools, Neo4j import) to filter by relationship class |
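As an example of the Neo4j-import use case from the table above, a naive sketch that emits Cypher statements from the JSON output (no identifier escaping or batching; a real importer would use parameterized MERGE queries):

```python
import json

with open("output/my-aws-inventory.inventory.json") as f:
    graph = json.load(f)

# One MERGE per node, labelled by resource type (e.g. lambda_function).
for n in graph["nodes"]:
    print(f'MERGE (:{n["type"]} {{id: "{n["id"]}", name: "{n["name"]}"}});')

# One relationship per edge, typed by the semantic edge class.
for e in graph["edges"]:
    print(
        f'MATCH (a {{id: "{e["source_id"]}"}}), (b {{id: "{e["target_id"]}"}}) '
        f'MERGE (a)-[:{e["edge_type"].upper()}]->(b);'
    )
```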
### Project Structure
```
contrib/inventory-graph/
├── README.md # This file
├── inventory_graph.py # Main entry point script
├── lib/
│ ├── __init__.py
│ ├── models.py # ResourceNode, ResourceEdge, ConnectivityGraph dataclasses
│ ├── graph_builder.py # Reads loaded service clients from sys.modules
│ ├── inventory_output.py # write_json(), write_html()
│ └── extractors/
│ ├── __init__.py
│ ├── lambda_extractor.py # Lambda functions → VPC/subnet/SG/event-sources/layers/DLQ/KMS
│ ├── ec2_extractor.py # EC2 instances + security groups → subnet/VPC
│ ├── vpc_extractor.py # VPCs, subnets, peering connections
│ ├── rds_extractor.py # RDS instances → VPC/SG/cluster/KMS
│ ├── elbv2_extractor.py # ALB/NLB load balancers → SG/VPC
│ ├── s3_extractor.py # S3 buckets → replication targets/logging buckets/KMS keys
│ └── iam_extractor.py # IAM roles + trust-relationship edges
└── examples/
└── sample_output.html # Example output (optional)
```
## Testing
### Smoke Test (No AWS Credentials Needed)
```python
import sys
from unittest.mock import MagicMock
# Wire a fake Lambda client
mock_module = MagicMock()
mock_fn = MagicMock()
mock_fn.arn = "arn:aws:lambda:us-east-1:123:function:test"
mock_fn.name = "test"
mock_fn.region = "us-east-1"
mock_fn.vpc_id = "vpc-abc"
mock_fn.security_groups = ["sg-111"]
mock_fn.subnet_ids = {"subnet-aaa"}
mock_fn.environment = None
mock_fn.kms_key_arn = None
mock_fn.layers = []
mock_fn.dead_letter_config = None
mock_fn.event_source_mappings = []
mock_module.awslambda_client.functions = {mock_fn.arn: mock_fn}
mock_module.awslambda_client.audited_account = "123"
sys.modules["prowler.providers.aws.services.awslambda.awslambda_client"] = mock_module
from contrib.inventory_graph.lib.graph_builder import build_graph
from contrib.inventory_graph.lib.inventory_output import write_json, write_html
graph = build_graph()
write_json(graph, "/tmp/test.inventory.json")
write_html(graph, "/tmp/test.inventory.html")
# Open /tmp/test.inventory.html in a browser
```
## Extending
### Adding a New Service
1. Create a new extractor file in `lib/extractors/` (e.g., `dynamodb_extractor.py`)
2. Implement the `extract(client)` function that returns `(nodes, edges)`
3. Register it in `lib/graph_builder.py` in the `_SERVICE_REGISTRY` tuple
Example extractor template:
```python
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""Extract DynamoDB tables and their relationships."""
nodes = []
edges = []
for table in client.tables:
nodes.append(
ResourceNode(
id=table.arn,
type="dynamodb_table",
name=table.name,
service="dynamodb",
region=table.region,
account_id=client.audited_account,
properties={"billing_mode": table.billing_mode}
)
)
# Add edges for KMS encryption, streams, etc.
if table.kms_key_arn:
edges.append(
ResourceEdge(
source_id=table.kms_key_arn,
target_id=table.arn,
edge_type="encrypts",
label="encrypts"
)
)
return nodes, edges
```
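After step 3, the registry entry follows the same tuple shape used in `lib/graph_builder.py` (the DynamoDB client module path below is hypothetical):

```python
# In lib/graph_builder.py, appended to _SERVICE_REGISTRY:
(
    "prowler.providers.aws.services.dynamodb.dynamodb_client",  # hypothetical module key
    "dynamodb_client",
    "lib.extractors.dynamodb_extractor",
),
```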
## Troubleshooting
### No nodes discovered
**Problem:** The tool reports "no nodes discovered" after running.
**Solution:** Ensure you've run a Prowler scan first. The tool reads from in-memory service clients loaded during the scan. If no services were scanned, no nodes will be discovered.
### Missing services in the graph
**Problem:** Some AWS services are not appearing in the graph.
**Solution:** The tool only includes services that have been scanned by Prowler. Run Prowler with the services you want to include, or run without service filters to scan all available services.
### HTML file doesn't display properly
**Problem:** The HTML visualization doesn't load or shows errors.
**Solution:**
- Ensure you're opening the file in a modern browser (Chrome, Firefox, Safari, Edge)
- Check your browser's console for JavaScript errors
- Verify the file was generated completely (check file size > 0)
- The HTML requires internet access to load D3.js from a CDN
## Roadmap
Potential future enhancements:
- [ ] Support for additional AWS services (DynamoDB, SQS, SNS, etc.)
- [ ] Export to Neo4j / Cartography format
- [ ] Attack path analysis integration
- [ ] Multi-account/multi-region aggregation
- [ ] Custom edge type filtering in HTML UI
- [ ] Graph diff between two scans
## Contributing
This is a community contribution. If you'd like to enhance it:
1. Fork the Prowler repository
2. Make your changes in `contrib/inventory-graph/`
3. Test thoroughly
4. Submit a pull request with a clear description
## License
This tool is part of the Prowler project and is licensed under the Apache License 2.0.
## Credits
- **Author:** [@sandiyochristan](https://github.com/sandiyochristan)
- **Related PR:** [#10382](https://github.com/prowler-cloud/prowler/pull/10382)
- **Prowler Project:** [prowler-cloud/prowler](https://github.com/prowler-cloud/prowler)
## Support
For issues or questions:
- Open an issue in the [Prowler repository](https://github.com/prowler-cloud/prowler/issues)
- Join the [Prowler Community Slack](https://goto.prowler.com/slack)
- Tag your issue with `contrib:inventory-graph`
@@ -1,181 +0,0 @@
#!/usr/bin/env python3
"""
Example: Generate AWS Inventory Graph with Mock Data
This example demonstrates how to use the inventory graph tool with mock AWS data.
No AWS credentials required.
"""
import sys
from pathlib import Path
from unittest.mock import MagicMock
# Add parent directory to path
sys.path.insert(0, str(Path(__file__).parent.parent))
from lib.graph_builder import build_graph
from lib.inventory_output import write_json, write_html
def create_mock_lambda_client():
"""Create a mock Lambda client with sample data."""
mock_module = MagicMock()
# Create a mock Lambda function
mock_fn = MagicMock()
mock_fn.arn = "arn:aws:lambda:us-east-1:123456789012:function:my-test-function"
mock_fn.name = "my-test-function"
mock_fn.region = "us-east-1"
mock_fn.vpc_id = "vpc-abc123"
mock_fn.security_groups = ["sg-111222"]
mock_fn.subnet_ids = {"subnet-aaa111", "subnet-bbb222"}
mock_fn.environment = {"Variables": {"ENV": "production"}}
mock_fn.kms_key_arn = (
"arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
)
mock_fn.layers = []
mock_fn.dead_letter_config = None
mock_fn.event_source_mappings = []
mock_module.awslambda_client.functions = {mock_fn.arn: mock_fn}
mock_module.awslambda_client.audited_account = "123456789012"
return mock_module
def create_mock_ec2_client():
"""Create a mock EC2 client with sample data."""
mock_module = MagicMock()
# Create a mock EC2 instance
mock_instance = MagicMock()
mock_instance.arn = (
"arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0"
)
mock_instance.id = "i-1234567890abcdef0"
mock_instance.region = "us-east-1"
mock_instance.vpc_id = "vpc-abc123"
mock_instance.subnet_id = "subnet-aaa111"
mock_instance.security_groups = [MagicMock(id="sg-111222")]
mock_instance.state = "running"
mock_instance.type = "t3.micro"
mock_instance.tags = [{"Key": "Name", "Value": "test-instance"}]
# Create a mock security group
mock_sg = MagicMock()
mock_sg.arn = "arn:aws:ec2:us-east-1:123456789012:security-group/sg-111222"
mock_sg.id = "sg-111222"
mock_sg.name = "test-security-group"
mock_sg.region = "us-east-1"
mock_sg.vpc_id = "vpc-abc123"
mock_module.ec2_client.instances = [mock_instance]
mock_module.ec2_client.security_groups = [mock_sg]
mock_module.ec2_client.audited_account = "123456789012"
return mock_module
def create_mock_vpc_client():
"""Create a mock VPC client with sample data."""
mock_module = MagicMock()
# Create a mock VPC
mock_vpc = MagicMock()
mock_vpc.arn = "arn:aws:ec2:us-east-1:123456789012:vpc/vpc-abc123"
mock_vpc.id = "vpc-abc123"
mock_vpc.region = "us-east-1"
mock_vpc.cidr_block = "10.0.0.0/16"
mock_vpc.tags = [{"Key": "Name", "Value": "test-vpc"}]
# Create mock subnets
mock_subnet1 = MagicMock()
mock_subnet1.arn = "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-aaa111"
mock_subnet1.id = "subnet-aaa111"
mock_subnet1.region = "us-east-1"
mock_subnet1.vpc_id = "vpc-abc123"
mock_subnet1.cidr_block = "10.0.1.0/24"
mock_subnet1.availability_zone = "us-east-1a"
mock_subnet2 = MagicMock()
mock_subnet2.arn = "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-bbb222"
mock_subnet2.id = "subnet-bbb222"
mock_subnet2.region = "us-east-1"
mock_subnet2.vpc_id = "vpc-abc123"
mock_subnet2.cidr_block = "10.0.2.0/24"
mock_subnet2.availability_zone = "us-east-1b"
mock_module.vpc_client.vpcs = [mock_vpc]
mock_module.vpc_client.subnets = [mock_subnet1, mock_subnet2]
mock_module.vpc_client.vpc_peering_connections = []
mock_module.vpc_client.audited_account = "123456789012"
return mock_module
def main():
"""Main function to demonstrate the inventory graph generation."""
print("=" * 70)
print("AWS Inventory Graph - Mock Data Example")
print("=" * 70)
print()
# Create mock clients and inject them into sys.modules
print("Creating mock AWS service clients...")
sys.modules["prowler.providers.aws.services.awslambda.awslambda_client"] = (
create_mock_lambda_client()
)
sys.modules["prowler.providers.aws.services.ec2.ec2_client"] = (
create_mock_ec2_client()
)
sys.modules["prowler.providers.aws.services.vpc.vpc_client"] = (
create_mock_vpc_client()
)
print("✓ Mock clients created")
print()
# Build the graph
print("Building connectivity graph...")
graph = build_graph()
print(f"✓ Graph built: {len(graph.nodes)} nodes, {len(graph.edges)} edges")
print()
# Display discovered nodes
print("Discovered nodes:")
for node in graph.nodes:
print(f" - {node.type}: {node.name} ({node.region})")
print()
# Display discovered edges
print("Discovered edges:")
for edge in graph.edges:
source_node = next((n for n in graph.nodes if n.id == edge.source_id), None)
target_node = next((n for n in graph.nodes if n.id == edge.target_id), None)
source_name = source_node.name if source_node else edge.source_id
target_name = target_node.name if target_node else edge.target_id
print(f" - {source_name} --[{edge.edge_type}]--> {target_name}")
print()
# Write outputs
output_dir = Path(__file__).parent
json_path = output_dir / "example_output.inventory.json"
html_path = output_dir / "example_output.inventory.html"
print("Writing output files...")
write_json(graph, str(json_path))
write_html(graph, str(html_path))
print(f"✓ JSON written to: {json_path}")
print(f"✓ HTML written to: {html_path}")
print()
print("=" * 70)
print("✓ Example complete!")
print("=" * 70)
print()
print(f"Open the HTML file to view the interactive graph:")
print(f" open {html_path}")
print()
if __name__ == "__main__":
main()
-158
View File
@@ -1,158 +0,0 @@
#!/usr/bin/env python3
"""
AWS Inventory Connectivity Graph Generator
A standalone tool that generates interactive connectivity graphs from Prowler AWS scans.
This tool reads from already-loaded AWS service clients in memory and produces:
- JSON graph (nodes + edges)
- Interactive HTML visualization
Usage:
python inventory_graph.py --output-directory ./output --output-filename my-inventory
For more information, see README.md
"""
import argparse
import os
import sys
from datetime import datetime
from pathlib import Path
# Add the contrib directory to the path so we can import the lib modules
CONTRIB_DIR = Path(__file__).parent
sys.path.insert(0, str(CONTRIB_DIR))
from lib.graph_builder import build_graph
from lib.inventory_output import write_json, write_html
def parse_arguments():
"""Parse command-line arguments."""
parser = argparse.ArgumentParser(
description="Generate AWS inventory connectivity graph from Prowler scan data",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Generate graph with default settings
python inventory_graph.py
# Specify custom output directory and filename
python inventory_graph.py --output-directory ./my-output --output-filename aws-inventory
# After running a Prowler scan
prowler aws --profile my-profile
python inventory_graph.py --output-directory ./output
For more information, see README.md
""",
)
parser.add_argument(
"--output-directory",
"-o",
default="./output",
help="Directory to save output files (default: ./output)",
)
parser.add_argument(
"--output-filename",
"-f",
default=None,
help="Base filename without extension (default: prowler-inventory-<timestamp>)",
)
parser.add_argument(
"--verbose",
"-v",
action="store_true",
help="Enable verbose output",
)
return parser.parse_args()
def main():
"""Main entry point for the inventory graph generator."""
args = parse_arguments()
# Set up output paths
output_dir = Path(args.output_directory)
output_dir.mkdir(parents=True, exist_ok=True)
# Generate filename with timestamp if not provided
if args.output_filename:
base_filename = args.output_filename
else:
timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
base_filename = f"prowler-inventory-{timestamp}"
json_path = output_dir / f"{base_filename}.inventory.json"
html_path = output_dir / f"{base_filename}.inventory.html"
print("=" * 70)
print("AWS Inventory Connectivity Graph Generator")
print("=" * 70)
print()
# Build the graph from loaded service clients
if args.verbose:
print("Building connectivity graph from loaded AWS service clients...")
graph = build_graph()
# Check if any nodes were discovered
if not graph.nodes:
print("⚠️ WARNING: No nodes discovered!")
print()
print("This usually means:")
print(" 1. No Prowler scan has been run yet in this Python session")
print(" 2. No AWS service clients are loaded in memory")
print()
print("To fix this:")
print(" 1. Run a Prowler scan first: prowler aws --output-formats csv")
print(" 2. Then run this script in the same session")
print()
print(
"Alternatively, integrate this tool directly into Prowler's output pipeline."
)
sys.exit(1)
print(f"✓ Discovered {len(graph.nodes)} nodes and {len(graph.edges)} edges")
print()
# Write outputs
if args.verbose:
print(f"Writing JSON output to: {json_path}")
write_json(graph, str(json_path))
if args.verbose:
print(f"Writing HTML output to: {html_path}")
write_html(graph, str(html_path))
print()
print("=" * 70)
print("✓ Graph generation complete!")
print("=" * 70)
print()
print(f"📄 JSON: {json_path}")
print(f"🌐 HTML: {html_path}")
print()
print(f"Open the HTML file in your browser to explore the interactive graph:")
print(f" open {html_path}")
print()
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
print("\n\nInterrupted by user. Exiting...")
sys.exit(130)
except Exception as e:
print(f"\n❌ Error: {e}", file=sys.stderr)
if "--verbose" in sys.argv or "-v" in sys.argv:
import traceback
traceback.print_exc()
sys.exit(1)
@@ -1,94 +0,0 @@
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract EC2 instance and security-group nodes with their edges.
Edges produced:
- instance → security-group [network]
- instance → subnet [network]
- security-group → VPC [network]
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
# EC2 Instances
for instance in client.instances:
name = instance.id
for tag in instance.tags or []:
if tag.get("Key") == "Name":
name = tag["Value"]
break
props = {
"instance_type": getattr(instance, "type", None),
"state": getattr(instance, "state", None),
"vpc_id": getattr(instance, "vpc_id", None),
"subnet_id": getattr(instance, "subnet_id", None),
"public_ip": getattr(instance, "public_ip_address", None),
"private_ip": getattr(instance, "private_ip_address", None),
}
nodes.append(
ResourceNode(
id=instance.arn,
type="ec2_instance",
name=name,
service="ec2",
region=instance.region,
account_id=client.audited_account,
properties={k: v for k, v in props.items() if v is not None},
)
)
for sg_id in instance.security_groups or []:
edges.append(
ResourceEdge(
source_id=instance.arn,
target_id=sg_id,
edge_type="network",
label="sg",
)
)
if instance.subnet_id:
edges.append(
ResourceEdge(
source_id=instance.arn,
target_id=instance.subnet_id,
edge_type="network",
label="subnet",
)
)
# Security Groups
for sg in client.security_groups.values():
name = (
sg.name if hasattr(sg, "name") else sg.id if hasattr(sg, "id") else sg.arn
)
nodes.append(
ResourceNode(
id=sg.arn,
type="security_group",
name=name,
service="ec2",
region=sg.region,
account_id=client.audited_account,
properties={"vpc_id": sg.vpc_id},
)
)
if sg.vpc_id:
edges.append(
ResourceEdge(
source_id=sg.arn,
target_id=sg.vpc_id,
edge_type="network",
label="in-vpc",
)
)
return nodes, edges
@@ -1,60 +0,0 @@
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract ELBv2 (ALB/NLB) load balancer nodes and their edges.
Edges produced:
- load_balancer → security-group [network]
- load_balancer → VPC [network]
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
for lb in client.loadbalancersv2.values():
props = {
"type": getattr(lb, "type", None),
"scheme": getattr(lb, "scheme", None),
"dns_name": getattr(lb, "dns", None),
"vpc_id": getattr(lb, "vpc_id", None),
}
name = getattr(lb, "name", lb.arn.split("/")[-2] if "/" in lb.arn else lb.arn)
nodes.append(
ResourceNode(
id=lb.arn,
type="load_balancer",
name=name,
service="elbv2",
region=lb.region,
account_id=client.audited_account,
properties={k: v for k, v in props.items() if v is not None},
)
)
for sg_id in lb.security_groups or []:
edges.append(
ResourceEdge(
source_id=lb.arn,
target_id=sg_id,
edge_type="network",
label="sg",
)
)
vpc_id = getattr(lb, "vpc_id", None)
if vpc_id:
edges.append(
ResourceEdge(
source_id=lb.arn,
target_id=vpc_id,
edge_type="network",
label="in-vpc",
)
)
return nodes, edges
@@ -1,84 +0,0 @@
import json
from typing import Any, Dict, List, Tuple
from prowler.lib.logger import logger
from lib.models import ResourceEdge, ResourceNode
def _parse_trust_principals(assume_role_policy: Any) -> List[str]:
"""
Return a flat list of principal strings from an IAM assume-role policy document.
The policy may be a dict already or a JSON string.
"""
if not assume_role_policy:
return []
if isinstance(assume_role_policy, str):
try:
assume_role_policy = json.loads(assume_role_policy)
except (json.JSONDecodeError, ValueError):
return []
principals = []
for statement in assume_role_policy.get("Statement", []):
principal = statement.get("Principal", {})
if isinstance(principal, str):
principals.append(principal)
elif isinstance(principal, dict):
for v in principal.values():
if isinstance(v, list):
principals.extend(v)
else:
principals.append(v)
elif isinstance(principal, list):
principals.extend(principal)
return principals
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract IAM role nodes and their trust-relationship edges.
Edges produced:
- trusted-principal → role [iam] (who can assume this role)
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
for role in client.roles:
props: Dict[str, Any] = {
"path": getattr(role, "path", None),
"create_date": str(getattr(role, "create_date", "") or ""),
}
nodes.append(
ResourceNode(
id=role.arn,
type="iam_role",
name=role.name,
service="iam",
region="global",
account_id=client.audited_account,
properties={k: v for k, v in props.items() if v},
)
)
# Trust-relationship edges: principal → role (principal CAN assume role)
try:
for principal in _parse_trust_principals(role.assume_role_policy):
if principal and principal != "*":
edges.append(
ResourceEdge(
source_id=principal,
target_id=role.arn,
edge_type="iam",
label="can-assume",
)
)
except Exception as e:
logger.debug(
f"inventory iam_extractor: could not parse trust policy for {role.arn}: {e}"
)
return nodes, edges
@@ -1,118 +0,0 @@
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract Lambda function nodes and their edges from an awslambda_client.
Edges produced:
- lambda → VPC [network]
- lambda → subnet [network]
- lambda → sg [network]
- lambda ← event-source [triggers] (from EventSourceMapping)
- lambda → layer ARN [depends_on]
- lambda → DLQ target [data_flow]
- lambda ← KMS key [encrypts]
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
for fn in client.functions.values():
props = {
"runtime": fn.runtime,
"vpc_id": fn.vpc_id,
}
if fn.environment:
props["has_env_vars"] = True
if fn.kms_key_arn:
props["kms_key_arn"] = fn.kms_key_arn
nodes.append(
ResourceNode(
id=fn.arn,
type="lambda_function",
name=fn.name,
service="lambda",
region=fn.region,
account_id=client.audited_account,
properties=props,
)
)
# Network edges → VPC, subnets, security groups
if fn.vpc_id:
edges.append(
ResourceEdge(
source_id=fn.arn,
target_id=fn.vpc_id,
edge_type="network",
label="in-vpc",
)
)
for sg_id in fn.security_groups or []:
edges.append(
ResourceEdge(
source_id=fn.arn,
target_id=sg_id,
edge_type="network",
label="sg",
)
)
for subnet_id in fn.subnet_ids or set():
edges.append(
ResourceEdge(
source_id=fn.arn,
target_id=subnet_id,
edge_type="network",
label="subnet",
)
)
# Trigger edges from event source mappings
for esm in getattr(fn, "event_source_mappings", []):
edges.append(
ResourceEdge(
source_id=esm.event_source_arn,
target_id=fn.arn,
edge_type="triggers",
label=f"esm:{esm.state}",
)
)
# Layer dependency edges
for layer in getattr(fn, "layers", []):
edges.append(
ResourceEdge(
source_id=fn.arn,
target_id=layer.arn,
edge_type="depends_on",
label="layer",
)
)
# Dead-letter queue data-flow edge
dlq = getattr(fn, "dead_letter_config", None)
if dlq and dlq.target_arn:
edges.append(
ResourceEdge(
source_id=fn.arn,
target_id=dlq.target_arn,
edge_type="data_flow",
label="dlq",
)
)
# KMS encryption edge
if fn.kms_key_arn:
edges.append(
ResourceEdge(
source_id=fn.kms_key_arn,
target_id=fn.arn,
edge_type="encrypts",
label="kms",
)
)
return nodes, edges
@@ -1,86 +0,0 @@
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract RDS DB instance nodes and their edges.
Edges produced:
- db_instance → security-group [network]
- db_instance → VPC [network]
- db_instance → cluster [depends_on]
- db_instance ← KMS key [encrypts]
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
for db in client.db_instances.values():
props = {
"engine": getattr(db, "engine", None),
"engine_version": getattr(db, "engine_version", None),
"instance_class": getattr(db, "db_instance_class", None),
"vpc_id": getattr(db, "vpc_id", None),
"multi_az": getattr(db, "multi_az", None),
"publicly_accessible": getattr(db, "publicly_accessible", None),
"storage_encrypted": getattr(db, "storage_encrypted", None),
}
nodes.append(
ResourceNode(
id=db.arn,
type="rds_instance",
name=db.id,
service="rds",
region=db.region,
account_id=client.audited_account,
properties={k: v for k, v in props.items() if v is not None},
)
)
for sg in getattr(db, "security_groups", []):
sg_id = sg if isinstance(sg, str) else getattr(sg, "id", str(sg))
edges.append(
ResourceEdge(
source_id=db.arn,
target_id=sg_id,
edge_type="network",
label="sg",
)
)
vpc_id = getattr(db, "vpc_id", None)
if vpc_id:
edges.append(
ResourceEdge(
source_id=db.arn,
target_id=vpc_id,
edge_type="network",
label="in-vpc",
)
)
cluster_arn = getattr(db, "cluster_arn", None)
if cluster_arn:
edges.append(
ResourceEdge(
source_id=db.arn,
target_id=cluster_arn,
edge_type="depends_on",
label="cluster-member",
)
)
kms_key_id = getattr(db, "kms_key_id", None)
if kms_key_id:
edges.append(
ResourceEdge(
source_id=kms_key_id,
target_id=db.arn,
edge_type="encrypts",
label="kms",
)
)
return nodes, edges
@@ -1,92 +0,0 @@
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract S3 bucket nodes and their edges.
Edges produced:
- bucket → replication-target bucket [replicates_to]
- bucket ← KMS key [encrypts]
- bucket → logging bucket [logs_to]
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
for bucket in client.buckets.values():
encryption = getattr(bucket, "encryption", None)
versioning = getattr(bucket, "versioning_enabled", None)
logging = getattr(bucket, "logging", None)
public = getattr(bucket, "public_access_block", None)
props = {}
if versioning is not None:
props["versioning"] = versioning
if encryption:
enc_type = getattr(encryption, "type", str(encryption))
props["encryption"] = enc_type
nodes.append(
ResourceNode(
id=bucket.arn,
type="s3_bucket",
name=bucket.name,
service="s3",
region=bucket.region,
account_id=client.audited_account,
properties=props,
)
)
# Replication edges
for rule in getattr(bucket, "replication_rules", None) or []:
dest_bucket = getattr(rule, "destination_bucket", None)
if dest_bucket:
dest_arn = (
dest_bucket
if dest_bucket.startswith("arn:")
else f"arn:aws:s3:::{dest_bucket}"
)
edges.append(
ResourceEdge(
source_id=bucket.arn,
target_id=dest_arn,
edge_type="replicates_to",
label="s3-replication",
)
)
# Logging edges
if logging:
target_bucket = getattr(logging, "target_bucket", None)
if target_bucket:
target_arn = (
target_bucket
if target_bucket.startswith("arn:")
else f"arn:aws:s3:::{target_bucket}"
)
edges.append(
ResourceEdge(
source_id=bucket.arn,
target_id=target_arn,
edge_type="logs_to",
label="access-logs",
)
)
# KMS encryption edges
if encryption:
kms_arn = getattr(encryption, "kms_master_key_id", None)
if kms_arn:
edges.append(
ResourceEdge(
source_id=kms_arn,
target_id=bucket.arn,
edge_type="encrypts",
label="kms",
)
)
return nodes, edges
@@ -1,92 +0,0 @@
from typing import List, Tuple
from lib.models import ResourceEdge, ResourceNode
def extract(client) -> Tuple[List[ResourceNode], List[ResourceEdge]]:
"""
Extract VPC and subnet nodes with their edges.
Edges produced:
- subnet → VPC [depends_on]
- peering connection → accepter VPC [network]
"""
nodes: List[ResourceNode] = []
edges: List[ResourceEdge] = []
# VPCs
for vpc in client.vpcs.values():
name = vpc.id if hasattr(vpc, "id") else vpc.arn
for tag in vpc.tags or []:
if isinstance(tag, dict) and tag.get("Key") == "Name":
name = tag["Value"]
break
nodes.append(
ResourceNode(
id=vpc.arn,
type="vpc",
name=name,
service="vpc",
region=vpc.region,
account_id=client.audited_account,
properties={
"cidr_block": getattr(vpc, "cidr_block", None),
"is_default": getattr(vpc, "is_default", None),
},
)
)
# VPC Subnets
for subnet in client.vpc_subnets.values():
name = subnet.id if hasattr(subnet, "id") else subnet.arn
for tag in getattr(subnet, "tags", None) or []:
if isinstance(tag, dict) and tag.get("Key") == "Name":
name = tag["Value"]
break
nodes.append(
ResourceNode(
id=subnet.arn,
type="subnet",
name=name,
service="vpc",
region=subnet.region,
account_id=client.audited_account,
properties={
"vpc_id": getattr(subnet, "vpc_id", None),
"cidr_block": getattr(subnet, "cidr_block", None),
"availability_zone": getattr(subnet, "availability_zone", None),
"public": getattr(subnet, "public", None),
},
)
)
vpc_id = getattr(subnet, "vpc_id", None)
if vpc_id:
# Find the VPC ARN for this vpc_id
vpc_arn = next(
(v.arn for v in client.vpcs.values() if v.id == vpc_id),
vpc_id,
)
edges.append(
ResourceEdge(
source_id=subnet.arn,
target_id=vpc_arn,
edge_type="depends_on",
label="subnet-of",
)
)
# VPC Peering Connections
for peering in getattr(client, "vpc_peering_connections", {}).values():
edges.append(
ResourceEdge(
source_id=peering.arn,
target_id=getattr(peering, "accepter_vpc_id", peering.arn),
edge_type="network",
label="vpc-peer",
)
)
return nodes, edges
@@ -1,106 +0,0 @@
"""
graph_builder.py
----------------
Builds a ConnectivityGraph by reading already-loaded AWS service clients from
sys.modules. Only services that were actually scanned (i.e. whose client
module is already imported) contribute nodes and edges. Unknown / unloaded
services are silently skipped, so the output degrades gracefully when only a
subset of checks has been run.
"""
import sys
from typing import Tuple
from prowler.lib.logger import logger
from lib.models import ConnectivityGraph
# Registry: (sys.modules key, attribute name inside that module, extractor module path)
_SERVICE_REGISTRY: Tuple[Tuple[str, str, str], ...] = (
(
"prowler.providers.aws.services.awslambda.awslambda_client",
"awslambda_client",
"lib.extractors.lambda_extractor",
),
(
"prowler.providers.aws.services.ec2.ec2_client",
"ec2_client",
"lib.extractors.ec2_extractor",
),
(
"prowler.providers.aws.services.vpc.vpc_client",
"vpc_client",
"lib.extractors.vpc_extractor",
),
(
"prowler.providers.aws.services.rds.rds_client",
"rds_client",
"lib.extractors.rds_extractor",
),
(
"prowler.providers.aws.services.elbv2.elbv2_client",
"elbv2_client",
"lib.extractors.elbv2_extractor",
),
(
"prowler.providers.aws.services.s3.s3_client",
"s3_client",
"lib.extractors.s3_extractor",
),
(
"prowler.providers.aws.services.iam.iam_client",
"iam_client",
"lib.extractors.iam_extractor",
),
)
def build_graph() -> ConnectivityGraph:
"""
Iterate over every registered service, check whether its client module is
already loaded, and call the corresponding extractor.
Returns a ConnectivityGraph with all discovered nodes and edges.
Duplicate node IDs are silently deduplicated (first occurrence wins).
"""
graph = ConnectivityGraph()
seen_node_ids: set = set()
for client_module_key, client_attr, extractor_module_key in _SERVICE_REGISTRY:
client_module = sys.modules.get(client_module_key)
if client_module is None:
continue
service_client = getattr(client_module, client_attr, None)
if service_client is None:
continue
extractor_module = sys.modules.get(extractor_module_key)
if extractor_module is None:
try:
import importlib
extractor_module = importlib.import_module(extractor_module_key)
except ImportError as e:
logger.debug(
f"inventory graph_builder: cannot import extractor {extractor_module_key}: {e}"
)
continue
try:
nodes, edges = extractor_module.extract(service_client)
except Exception as e:
logger.error(
f"inventory graph_builder: extractor {extractor_module_key} failed: "
f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}]: {e}"
)
continue
for node in nodes:
if node.id not in seen_node_ids:
graph.add_node(node)
seen_node_ids.add(node.id)
for edge in edges:
graph.add_edge(edge)
return graph
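Registering an additional service is a one-tuple change against `_SERVICE_REGISTRY`. A sketch follows; the DynamoDB client module path is a real Prowler module, but the extractor module on the right is hypothetical and would need to be written first:
```python
from typing import Tuple

# Sketch only: one more registry entry in the same (module key, attribute,
# extractor path) layout as the tuples above.
EXTRA_SERVICES: Tuple[Tuple[str, str, str], ...] = (
    (
        "prowler.providers.aws.services.dynamodb.dynamodb_client",  # sys.modules key
        "dynamodb_client",                                          # attribute inside that module
        "lib.extractors.dynamodb_extractor",                        # extractor module path (hypothetical)
    ),
)
```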
@@ -1,502 +0,0 @@
"""
inventory_output.py
-------------------
Writes the ConnectivityGraph produced by graph_builder to two files:
<output_path>.inventory.json machine-readable graph (nodes + edges)
<output_path>.inventory.html interactive D3.js force-directed graph
"""
import json
import os
from dataclasses import asdict
from datetime import datetime
from typing import Optional
from prowler.lib.logger import logger
from lib.models import ConnectivityGraph
# ---------------------------------------------------------------------------
# JSON output
# ---------------------------------------------------------------------------
def write_json(graph: ConnectivityGraph, file_path: str) -> None:
"""Serialise the graph to a JSON file."""
try:
os.makedirs(os.path.dirname(file_path) or ".", exist_ok=True)  # dirname is empty for bare filenames
data = {
"generated_at": datetime.utcnow().isoformat() + "Z",
"nodes": [asdict(n) for n in graph.nodes],
"edges": [asdict(e) for e in graph.edges],
"stats": {
"node_count": len(graph.nodes),
"edge_count": len(graph.edges),
},
}
with open(file_path, "w", encoding="utf-8") as fh:
json.dump(data, fh, indent=2, default=str)
logger.info(f"Inventory graph JSON written to {file_path}")
except Exception as e:
logger.error(
f"inventory_output.write_json: {e.__class__.__name__}[{e.__traceback__.tb_lineno}]: {e}"
)
# ---------------------------------------------------------------------------
# HTML output (self-contained, D3.js CDN)
# ---------------------------------------------------------------------------
# Colour palette per node type
_NODE_COLOURS = {
"lambda_function": "#f59e0b",
"ec2_instance": "#3b82f6",
"security_group": "#6366f1",
"vpc": "#10b981",
"subnet": "#34d399",
"rds_instance": "#ef4444",
"load_balancer": "#8b5cf6",
"s3_bucket": "#06b6d4",
"iam_role": "#f97316",
"default": "#94a3b8",
}
# Edge stroke colours per edge type
_EDGE_COLOURS = {
"network": "#64748b",
"iam": "#f97316",
"triggers": "#a855f7",
"data_flow": "#0ea5e9",
"depends_on": "#94a3b8",
"routes_to": "#22c55e",
"replicates_to": "#ec4899",
"encrypts": "#eab308",
"logs_to": "#78716c",
}
_HTML_TEMPLATE = """\
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<title>Prowler AWS Connectivity Graph</title>
<script src="https://d3js.org/d3.v7.min.js"></script>
<style>
*, *::before, *::after {{ box-sizing: border-box; }}
body {{
margin: 0;
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif;
background: #0f172a;
color: #e2e8f0;
}}
#header {{
padding: 12px 20px;
background: #1e293b;
border-bottom: 1px solid #334155;
display: flex;
align-items: center;
gap: 16px;
}}
#header h1 {{ margin: 0; font-size: 18px; font-weight: 700; }}
#header .stats {{ font-size: 13px; color: #94a3b8; }}
#controls {{
padding: 8px 20px;
background: #1e293b;
border-bottom: 1px solid #334155;
display: flex;
gap: 12px;
align-items: center;
flex-wrap: wrap;
}}
#controls label {{ font-size: 12px; color: #94a3b8; }}
#controls select, #controls input[type=range] {{
background: #0f172a;
color: #e2e8f0;
border: 1px solid #334155;
border-radius: 4px;
padding: 3px 6px;
font-size: 12px;
}}
#graph-container {{ width: 100%; height: calc(100vh - 100px); position: relative; }}
svg {{ width: 100%; height: 100%; }}
.node circle {{
stroke: #1e293b;
stroke-width: 1.5px;
cursor: pointer;
transition: r 0.15s;
}}
.node circle:hover {{ stroke-width: 3px; }}
.node text {{
font-size: 10px;
fill: #e2e8f0;
pointer-events: none;
text-shadow: 0 0 4px #0f172a;
}}
.link {{
stroke-opacity: 0.6;
stroke-width: 1.5px;
}}
.link-label {{
font-size: 8px;
fill: #94a3b8;
pointer-events: none;
}}
#tooltip {{
position: fixed;
background: #1e293b;
border: 1px solid #334155;
border-radius: 6px;
padding: 10px 14px;
font-size: 12px;
pointer-events: none;
max-width: 320px;
word-break: break-all;
z-index: 9999;
display: none;
}}
#tooltip strong {{ color: #f8fafc; }}
#tooltip .prop {{ color: #94a3b8; margin-top: 4px; }}
#legend {{
position: absolute;
top: 10px;
right: 10px;
background: rgba(30,41,59,0.9);
border: 1px solid #334155;
border-radius: 6px;
padding: 10px 14px;
font-size: 11px;
}}
#legend h3 {{ margin: 0 0 6px; font-size: 12px; }}
.legend-row {{ display: flex; align-items: center; gap: 6px; margin: 3px 0; }}
.legend-dot {{ width: 12px; height: 12px; border-radius: 50%; flex-shrink: 0; }}
.legend-line {{ width: 20px; height: 2px; flex-shrink: 0; }}
</style>
</head>
<body>
<div id="header">
<h1>🔗 AWS Connectivity Graph</h1>
<span class="stats" id="stat-label">Generated: {generated_at}</span>
</div>
<div id="controls">
<label>Filter service:
<select id="filter-service">
<option value="">All services</option>
</select>
</label>
<label>Link distance:
<input type="range" id="link-distance" min="40" max="300" value="120"/>
</label>
<label>Charge strength:
<input type="range" id="charge-strength" min="-800" max="-20" value="-250"/>
</label>
<span class="stats" id="visible-count"></span>
</div>
<div id="graph-container">
<svg id="graph-svg"></svg>
<div id="tooltip"></div>
<div id="legend">
<h3>Node types</h3>
{legend_nodes_html}
<h3 style="margin-top:8px">Edge types</h3>
{legend_edges_html}
</div>
</div>
<script>
const RAW_NODES = {nodes_json};
const RAW_EDGES = {edges_json};
const NODE_COLOURS = {node_colours_json};
const EDGE_COLOURS = {edge_colours_json};
// helpers
function nodeColour(d) {{
return NODE_COLOURS[d.type] || NODE_COLOURS["default"];
}}
function edgeColour(d) {{
return EDGE_COLOURS[d.edge_type] || "#94a3b8";
}}
function nodeRadius(d) {{
const base = {{
lambda_function: 9, ec2_instance: 10, vpc: 14, subnet: 8,
security_group: 7, rds_instance: 11, load_balancer: 12,
s3_bucket: 9, iam_role: 9
}};
return base[d.type] || 8;
}}
// filter controls
const services = [...new Set(RAW_NODES.map(n => n.service))].sort();
const sel = document.getElementById("filter-service");
services.forEach(s => {{
const o = document.createElement("option");
o.value = s; o.textContent = s;
sel.appendChild(o);
}});
// D3 setup
const svg = d3.select("#graph-svg");
const container = svg.append("g");
// zoom
svg.call(
d3.zoom().scaleExtent([0.05, 8])
.on("zoom", e => container.attr("transform", e.transform))
);
// arrowhead marker
const defs = svg.append("defs");
defs.append("marker")
.attr("id", "arrow")
.attr("viewBox", "0 -5 10 10")
.attr("refX", 20).attr("refY", 0)
.attr("markerWidth", 6).attr("markerHeight", 6)
.attr("orient", "auto")
.append("path")
.attr("d", "M0,-5L10,0L0,5")
.attr("fill", "#94a3b8");
// tooltip
const tooltip = document.getElementById("tooltip");
// simulation
let simulation, linkSel, nodeSel, labelSel;
function buildGraph(nodeFilter) {{
// Determine which nodes to show
const visibleNodes = nodeFilter
? RAW_NODES.filter(n => n.service === nodeFilter)
: RAW_NODES;
const visibleIds = new Set(visibleNodes.map(n => n.id));
// Only show edges where BOTH endpoints are visible
const visibleEdges = RAW_EDGES.filter(
e => visibleIds.has(e.source_id) && visibleIds.has(e.target_id)
);
document.getElementById("visible-count").textContent =
`Showing ${{visibleNodes.length}} nodes · ${{visibleEdges.length}} edges`;
container.selectAll("*").remove();
if (simulation) simulation.stop();
const nodes = visibleNodes.map(n => ({{ ...n }}));
const nodeIndex = Object.fromEntries(nodes.map(n => [n.id, n]));
const links = visibleEdges.map(e => ({{
...e,
source: nodeIndex[e.source_id] || e.source_id,
target: nodeIndex[e.target_id] || e.target_id,
}}));
const dist = +document.getElementById("link-distance").value;
const charge = +document.getElementById("charge-strength").value;
simulation = d3.forceSimulation(nodes)
.force("link", d3.forceLink(links).id(d => d.id).distance(dist))
.force("charge", d3.forceManyBody().strength(charge))
.force("center", d3.forceCenter(
document.getElementById("graph-container").clientWidth / 2,
document.getElementById("graph-container").clientHeight / 2
))
.force("collision", d3.forceCollide().radius(d => nodeRadius(d) + 6));
// Edges
linkSel = container.append("g").attr("class", "links")
.selectAll("line")
.data(links)
.join("line")
.attr("class", "link")
.attr("stroke", edgeColour)
.attr("marker-end", "url(#arrow)");
// Edge labels
labelSel = container.append("g").attr("class", "link-labels")
.selectAll("text")
.data(links)
.join("text")
.attr("class", "link-label")
.text(d => d.label || "");
// Nodes
nodeSel = container.append("g").attr("class", "nodes")
.selectAll("g")
.data(nodes)
.join("g")
.attr("class", "node")
.call(
d3.drag()
.on("start", (event, d) => {{
if (!event.active) simulation.alphaTarget(0.3).restart();
d.fx = d.x; d.fy = d.y;
}})
.on("drag", (event, d) => {{ d.fx = event.x; d.fy = event.y; }})
.on("end", (event, d) => {{
if (!event.active) simulation.alphaTarget(0);
d.fx = null; d.fy = null;
}})
)
.on("mouseover", (event, d) => {{
const props = Object.entries(d.properties || {{}})
.map(([k, v]) => `<div class="prop"><b>${{k}}</b>: ${{v}}</div>`)
.join("");
tooltip.innerHTML = `
<strong>${{d.name}}</strong>
<div class="prop"><b>type</b>: ${{d.type}}</div>
<div class="prop"><b>service</b>: ${{d.service}}</div>
<div class="prop"><b>region</b>: ${{d.region}}</div>
<div class="prop"><b>account</b>: ${{d.account_id}}</div>
<div class="prop" style="word-break:break-all"><b>arn</b>: ${{d.id}}</div>
${{props}}
`;
tooltip.style.display = "block";
tooltip.style.left = (event.clientX + 12) + "px";
tooltip.style.top = (event.clientY - 10) + "px";
}})
.on("mousemove", event => {{
tooltip.style.left = (event.clientX + 12) + "px";
tooltip.style.top = (event.clientY - 10) + "px";
}})
.on("mouseout", () => {{ tooltip.style.display = "none"; }});
nodeSel.append("circle")
.attr("r", nodeRadius)
.attr("fill", nodeColour);
nodeSel.append("text")
.attr("dx", d => nodeRadius(d) + 3)
.attr("dy", "0.35em")
.text(d => d.name.length > 24 ? d.name.slice(0, 22) + "…" : d.name);
simulation.on("tick", () => {{
linkSel
.attr("x1", d => d.source.x)
.attr("y1", d => d.source.y)
.attr("x2", d => d.target.x)
.attr("y2", d => d.target.y);
labelSel
.attr("x", d => (d.source.x + d.target.x) / 2)
.attr("y", d => (d.source.y + d.target.y) / 2);
nodeSel.attr("transform", d => `translate(${{d.x}},${{d.y}})`);
}});
}}
// Initial render
buildGraph(null);
// Filter change
sel.addEventListener("change", () => buildGraph(sel.value || null));
// Simulation control sliders restart on change
document.getElementById("link-distance").addEventListener("input", () => buildGraph(sel.value || null));
document.getElementById("charge-strength").addEventListener("input", () => buildGraph(sel.value || null));
</script>
</body>
</html>
"""
def _build_legend_html(colours: dict, shape: str) -> str:
rows = []
for key, colour in sorted(colours.items()):
if shape == "dot":
rows.append(
f'<div class="legend-row">'
f'<div class="legend-dot" style="background:{colour}"></div>'
f"<span>{key}</span></div>"
)
else:
rows.append(
f'<div class="legend-row">'
f'<div class="legend-line" style="background:{colour}"></div>'
f"<span>{key}</span></div>"
)
return "\n".join(rows)
def write_html(graph: ConnectivityGraph, file_path: str) -> None:
"""Render the graph as a self-contained interactive HTML page."""
try:
os.makedirs(os.path.dirname(file_path) or ".", exist_ok=True)  # dirname is empty for bare filenames
nodes_json = json.dumps(
[
{
"id": n.id,
"type": n.type,
"name": n.name,
"service": n.service,
"region": n.region,
"account_id": n.account_id,
"properties": n.properties,
}
for n in graph.nodes
],
indent=None,
default=str,
)
edges_json = json.dumps(
[
{
"source_id": e.source_id,
"target_id": e.target_id,
"edge_type": e.edge_type,
"label": e.label or "",
}
for e in graph.edges
],
indent=None,
default=str,
)
html = _HTML_TEMPLATE.format(
generated_at=datetime.utcnow().strftime("%Y-%m-%d %H:%M UTC"),
nodes_json=nodes_json,
edges_json=edges_json,
node_colours_json=json.dumps(_NODE_COLOURS),
edge_colours_json=json.dumps(_EDGE_COLOURS),
legend_nodes_html=_build_legend_html(_NODE_COLOURS, "dot"),
legend_edges_html=_build_legend_html(_EDGE_COLOURS, "line"),
)
with open(file_path, "w", encoding="utf-8") as fh:
fh.write(html)
logger.info(f"Inventory graph HTML written to {file_path}")
except Exception as e:
logger.error(
f"inventory_output.write_html: {e.__class__.__name__}[{e.__traceback__.tb_lineno}]: {e}"
)
# ---------------------------------------------------------------------------
# Convenience entry-point called from __main__.py
# ---------------------------------------------------------------------------
def generate_inventory_outputs(output_path: str) -> None:
"""
Build the connectivity graph from currently-loaded service clients and write
both JSON and HTML outputs.
Args:
output_path: base file path WITHOUT extension, e.g.
"output/prowler-output-20240101120000".
The function appends .inventory.json and .inventory.html.
"""
from lib.graph_builder import build_graph
graph = build_graph()
if not graph.nodes:
logger.warning(
"Inventory graph: no nodes discovered. "
"Make sure at least one AWS service was scanned before generating the inventory."
)
write_json(graph, f"{output_path}.inventory.json")
write_html(graph, f"{output_path}.inventory.html")
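A minimal usage sketch of the entry point, assuming the `lib` package shown in the imports above is importable and at least one AWS service client module has already been loaded by a scan:
```python
from lib.inventory_output import generate_inventory_outputs

# Writes <base>.inventory.json and <base>.inventory.html next to the scan output.
generate_inventory_outputs("output/prowler-output-20240101120000")
```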
-71
View File
@@ -1,71 +0,0 @@
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
@dataclass
class ResourceNode:
"""
Represents a single AWS resource as a node in the connectivity graph.
id : globally unique identifier, always the resource ARN
type : coarse resource type used for grouping/colour, e.g. "lambda_function"
name : human-readable label shown on the graph
service : AWS service name, e.g. "lambda", "ec2", "rds"
region : AWS region the resource lives in
account_id: AWS account ID
properties: additional resource-specific metadata (runtime, vpc_id, etc.)
"""
id: str
type: str
name: str
service: str
region: str
account_id: str
properties: Dict[str, Any] = field(default_factory=dict)
@dataclass
class ResourceEdge:
"""
Represents a directional relationship between two resource nodes.
source_id : ARN of the source node
target_id : ARN of the target node
edge_type : semantic type of the relationship, e.g.:
"network" resources share a network path (VPC/subnet/SG)
"iam" IAM trust or permission relationship
"triggers" one resource can invoke another (event source Lambda)
"data_flow" data is written/read (Lambda SQS dead-letter queue)
"depends_on" soft dependency (Lambda layer, subnet belongs to VPC)
"routes_to" traffic routing (LB target)
"encrypts" KMS key encrypts the resource
label : optional short label rendered on the edge in the HTML graph
"""
source_id: str
target_id: str
edge_type: str
label: Optional[str] = None
@dataclass
class ConnectivityGraph:
"""
Container for the full inventory connectivity graph.
nodes: all discovered resource nodes
edges: all discovered edges between nodes
"""
nodes: List[ResourceNode] = field(default_factory=list)
edges: List[ResourceEdge] = field(default_factory=list)
def add_node(self, node: ResourceNode) -> None:
self.nodes.append(node)
def add_edge(self, edge: ResourceEdge) -> None:
self.edges.append(edge)
def node_ids(self) -> set:
return {n.id for n in self.nodes}
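A tiny construction example for these dataclasses (values are illustrative):
```python
from lib.models import ConnectivityGraph, ResourceEdge, ResourceNode

graph = ConnectivityGraph()
graph.add_node(
    ResourceNode(
        id="arn:aws:s3:::my-bucket",
        type="s3_bucket",
        name="my-bucket",
        service="s3",
        region="eu-west-1",
        account_id="123456789012",
        properties={"versioning": True},
    )
)
graph.add_edge(
    ResourceEdge(
        source_id="arn:aws:kms:eu-west-1:123456789012:key/abc",
        target_id="arn:aws:s3:::my-bucket",
        edge_type="encrypts",
        label="kms",
    )
)
assert "arn:aws:s3:::my-bucket" in graph.node_ids()
```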
@@ -2,378 +2,47 @@
title: 'Creating a New Security Compliance Framework in Prowler'
---
This guide explains how to add a new security compliance framework to Prowler, end to end. It covers directory layout, the JSON schema, check mapping conventions, the Pydantic models that validate each framework, the CSV output formatter, local validation, testing, and the pull request process.
## Introduction
A compliance framework in Prowler maps a public or custom control catalog (for example CIS, NIST 800-53, PCI DSS, HIPAA, ENS, CCC) to the security checks that Prowler already runs. Each requirement links to zero, one or more Prowler checks. When a scan executes, findings are aggregated per requirement to produce the compliance report rendered by Prowler CLI and Prowler Cloud.
To create or contribute a custom security framework for Prowler—or to integrate a public framework—you must ensure the necessary checks are available. If they are missing, they must be implemented before proceeding.
Prowler ships with 85+ compliance frameworks across all providers. The catalog lives under `prowler/compliance/<provider>/` (or `prowler/compliance/` for universal compliance frameworks).
Each framework is defined in a compliance file per provider. The file should follow the structure used in `prowler/compliance/<provider>/` and be named `<framework>_<version>_<provider>.json`. Follow the format below to create your own.
<Warning>
A compliance framework must represent the **complete state** of the source catalog. Every requirement defined by the framework has to be present in the JSON file, even when none of the existing Prowler checks can automate it. In that case, leave `Checks` as an empty array, but do not omit the requirement.
Requirement coverage feeds the compliance percentage calculations and the metadata surfaces (dashboards, widgets, exports). Missing requirements skew those metrics and break the report as a faithful snapshot of the framework.
</Warning>
## Compliance Framework
### Compliance Framework Structure
Each compliance framework file consists of structured metadata that identifies the framework and maps security checks to requirements or controls. Note that a single requirement can be linked to multiple Prowler checks.
### Prerequisites
Before adding a new framework, complete the following checks:
- **Verify the framework is not already supported.** Inspect `prowler/compliance/<provider>/` for an existing JSON file matching the name and version.
- **Confirm the required checks exist.** Every requirement that can be automated must point to one or more existing Prowler checks. For each missing check, implement it first by following the [Prowler Checks](/developer-guide/checks) guide.
- **Review a reference framework.** Use an existing framework with a similar structure as your template. `cis_2.0_aws.json` is the canonical reference for CIS-style frameworks. `ccc_aws.json`, `ens_rd2022_aws.json`, and `nist_800_53_revision_5_aws.json` illustrate other attribute shapes.
## Four-Layer Architecture
A compliance framework spans four layers. A complete contribution must touch each layer that applies.
- **Layer 1 (Schema validation):** The Pydantic models in `prowler/lib/check/compliance_models.py` define the canonical schema for each attribute shape (CIS, ENS, Mitre, CCC, C5, CSA CCM, ISO 27001, KISA ISMS-P, AWS Well-Architected, Prowler ThreatScore, and a generic fallback).
- **Layer 2 (JSON catalog):** The framework JSON file in `prowler/compliance/<provider>/` lists every requirement and maps it to checks.
- **Layer 3 (Output formatter):** The Python module in `prowler/lib/outputs/compliance/<framework>/` builds the CSV row model, the per-provider transformer, and the CLI summary table.
- **Layer 4 (Output dispatchers):** The dispatchers in `prowler/lib/outputs/compliance/compliance.py` and `prowler/lib/outputs/compliance/compliance_output.py` route findings to the right formatter based on the framework identifier.
The rest of this guide walks each layer in order.
## Directory Structure and File Naming
Compliance frameworks live at:
- `Framework`: string. The distinguished name of the framework (e.g., CIS).
- `Provider`: string. The cloud provider where the framework applies (AWS, Azure, OCI).
- `Version`: string. The framework version (e.g., 1.4 for CIS).
- `Requirements`: array of objects. Defines security requirements and their mapping to Prowler checks. Every requirement or control must be included, together with its mapping to Prowler.
- `Requirements_Id`: string. A unique identifier for each requirement within the framework.
- `Requirements_Description`: string. The requirement description as specified in the framework.
- `Requirements_Attributes`: array of objects. Contains relevant metadata such as security levels, sections, and any additional data needed for reporting the findings. Attributes should be derived directly from the framework's own terminology, ensuring consistency with its established definitions.
- `Requirements_Checks`: array. The Prowler checks needed to prove this requirement. It can be one or multiple checks. When automation is not feasible, this can be empty.
```
prowler/compliance/<provider>/<framework>_<version>_<provider>.json
```
The filename conventions are:
- All lowercase, words separated with underscores.
- `<provider>` is a supported provider identifier: `aws`, `azure`, `gcp`, `kubernetes`, `m365`, `github`, `googleworkspace`, `alibabacloud`, `oraclecloud`, `cloudflare`, `mongodbatlas`, `nhn`, `openstack`, `iac`, `llm`.
- `<version>` is optional. Omit it when the framework has no versioning, as in `ccc_aws.json`.
- The file basename (without `.json`) is the framework key that Prowler CLI accepts via `--compliance`.
Examples:
- `prowler/compliance/aws/cis_2.0_aws.json`
- `prowler/compliance/aws/nist_800_53_revision_5_aws.json`
- `prowler/compliance/azure/ens_rd2022_azure.json`
- `prowler/compliance/kubernetes/cis_1.10_kubernetes.json`
- `prowler/compliance/aws/ccc_aws.json`
The output formatter directory mirrors the framework name:
```
prowler/lib/outputs/compliance/<framework>/
├── <framework>.py # CLI summary-table dispatcher
├── <framework>_<provider>.py # Per-provider transformer class
├── models.py # Pydantic CSV row model
└── __init__.py
```
## JSON Schema Reference
Every compliance file is a JSON document with the following top-level keys.
| Field | Type | Required | Description |
|---|---|---|---|
| `Framework` | string | Yes | Canonical framework identifier, for example `CIS`, `NIST-800-53-Revision-5`, `ENS`, `CCC`. |
| `Name` | string | Yes | Human-readable framework name displayed by Prowler App. |
| `Version` | string | Yes | Framework version, for example `2.0`. Always set a non-empty value, even for unversioned frameworks; see [Version Handling](#version-handling). |
| `Provider` | string | Yes | Upper-cased provider identifier: `AWS`, `AZURE`, `GCP`, `KUBERNETES`, `M365`, `GITHUB`, `GOOGLEWORKSPACE`, and so on. |
| `Description` | string | Yes | Short description of the framework's scope and purpose. |
| `Requirements` | array | Yes | List of [requirement objects](#requirement-object). |
### Requirement Object
Each entry in `Requirements` describes one control or requirement.
| Field | Type | Required | Description |
|---|---|---|---|
| `Id` | string | Yes | Unique identifier within the framework, for example `1.10` or `CCC.Core.CN01.AR01`. |
| `Name` | string | No | Optional human-readable name used by frameworks that distinguish control name from description, such as NIST. |
| `Description` | string | Yes | Verbatim description from the source framework. |
| `Attributes` | array | Yes | List of [attribute objects](#attribute-objects). The shape depends on the framework. |
| `Checks` | array of strings | Yes | Prowler check identifiers that automate the requirement. Leave the list empty when the control cannot be automated. |
### Attribute Objects
Attributes carry the metadata that Prowler App and the CSV output display for each requirement. The object shape is framework-specific and is validated by a dedicated Pydantic model in `prowler/lib/check/compliance_models.py`. The most common shapes are summarized below.
#### CIS_Requirement_Attribute
Used by every CIS benchmark.
| Field | Type | Required | Notes |
|---|---|---|---|
| `Section` | string | Yes | Top-level section, for example `1 Identity and Access Management`. |
| `SubSection` | string | No | Optional second-level grouping. |
| `Profile` | enum | Yes | One of `Level 1`, `Level 2`, `E3 Level 1`, `E3 Level 2`, `E5 Level 1`, `E5 Level 2`. |
| `AssessmentStatus` | enum | Yes | `Manual` or `Automated`. |
| `Description` | string | Yes | Control description. |
| `RationaleStatement` | string | Yes | Reason the control exists. |
| `ImpactStatement` | string | Yes | Impact of non-compliance. |
| `RemediationProcedure` | string | Yes | Remediation steps. |
| `AuditProcedure` | string | Yes | Audit steps. |
| `AdditionalInformation` | string | Yes | Free-form notes. |
| `DefaultValue` | string | No | Default configuration value, when relevant. |
| `References` | string | Yes | Colon-separated list of reference URLs. |
#### ENS_Requirement_Attribute
Used by the Spanish ENS (Esquema Nacional de Seguridad) frameworks.
| Field | Type | Required | Notes |
|---|---|---|---|
| `IdGrupoControl` | string | Yes | Control group identifier. |
| `Marco` | string | Yes | Framework block (`operacional`, `organizativo`, `proteccion`). |
| `Categoria` | string | Yes | Control category. |
| `DescripcionControl` | string | Yes | Control description in Spanish. |
| `Tipo` | enum | Yes | `refuerzo`, `requisito`, `recomendacion`, `medida`. |
| `Nivel` | enum | Yes | `opcional`, `bajo`, `medio`, `alto`. |
| `Dimensiones` | array of enum | Yes | Subset of `confidencialidad`, `integridad`, `trazabilidad`, `autenticidad`, `disponibilidad`. |
| `ModoEjecucion` | string | Yes | Execution mode (`manual`, `automático`, `híbrido`). |
| `Dependencias` | array of strings | Yes | Ids of prerequisite controls. Empty list when none. |
#### CCC_Requirement_Attribute
Used by the Common Cloud Controls Catalog.
| Field | Type | Required | Notes |
|---|---|---|---|
| `FamilyName` | string | Yes | Control family, for example `Data`. |
| `FamilyDescription` | string | Yes | Description of the family. |
| `Section` | string | Yes | Section title. |
| `SubSection` | string | Yes | Subsection title, or empty string. |
| `SubSectionObjective` | string | Yes | Stated objective for the subsection. |
| `Applicability` | array of strings | Yes | Applicability tags such as `tlp-green`, `tlp-amber`, `tlp-red`. |
| `Recommendation` | string | Yes | Implementation recommendation. |
| `SectionThreatMappings` | array of objects | Yes | Each entry has `ReferenceId` and `Identifiers`. |
| `SectionGuidelineMappings` | array of objects | Yes | Each entry has `ReferenceId` and `Identifiers`. |
#### Generic_Compliance_Requirement_Attribute
The fallback attribute model used when no framework-specific schema applies (for example NIST 800-53, PCI DSS, GDPR, HIPAA).
| Field | Type | Required | Notes |
|---|---|---|---|
| `ItemId` | string | No | Item identifier. |
| `Section` | string | No | Section name. |
| `SubSection` | string | No | Subsection name. |
| `SubGroup` | string | No | Subgroup name. |
| `Service` | string | No | Affected service, for example `aws`, `iam`. |
| `Type` | string | No | Control type. |
| `Comment` | string | No | Free-form comment. |
Additional per-framework attribute models exist for `AWS_Well_Architected_Requirement_Attribute`, `ISO27001_2013_Requirement_Attribute`, `Mitre_Requirement_Attribute_<Provider>`, `KISA_ISMSP_Requirement_Attribute`, `Prowler_ThreatScore_Requirement_Attribute`, `C5Germany_Requirement_Attribute`, and `CSA_CCM_Requirement_Attribute`. Consult `prowler/lib/check/compliance_models.py` for their full field sets.
<Note>
The `Attributes` field is a Pydantic `Union`. The generic attribute model must remain the last element of that Union, otherwise Pydantic v1 silently coerces every framework into the generic shape and your specialized fields are dropped.
</Note>
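A toy reproduction of the pitfall, with illustrative model names, run under Pydantic v1 (which Prowler's compliance models use):
```python
from typing import List, Optional, Union

from pydantic import BaseModel  # Pydantic v1 semantics assumed


class CISAttr(BaseModel):
    Section: str
    Profile: str


class GenericAttr(BaseModel):
    Section: Optional[str] = None  # every field optional, so it matches almost anything


class BadRequirement(BaseModel):
    Attributes: List[Union[GenericAttr, CISAttr]]  # generic first: wrong


class GoodRequirement(BaseModel):
    Attributes: List[Union[CISAttr, GenericAttr]]  # generic last: correct


payload = {"Attributes": [{"Section": "1 IAM", "Profile": "Level 1"}]}
# Pydantic v1 tries Union members left to right; the all-optional generic model
# accepts the payload and silently drops Profile.
print(type(BadRequirement(**payload).Attributes[0]).__name__)   # GenericAttr
print(type(GoodRequirement(**payload).Attributes[0]).__name__)  # CISAttr
```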
## Minimal Working Example
The following snippet is a complete, valid framework file named `my_framework_1.0_aws.json`, saved at `prowler/compliance/aws/my_framework_1.0_aws.json`. It uses the generic attribute shape for simplicity.
```json title="prowler/compliance/aws/my_framework_1.0_aws.json"
{
"Framework": "My-Framework",
"Name": "My Framework 1.0 for AWS",
"Version": "1.0",
"Provider": "AWS",
"Description": "Internal baseline for AWS accounts.",
"Framework": "<framework>-<provider>",
"Version": "<version>",
"Requirements": [
{
"Id": "MF-1.1",
"Description": "Root account must have multi-factor authentication enabled.",
"Id": "<unique-id>",
"Description": "Full description of the requirement",
"Checks": [
"Here is the prowler check or checks that will be executed"
],
"Attributes": [
{
"ItemId": "MF-1.1",
"Section": "Identity and Access Management",
"SubSection": "Root Account",
"Service": "iam"
}
],
"Checks": [
"iam_root_mfa_enabled",
"iam_root_hardware_mfa_enabled"
]
},
{
"Id": "MF-2.1",
"Description": "S3 buckets must block public access at the account level.",
"Attributes": [
{
"ItemId": "MF-2.1",
"Section": "Data Protection",
"Service": "s3"
}
],
"Checks": [
"s3_account_level_public_access_blocks"
]
}
]
}
```
## Mapping Checks to Requirements
Each requirement links to the Prowler checks that, together, produce a PASS or FAIL verdict for that control.
- **Include every requirement from the source catalog.** The framework file must mirror the full control list, one-to-one. Compliance percentages, dashboards, and exported metadata are computed against the total requirement count, so omitting an unmappable control inflates coverage and misrepresents the framework.
- List every check by its canonical identifier, the value of `CheckID` inside the check's `.metadata.json` file.
- One requirement can reference multiple checks. The requirement is evaluated as FAIL when any referenced check produces a FAIL finding for a resource in scope, as sketched after this list.
- Leave `Checks` as an empty array when the requirement cannot be automated. The requirement still appears in the report, contributes to the total, and resolves to `MANUAL`. An empty mapping is valid; a missing requirement is not.
- Reuse checks across requirements when the same control applies in multiple places. Do not duplicate check logic to match framework structure.
- Avoid referencing checks from a different provider. A compliance file is bound to one provider, and cross-provider checks will never match findings in the scan.
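A minimal sketch (not Prowler's actual implementation) of the any-FAIL aggregation described above:
```python
def requirement_status(mapped_checks: list[str], check_statuses: dict[str, str]) -> str:
    """Derive a requirement's status from its mapped checks.

    check_statuses maps a check ID to its worst finding status in scope.
    """
    if not mapped_checks:
        return "MANUAL"  # empty mapping: the requirement cannot be automated
    if any(check_statuses.get(check) == "FAIL" for check in mapped_checks):
        return "FAIL"
    return "PASS"


assert requirement_status([], {}) == "MANUAL"
assert requirement_status(["iam_root_mfa_enabled"], {"iam_root_mfa_enabled": "FAIL"}) == "FAIL"
```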
To discover available checks, run:
```bash
poetry run python prowler-cli.py <provider> --list-checks
```
## Supporting Multiple Providers
Each compliance file targets a single provider. To cover several providers with the same framework (for example CIS across AWS, Azure, and GCP), ship one JSON file per provider:
```
prowler/compliance/aws/cis_2.0_aws.json
prowler/compliance/azure/cis_2.0_azure.json
prowler/compliance/gcp/cis_2.0_gcp.json
```
Keep the `Framework` and `Version` values identical across the files so the dispatcher matches them, and change only the `Provider`, `Checks`, and provider-specific metadata.
The CIS output formatter already supports every provider listed above. For a brand-new framework that spans several providers, add one transformer per provider in `prowler/lib/outputs/compliance/<framework>/` and extend the summary-table dispatcher accordingly. See [Output Formatter](#output-formatter).
## Output Formatter
Prowler renders every compliance framework in two forms: a detailed CSV report written to disk, and a summary table printed in the CLI. Both are produced by the output formatter package for the framework.
For a new framework named `my_framework`, create:
```
prowler/lib/outputs/compliance/my_framework/
├── __init__.py
├── my_framework.py # CLI summary table dispatcher
├── my_framework_aws.py # Per-provider transformer
└── models.py # CSV row Pydantic model
```
### Step 1: Define the CSV Row Model
In `models.py`, declare a Pydantic v1 model with one field per CSV column. Use existing models such as `AWSCISModel` in `prowler/lib/outputs/compliance/cis/models.py` as the reference. Fields typically include `Provider`, `Description`, `AccountId`, `Region`, `AssessmentDate`, `Requirements_Id`, `Requirements_Description`, one `Requirements_Attributes_*` field per attribute key, plus the finding fields `Status`, `StatusExtended`, `ResourceId`, `ResourceName`, `CheckId`, `Muted`, `Framework`, `Name`.
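A minimal sketch of such a model, assuming Pydantic v1 and columns mirroring `AWSCISModel`; the exact `Requirements_Attributes_*` columns depend on the framework's attribute shape:
```python
from pydantic import BaseModel


class AWSMyFrameworkModel(BaseModel):
    """CSV row for My Framework on AWS; one field per output column (sketch)."""

    Provider: str
    Description: str
    AccountId: str
    Region: str
    AssessmentDate: str
    Requirements_Id: str
    Requirements_Description: str
    Requirements_Attributes_Section: str  # one field per attribute key
    Requirements_Attributes_Service: str
    Status: str
    StatusExtended: str
    ResourceId: str
    ResourceName: str
    CheckId: str
    Muted: bool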
### Step 2: Implement the Transformer Class
In `my_framework_aws.py`, subclass `ComplianceOutput` from `prowler.lib.outputs.compliance.compliance_output` and implement `transform(findings, compliance, compliance_name)`. Iterate over `findings`, match each finding to the requirements it satisfies through `finding.compliance.get(compliance_name, [])`, and append one row per attribute to `self._data`.
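A schematic transformer following that contract; `AWSMyFrameworkModel` is the Step 1 sketch, and the finding attribute names below (`account_uid`, `status_extended`, and so on) are assumptions; mirror whatever the CIS transformer actually reads:
```python
from prowler.lib.outputs.compliance.compliance_output import ComplianceOutput


class AWSMyFramework(ComplianceOutput):
    def transform(self, findings, compliance, compliance_name) -> None:
        for finding in findings:
            # Requirement IDs this finding is mapped to for this framework
            for requirement_id in finding.compliance.get(compliance_name, []):
                for requirement in compliance.Requirements:
                    if requirement.Id != requirement_id:
                        continue
                    for attribute in requirement.Attributes:
                        self._data.append(
                            AWSMyFrameworkModel(
                                Provider=finding.provider,
                                Description=compliance.Description,
                                AccountId=finding.account_uid,
                                Region=finding.region,
                                AssessmentDate=str(finding.timestamp),
                                Requirements_Id=requirement.Id,
                                Requirements_Description=requirement.Description,
                                Requirements_Attributes_Section=attribute.Section,
                                Requirements_Attributes_Service=attribute.Service,
                                Status=finding.status,
                                StatusExtended=finding.status_extended,
                                ResourceId=finding.resource_uid,
                                ResourceName=finding.resource_name,
                                CheckId=finding.check_id,
                                Muted=finding.muted,
                            )
                        )
```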
### Step 3: Add the Summary-Table Dispatcher
In `my_framework.py`, implement `get_my_framework_table(findings, bulk_checks_metadata, compliance_framework, output_filename, output_directory, compliance_overview)` following the pattern in `prowler/lib/outputs/compliance/cis/cis.py`.
### Step 4: Register the Framework in the Dispatchers
- Add the dispatcher call in `prowler/lib/outputs/compliance/compliance.py`, inside `display_compliance_table`, with a branch such as `elif "my_framework" in compliance_framework:`.
- Register the CSV model and transformer in `prowler/lib/outputs/compliance/compliance_output.py` so the CSV file is emitted during the scan.
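A schematic sketch of the dispatcher branch, following the substring-match pattern described above; names and argument plumbing are illustrative, not verbatim:
```python
def display_compliance_table(compliance_framework: str, **kwargs) -> None:
    if "cis_" in compliance_framework:
        ...  # existing branches
    elif "my_framework" in compliance_framework:
        get_my_framework_table(**kwargs)  # new branch for the framework


def get_my_framework_table(**kwargs) -> None:
    ...  # implemented in prowler/lib/outputs/compliance/my_framework/my_framework.py
```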
<Note>
For NIST-style catalogs that use `Generic_Compliance_Requirement_Attribute`, no custom formatter is needed. The generic formatter in `prowler/lib/outputs/compliance/generic/` handles them automatically, provided the JSON validates against the generic attribute schema.
</Note>
## Version Handling
Prowler matches frameworks by concatenating `Framework` and `Version`. A missing or empty `Version` collapses several frameworks to the same key and breaks CLI filtering with `--compliance`.
- Always set `Version` to a non-empty string, even for frameworks that rename editions rather than version them. Use the edition identifier (for example `RD2022`, `v2025.10`, `4.0`).
- When the source catalog has no version, use the first year of adoption or the release date.
- Make sure the version substring embedded in the filename matches `Version`, because the CLI dispatcher reads `compliance_framework.split("_")[1]` to select the correct version.
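For instance, given the key format `<framework>_<version>_<provider>`:
```python
compliance_key = "cis_2.0_aws"          # <framework>_<version>_<provider>
version = compliance_key.split("_")[1]  # "2.0": must equal the JSON "Version"
print(version)
```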
## Validating the Framework Locally
Follow the steps below before opening a pull request.
### 1. Run the Compliance Model Validator
```bash
poetry run python prowler-cli.py <provider> --list-compliance
```
The framework must appear in the output. A validation error indicates a schema mismatch between the JSON file and the attribute model.
### 2. Run a Scan Filtered by the Framework
```bash
poetry run python prowler-cli.py <provider> \
--compliance <framework>_<version>_<provider> \
--log-level ERROR
```
Verify that:
- Prowler produces a CSV file under `output/compliance/` with the expected name.
- The CLI summary table lists every section in the framework.
- Findings roll up under the expected requirements.
### 3. Inspect the CSV Output
Open the generated CSV and confirm:
- All columns defined in `models.py` appear.
- Every requirement has at least one row per scanned resource.
- Values such as `Requirements_Attributes_Section` reflect the JSON content.
### 4. Verify the Framework in Prowler App
Launch Prowler App locally (`docker compose up` from the repository root) and run a scan with the new compliance framework. Confirm the compliance page renders the requirements, sections, and status widgets correctly.
## Testing
Compliance contributions require two layers of tests.
- **Schema tests** exercise the Pydantic models. Extend `tests/lib/check/universal_compliance_models_test.py` with a case that loads the new JSON file and asserts the attribute type matches the expected model.
- **Output tests** exercise the transformer. Mirror the structure under `tests/lib/outputs/compliance/<framework>/` with fixtures that feed synthetic findings through the transformer and assert the resulting CSV rows.
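A sketch of such a schema test; the top-level model name (`Compliance`) and the fixture path are assumptions to be checked against `compliance_models.py` and the existing test module:
```python
import json

from prowler.lib.check.compliance_models import Compliance  # model name assumed


def test_my_framework_schema():
    with open("prowler/compliance/aws/my_framework_1.0_aws.json") as fh:
        framework = Compliance(**json.load(fh))

    assert framework.Framework == "My-Framework"
    assert framework.Requirements, "the framework must define requirements"
```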
Run the suite with:
```bash
poetry run pytest -n auto tests/lib/check/universal_compliance_models_test.py \
tests/lib/outputs/compliance/
```
For guidance on writing Prowler SDK tests, refer to [Unit Testing](/developer-guide/unit-testing).
## Submitting the Pull Request
Before opening the pull request:
1. Run the complete QA pipeline:
```bash
poetry run pre-commit run --all-files
poetry run pytest -n auto
```
2. Add a changelog entry under the `### 🚀 Added` section of `prowler/CHANGELOG.md`, describing the new framework and the providers it covers.
3. Follow the [Pull Request Template](https://github.com/prowler-cloud/prowler/blob/master/.github/pull_request_template.md) and set the PR title using Conventional Commits, for example `feat(compliance): add My Framework 1.0 for AWS`.
4. Request review from the compliance codeowners listed in `.github/CODEOWNERS`.
## Troubleshooting
The following issues are the most common when contributing a compliance framework.
- **`ValidationError: field required` during scan.** The JSON is missing a required attribute field. Re-check the matching Pydantic model in `prowler/lib/check/compliance_models.py`.
- **All attributes collapse to `Generic_Compliance_Requirement_Attribute` values.** The Pydantic `Union` is ordered incorrectly, or the JSON matches only the generic shape. Move the generic model to the last Union position and ensure every required field is present in the JSON.
- **`--compliance` filter does not find the framework.** The filename does not match the expected pattern `<framework>_<version>_<provider>.json`, the version is empty, or the file lives outside `prowler/compliance/<provider>/`.
- **CLI summary table is empty but the CSV is populated.** The dispatcher branch in `prowler/lib/outputs/compliance/compliance.py` is missing or its substring match does not catch the framework key.
- **CSV file is missing after the scan.** The transformer class is not registered in `prowler/lib/outputs/compliance/compliance_output.py`, or `transform()` raises silently. Run the scan with `--log-level DEBUG`.
- **Findings do not roll up under a requirement.** A check listed in `Checks` either does not exist for that provider or is spelled incorrectly. Run `--list-checks | grep <check_name>` to confirm.
## Reference Examples
Use the following files as templates when modeling a new contribution.
- `prowler/compliance/aws/cis_2.0_aws.json`: CIS attribute shape.
- `prowler/compliance/aws/nist_800_53_revision_5_aws.json`: Generic attribute shape.
- `prowler/compliance/aws/ccc_aws.json`: CCC attribute shape.
- `prowler/compliance/azure/ens_rd2022_azure.json`: ENS attribute shape.
- `prowler/lib/check/compliance_models.py`: Canonical Pydantic schemas.
- `prowler/lib/outputs/compliance/cis/`: Reference implementation of a multi-provider output formatter.
- `prowler/lib/outputs/compliance/generic/`: Reference implementation of a generic output formatter.
Finally, to produce a proper output file for your reports, create your framework's data model in `prowler/lib/outputs/models.py` and the CLI table output in `prowler/lib/outputs/compliance.py`. If you create a new CSV model, you also need to add a new conditional in `prowler/lib/outputs/file_descriptors.py`.
+1 -5
View File
@@ -177,6 +177,7 @@
"pages": [
"user-guide/cli/tutorials/misc",
"user-guide/cli/tutorials/reporting",
"user-guide/cli/tutorials/compliance",
"user-guide/cli/tutorials/dashboard",
"user-guide/cli/tutorials/configuration_file",
"user-guide/cli/tutorials/logging",
@@ -338,7 +339,6 @@
{
"group": "Compliance",
"pages": [
"user-guide/compliance/tutorials/compliance",
"user-guide/compliance/tutorials/threatscore"
]
},
@@ -504,10 +504,6 @@
}
},
"redirects": [
{
"source": "/user-guide/cli/tutorials/compliance",
"destination": "/user-guide/compliance/tutorials/compliance"
},
{
"source": "/projects/prowler-open-source/en/latest/tutorials/prowler-app-lighthouse",
"destination": "/user-guide/tutorials/prowler-app-lighthouse"
@@ -121,8 +121,8 @@ To update the environment file:
Edit the `.env` file and change version values:
```env
PROWLER_UI_VERSION="5.26.1"
PROWLER_API_VERSION="5.26.1"
PROWLER_UI_VERSION="5.25.3"
PROWLER_API_VERSION="5.25.3"
```
<Note>
Binary files not shown (seven images removed; sizes 38 KiB, 48 KiB, 534 KiB, 659 KiB, 759 KiB, 62 KiB, and 534 KiB).
@@ -0,0 +1,80 @@
---
title: 'Compliance'
---
Prowler allows you to execute checks based on requirements defined in compliance frameworks. By default, it executes every supported framework and gives you an overview of the status of each:
<img src="/images/cli/compliance/compliance.png" />
You can find CSVs containing detailed compliance results in the compliance folder within Prowler's output folder.
## Execute Prowler based on Compliance Frameworks
Prowler can analyze your environment against a specific compliance framework to get more details. To do so, use the `--compliance` option:
```sh
prowler <provider> --compliance <compliance_framework>
```
Standard results are shown, followed by the framework information, as in the sample below for CIS AWS 2.0. A CSV file with the detailed results is generated as well.
<img src="/images/cli/compliance/compliance-cis-sample1.png" />
<Note>
**If Prowler can't find a resource related to a check from a compliance requirement, this requirement won't appear in the output.**
</Note>
## List Available Compliance Frameworks
To see which compliance frameworks are covered by Prowler, use the `--list-compliance` option:
```sh
prowler <provider> --list-compliance
```
Or you can visit [Prowler Hub](https://hub.prowler.com/compliance).
## List Requirements of Compliance Frameworks
To list requirements for a compliance framework, use the `--list-compliance-requirements` option:
```sh
prowler <provider> --list-compliance-requirements <compliance_framework(s)>
```
Example for the first requirements of CIS 1.5 for AWS:
```
Listing CIS 1.5 AWS Compliance Requirements:
Requirement Id: 1.1
- Description: Maintain current contact details
- Checks:
account_maintain_current_contact_details
Requirement Id: 1.2
- Description: Ensure security contact information is registered
- Checks:
account_security_contact_information_is_registered
Requirement Id: 1.3
- Description: Ensure security questions are registered in the AWS account
- Checks:
account_security_questions_are_registered_in_the_aws_account
Requirement Id: 1.4
- Description: Ensure no 'root' user account access key exists
- Checks:
iam_no_root_access_key
Requirement Id: 1.5
- Description: Ensure MFA is enabled for the 'root' user account
- Checks:
iam_root_mfa_enabled
[redacted]
```
## Create and Contribute Other Security Frameworks
This information is part of the Developer Guide and can be found [here](/developer-guide/security-compliance-framework).
@@ -56,7 +56,6 @@ The following list includes all the AWS checks with configurable variables that
| `elb_is_in_multiple_az` | `elb_min_azs` | Integer |
| `elbv2_is_in_multiple_az` | `elbv2_min_azs` | Integer |
| `guardduty_is_enabled` | `mute_non_default_regions` | Boolean |
| `iam_user_access_not_stale_to_sagemaker` | `max_unused_sagemaker_access_days` | Integer |
| `iam_user_accesskey_unused` | `max_unused_access_keys_days` | Integer |
| `iam_user_console_access_unused` | `max_console_access_days` | Integer |
| `organizations_delegated_administrators` | `organizations_trusted_delegated_administrators` | List of Strings |
@@ -187,8 +186,6 @@ aws:
max_unused_access_keys_days: 45
# aws.iam_user_console_access_unused --> CIS recommends 45 days
max_console_access_days: 45
# aws.iam_user_access_not_stale_to_sagemaker --> default 90 days
max_unused_sagemaker_access_days: 90
# AWS EC2 Configuration
# aws.ec2_elastic_ip_shodan
@@ -1,267 +0,0 @@
---
title: 'Compliance'
description: 'Run security checks against compliance frameworks, review posture across providers, and download CSV or PDF reports from Prowler Cloud, Prowler App, and Prowler CLI.'
---
Prowler maps every security check to one or more industry-standard compliance frameworks, so a single scan produces both technical findings and framework-aligned evidence. The same evaluation runs identically whether scans are launched from Prowler Cloud, Prowler App, or Prowler CLI.
Out of the box, Prowler covers frameworks such as CIS Benchmarks, NIST 800-53, NIST CSF, NIS2, ENS RD2022, ISO 27001, PCI-DSS, SOC 2, GDPR, HIPAA, AWS Well-Architected, BSI C5, CSA CCM, MITRE ATT&CK, KISA ISMS-P, FedRAMP, and Prowler ThreatScore. The full catalog is available at [Prowler Hub](https://hub.prowler.com/compliance).
<Note>
For the unified compliance score methodology used across frameworks, see [Prowler ThreatScore Documentation](/user-guide/compliance/tutorials/threatscore).
</Note>
<CardGroup cols={2}>
<Card title="Prowler Cloud" icon="cloud" href="#prowler-cloud">
Review compliance posture using Prowler Cloud
</Card>
<Card title="Prowler CLI" icon="terminal" href="#prowler-cli">
Run compliance scans using Prowler CLI
</Card>
</CardGroup>
## Prowler Cloud
The Compliance section in Prowler Cloud and Prowler App centralizes compliance posture across every connected provider. It aggregates scan results, surfaces Prowler ThreatScore, and exposes detailed requirement-level evidence for each supported framework.
### Accessing the Compliance Section
To open the compliance overview, follow these steps:
1. Sign in to Prowler Cloud at [cloud.prowler.com](https://cloud.prowler.com/sign-in) or to a self-hosted Prowler App instance.
2. Select **Compliance** from the left navigation.
The page lists every framework evaluated by the most recent completed scan of the selected provider.
<img src="/images/compliance/prowler-app-compliance-overview.png" alt="Compliance overview page in Prowler Cloud and App showing filters, the Prowler ThreatScore card, and the framework grid" width="900" />
<Note>
Compliance results require at least one completed scan. If no scan has finished yet, Prowler Cloud and App display a notice prompting to launch or wait for a scan to complete.
</Note>
### Filtering Compliance Results
The filters bar at the top of the overview controls which scan and which regions feed every card on the page.
#### Scan Selector
The scan selector lists completed scans across all connected providers. Each entry includes the provider type, alias, and completion timestamp. Selecting a scan updates the entire page, including ThreatScore and every framework card.
#### Region Filter
The region multi-select narrows results to one or more regions detected in the selected scan. Use it to evaluate compliance posture for a specific geography or account boundary. The filter applies to:
* The framework grid scores and pass/fail counts.
* The detailed requirement view inside each framework.
<Note>
Region filters apply only to providers that report a region attribute (for example, AWS, Azure, and Google Cloud). Providers without regions ignore the filter.
</Note>
#### Clearing Filters
Select **Clear filters** to reset both the region filter and any other applied filter to its default state. The scan selector is preserved.
### Reviewing the Prowler ThreatScore Card
When the selected scan includes Prowler ThreatScore data, a dedicated card appears at the top of the overview, showing:
* The overall ThreatScore (0-100) with a color-coded indicator.
* A progress bar reflecting current posture.
* Per-pillar bars for IAM, Attack Surface, and Logging and Monitoring.
<img src="/images/compliance/prowler-app-compliance-threatscore-card.png" alt="Prowler ThreatScore badge on the Compliance overview showing the overall score and per-pillar bars" width="900" />
Selecting the card opens the ThreatScore framework detail page, covered in [Working With the Framework Detail Page](#working-with-the-framework-detail-page).
For a complete explanation of the methodology, formula, and weighting, see [Prowler ThreatScore Documentation](/user-guide/compliance/tutorials/threatscore).
### Exploring the Framework Grid
Below ThreatScore, the framework grid shows one card per supported compliance framework. Each card includes:
* **Framework logo and name:** Identifies the standard (CIS, NIST, ENS, ISO 27001, PCI-DSS, SOC 2, NIS2, CSA CCM, MITRE ATT&CK, and more).
* **Version:** Indicates the framework version applied to the scan.
* **Score:** The percentage of passing requirements over the total evaluated.
* **Passing Requirements:** A `passed / total` counter for additional context.
* **Download dropdown:** Quick access to the CSV report and, when supported, the PDF report.
<img src="/images/compliance/prowler-app-compliance-card-download.png" alt="Download dropdown on a framework card showing CSV and PDF report options" width="500" />
Select any card to open the framework detail page.
<Note>
Score color coding follows three thresholds: red for severely low compliance, amber for partial compliance, and green for healthy posture. Hover over the score for the exact percentage.
</Note>
### Working With the Framework Detail Page
The detail page provides everything needed to evaluate a single framework: aggregate metrics, top failure sections, and a requirement-by-requirement view.
#### Header, Summary Cards, and Download Actions
The header shows the framework name, version, the provider scan being reviewed, and CSV / PDF download buttons. Below the header, summary cards condense the framework state at a glance:
* **Requirements Status:** Donut chart with `Pass`, `Fail`, and `Manual` counts plus the total number of requirements.
* **Top Failed Sections:** Ranks the sections or pillars with the highest number of failing requirements.
* **ThreatScore Breakdown:** Appears only on the ThreatScore framework. It shows the overall score and per-pillar scores aligned with the ThreatScore pillars (IAM, Attack Surface, Logging and Monitoring, Encryption).
The same layout applies to every compliance framework. ThreatScore is the only framework that includes the extra Breakdown card on the left; for any other framework, the Requirements Status and Top Failed Sections cards span the full row.
<img src="/images/compliance/prowler-app-compliance-threatscore-detail.png" alt="Prowler ThreatScore detail page including the extra Breakdown card alongside Requirements Status and Top Failed Sections" width="900" />
<img src="/images/compliance/prowler-app-compliance-detail-header.png" alt="CIS framework detail page showing only the Requirements Status donut and the Top Failed Sections card, without the ThreatScore Breakdown" width="900" />
#### Requirements Accordion
Below the summary cards, an accordion organizes every requirement of the framework. Expand a section to see:
* **Requirement ID and title:** Reflect the official identifier from the framework.
* **Pass / Fail / Manual badges:** Indicate the status of each requirement based on the underlying checks.
* **Custom details panel:** Opens additional context tailored to the framework. For frameworks with custom layouts, the panel surfaces fields such as control objectives, severity, attack tactics, regulatory references, or required evidence.
Select a requirement to open the detail panel and review the failing checks, the resources affected, and remediation guidance.
<img src="/images/compliance/prowler-app-compliance-requirements-accordion.png" alt="Expanded CIS requirement showing description, rationale, remediation procedure, audit procedure, profile and assessment tags, references, and the underlying check" width="900" />
##### Frameworks With Custom Detail Layouts
Several frameworks include enriched detail panels that highlight fields specific to the standard:
* ASD Essential Eight
* AWS Well-Architected Framework
* BSI C5
* Cloud Controls Matrix (CSA CCM)
* CIS Benchmarks
* CCC (Common Cloud Controls)
* ENS RD2022
* ISO 27001
* KISA ISMS-P
* MITRE ATT&CK
* Prowler ThreatScore
Frameworks without a custom layout fall back to the generic details panel, which still exposes the official requirement metadata captured by Prowler.
### Downloading Compliance Reports
Prowler Cloud and App expose two formats:
* **CSV report:** Every requirement, every check, and every finding for the selected scan and filters. Available for all supported frameworks.
* **PDF report:** Curated executive-style report. Currently supported for Prowler ThreatScore, ENS RD2022, NIS2, and CSA CCM. Additional PDF reports are added in subsequent Prowler releases.
<Note>
**PDF detail section is capped at the first 100 failed findings per check.** The PDF is intended as an executive/auditor document, not a raw data dump: when a check produces more than 100 failed findings, the report renders the first 100 and shows a banner pointing the reader to the CSV or JSON export for the complete list. The CSV and the ZIP scan output are never truncated.
The cap is configurable per deployment via the `DJANGO_PDF_MAX_FINDINGS_PER_CHECK` environment variable on the Prowler API workers; set it to `0` to disable truncation entirely (see the example after this note). The default value of `100` keeps the PDF readable and bounded in size on enterprise-scale scans (hundreds of thousands of findings) without affecting smaller scans, where the cap is rarely reached.
Only **failed** findings are rendered in the detail section. PASS findings for the same check are excluded at query time. The PDF surfaces what needs attention, and the CSV/JSON exports surface everything for forensic review.
</Note>
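A minimal sketch of adjusting the cap, assuming the API workers read their settings from an environment file (where the variable lives depends on your deployment):
```sh
# Illustrative: set on the Prowler API workers' environment.
# Raise the cap to 500 failed findings per check:
DJANGO_PDF_MAX_FINDINGS_PER_CHECK=500
# Or disable truncation entirely (expect much larger PDFs on large scans):
DJANGO_PDF_MAX_FINDINGS_PER_CHECK=0
```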
#### Downloading From the Detail Page
Inside any framework detail page, the **CSV** and **PDF** buttons in the header trigger the same downloads as the overview dropdown. The PDF button only appears for frameworks that support it.
<img src="/images/compliance/prowler-app-compliance-detail-download.png" alt="Top of a framework detail page showing the CSV and PDF download buttons in the header" width="900" />
<Note>
Region filters disable the per-card download dropdown to avoid generating partial reports. Open the framework detail page when downloads scoped to a region are required, or remove the region filter to download the full report.
</Note>
#### Downloading the Full Scan Output
To export every framework, finding, and resource at once, use the **Scan Jobs** section instead. The ZIP archive contains the CSV, JSON-OCSF, and HTML reports plus a `compliance/` subfolder with one CSV per framework. See [Prowler App — Getting Started](/user-guide/tutorials/prowler-app) for details.
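As an orientation aid, a typical archive layout might look like this (file names vary with provider, account, and timestamp; the tree below is illustrative):
```
prowler-output-<account-id>-<timestamp>.zip
├── prowler-output-<account-id>-<timestamp>.csv        # all findings, CSV
├── prowler-output-<account-id>-<timestamp>.ocsf.json  # all findings, JSON-OCSF
├── prowler-output-<account-id>-<timestamp>.html       # HTML report
└── compliance/
    └── ...                                            # one CSV per framework
```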
### API Access
Every report available in the UI is also reachable through the Prowler API. The following endpoints are the most relevant:
* [Retrieve a scan compliance report as CSV](https://api.prowler.com/api/v1/docs#tag/Scan/operation/scans_compliance_retrieve)
* [Download a complete scan output (ZIP)](https://api.prowler.com/api/v1/docs#tag/Scan/operation/scans_report_retrieve)
Use the API to integrate compliance evidence into ticketing systems, executive dashboards, or downstream pipelines.
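A minimal sketch, assuming a valid API token in `$PROWLER_API_TOKEN` and a known scan id; the paths below are illustrative assumptions, so confirm the exact routes and parameters in the linked API reference:
```sh
# Illustrative endpoints; verify the exact routes in the API reference above.
SCAN_ID="<your-scan-id>"
# Compliance report (CSV) for one scan:
curl -H "Authorization: Bearer $PROWLER_API_TOKEN" \
  -o compliance.csv \
  "https://api.prowler.com/api/v1/scans/$SCAN_ID/compliance"
# Full scan output (ZIP):
curl -H "Authorization: Bearer $PROWLER_API_TOKEN" \
  -o scan-output.zip \
  "https://api.prowler.com/api/v1/scans/$SCAN_ID/report"
```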
## Prowler CLI
Prowler CLI evaluates the same compliance frameworks as Prowler Cloud and App, and produces detailed CSV outputs alongside the standard scan results. By default, it runs every supported framework and prints a status summary at the end of the scan:
<img src="/images/cli/compliance/compliance.png" />
Detailed compliance results are stored as CSV files under the `compliance/` subfolder of Prowler's output directory.
### Scan a Specific Compliance Framework
To scope a scan to a single framework and get the framework-specific summary, use the `--compliance` option:
```sh
prowler <provider> --compliance <compliance_framework>
```
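For instance, assuming the CIS AWS 2.0 framework id `cis_2.0_aws` (framework ids are listed by `--list-compliance`), scoping an AWS scan to that framework looks like:
```sh
prowler aws --compliance cis_2.0_aws
```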
Standard results plus the framework breakdown are printed to the terminal. A dedicated CSV is also generated under the `compliance/` output folder. Sample output for CIS AWS 2.0:
<img src="/images/cli/compliance/compliance-cis-sample1.png" />
<Note>
If Prowler cannot find a resource related to a check in a compliance requirement, that requirement is omitted from the output.
</Note>
### List Available Compliance Frameworks
To see which compliance frameworks are covered by a given provider, use the `--list-compliance` option:
```sh
prowler <provider> --list-compliance
```
The full catalog is also browsable at [Prowler Hub](https://hub.prowler.com/compliance).
### List Requirements of a Compliance Framework
To inspect the requirements that compose a specific framework, use the `--list-compliance-requirements` option:
```sh
prowler <provider> --list-compliance-requirements <compliance_framework(s)>
```
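For example, assuming the framework id `cis_1.5_aws`, the CIS 1.5 AWS requirements can be inspected with:
```sh
prowler aws --list-compliance-requirements cis_1.5_aws
```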
Sample output for the first requirements of CIS 1.5 for AWS:
```
Listing CIS 1.5 AWS Compliance Requirements:
Requirement Id: 1.1
- Description: Maintain current contact details
- Checks:
account_maintain_current_contact_details
Requirement Id: 1.2
- Description: Ensure security contact information is registered
- Checks:
account_security_contact_information_is_registered
Requirement Id: 1.3
- Description: Ensure security questions are registered in the AWS account
- Checks:
account_security_questions_are_registered_in_the_aws_account
Requirement Id: 1.4
- Description: Ensure no 'root' user account access key exists
- Checks:
iam_no_root_access_key
Requirement Id: 1.5
- Description: Ensure MFA is enabled for the 'root' user account
- Checks:
iam_root_mfa_enabled
[redacted]
```
## Contributing New Compliance Frameworks
To request a new framework or contribute one, see [Creating a New Security Compliance Framework in Prowler](/developer-guide/security-compliance-framework). The developer guide covers the Pydantic schema, JSON catalog, output formatter, and PR submission steps required to ship a new framework end to end.
## Related Documentation
* [Prowler ThreatScore Documentation](/user-guide/compliance/tutorials/threatscore)
* [Creating a New Security Compliance Framework in Prowler](/developer-guide/security-compliance-framework)
* [Prowler App — Getting Started](/user-guide/tutorials/prowler-app)
@@ -4,7 +4,7 @@ title: 'Check Mapping Prowler v4/v3 to v2'
Prowler v3 and v4 introduce distinct identifiers while preserving the checks originally implemented in v2. This change was made because, in previous versions, check names were primarily derived from the CIS Benchmark for AWS. Starting with v3 and v4, all checks are independent of any security framework and have unique names and IDs.
For more details on the updated compliance implementation in Prowler v4 and v3, refer to the [Compliance](/user-guide/compliance/tutorials/compliance) section.
For more details on the updated compliance implementation in Prowler v4 and v3, refer to the [Compliance](/user-guide/cli/tutorials/compliance) section.
```
checks_v4_v3_to_v2_mapping = {
@@ -398,7 +398,7 @@ prowler oci --severity critical high
### Next Steps
- Learn about [Compliance Frameworks](/user-guide/compliance/tutorials/compliance) in Prowler
- Learn about [Compliance Frameworks](/user-guide/cli/tutorials/compliance) in Prowler
- Review [Prowler Output Formats](/user-guide/cli/tutorials/reporting)
- Explore [Integrations](/user-guide/cli/tutorials/integrations) with SIEM and ticketing systems
@@ -2,7 +2,7 @@
Prowler v3 and v4 introduce distinct identifiers while preserving the checks originally implemented in v2. This change was made because, in previous versions, check names were primarily derived from the CIS Benchmark for AWS. Starting with v3 and v4, all checks are independent of any security framework and have unique names and IDs.
For more details on the updated compliance implementation in Prowler v4 and v3, refer to the [Compliance](/user-guide/compliance/tutorials/compliance) section.
For more details on the updated compliance implementation in Prowler v4 and v3, refer to the [Compliance](/user-guide/cli/tutorials/compliance) section.
```
checks_v4_v3_to_v2_mapping = {
-4
View File
@@ -14,19 +14,15 @@ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Action | Skill |
|--------|-------|
| Add changelog entry for a PR or feature | `prowler-changelog` |
| Adding a compliance output formatter (per-provider class + table dispatcher) | `prowler-compliance` |
| Adding new providers | `prowler-provider` |
| Adding services to existing providers | `prowler-provider` |
| Auditing check-to-requirement mappings as a cloud auditor | `prowler-compliance` |
| Create PR that requires changelog entry | `prowler-changelog` |
| Creating new checks | `prowler-sdk-check` |
| Creating/updating compliance frameworks | `prowler-compliance` |
| Fixing compliance JSON bugs (duplicate IDs, empty Section, stale refs) | `prowler-compliance` |
| Mapping checks to compliance controls | `prowler-compliance` |
| Mocking AWS with moto in tests | `prowler-test-sdk` |
| Review changelog format and conventions | `prowler-changelog` |
| Reviewing compliance framework PRs | `prowler-compliance-review` |
| Syncing compliance framework with upstream catalog | `prowler-compliance` |
| Update CHANGELOG.md in any component | `prowler-changelog` |
| Updating existing checks and metadata | `prowler-sdk-check` |
| Writing Prowler SDK tests | `prowler-test-sdk` |
-14
View File
@@ -2,20 +2,6 @@
All notable changes to the **Prowler SDK** are documented in this file.
## [5.27.0] (Prowler UNRELEASED)
### 🚀 Added
- `entra_service_principal_no_secrets_for_permanent_tier0_roles` check for M365 provider [(#10788)](https://github.com/prowler-cloud/prowler/pull/10788)
- `iam_user_access_not_stale_to_sagemaker` check for AWS provider with configurable `max_unused_sagemaker_access_days` (default 90) [(#11000)](https://github.com/prowler-cloud/prowler/pull/11000)
- `cloudtrail_bedrock_logging_enabled` check for AWS provider [(#10858)](https://github.com/prowler-cloud/prowler/pull/10858)
### 🔄 Changed
- `entra_emergency_access_exclusion` check for M365 provider now scopes the exclusion requirement to enabled Conditional Access policies with a `Block` grant control instead of every enabled policy, focusing on the lockout-relevant policy set [(#10849)](https://github.com/prowler-cloud/prowler/pull/10849)
---
## [5.26.2] (Prowler UNRELEASED)
### 🐞 Fixed
@@ -550,7 +550,6 @@
"apigatewayv2_api_access_logging_enabled",
"awslambda_function_invoke_api_operations_cloudtrail_logging_enabled",
"cloudfront_distributions_logging_enabled",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudtrail_logs_s3_bucket_access_logging_enabled",
"directoryservice_directory_log_forwarding_enabled",
-4
View File
@@ -3461,7 +3461,6 @@
],
"Checks": [
"kinesis_stream_data_retention_period",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_multi_region_enabled_logging_management_events"
]
},
@@ -3670,7 +3669,6 @@
"awslambda_function_invoke_api_operations_cloudtrail_logging_enabled",
"bedrock_model_invocation_logging_enabled",
"cloudfront_distributions_logging_enabled",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudtrail_logs_s3_bucket_access_logging_enabled",
"cloudtrail_multi_region_enabled_logging_management_events",
@@ -5290,7 +5288,6 @@
"cognito_user_pool_blocks_compromised_credentials_sign_in_attempts",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"secretsmanager_secret_unused"
@@ -6362,7 +6359,6 @@
"iam_rotate_access_key_90_days",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_administrator_access_policy",
"iam_user_console_access_unused",
-1
View File
@@ -1958,7 +1958,6 @@
}
],
"Checks": [
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_multi_region_enabled_logging_management_events",
"cloudtrail_cloudwatch_logging_enabled",
@@ -3101,7 +3101,6 @@
"Checks": [
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"iam_user_two_active_access_key"
@@ -3444,7 +3443,6 @@
"Checks": [
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"iam_user_no_setup_initial_access_key"
@@ -3554,7 +3552,6 @@
"Checks": [
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"iam_rotate_access_key_90_days",
@@ -5857,7 +5854,6 @@
}
],
"Checks": [
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_multi_region_enabled_logging_management_events",
"cloudtrail_s3_dataevents_read_enabled",
@@ -544,7 +544,6 @@
"Checks": [
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -109,7 +109,6 @@
"iam_rotate_access_key_90_days",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"iam_user_hardware_mfa_enabled",
@@ -326,7 +325,6 @@
"iam_rotate_access_key_90_days",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"organizations_delegated_administrators"
@@ -39,7 +39,6 @@
"iam_user_hardware_mfa_enabled",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"rds_instance_integration_cloudwatch_logs",
@@ -32,7 +32,6 @@
"iam_user_mfa_enabled_console_access",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"securityhub_enabled"
@@ -110,7 +109,6 @@
"iam_user_mfa_enabled_console_access",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -167,7 +165,6 @@
"iam_user_mfa_enabled_console_access",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -188,7 +185,6 @@
"iam_password_policy_minimum_length_14",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -324,7 +320,6 @@
"iam_no_root_access_key",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"awslambda_function_not_publicly_accessible",
@@ -439,7 +434,6 @@
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
@@ -595,7 +589,6 @@
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
-1
View File
@@ -119,7 +119,6 @@
],
"Checks": [
"apigateway_restapi_logging_enabled",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
-2
View File
@@ -87,7 +87,6 @@
],
"Checks": [
"apigateway_restapi_logging_enabled",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
@@ -633,7 +632,6 @@
],
"Checks": [
"apigateway_restapi_logging_enabled",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
@@ -869,7 +869,6 @@
"Checks": [
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -247,7 +247,6 @@
"iam_root_mfa_enabled",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_rotate_access_key_90_days",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
@@ -1294,7 +1293,6 @@
"bedrock_model_invocation_logging_enabled",
"bedrock_model_invocation_logs_encryption_enabled",
"cloudfront_distributions_logging_enabled",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
@@ -2540,7 +2540,6 @@
"bedrock_model_invocation_logging_enabled",
"bedrock_model_invocation_logs_encryption_enabled",
"cloudfront_distributions_logging_enabled",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_bucket_requires_mfa_delete",
"cloudtrail_cloudwatch_logging_enabled",
"cloudtrail_insights_exist",
@@ -171,7 +171,6 @@
"iam_no_expired_server_certificates_stored",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"iam_no_root_access_key",
-1
View File
@@ -1913,7 +1913,6 @@
"Checks": [
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
],
@@ -32,7 +32,6 @@
"iam_user_mfa_enabled_console_access",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"awslambda_function_not_publicly_accessible",
@@ -77,7 +76,6 @@
"iam_user_mfa_enabled_console_access",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"awslambda_function_not_publicly_accessible",
@@ -166,7 +164,6 @@
"iam_no_root_access_key",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -592,7 +589,6 @@
"iam_password_policy_expires_passwords_within_90_days_or_less",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -23,7 +23,6 @@
"iam_rotate_access_key_90_days",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"securityhub_enabled"
@@ -44,7 +43,6 @@
"Checks": [
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -118,7 +116,6 @@
"iam_rotate_access_key_90_days",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"rds_instance_integration_cloudwatch_logs",
@@ -243,7 +240,6 @@
"iam_no_root_access_key",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"awslambda_function_url_public",
@@ -31,7 +31,6 @@
"iam_user_mfa_enabled_console_access",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"secretsmanager_automatic_rotation_enabled"
@@ -54,7 +53,6 @@
"iam_password_policy_minimum_length_14",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -76,7 +74,6 @@
"iam_password_policy_minimum_length_14",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -98,7 +95,6 @@
"iam_password_policy_minimum_length_14",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -120,7 +116,6 @@
"iam_password_policy_minimum_length_14",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -141,7 +136,6 @@
"iam_password_policy_minimum_length_14",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -253,7 +247,6 @@
"Checks": [
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -292,7 +285,6 @@
"Checks": [
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -869,7 +861,6 @@
"iam_user_mfa_enabled_console_access",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"secretsmanager_automatic_rotation_enabled"
@@ -1208,7 +1199,6 @@
"iam_no_root_access_key",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"awslambda_function_not_publicly_accessible",
@@ -1604,7 +1594,6 @@
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
@@ -2163,7 +2152,6 @@
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
@@ -2191,7 +2179,6 @@
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
@@ -577,7 +577,6 @@
"iam_rotate_access_key_90_days",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"secretsmanager_automatic_rotation_enabled"
@@ -639,7 +638,6 @@
"iam_no_root_access_key",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
]
@@ -707,7 +707,6 @@
"iam_user_console_access_unused",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_two_active_access_key",
"iam_root_credentials_management_enabled",
@@ -1312,7 +1311,6 @@
}
],
"Checks": [
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"cloudtrail_logs_s3_bucket_access_logging_enabled",
@@ -1476,7 +1474,6 @@
"cloudtrail_threat_detection_enumeration",
"cloudtrail_threat_detection_privilege_escalation",
"cloudtrail_threat_detection_llm_jacking",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudtrail_multi_region_enabled_logging_management_events"
]
@@ -1573,7 +1570,6 @@
"cloudtrail_threat_detection_llm_jacking",
"cloudtrail_threat_detection_enumeration",
"cloudtrail_multi_region_enabled_logging_management_events",
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudwatch_log_metric_filter_unauthorized_api_calls",
"cloudwatch_log_metric_filter_authentication_failures",
@@ -1563,7 +1563,6 @@
"Checks": [
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_password_policy_reuse_24",
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
@@ -295,7 +295,6 @@
"Checks": [
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"iam_no_expired_server_certificates_stored"
@@ -341,7 +340,6 @@
"iam_rotate_access_key_90_days",
"iam_role_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_bedrock",
"iam_user_access_not_stale_to_sagemaker",
"iam_user_accesskey_unused",
"iam_user_console_access_unused",
"accessanalyzer_enabled_without_findings"
@@ -818,7 +816,6 @@
}
],
"Checks": [
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_multi_region_enabled_logging_management_events",
"cloudtrail_s3_dataevents_read_enabled",
-1
View File
@@ -346,7 +346,6 @@
}
],
"Checks": [
"cloudtrail_bedrock_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudwatch_changes_to_network_acls_alarm_configured",
"cloudwatch_changes_to_network_gateways_alarm_configured",
@@ -251,7 +251,6 @@
"entra_break_glass_account_fido2_security_key_registered",
"entra_conditional_access_policy_mfa_enforced_for_guest_users",
"entra_default_app_management_policy_enabled",
"entra_emergency_access_exclusion",
"entra_all_apps_conditional_access_coverage",
"entra_conditional_access_policy_device_registration_mfa_required",
"entra_intune_enrollment_sign_in_frequency_every_time",
@@ -261,7 +260,6 @@
"entra_legacy_authentication_blocked",
"entra_managed_device_required_for_authentication",
"entra_seamless_sso_disabled",
"entra_service_principal_no_secrets_for_permanent_tier0_roles",
"entra_users_mfa_enabled",
"exchange_organization_modern_authentication_enabled",
"exchange_transport_config_smtp_auth_disabled",
@@ -284,7 +282,6 @@
"entra_admin_portals_access_restriction",
"entra_app_registration_no_unused_privileged_permissions",
"entra_policy_guest_users_access_restrictions",
"entra_service_principal_no_secrets_for_permanent_tier0_roles",
"sharepoint_external_sharing_managed",
"sharepoint_external_sharing_restricted",
"sharepoint_guest_sharing_restricted"
@@ -674,12 +671,10 @@
"entra_admin_users_phishing_resistant_mfa_enabled",
"entra_admin_users_sign_in_frequency_enabled",
"entra_break_glass_account_fido2_security_key_registered",
"entra_emergency_access_exclusion",
"entra_app_registration_no_unused_privileged_permissions",
"entra_policy_ensure_default_user_cannot_create_tenants",
"entra_policy_guest_invite_only_for_admin_roles",
"entra_seamless_sso_disabled",
"entra_service_principal_no_secrets_for_permanent_tier0_roles"
"entra_seamless_sso_disabled"
]
},
{
@@ -732,11 +727,9 @@
"entra_conditional_access_policy_device_code_flow_blocked",
"entra_conditional_access_policy_directory_sync_account_excluded",
"entra_conditional_access_policy_corporate_device_sign_in_frequency_enforced",
"entra_emergency_access_exclusion",
"entra_identity_protection_sign_in_risk_enabled",
"entra_managed_device_required_for_authentication",
"entra_seamless_sso_disabled",
"entra_service_principal_no_secrets_for_permanent_tier0_roles",
"entra_users_mfa_enabled"
]
},
+1 -1
View File
@@ -48,7 +48,7 @@ class _MutableTimestamp:
timestamp = _MutableTimestamp(datetime.today())
timestamp_utc = _MutableTimestamp(datetime.now(timezone.utc))
prowler_version = "5.27.0"
prowler_version = "5.26.2"
html_logo_url = "https://github.com/prowler-cloud/prowler/"
square_logo_img = "https://raw.githubusercontent.com/prowler-cloud/prowler/dc7d2d5aeb92fdf12e8604f42ef6472cd3e8e889/docs/img/prowler-logo-black.png"
aws_logo = "https://user-images.githubusercontent.com/38561120/235953920-3e3fba08-0795-41dc-b480-9bea57db9f2e.png"
-2
View File
@@ -26,8 +26,6 @@ aws:
max_unused_access_keys_days: 45
# aws.iam_user_console_access_unused --> CIS recommends 45 days
max_console_access_days: 45
# aws.iam_user_access_not_stale_to_sagemaker --> default 90 days
max_unused_sagemaker_access_days: 90
# AWS EC2 Configuration
# aws.ec2_elastic_ip_shodan
@@ -1,44 +0,0 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_bedrock_logging_enabled",
"CheckTitle": "CloudTrail logs Amazon Bedrock API calls for security auditing",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCloudTrailTrail",
"ResourceGroup": "monitoring",
"Description": "**At least one actively logging CloudTrail trail** records **Amazon Bedrock API activity** through management events or advanced event selectors targeting Bedrock resources.\n\nThis check covers **control-plane** operations such as configuration changes through CloudTrail management events and can also cover **data-plane** Bedrock events when advanced event selectors target Bedrock resource types.",
"Risk": "Without CloudTrail logging for Bedrock control-plane operations, changes to prompts, guardrails, agents, flows, or knowledge bases can become invisible, weakening forensics and incident response. Management events do not capture `InvokeModel`; pair this control with `bedrock_model_invocation_logging_enabled` or Bedrock data event selectors for invocation visibility.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/bedrock/latest/userguide/logging-using-cloudtrail.html",
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail put-event-selectors --trail-name <example_resource_name> --advanced-event-selectors '[{\"Name\":\"Bedrock data events\",\"FieldSelectors\":[{\"Field\":\"eventCategory\",\"Equals\":[\"Data\"]},{\"Field\":\"resources.type\",\"Equals\":[\"AWS::Bedrock::Model\",\"AWS::Bedrock::Guardrail\",\"AWS::Bedrock::AgentAlias\",\"AWS::Bedrock::FlowAlias\",\"AWS::Bedrock::InlineAgent\",\"AWS::Bedrock::KnowledgeBase\",\"AWS::Bedrock::Prompt\"]}]}]'",
"NativeIaC": "```yaml\n# CloudFormation: enable Bedrock data event logging on an actively logging trail\nResources:\n ExampleTrail:\n Type: AWS::CloudTrail::Trail\n Properties:\n TrailName: <example_resource_name>\n S3BucketName: <example_resource_name>\n IsLogging: true\n AdvancedEventSelectors:\n - Name: Bedrock data events\n FieldSelectors:\n - Field: eventCategory\n Equals:\n - Data\n - Field: resources.type # CRITICAL: target Bedrock resources\n Equals:\n - AWS::Bedrock::Model\n - AWS::Bedrock::Guardrail\n - AWS::Bedrock::AgentAlias\n - AWS::Bedrock::FlowAlias\n - AWS::Bedrock::InlineAgent\n - AWS::Bedrock::KnowledgeBase\n - AWS::Bedrock::Prompt\n```",
"Other": "1. In the AWS Console, open CloudTrail and select a trail that is actively logging\n2. Edit the trail and enable Management events to capture Bedrock control-plane operations, or add Bedrock advanced data event selectors for data-plane visibility\n3. If using data events, select the Bedrock resource types you want to log\n4. Save changes and confirm the trail remains in logging state",
"Terraform": "```hcl\n# Terraform: enable Bedrock data event logging on an actively logging trail\nresource \"aws_cloudtrail\" \"example_resource\" {\n name = \"example_resource\"\n s3_bucket_name = \"example_resource\"\n\n advanced_event_selector {\n name = \"Bedrock data events\"\n field_selector {\n field = \"eventCategory\"\n equals = [\"Data\"]\n }\n field_selector {\n field = \"resources.type\" # CRITICAL: target Bedrock resources\n equals = [\"AWS::Bedrock::Model\", \"AWS::Bedrock::Guardrail\", \"AWS::Bedrock::AgentAlias\", \"AWS::Bedrock::FlowAlias\", \"AWS::Bedrock::InlineAgent\", \"AWS::Bedrock::KnowledgeBase\", \"AWS::Bedrock::Prompt\"]\n }\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable CloudTrail logging for Amazon Bedrock on **at least one actively logging trail**. At minimum, enable **management events** to capture Bedrock control-plane operations. For invocation-level and other data-plane visibility, add **advanced event selectors** targeting Bedrock resource types or pair this control with `bedrock_model_invocation_logging_enabled`.\n\nFor broader region coverage, pair this control with a separate multi-region CloudTrail check. Centralize logs in an encrypted bucket or CloudWatch Logs to support **defense in depth** and forensic readiness for AI workloads.",
"Url": "https://hub.prowler.com/check/cloudtrail_bedrock_logging_enabled"
}
},
"Categories": [
"logging",
"forensics-ready",
"gen-ai"
],
"DependsOn": [],
"RelatedTo": [
"cloudtrail_multi_region_enabled_logging_management_events",
"bedrock_model_invocation_logging_enabled"
],
"Notes": "This check passes when CloudTrail captures Bedrock control-plane activity via management events or Bedrock data events via advanced selectors. It does not require multi-region coverage, and it does not by itself guarantee `InvokeModel` visibility unless Bedrock data events are selected; use `bedrock_model_invocation_logging_enabled` for model invocation logs. Additional advanced selector filters such as `eventName` or `resources.ARN` can further narrow effective coverage and should be reviewed explicitly."
}
@@ -1,213 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
)
from prowler.providers.aws.services.cloudtrail.cloudtrail_service import (
Event_Selector,
)
class cloudtrail_bedrock_logging_enabled(Check):
"""Ensure CloudTrail is configured to log Amazon Bedrock API calls.
This check verifies whether at least one CloudTrail trail is configured to
capture Amazon Bedrock control-plane API calls through management events or
Bedrock data events through advanced event selectors.
- PASS: A trail logs Bedrock control-plane API calls via management events
or Bedrock data events via Bedrock-specific advanced event selectors.
- FAIL: No CloudTrail trail is configured to log Bedrock API calls.
"""
# Bedrock resource types supported by CloudTrail advanced event selectors.
BEDROCK_RESOURCE_TYPES = frozenset(
{
"AWS::Bedrock::AgentAlias",
"AWS::Bedrock::FlowAlias",
"AWS::Bedrock::Guardrail",
"AWS::Bedrock::InlineAgent",
"AWS::Bedrock::KnowledgeBase",
"AWS::Bedrock::Model",
"AWS::Bedrock::Prompt",
}
)
# Bedrock control-plane event sources, including Bedrock Data Automation.
BEDROCK_EVENT_SOURCES = frozenset(
{
"bedrock.amazonaws.com",
"bedrock-agent.amazonaws.com",
"bedrock-runtime.amazonaws.com",
"bedrock-agent-runtime.amazonaws.com",
"bedrock-data-automation.amazonaws.com",
"bedrock-data-automation-runtime.amazonaws.com",
}
)
def execute(self) -> list[Check_Report_AWS]:
"""Execute the check logic.
Returns:
A list of reports containing the result of the check.
"""
findings = []
if cloudtrail_client.trails is not None:
for trail in cloudtrail_client.trails.values():
if trail.is_logging:
for data_event in trail.data_events:
match_type = self._get_bedrock_match_type(data_event)
if match_type:
report = Check_Report_AWS(
metadata=self.metadata(), resource=trail
)
report.region = trail.home_region
report.status = "PASS"
if match_type == "classic_management":
report.status_extended = (
f"Trail {trail.name} from home region "
f"{trail.home_region} has management events "
"enabled to log Amazon Bedrock control-plane "
"API calls."
)
elif match_type == "advanced_management":
report.status_extended = (
f"Trail {trail.name} from home region "
f"{trail.home_region} has an advanced "
"management event selector to log Amazon "
"Bedrock control-plane API calls."
)
else:
report.status_extended = (
f"Trail {trail.name} from home region "
f"{trail.home_region} has an advanced data "
"event selector to log Amazon Bedrock API "
"calls."
)
findings.append(report)
break
if not findings:
report = Check_Report_AWS(
metadata=self.metadata(), resource=cloudtrail_client.trails
)
report.region = cloudtrail_client.region
report.resource_arn = cloudtrail_client.trail_arn_template
report.resource_id = cloudtrail_client.audited_account
report.status = "FAIL"
report.status_extended = "No CloudTrail trails are configured to log Amazon Bedrock API calls."
findings.append(report)
return findings
def _get_bedrock_match_type(self, data_event: Event_Selector) -> str | None:
"""Return the Bedrock logging match type for an event selector.
Args:
data_event: An Event_Selector object from the trail.
Returns:
The matching selector type, or None if the selector does not log
the Bedrock events covered by this check.
"""
if not data_event.is_advanced:
if self._logs_classic_management_events(data_event.event_selector):
return "classic_management"
return None
field_selectors = data_event.event_selector.get("FieldSelectors", [])
if self._logs_advanced_management_events(field_selectors):
return "advanced_management"
if self._logs_advanced_bedrock_data_events(field_selectors):
return "advanced_data"
return None
@staticmethod
def _logs_classic_management_events(event_selector: dict) -> bool:
"""Check whether a classic selector logs Bedrock control-plane events."""
return event_selector.get(
"IncludeManagementEvents", True
) and event_selector.get("ReadWriteType", "All") in ("All", "WriteOnly")
def _logs_advanced_management_events(self, field_selectors: list[dict]) -> bool:
"""Check whether advanced selectors log Bedrock control-plane events."""
event_category_selectors = [
field for field in field_selectors if field.get("Field") == "eventCategory"
]
if not self._selectors_match_value("Management", event_category_selectors):
return False
read_only_selectors = [
field for field in field_selectors if field.get("Field") == "readOnly"
]
has_read_only_restriction = bool(read_only_selectors) and not any(
self._field_selector_matches_value("false", selector)
for selector in read_only_selectors
)
return not has_read_only_restriction and self._logs_bedrock_management_events(
field_selectors
)
def _logs_advanced_bedrock_data_events(self, field_selectors: list[dict]) -> bool:
"""Check whether advanced selectors log Bedrock data events."""
event_category_selectors = [
field for field in field_selectors if field.get("Field") == "eventCategory"
]
if not self._selectors_match_value("Data", event_category_selectors):
return False
resource_type_selectors = [
field for field in field_selectors if field.get("Field") == "resources.type"
]
return any(
self._selectors_match_value(resource_type, resource_type_selectors)
for resource_type in self.BEDROCK_RESOURCE_TYPES
)
def _logs_bedrock_management_events(self, field_selectors: list[dict]) -> bool:
"""Check whether advanced management selectors include Bedrock sources."""
event_source_selectors = [
field for field in field_selectors if field.get("Field") == "eventSource"
]
if not event_source_selectors:
return True
return any(
self._selectors_match_value(event_source, event_source_selectors)
for event_source in self.BEDROCK_EVENT_SOURCES
)
def _selectors_match_value(self, value: str, selectors: list[dict]) -> bool:
"""Check whether a candidate value satisfies all selectors for a field."""
return bool(selectors) and all(
self._field_selector_matches_value(value, selector)
for selector in selectors
)
@staticmethod
def _field_selector_matches_value(value: str, selector: dict) -> bool:
"""Evaluate a CloudTrail advanced field selector against a candidate value."""
conditions = []
if "Equals" in selector:
conditions.append(value in selector["Equals"])
if "NotEquals" in selector:
conditions.append(value not in selector["NotEquals"])
if "StartsWith" in selector:
conditions.append(
any(value.startswith(prefix) for prefix in selector["StartsWith"])
)
if "NotStartsWith" in selector:
conditions.append(
all(
not value.startswith(prefix) for prefix in selector["NotStartsWith"]
)
)
if "EndsWith" in selector:
conditions.append(
any(value.endswith(suffix) for suffix in selector["EndsWith"])
)
if "NotEndsWith" in selector:
conditions.append(
all(not value.endswith(suffix) for suffix in selector["NotEndsWith"])
)
return all(conditions) if conditions else True
@@ -1,10 +1,9 @@
from datetime import datetime, timezone
from typing import Optional
from dateutil.parser import parse
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.iam.iam_client import iam_client
from prowler.providers.aws.services.iam.lib.policy import (
evaluate_bedrock_staleness,
find_bedrock_service,
)
class iam_role_access_not_stale_to_bedrock(Check):
@@ -34,73 +33,33 @@ class iam_role_access_not_stale_to_bedrock(Check):
"max_unused_bedrock_access_days", 60
)
if iam_client.roles is None:
return findings
for (
role_data,
last_accessed_services,
) in iam_client.role_last_accessed_services.items():
role_name = role_data[0]
role_arn = role_data[1]
for role in iam_client.roles:
last_accessed_services = iam_client.role_last_accessed_services.get(
(role.name, role.arn), []
)
bedrock_service = self._find_bedrock_service(last_accessed_services)
bedrock_service = find_bedrock_service(last_accessed_services)
if bedrock_service is None:
continue
report = Check_Report_AWS(metadata=self.metadata(), resource=role)
report = Check_Report_AWS(
metadata=self.metadata(),
resource={"name": role_name, "arn": role_arn},
)
report.resource_id = role_name
report.resource_arn = role_arn
report.region = iam_client.region
if iam_client.roles is not None:
for iam_role in iam_client.roles:
if iam_role.arn == role_arn:
report.resource_tags = iam_role.tags
break
self._evaluate_bedrock_staleness(
report,
bedrock_service,
max_unused_bedrock_days,
role.name,
"Role",
evaluate_bedrock_staleness(
report, bedrock_service, max_unused_bedrock_days, role_name, "Role"
)
findings.append(report)
return findings
@staticmethod
def _find_bedrock_service(
last_accessed_services: list[dict],
) -> Optional[dict]:
"""Return the Bedrock entry from a service last accessed list."""
for service in last_accessed_services:
if service.get("ServiceNamespace") == "bedrock":
return service
return None
@staticmethod
def _evaluate_bedrock_staleness(
report: Check_Report_AWS,
bedrock_service: dict,
max_days: int,
identity_name: str,
identity_type: str,
) -> None:
"""Populate a check report based on Bedrock access recency."""
last_authenticated = bedrock_service.get("LastAuthenticated")
if last_authenticated is None:
report.status = "FAIL"
report.status_extended = (
f"IAM {identity_type} {identity_name} has Bedrock permissions "
f"but has never used them."
)
return
if isinstance(last_authenticated, str):
last_authenticated = parse(last_authenticated)
days_since_access = (datetime.now(timezone.utc) - last_authenticated).days
if days_since_access > max_days:
report.status = "FAIL"
report.status_extended = (
f"IAM {identity_type} {identity_name} has not accessed Bedrock "
f"in {days_since_access} days (threshold: {max_days} days)."
)
else:
report.status = "PASS"
report.status_extended = (
f"IAM {identity_type} {identity_name} accessed Bedrock "
f"{days_since_access} days ago (threshold: {max_days} days)."
)
@@ -1,10 +1,9 @@
from datetime import datetime, timezone
from typing import Optional
from dateutil.parser import parse
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.iam.iam_client import iam_client
from prowler.providers.aws.services.iam.lib.policy import (
evaluate_bedrock_staleness,
find_bedrock_service,
)
class iam_user_access_not_stale_to_bedrock(Check):
@@ -34,70 +33,32 @@ class iam_user_access_not_stale_to_bedrock(Check):
"max_unused_bedrock_access_days", 60
)
for user in iam_client.users:
last_accessed_services = iam_client.last_accessed_services.get(
(user.name, user.arn), []
)
bedrock_service = self._find_bedrock_service(last_accessed_services)
for (
user_data,
last_accessed_services,
) in iam_client.last_accessed_services.items():
user_name = user_data[0]
user_arn = user_data[1]
bedrock_service = find_bedrock_service(last_accessed_services)
if bedrock_service is None:
continue
report = Check_Report_AWS(metadata=self.metadata(), resource=user)
report = Check_Report_AWS(
metadata=self.metadata(),
resource={"name": user_name, "arn": user_arn},
)
report.resource_id = user_name
report.resource_arn = user_arn
report.region = iam_client.region
for iam_user in iam_client.users:
if iam_user.arn == user_arn:
report.resource_tags = iam_user.tags
break
self._evaluate_bedrock_staleness(
report,
bedrock_service,
max_unused_bedrock_days,
user.name,
"User",
evaluate_bedrock_staleness(
report, bedrock_service, max_unused_bedrock_days, user_name, "User"
)
findings.append(report)
return findings
@staticmethod
def _find_bedrock_service(
last_accessed_services: list[dict],
) -> Optional[dict]:
"""Return the Bedrock entry from a service last accessed list."""
for service in last_accessed_services:
if service.get("ServiceNamespace") == "bedrock":
return service
return None
@staticmethod
def _evaluate_bedrock_staleness(
report: Check_Report_AWS,
bedrock_service: dict,
max_days: int,
identity_name: str,
identity_type: str,
) -> None:
"""Populate a check report based on Bedrock access recency."""
last_authenticated = bedrock_service.get("LastAuthenticated")
if last_authenticated is None:
report.status = "FAIL"
report.status_extended = (
f"IAM {identity_type} {identity_name} has Bedrock permissions "
f"but has never used them."
)
return
if isinstance(last_authenticated, str):
last_authenticated = parse(last_authenticated)
days_since_access = (datetime.now(timezone.utc) - last_authenticated).days
if days_since_access > max_days:
report.status = "FAIL"
report.status_extended = (
f"IAM {identity_type} {identity_name} has not accessed Bedrock "
f"in {days_since_access} days (threshold: {max_days} days)."
)
else:
report.status = "PASS"
report.status_extended = (
f"IAM {identity_type} {identity_name} accessed Bedrock "
f"{days_since_access} days ago (threshold: {max_days} days)."
)
@@ -1,42 +0,0 @@
{
"Provider": "aws",
"CheckID": "iam_user_access_not_stale_to_sagemaker",
"CheckTitle": "Regular SageMaker access ensures IAM users retain only actively used permissions",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
],
"ServiceName": "iam",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsIamUser",
"ResourceGroup": "IAM",
"Description": "IAM users granted **SageMaker** permissions are evaluated for recent service usage.\n\nUsers whose last SageMaker access exceeds the configured threshold (default **90 days**) or that have **never** accessed SageMaker are flagged, indicating stale permissions that should be reviewed.",
"Risk": "Stale SageMaker permissions widen the **blast radius** of a credential compromise.\n\nAn attacker who gains access to a user with unused SageMaker permissions can access ML training data, models, endpoints, and notebooks — all without triggering expected usage patterns. Removing or scoping down stale permissions enforces least privilege and limits blast radius.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_access-advisor.html",
"https://docs.aws.amazon.com/sagemaker/latest/dg/security-iam.html",
"https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#remove-credentials"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "1. Open the IAM console and select the user\n2. Review the **Access Advisor** tab to confirm SageMaker has not been accessed recently\n3. Remove or detach any policies granting SageMaker permissions that are no longer needed\n4. If the user still requires SageMaker access, verify usage and reduce scope to least privilege",
"Terraform": ""
},
"Recommendation": {
"Text": "Apply the **principle of least privilege** by regularly reviewing IAM Access Advisor data and revoking SageMaker permissions that are no longer actively used.\n\nEstablish a periodic access review process and automate alerts for stale permissions to maintain a minimal attack surface.",
"Url": "https://hub.prowler.com/check/iam_user_access_not_stale_to_sagemaker"
}
},
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [
"iam_user_access_not_stale_to_bedrock"
],
"Notes": "The staleness threshold is configurable via the `max_unused_sagemaker_access_days` audit config key (default: 90 days)."
}
@@ -1,103 +0,0 @@
from datetime import datetime, timezone
from typing import Optional
from dateutil.parser import parse
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.iam.iam_client import iam_client
class iam_user_access_not_stale_to_sagemaker(Check):
"""Detect IAM users with stale SageMaker permissions.
This check evaluates whether IAM users with SageMaker service permissions
have actively used those permissions within the configured threshold
(default 90 days).
- PASS: The user has accessed SageMaker within the allowed period.
- FAIL: The user has SageMaker permissions but has not used them within
the allowed period or has never used them.
"""
def execute(self) -> list[Check_Report_AWS]:
"""Execute the SageMaker access staleness check for IAM users.
Iterates over IAM users, inspecting service last accessed data for
the ``sagemaker`` namespace. Users whose last SageMaker access exceeds
the configured threshold are reported as non-compliant.
Returns:
A list of reports containing the result of the check.
"""
findings = []
max_unused_sagemaker_days = iam_client.audit_config.get(
"max_unused_sagemaker_access_days", 90
)
for user in iam_client.users:
last_accessed_services = iam_client.last_accessed_services.get(
(user.name, user.arn), []
)
sagemaker_service = self._find_sagemaker_service(last_accessed_services)
if sagemaker_service is None:
continue
report = Check_Report_AWS(metadata=self.metadata(), resource=user)
report.region = iam_client.region
self._evaluate_sagemaker_staleness(
report,
sagemaker_service,
max_unused_sagemaker_days,
user.name,
"User",
)
findings.append(report)
return findings
@staticmethod
def _find_sagemaker_service(
last_accessed_services: list[dict],
) -> Optional[dict]:
"""Return the SageMaker entry from a service last accessed list."""
for service in last_accessed_services:
if service.get("ServiceNamespace") == "sagemaker":
return service
return None
@staticmethod
def _evaluate_sagemaker_staleness(
report: Check_Report_AWS,
sagemaker_service: dict,
max_days: int,
identity_name: str,
identity_type: str,
) -> None:
"""Populate a check report based on SageMaker access recency."""
last_authenticated = sagemaker_service.get("LastAuthenticated")
if last_authenticated is None:
report.status = "FAIL"
report.status_extended = (
f"IAM {identity_type} {identity_name} has SageMaker permissions "
f"but has never used them."
)
return
if isinstance(last_authenticated, str):
last_authenticated = parse(last_authenticated)
days_since_access = (datetime.now(timezone.utc) - last_authenticated).days
if days_since_access > max_days:
report.status = "FAIL"
report.status_extended = (
f"IAM {identity_type} {identity_name} has not accessed SageMaker "
f"in {days_since_access} days (threshold: {max_days} days)."
)
else:
report.status = "PASS"
report.status_extended = (
f"IAM {identity_type} {identity_name} accessed SageMaker "
f"{days_since_access} days ago (threshold: {max_days} days)."
)
@@ -1,9 +1,12 @@
import re
from datetime import datetime, timezone
from ipaddress import ip_address, ip_network
from typing import Optional, Tuple
from dateutil.parser import parse
from py_iam_expand.actions import InvalidActionHandling, expand_actions
from prowler.lib.check.models import Check_Report_AWS
from prowler.lib.logger import logger
from prowler.providers.aws.aws_provider import read_aws_regions_file
@@ -1118,3 +1121,47 @@ def has_codebuild_trusted_principal(trust_policy: dict) -> bool:
)
for s in statements
)
def find_bedrock_service(last_accessed_services: list[dict]) -> Optional[dict]:
"""Return the Bedrock entry from a service last accessed list."""
for service in last_accessed_services:
if service.get("ServiceNamespace") == "bedrock":
return service
return None
def evaluate_bedrock_staleness(
report: Check_Report_AWS,
bedrock_service: dict,
max_days: int,
identity_name: str,
identity_type: str,
) -> None:
"""Populate a check report based on Bedrock access recency."""
last_authenticated = bedrock_service.get("LastAuthenticated")
if last_authenticated is None:
report.status = "FAIL"
report.status_extended = (
f"IAM {identity_type} {identity_name} has Bedrock permissions "
f"but has never used them."
)
return
if isinstance(last_authenticated, str):
last_authenticated = parse(last_authenticated)
days_since_access = (datetime.now(timezone.utc) - last_authenticated).days
if days_since_access > max_days:
report.status = "FAIL"
report.status_extended = (
f"IAM {identity_type} {identity_name} has not accessed Bedrock "
f"in {days_since_access} days (threshold: {max_days} days)."
)
else:
report.status = "PASS"
report.status_extended = (
f"IAM {identity_type} {identity_name} accessed Bedrock "
f"{days_since_access} days ago (threshold: {max_days} days)."
)
@@ -9,8 +9,8 @@
"Severity": "high",
"ResourceType": "NotDefined",
"ResourceGroup": "IAM",
"Description": "Microsoft Entra **Conditional Access** is verified to have at least one **emergency access** (break glass) account or group excluded from every enabled Conditional Access policy with a **Block** grant control. Emergency access accounts provide a fallback mechanism when normal administrative access is blocked due to misconfigured blocking policies.",
"Risk": "Without emergency access accounts excluded from every blocking Conditional Access policy, a misconfigured Block policy can lock out all administrators from the tenant. This creates a **critical availability risk** where legitimate administrators cannot access or remediate issues in the environment.",
"Description": "Microsoft Entra **Conditional Access** is verified to have at least one **emergency access** (break glass) account or group excluded from all policies. Emergency access accounts provide a fallback mechanism when normal administrative access is blocked due to misconfigured policies.",
"Risk": "Without emergency access accounts excluded from Conditional Access policies, a misconfiguration could lock out all administrators from the tenant. This creates a **critical availability risk** where legitimate administrators cannot access or remediate issues in the environment.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/security-emergency-access",
@@ -20,11 +20,11 @@
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "1. Create dedicated emergency access accounts or a security group in Microsoft Entra admin center.\n2. Navigate to Protection > Conditional Access > Policies.\n3. For every Conditional Access policy whose grant control is **Block**, add the emergency access account or group to the exclusion list under Users > Exclude.\n4. Ensure the emergency accounts are protected with strong credentials and limited usage.",
"Other": "1. Create dedicated emergency access accounts or a security group in Microsoft Entra admin center.\n2. Navigate to Protection > Conditional Access > Policies.\n3. For each Conditional Access policy, add the emergency access account or group to the exclusion list under Users > Exclude.\n4. Ensure the emergency accounts are protected with strong credentials and limited usage.",
"Terraform": ""
},
"Recommendation": {
"Text": "Create and maintain at least two emergency access accounts that are excluded from every Conditional Access policy with a **Block** grant control so they can never be denied access by a misconfiguration. Store credentials securely offline, monitor usage, and test access regularly. Follow **least privilege** principles for these accounts while ensuring they can recover tenant access when needed.",
"Text": "Create and maintain at least two emergency access accounts that are excluded from all Conditional Access policies. Store credentials securely offline, monitor usage, and test access regularly. Follow **least privilege** principles for these accounts while ensuring they can recover tenant access when needed.",
"Url": "https://hub.prowler.com/check/entra_emergency_access_exclusion"
}
},
@@ -3,38 +3,87 @@ from collections import Counter
from prowler.lib.check.models import Check, CheckReportM365
from prowler.providers.m365.services.entra.entra_client import entra_client
from prowler.providers.m365.services.entra.entra_service import (
ConditionalAccessGrantControl,
ConditionalAccessPolicyState,
)
class entra_emergency_access_exclusion(Check):
"""Check that at least one emergency access account or group is excluded
from every enabled Conditional Access policy with a `Block` grant control.
"""Check if at least one emergency access account or group is excluded from all Conditional Access policies.
Emergency access (break glass) accounts are, by definition, accounts that
cannot be blocked by Conditional Access. Appearing in the exclusion list of
every enabled blocking policy is therefore a necessary condition for an
account to act as a true emergency account: if any enabled blocking policy
still applies to it, a misconfiguration of that policy can lock out the
tenant.
This check ensures that the tenant has at least one emergency/break glass account
or account exclusion group that is excluded from all Conditional Access policies.
This prevents accidental lockout scenarios where misconfigured CA policies could
block all administrative access to the tenant.
- PASS: At least one user or group is excluded from every enabled
Conditional Access policy with a `Block` grant control, or no
enabled blocking Conditional Access policy exists.
- FAIL: One or more enabled blocking Conditional Access policies exist and
no user or group is excluded from all of them.
- PASS: At least one user or group is excluded from all enabled Conditional Access policies,
or there are no enabled policies.
- FAIL: No user or group is excluded from all enabled Conditional Access policies.
"""
def execute(self) -> list[CheckReportM365]:
"""Execute the check for emergency access account exclusions from
blocking Conditional Access policies.
"""Execute the check for emergency access account exclusions.
Returns:
list[CheckReportM365]: A list containing the result of the check.
"""
findings = []
# Get all enabled CA policies (excluding disabled ones)
enabled_policies = [
policy
for policy in entra_client.conditional_access_policies.values()
if policy.state != ConditionalAccessPolicyState.DISABLED
]
# If there are no enabled policies, there's nothing to exclude from
if not enabled_policies:
report = CheckReportM365(
metadata=self.metadata(),
resource={},
resource_name="Conditional Access Policies",
resource_id="conditionalAccessPolicies",
)
report.status = "PASS"
report.status_extended = "No enabled Conditional Access policies found. Emergency access exclusions are not required."
findings.append(report)
return findings
total_policy_count = len(enabled_policies)
# Count how many policies exclude each user
excluded_users_counter = Counter()
for policy in enabled_policies:
user_conditions = policy.conditions.user_conditions
if user_conditions:
for user_id in user_conditions.excluded_users:
excluded_users_counter[user_id] += 1
# Count how many policies exclude each group
excluded_groups_counter = Counter()
for policy in enabled_policies:
user_conditions = policy.conditions.user_conditions
if user_conditions:
for group_id in user_conditions.excluded_groups:
excluded_groups_counter[group_id] += 1
# Find users excluded from ALL policies
users_excluded_from_all = [
user_id
for user_id, count in excluded_users_counter.items()
if count == total_policy_count
]
# Find groups excluded from ALL policies
groups_excluded_from_all = [
group_id
for group_id, count in excluded_groups_counter.items()
if count == total_policy_count
]
has_emergency_exclusion = bool(
users_excluded_from_all or groups_excluded_from_all
)
report = CheckReportM365(
metadata=self.metadata(),
resource={},
@@ -42,67 +91,27 @@ class entra_emergency_access_exclusion(Check):
resource_id="conditionalAccessPolicies",
)
blocking_policies = [
policy
for policy in entra_client.conditional_access_policies.values()
if policy.state != ConditionalAccessPolicyState.DISABLED
and ConditionalAccessGrantControl.BLOCK
in policy.grant_controls.built_in_controls
]
if not blocking_policies:
if has_emergency_exclusion:
report.status = "PASS"
report.status_extended = "No enabled Conditional Access policies with a Block grant control found. Emergency access exclusions are not required."
findings.append(report)
return findings
total_blocking_count = len(blocking_policies)
excluded_users_counter = Counter()
excluded_groups_counter = Counter()
for policy in blocking_policies:
user_conditions = policy.conditions.user_conditions
if not user_conditions:
continue
for user_id in user_conditions.excluded_users:
excluded_users_counter[user_id] += 1
for group_id in user_conditions.excluded_groups:
excluded_groups_counter[group_id] += 1
emergency_user_ids = [
user_id
for user_id, count in excluded_users_counter.items()
if count == total_blocking_count
]
emergency_group_ids = [
group_id
for group_id, count in excluded_groups_counter.items()
if count == total_blocking_count
]
if not (emergency_user_ids or emergency_group_ids):
exclusion_details = []
if users_excluded_from_all:
user_names = []
for user_id in users_excluded_from_all:
user = entra_client.users.get(user_id)
user_names.append(user.name if user else user_id)
exclusion_details.append(f"user(s): {', '.join(user_names)}")
if groups_excluded_from_all:
group_names = []
groups_by_id = {g.id: g for g in entra_client.groups}
for group_id in groups_excluded_from_all:
group = groups_by_id.get(group_id)
group_names.append(group.name if group else group_id)
exclusion_details.append(f"group(s): {', '.join(group_names)}")
report.status_extended = f"Emergency access {' and '.join(exclusion_details)} excluded from all {total_policy_count} enabled Conditional Access policies."
else:
report.status = "FAIL"
report.status_extended = f"No user or group is excluded as emergency access from all {total_blocking_count} enabled Conditional Access policies with a Block grant control."
findings.append(report)
return findings
report.status_extended = f"No user or group is excluded as emergency access from all {total_policy_count} enabled Conditional Access policies."
exclusion_details = []
if emergency_user_ids:
user_names = []
for uid in emergency_user_ids:
user = entra_client.users.get(uid)
user_names.append(user.name if user else uid)
exclusion_details.append(f"user(s): {', '.join(user_names)}")
if emergency_group_ids:
groups_by_id = {g.id: g for g in entra_client.groups}
group_names = []
for gid in emergency_group_ids:
group = groups_by_id.get(gid)
group_names.append(group.name if group else gid)
exclusion_details.append(f"group(s): {', '.join(group_names)}")
report.status = "PASS"
report.status_extended = f"Emergency access {' and '.join(exclusion_details)} excluded from all {total_blocking_count} enabled Conditional Access policies with a Block grant control."
findings.append(report)
return findings
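The pass condition above reduces to a set-intersection question: which principals appear in the exclusion list of every counted policy? A self-contained sketch of that `Counter` technique, with made-up policy and user IDs standing in for the Graph data:

```python
from collections import Counter

# Toy stand-ins for entra_client.conditional_access_policies.
policies = {
    "policy-a": {"excluded_users": ["break-glass", "alice"]},
    "policy-b": {"excluded_users": ["break-glass"]},
}

excluded_users_counter = Counter()
for policy in policies.values():
    excluded_users_counter.update(policy["excluded_users"])

# A principal excluded from every policy has count == len(policies).
excluded_from_all = [
    user
    for user, count in excluded_users_counter.items()
    if count == len(policies)
]
print(excluded_from_all)  # ['break-glass'] -> emergency exclusion exists
```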
@@ -1,7 +1,6 @@
import asyncio
import json
from asyncio import gather
from datetime import datetime, timezone
from enum import Enum
from typing import Any, Dict, List, Optional, Tuple
from uuid import UUID
@@ -39,7 +38,6 @@ class Entra(M365Service):
user_accounts_status (dict): Dictionary of user account statuses.
oauth_apps (dict): Dictionary of OAuth applications from Defender XDR.
authentication_method_configurations (dict): Dictionary of authentication method configurations.
service_principals (dict): Dictionary of service principals with credentials and role assignments.
"""
def __init__(self, provider: M365Provider):
@@ -75,7 +73,6 @@ class Entra(M365Service):
)
self.tenant_domain = provider.identity.tenant_domain
self.tenant_id = getattr(provider.identity, "tenant_id", None)
self.user_registration_details_error: Optional[str] = None
attributes = loop.run_until_complete(
gather(
@@ -89,7 +86,6 @@ class Entra(M365Service):
self._get_oauth_apps(),
self._get_directory_sync_settings(),
self._get_authentication_method_configurations(),
self._get_service_principals(),
)
)
@@ -105,7 +101,6 @@ class Entra(M365Service):
self.authentication_method_configurations: Dict[
str, AuthenticationMethodConfiguration
] = attributes[9]
self.service_principals: Dict[str, "ServicePrincipal"] = attributes[10]
self.user_accounts_status = {}
if created_loop:
@@ -1113,154 +1108,6 @@ OAuthAppInfo
)
return authentication_method_configurations
async def _get_service_principals(self):
"""Retrieve service principals owned by the audited tenant.
Fetches all service principals from Microsoft Graph and keeps only the
ones whose ``appOwnerOrganizationId`` matches the audited tenant. Skips
Microsoft first-party service principals and multi-tenant ISV apps
consented from other publishers: their credentials live in the
publisher's tenant, not this one, so they are out of scope for any
check that evaluates secret hygiene or role assignments managed by the
customer.
Returns:
Dict[str, ServicePrincipal]: Customer-owned service principals
keyed by service principal ID.
"""
logger.info("Entra - Getting service principals...")
service_principals = {}
tenant_id_normalized = str(self.tenant_id).lower() if self.tenant_id else None
try:
sp_response = await self.client.service_principals.get()
# Build a map of service principal IDs to their data
while sp_response:
for sp in getattr(sp_response, "value", []) or []:
raw_owner = getattr(sp, "app_owner_organization_id", None)
app_owner_org_id = str(raw_owner).lower() if raw_owner else None
if (
tenant_id_normalized
and app_owner_org_id != tenant_id_normalized
):
# Skip Microsoft first-party SPs and consented
# multi-tenant ISV apps; the customer cannot manage
# their credentials.
continue
password_credentials = []
for cred in getattr(sp, "password_credentials", []) or []:
password_credentials.append(
PasswordCredential(
key_id=str(getattr(cred, "key_id", "")),
display_name=getattr(cred, "display_name", None),
end_date_time=getattr(cred, "end_date_time", None),
)
)
key_credentials = []
for cred in getattr(sp, "key_credentials", []) or []:
key_credentials.append(
KeyCredential(
key_id=str(getattr(cred, "key_id", "")),
display_name=getattr(cred, "display_name", None),
)
)
service_principals[sp.id] = ServicePrincipal(
id=sp.id,
name=getattr(sp, "display_name", "") or "",
app_id=getattr(sp, "app_id", "") or "",
app_owner_organization_id=app_owner_org_id,
password_credentials=password_credentials,
key_credentials=key_credentials,
)
next_link = getattr(sp_response, "odata_next_link", None)
if not next_link:
break
sp_response = await self.client.service_principals.with_url(
next_link
).get()
# Fold in credentials registered on the parent Application objects.
# Microsoft Graph stores secrets and certificates added through
# "Certificates & secrets" on /applications, not on the service
# principal itself, so /servicePrincipals.passwordCredentials is
# almost always empty for normal app registrations. Joining via
# appId is required for the check to see those credentials.
#
# Index service principals by app_id once so the join below is
# O(N+M) instead of scanning all SPs for every Application page.
service_principals_by_app_id = {
sp.app_id: sp for sp in service_principals.values() if sp.app_id
}
app_response = await self.client.applications.get()
while app_response:
for app in getattr(app_response, "value", []) or []:
app_id = getattr(app, "app_id", None)
if not app_id:
continue
target_sp = service_principals_by_app_id.get(app_id)
if target_sp is None:
continue
for cred in getattr(app, "password_credentials", []) or []:
target_sp.password_credentials.append(
PasswordCredential(
key_id=str(getattr(cred, "key_id", "")),
display_name=getattr(cred, "display_name", None),
end_date_time=getattr(cred, "end_date_time", None),
)
)
for cred in getattr(app, "key_credentials", []) or []:
target_sp.key_credentials.append(
KeyCredential(
key_id=str(getattr(cred, "key_id", "")),
display_name=getattr(cred, "display_name", None),
)
)
next_link = getattr(app_response, "odata_next_link", None)
if not next_link:
break
app_response = await self.client.applications.with_url(next_link).get()
# Identify permanent Tier 0 directory role assignments via the unified
# role management endpoint. ``directoryRoles/{id}/members`` mixes
# permanent direct assignments with PIM-activated temporary ones, so
# using it would mark just-in-time elevations as "permanent" and emit
# false positives. ``roleManagement/directory/roleAssignments``
# exposes only the durable, statically-assigned principals, which is
# exactly what the Tier 0 check needs.
role_assignments_response = (
await self.client.role_management.directory.role_assignments.get()
)
while role_assignments_response:
for assignment in getattr(role_assignments_response, "value", []) or []:
principal_id = getattr(assignment, "principal_id", None)
role_definition_id = getattr(assignment, "role_definition_id", None)
if (
principal_id in service_principals
and role_definition_id in TIER_0_ROLE_TEMPLATE_IDS
):
service_principals[
principal_id
].directory_role_template_ids.append(role_definition_id)
next_link = getattr(role_assignments_response, "odata_next_link", None)
if not next_link:
break
role_assignments_response = await self.client.role_management.directory.role_assignments.with_url(
next_link
).get()
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return service_principals
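# --- Editorial sketch, not part of the service code above ---
# The appId join described in the comments, reduced to plain dicts; all
# IDs here are hypothetical. Building the index once keeps the join
# O(N+M) instead of rescanning every SP for each Application page.
sketch_service_principals = {
    "sp-1": {"app_id": "app-1", "password_credentials": []},
}
sketch_applications = [
    {"app_id": "app-1", "password_credentials": [{"key_id": "secret-1"}]},
]
sketch_by_app_id = {
    sp["app_id"]: sp for sp in sketch_service_principals.values()
}
for sketch_app in sketch_applications:
    sketch_target = sketch_by_app_id.get(sketch_app["app_id"])
    if sketch_target is not None:
        sketch_target["password_credentials"].extend(
            sketch_app["password_credentials"]
        )
# sketch_service_principals["sp-1"]["password_credentials"] now holds secret-1.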
class ConditionalAccessPolicyState(Enum):
ENABLED = "enabled"
@@ -1692,94 +1539,3 @@ class OAuthApp(BaseModel):
is_admin_consented: bool = False
last_used_time: Optional[str] = None
app_origin: str = ""
class PasswordCredential(BaseModel):
"""Model representing a password credential (client secret) on a service principal.
Attributes:
key_id: The unique identifier of the credential.
display_name: The optional display name of the credential.
end_date_time: The expiration time of the credential. ``None`` indicates
the secret has no recorded expiry and is treated as active.
"""
key_id: str
display_name: Optional[str] = None
end_date_time: Optional[datetime] = None
def is_active(self, now: Optional[datetime] = None) -> bool:
"""Return ``True`` when the credential has not expired.
A credential with no ``end_date_time`` is assumed to be active, matching
the behavior of the Microsoft Graph API when the field is omitted.
"""
if self.end_date_time is None:
return True
reference = now or datetime.now(timezone.utc)
return self.end_date_time > reference
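# --- Editorial sketch, not part of the model above ---
# Usage of the expiry rule: an expired secret reports inactive, while a
# secret with no recorded end_date_time is treated as active. The key_id
# values and dates are made up.
_expired = PasswordCredential(
    key_id="00000000-0000-0000-0000-000000000000",
    end_date_time=datetime(2020, 1, 1, tzinfo=timezone.utc),
)
_no_expiry = PasswordCredential(key_id="11111111-1111-1111-1111-111111111111")
assert _expired.is_active() is False
assert _no_expiry.is_active() is True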
class KeyCredential(BaseModel):
"""Model representing a key credential (certificate) on a service principal.
Attributes:
key_id: The unique identifier of the credential.
display_name: The optional display name of the credential.
"""
key_id: str
display_name: Optional[str] = None
# Control Plane (Tier 0) role template IDs.
#
# Roles included grant tenant-wide control over identity, authentication, or the
# directory itself, so a credential compromise on any of them is equivalent to a
# tenant takeover. References:
# https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/privileged-roles-permissions
# https://learn.microsoft.com/en-us/security/privileged-access-workstations/privileged-access-access-model
TIER_0_ROLE_TEMPLATE_IDS = {
"62e90394-69f5-4237-9190-012177145e10", # Global Administrator
"e8611ab8-c189-46e8-94e1-60213ab1f814", # Privileged Role Administrator
"7be44c8a-adaf-4e2a-84d6-ab2649e08a13", # Privileged Authentication Administrator
"9b895d92-2cd3-44c7-9d02-a6ac2d5ea5c3", # Application Administrator
"158c047a-c907-4556-b7ef-446551a6b5f7", # Cloud Application Administrator
"c4e39bd9-1100-46d3-8c65-fb160da0071f", # Authentication Administrator
"0526716b-113d-4c15-b2c8-68e3c22b9f80", # Authentication Policy Administrator
"b1be1c3e-b65d-4f19-8427-f6fa0d97feb9", # Conditional Access Administrator
"8329153b-31d0-4727-b945-745eb3bc5f31", # Domain Name Administrator
"be2f45a1-457d-42af-a067-6ec1fa63bc45", # External Identity Provider Administrator
"8ac3fc64-6eca-42ea-9e69-59f4c7b60eb2", # Hybrid Identity Administrator
"194ae4cb-b126-40b2-bd5b-6091b380977d", # Security Administrator
"fe930be7-5e62-47db-91af-98c3a49a38b1", # User Administrator
"d29b2b05-8046-44ba-8758-1e26182fcf32", # Directory Synchronization Accounts
"e00e864a-17c5-4a4b-9c06-f5b95a8d5bd8", # Partner Tier2 Support
}
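# --- Editorial sketch, not part of the service code ---
# How the set above gates a check: intersect a principal's assigned role
# template IDs with the Tier 0 set. The assigned list is hypothetical.
_assigned = ["62e90394-69f5-4237-9190-012177145e10", "not-a-tier0-role"]
_tier0_hits = [r for r in _assigned if r in TIER_0_ROLE_TEMPLATE_IDS]
assert _tier0_hits == ["62e90394-69f5-4237-9190-012177145e10"]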
class ServicePrincipal(BaseModel):
"""Model representing a Microsoft Entra ID service principal.
Attributes:
id: The service principal's unique identifier.
name: The service principal's display name.
app_id: The application ID associated with the service principal.
app_owner_organization_id: Tenant ID of the application's publisher.
For customer-owned apps this matches the audited tenant; the
service-layer fetch uses this to filter out Microsoft first-party
and third-party multi-tenant service principals that the customer
cannot manage credentials for.
password_credentials: List of password credentials (client secrets).
key_credentials: List of key credentials (certificates).
directory_role_template_ids: List of directory role template IDs permanently
assigned to this service principal.
"""
id: str
name: str
app_id: str = ""
app_owner_organization_id: Optional[str] = None
password_credentials: List[PasswordCredential] = []
key_credentials: List[KeyCredential] = []
directory_role_template_ids: List[str] = []
@@ -1,38 +0,0 @@
{
"Provider": "m365",
"CheckID": "entra_service_principal_no_secrets_for_permanent_tier0_roles",
"CheckTitle": "Secure credential management prevents client secret usage for service principals with permanent Tier 0 roles",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "critical",
"ResourceType": "NotDefined",
"ResourceGroup": "IAM",
"Description": "Microsoft Entra **service principals** with permanent assignments to **Control Plane (Tier 0)** directory roles are evaluated for the use of **client secrets** (password credentials) instead of more secure authentication methods such as certificates or managed identities.",
"Risk": "A service principal authenticating with a **client secret** while holding a **Tier 0** role creates a high-impact credential theft path. Leaked or brute-forced secrets grant immediate control-plane access, enabling tenant-wide privilege escalation, security control bypass, data exfiltration, and persistent backdoor creation—impacting **confidentiality**, **integrity**, and **availability**.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/security-planning#prefer-certificate-credentials",
"https://learn.microsoft.com/en-us/entra/architecture/security-operations-applications#application-credentials"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "1. Sign in to the Microsoft Entra admin center (https://entra.microsoft.com)\n2. Go to Identity > Applications > App registrations > select the application\n3. Under Certificates & secrets, remove all client secrets\n4. Under Certificates & secrets > Certificates, upload a certificate for authentication\n5. Alternatively, migrate to a managed identity where possible",
"Terraform": ""
},
"Recommendation": {
"Text": "Replace **client secrets** with **certificates** or **managed identities** for service principals holding Control Plane roles. Apply **least privilege** by removing unnecessary Tier 0 assignments. Use **Privileged Identity Management (PIM)** for just-in-time eligible assignments instead of permanent ones. Rotate credentials regularly and monitor sign-in logs for anomalies.",
"Url": "https://hub.prowler.com/check/entra_service_principal_no_secrets_for_permanent_tier0_roles"
}
},
"Categories": [
"identity-access",
"secrets"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Tier 0 roles evaluated follow the Microsoft Entra Control Plane classification, including Global Administrator, Privileged Role Administrator, and Privileged Authentication Administrator."
}
@@ -1,81 +0,0 @@
"""Check for service principals using client secrets with permanent Tier 0 role assignments."""
from typing import List
from prowler.lib.check.models import Check, CheckReportM365
from prowler.providers.m365.services.entra.entra_client import entra_client
from prowler.providers.m365.services.entra.entra_service import TIER_0_ROLE_TEMPLATE_IDS
class entra_service_principal_no_secrets_for_permanent_tier0_roles(Check):
"""
Service principal with permanent Control Plane role does not use client secrets.
This check evaluates whether service principals that hold permanent assignments
to Tier 0 (Control Plane) directory roles authenticate using client secrets
instead of more secure alternatives such as certificates or managed identities.
- PASS: The service principal does not use client secrets or does not hold a
permanent Tier 0 directory role assignment.
- FAIL: The service principal uses client secrets and has a permanent assignment
to at least one Tier 0 directory role.
"""
def execute(self) -> List[CheckReportM365]:
"""Execute the service principal secret management check.
Iterates over service principals and identifies those that combine password
credentials (client secrets) with permanent Tier 0 directory role assignments.
Returns:
A list of reports containing the result of the check for each service principal.
"""
findings = []
for sp in entra_client.service_principals.values():
report = CheckReportM365(
metadata=self.metadata(),
resource=sp,
resource_name=sp.name,
resource_id=sp.id,
)
active_secrets = [
credential
for credential in sp.password_credentials
if credential.is_active()
]
has_secrets = len(active_secrets) > 0
tier0_roles = [
role_id
for role_id in sp.directory_role_template_ids
if role_id in TIER_0_ROLE_TEMPLATE_IDS
]
if has_secrets and tier0_roles:
report.status = "FAIL"
report.status_extended = (
f"Service principal '{sp.name}' uses client secrets and has "
f"permanent assignment to {len(tier0_roles)} Control Plane "
f"(Tier 0) directory role(s)."
)
else:
report.status = "PASS"
if not has_secrets and not tier0_roles:
report.status_extended = (
f"Service principal '{sp.name}' does not use client secrets "
f"and has no Tier 0 directory role assignments."
)
elif not has_secrets:
report.status_extended = (
f"Service principal '{sp.name}' does not use client secrets."
)
else:
report.status_extended = (
f"Service principal '{sp.name}' has no permanent Tier 0 "
f"directory role assignments."
)
findings.append(report)
return findings
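The branching above boils down to a two-by-two decision matrix: only the combination of at least one active secret and at least one permanent Tier 0 role fails. A quick self-contained sketch of that rule:

```python
def verdict(has_active_secrets: bool, has_tier0_roles: bool) -> str:
    """Status produced by the check for each combination of conditions."""
    return "FAIL" if has_active_secrets and has_tier0_roles else "PASS"


assert verdict(True, True) == "FAIL"    # secret + Tier 0 role
assert verdict(True, False) == "PASS"   # secret, but no Tier 0 role
assert verdict(False, True) == "PASS"   # Tier 0 role with cert/managed identity
assert verdict(False, False) == "PASS"
```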
+1 -1
View File
@@ -95,7 +95,7 @@ maintainers = [{name = "Prowler Engineering", email = "engineering@prowler.com"}
name = "prowler"
readme = "README.md"
requires-python = ">=3.10,<3.13"
version = "5.27.0"
version = "5.26.2"
[project.scripts]
prowler = "prowler.__main__:prowler"
+1 -1
View File
@@ -50,7 +50,7 @@ Reusable patterns for common technologies:
|-------|-------------|
| `typescript` | Const types, flat interfaces, utility types |
| `react-19` | React 19 patterns, React Compiler |
| `nextjs-16` | App Router, Server Actions, proxy.ts, streaming |
| `nextjs-15` | App Router, Server Actions, streaming |
| `tailwind-4` | cn() utility, Tailwind 4 patterns |
| `playwright` | Page Object Model, selectors |
| `vitest` | Unit testing, React Testing Library |
+150
View File
@@ -0,0 +1,150 @@
---
name: nextjs-15
description: >
Next.js 15 App Router patterns.
Trigger: When working in Next.js App Router (app/), Server Components vs Client Components, Server Actions, Route Handlers, caching/revalidation, and streaming/Suspense.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke: "App Router / Server Actions"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---
## App Router File Conventions
```
app/
├── layout.tsx # Root layout (required)
├── page.tsx # Home page (/)
├── loading.tsx # Loading UI (Suspense)
├── error.tsx # Error boundary
├── not-found.tsx # 404 page
├── (auth)/ # Route group (no URL impact)
│ ├── login/page.tsx # /login
│ └── signup/page.tsx # /signup
├── api/
│ └── route.ts # API handler
└── _components/ # Private folder (not routed)
```
## Server Components (Default)
```typescript
// No directive needed - async by default
export default async function Page() {
const data = await db.query();
return <Component data={data} />;
}
```
## Server Actions
```typescript
// app/actions.ts
"use server";
import { revalidatePath } from "next/cache";
import { redirect } from "next/navigation";
export async function createUser(formData: FormData) {
const name = formData.get("name") as string;
await db.users.create({ data: { name } });
revalidatePath("/users");
redirect("/users");
}
// Usage
<form action={createUser}>
<input name="name" required />
<button type="submit">Create</button>
</form>
```
## Data Fetching
```typescript
// Parallel
async function Page() {
const [users, posts] = await Promise.all([
getUsers(),
getPosts(),
]);
return <Dashboard users={users} posts={posts} />;
}
// Streaming with Suspense
<Suspense fallback={<Loading />}>
<SlowComponent />
</Suspense>
```
## Route Handlers (API)
```typescript
// app/api/users/route.ts
import { NextRequest, NextResponse } from "next/server";
export async function GET(request: NextRequest) {
const users = await db.users.findMany();
return NextResponse.json(users);
}
export async function POST(request: NextRequest) {
const body = await request.json();
const user = await db.users.create({ data: body });
return NextResponse.json(user, { status: 201 });
}
```
## Middleware
```typescript
// middleware.ts (root level)
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";
export function middleware(request: NextRequest) {
const token = request.cookies.get("token");
if (!token && request.nextUrl.pathname.startsWith("/dashboard")) {
return NextResponse.redirect(new URL("/login", request.url));
}
return NextResponse.next();
}
export const config = {
matcher: ["/dashboard/:path*"],
};
```
## Metadata
```typescript
// Static
export const metadata = {
title: "My App",
description: "Description",
};
// Dynamic (in Next.js 15, params is a Promise and must be awaited)
export async function generateMetadata({ params }) {
const { id } = await params;
const product = await getProduct(id);
return { title: product.name };
}
```
## server-only Package
```typescript
import "server-only";
// This will error if imported in client component
export async function getSecretData() {
return db.secrets.findMany();
}
```
-160
View File
@@ -1,160 +0,0 @@
---
name: nextjs-16
description: >
Next.js 16 App Router patterns.
Trigger: When working in Next.js App Router (app/), Server Components vs Client Components, Server Actions, Route Handlers, proxy.ts, caching/revalidation, Cache Components, and streaming/Suspense.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke: "App Router / Server Actions"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---
## App Router File Conventions
```
app/
├── layout.tsx # Root layout (required)
├── page.tsx # Home page (/)
├── loading.tsx # Loading UI (Suspense)
├── error.tsx # Error boundary
├── not-found.tsx # 404 page
├── (auth)/ # Route group (no URL impact)
│ ├── login/page.tsx # /login
│ └── signup/page.tsx # /signup
├── api/
│ └── route.ts # API handler
└── _components/ # Private folder (not routed)
```
## Next.js 16 Notes
- Use `proxy.ts` for request-boundary logic. `middleware.ts` is deprecated in Next.js 16.
- `proxy.ts` runs on the Node.js runtime and cannot be configured for Edge.
- Keep `proxy.ts` matchers narrow. Exclude `api`, static files, and image assets unless the route explicitly needs proxy logic.
- Route Handlers in `app/api/**/route.ts` are the right fit for health checks, webhooks, backend-for-frontend endpoints, and server-only proxy calls.
## Server Components (Default)
```typescript
// No directive needed - async by default
export default async function Page() {
const data = await db.query();
return <Component data={data} />;
}
```
## Server Actions
```typescript
"use server";
import { revalidatePath } from "next/cache";
import { redirect } from "next/navigation";
export async function createUser(formData: FormData) {
const name = formData.get("name") as string;
await db.users.create({ data: { name } });
revalidatePath("/users");
redirect("/users");
}
```
## Data Fetching
```typescript
async function Page() {
const [users, posts] = await Promise.all([getUsers(), getPosts()]);
return <Dashboard users={users} posts={posts} />;
}
<Suspense fallback={<Loading />}>
<SlowComponent />
</Suspense>;
```
## Caching and Revalidation
```typescript
import { revalidatePath, revalidateTag } from "next/cache";
export async function refreshDashboard() {
"use server";
revalidatePath("/");
revalidateTag("dashboard");
}
```
- Use `revalidatePath` for route-level invalidation after mutations.
- Use `revalidateTag` when data fetches share a cache tag across routes.
- With Cache Components enabled, put `"use cache"` only in pure server-side cached functions. Do not cache auth, tenant-scoped, or per-user responses unless the cache key explicitly isolates them.
## Route Handlers (API)
```typescript
// app/api/users/route.ts
import { NextResponse } from "next/server";
export async function GET() {
const users = await db.users.findMany();
return NextResponse.json(users);
}
export async function POST(request: Request) {
const body = await request.json();
const user = await db.users.create({ data: body });
return NextResponse.json(user, { status: 201 });
}
```
## Proxy
```typescript
// proxy.ts (root level)
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";
export function proxy(request: NextRequest) {
const token = request.cookies.get("token");
if (!token && request.nextUrl.pathname.startsWith("/dashboard")) {
return NextResponse.redirect(new URL("/login", request.url));
}
return NextResponse.next();
}
export const config = {
matcher: ["/dashboard/:path*"],
};
```
## Metadata
```typescript
export const metadata = {
title: "My App",
description: "Description",
};
export async function generateMetadata() {
const product = await getProduct();
return { title: product.name };
}
```
## server-only Package
```typescript
import "server-only";
export async function getSecretData() {
return db.secrets.findMany();
}
```
-32
View File
@@ -71,13 +71,10 @@ allowed-tools: Read, Edit, Write, Glob, Grep, Bash
- **Blank line after section header** before first entry
- **Blank line between sections**
- Be specific: what changed, not why (that's in the PR)
- Keep entries readable: use spaces around inline code and product names, and wrap endpoints, commands, errors, task names, and file paths in backticks
- Avoid long run-on sentences; split complex changes into one concise result plus one concise context clause
- One entry per PR (can link multiple PRs for related changes)
- No period at the end
- Do NOT start with redundant verbs (section header already provides the action)
- **CRITICAL: Preserve section order** — when adding a new section to the UNRELEASED block, insert it in the correct position relative to existing sections (Added → Changed → Deprecated → Removed → Fixed → Security). Never append a new section at the top or bottom without checking order
- **CRITICAL: ALWAYS link to the PR, NEVER to the issue.** Every entry MUST use `https://github.com/prowler-cloud/prowler/pull/N`. Linking to `/issues/N` is FORBIDDEN, even when the PR fixes an issue. The issue↔PR relationship belongs in the PR body (`Fixes #N`), not in the changelog. If a fix has no PR yet, do not add the entry until the PR exists.
### Semantic Versioning Rules
@@ -117,21 +114,6 @@ Prowler follows [semver.org](https://semver.org/):
--- # Horizontal rule between versions
```
## Mandatory Changelog Preflight
Before editing any `CHANGELOG.md`, always inspect the active release boundary:
1. Read the UNRELEASED block plus the latest three released version blocks:
```bash
awk '/^## \[/{n++} n<=4 {print}' ui/CHANGELOG.md
```
2. Identify the **only writable block**: the block whose header contains `(Prowler UNRELEASED)`.
3. Treat every block whose header contains `(Prowler vX.Y.Z)` as immutable. Do not add, move, reword, reorder, or deduplicate entries there.
4. If your PR's entry appears in any of the latest three released blocks, remove it from the released block and add it to the correct section in the UNRELEASED block.
5. If there is no UNRELEASED block at the top, stop and ask before editing.
**Do not trust the current topmost matching section name.** A released block can contain the same section heading (`### 🚀 Added`, `### 🔄 Changed`, etc.). Always anchor edits to the `Prowler UNRELEASED` version block first.
## Adding a Changelog Entry
### Step 1: Determine Affected Component(s)
@@ -164,8 +146,6 @@ git diff main...HEAD --name-only
**CRITICAL:** Add new entries at the BOTTOM of each section, NOT at the top.
**CRITICAL:** The link MUST point to the PR (`/pull/N`). Linking to `/issues/N` is FORBIDDEN. If the PR closes an issue, that mapping goes in the PR body via `Fixes #N` — never in the changelog entry.
```markdown
## [1.17.0] (Prowler UNRELEASED)
@@ -195,15 +175,6 @@ This maintains chronological order within each section (oldest at top, newest at
- Node.js from 20.x to 24.13.0 LTS, patching 8 CVEs [(#9797)](https://github.com/prowler-cloud/prowler/pull/9797)
```
### Readable Technical Entries
```markdown
# GOOD - Technical but readable
### 🐞 Fixed
- `POST /api/v1/scans` no longer intermittently fails with `Scan matching query does not exist`; scan dispatch now publishes the `scan-perform` Celery task after the transaction commits [(#11122)](https://github.com/prowler-cloud/prowler/pull/11122)
- `entra_users_mfa_capable` no longer flags disabled guest users; Microsoft Graph is now the source of truth for `account_enabled` because EXO `Get-User` omits guest users [(#11002)](https://github.com/prowler-cloud/prowler/pull/11002)
```
### Bad Entries
```markdown
@@ -218,9 +189,6 @@ This maintains chronological order within each section (oldest at top, newest at
- Added new feature for users # Missing PR link, redundant verb
- Add search bar [(#123)] # Redundant verb (section already says "Added")
- This PR adds a cool new thing (#123) # Wrong link format, conversational
- Some bug fix [(#123)](https://github.com/prowler-cloud/prowler/issues/123) # FORBIDDEN: must link to /pull/N, never /issues/N
- POST /api/v1/scanswas intermittently failing withScan matching query does not existin thescan-performworker (#11122) # Missing spaces/backticks, unreadable
- entra_users_mfa_capable no longer flags disabled guest users by requesting accountEnabled and userType from Microsoft Graph via $select and using Graph as the source of truth for account_enabled (EXO Get-User does not return guest users) (#11002) # Run-on sentence, identifiers not formatted
```
## PR Changelog Gate

Some files were not shown because too many files have changed in this diff Show More